Kodi Community Forum
FreeNAS versus unRAID as the operating-system for a DIY NAS? - Printable Version

+- Kodi Community Forum (https://forum.kodi.tv)
+-- Forum: Discussions (https://forum.kodi.tv/forumdisplay.php?fid=222)
+--- Forum: Hardware (https://forum.kodi.tv/forumdisplay.php?fid=112)
+--- Thread: FreeNAS versus unRAID as the operating-system for a DIY NAS? (/showthread.php?tid=82811)

Pages: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17


- osirisjem - 2010-10-17

If my UNRAID had a drive that failed, and it is stored in the basement .... how would I know a drive died ?


- beckstown - 2010-10-17

gadgetman Wrote:@beckstown: thank you Smile that was exactly the point.

You are welcome Smile


PatrickVogeli Wrote:thanks! You made it a little more clear to me Smile

After reading this, I'm sure that if I ever build a NAS (and I will, sure), I'll go with unRaid. I'm a home user, unRaid is OK for me, and has a few advantages over Freenas with ZFS.

I can't see myself having a 2 or 3 drive parity setup, and I can't see myself adding a drive and not getting its full capacity from the very beginning.

Also, I'm curious how 2/3 drive parity works... 1 parity drive is easy: you simply count how many bits are '1' across the drives, and the parity bit will be '1' or '0' depending on whether even or odd parity is used. How does that work when you have 3 parity drives?

Glad that the post was informative for you Smile

While writing the guide I actually asked myself the same question. I also don't know exactly how multiple parity drives work. However, I am sure someone in this thread will have an answer for us.
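From what I've read, the rough idea seems to be this (not how unRAID or ZFS actually implement it, just an illustration): single parity is a plain XOR of the data blocks, so XOR-ing the surviving blocks with the parity recreates any one lost block. Double and triple parity (RAID 6, RAID-Z2/Z3) add one or two extra parity values computed as independent Reed-Solomon style combinations over a Galois field, which gives enough independent equations to solve for two or three missing blocks at once.

Code:
# Illustration only -- not unRAID's or ZFS's actual implementation.
# Single parity: P = d0 ^ d1 ^ d2. Any one lost block can be rebuilt
# by XOR-ing the parity with the surviving blocks.

data_blocks = [0b10110010, 0b01101100, 0b11110000]   # one byte per "drive"

parity = 0
for block in data_blocks:
    parity ^= block

# Pretend drive 1 died and rebuild its byte from the others plus parity.
lost = 1
rebuilt = parity
for i, block in enumerate(data_blocks):
    if i != lost:
        rebuilt ^= block

assert rebuilt == data_blocks[lost]
print("rebuilt drive %d: %s" % (lost, bin(rebuilt)))

# RAID 6 / RAID-Z2 / RAID-Z3 add a second (and third) parity value computed
# as a different weighted combination of the data blocks over a Galois field
# (Reed-Solomon coding), so two or three unknowns can be solved for at once.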


- TugboatBill - 2010-10-17

osirisjem Wrote:If my UNRAID had a drive that failed, and it is stored in the basement .... how would I know a drive died ?

Most users will notice it when they do their monthly parity check. IIRC, you can also add the ability to have it email you.


- gabbott - 2010-10-17

osirisjem Wrote:If my UNRAID had a drive that failed, and it is stored in the basement .... how would I know a drive died ?

There are scripts that can be setup to notify you via email.
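I'm not pointing at any particular script, but the general shape of one is simple enough. A minimal sketch, assuming smartmontools is installed and a local SMTP server is reachable (the device list and addresses below are placeholders):

Code:
#!/usr/bin/env python
# Hypothetical sketch of a notify-on-trouble cron job (not an official
# unRAID tool): poll each drive's SMART health with smartctl and send an
# email if any drive stops reporting PASSED.
import smtplib
import subprocess
from email.mime.text import MIMEText

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]          # adjust to your array
MAIL_FROM, MAIL_TO, SMTP_HOST = "nas@example.com", "me@example.com", "localhost"

def health_output(dev):
    # 'smartctl -H' prints the drive's overall health self-assessment.
    proc = subprocess.Popen(["smartctl", "-H", dev],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return proc.communicate()[0].decode("utf-8", "replace")

failed = [dev for dev in DRIVES if "PASSED" not in health_output(dev)]

if failed:
    msg = MIMEText("SMART health check failed for: " + ", ".join(failed))
    msg["Subject"] = "NAS drive warning"
    msg["From"], msg["To"] = MAIL_FROM, MAIL_TO
    server = smtplib.SMTP(SMTP_HOST)
    server.sendmail(MAIL_FROM, [MAIL_TO], msg.as_string())
    server.quit()

Run it from cron every hour or so.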


- froggit - 2010-10-17

beckstown Wrote:I have read the whole thread and it seems like the same concerns regarding Unraid and ZFS reappear quite often. So I will try to explain how they both work from what I gathered through this thread. I have to point out that my knowledge regarding ZFS nearly exclusively stems from this thread. I will edit the information if somebody points out flaws.

Similarities of how Unraid and ZFS function

...<SNIP>

Thanks for that good summary of features beckstown.

I have reviewed your original ZFS section, and made corrections and additions - see below. I am awaiting confirmation that the info in section e) is correct and will update this if it's incorrect:

2. ZFS

a) ZPOOL: With ZFS, on one NAS, you may have one or more 'zpools' (ZFS storage pools). It is usual to have just one data pool, however.
Some users also create a separate pool for their Boot Environment, allowing multiple GRUB boot entries and rollback of a failed OS upgrade if required.
The zpool is the top level component in ZFS storage. A zpool is composed of one or more virtual devices, or 'vdevs' as they are known.

b) VDEV: each vdev may employ varying types and levels of redundancy:
i) stripe with no parity.
ii) stripe with parity: (1) RAID-Z1 = capacity of one drive used for parity (single parity), (2) RAID-Z2 = capacity of two drives used for parity (double parity), (3) RAID-Z3 = capacity of three drives used for parity (triple parity).
iii) mirror using n drives. A drive in a mirror may be removed and read on another system.

c) ZFS can address virtually unlimited amounts of data. Its 128-bit design means you could never physically exceed its capacity limits: http://blogs.sun.com/bonwick/entry/128_bit_storage_are_you

d) To read or write data, drives must be spinning. However, with a home NAS there are often long periods of inactivity, and this is where the OS power management can be used to spin disks down to reduce power consumption. ZFS is also faster than unRAID for reading/writing files because of striping/mirroring.

e) Because parity data is distributed (striped) across all RAID-Z1/2/3 vdev drives, ZFS has no concept of a parity drive.
If you lose more drives in a vdev than you have parity, all the data stored in the pool will be lost! This is where backups are important. RAID != BACKUP. Due to this, some ultra-paranoid users insist on use of mirror vdevs, typically a 3-drive mirror vdev, to give the equivalent of double parity, but the added advantage is that drives can be removed and read on another system. However, 3-way mirrors quickly become very expensive for large amounts of data, so are best used only for critical irreplaceable data. And backups should always be done for irreplaceable data.

f) If you create a vdev with drives of different sizes, the smallest drive capacity will be the limit of capacity for the other drives.
Example: If you create a vdev with one 500 GB and three 2 TB drives, you will only be able to use 500 GB per drive. This is why people creating ZFS vdevs use same-sized drives.

g) Adding hard drives: You cannot add hard drives to an existing vdev. Instead, you create a new vdev and add it to your pool.
Example: You already have a RAID-Z3 vdev with six 2 TB hard drives. This means you have 6 TB of storage and another 6 TB for parity. Now you buy four new 2 TB drives which you would like to add to your storage. For this, you have to create a new vdev. The question is then whether you still want RAID-Z3 for the new vdev, as that would mean you only get 2 TB of new storage while 6 TB goes to parity. Consequently, you either settle for less parity protection or buy more drives. For example, let us assume you create a RAID-Z2 vdev. You now have a new vdev with 4 TB of storage and 4 TB of parity. In total you now have 5 disks used for storage and 5 disks used for parity. However, you don't have to lose more than 5 disks to lose data: if you lose more than 2 drives on the RAID-Z2 vdev, that vdev fails and the pool's data is lost with it. So while there is additional safety in this setup, it is not as safe as 5 parity drives protecting all the data. Additionally, every time you add a new vdev, you have to pay extra for new parity disks and lose SATA ports to them. This makes expanding storage more expensive compared to unRAID.

h) Capacity expansion: As well as adding new vdevs to expand pool capacity (see g. above), it is also possible to expand an existing vdev by replacing each of its drives, one at a time, and letting the resilver complete after each replacement. Once all drives are replaced, the new increased capacity becomes available.

i) Using ZFS snapshots provides protection against accidental deletion of files. Snapshots are virtually instant and consume no space initially. Snapshots (1) guard against accidental deletion, and (2) allow full and incremental backups to be sent (via zfs send/receive) to another ZFS-based system over the network. Snapshots also allow a rollback to a last known good state, assuming snapshots are taken regularly.

j) Scrubbing the pool once a month is recommended for home users. The report lists read/write/checksum errors for every drive within each vdev of the pool. This gives the user clear information about drives with read/write/bit-rot problems, enabling drives to be replaced proactively rather than reactively when it may be too late. Data is still available during a scrub, just accessed a little more slowly, which is still fine for watching movies. (A small example of automating this is sketched at the end of this post.)

k) ZFS automatically self-heals any corrupted data it reads, provided the vdev has redundancy (mirror or RAID-Z). Example: you watch a movie that has one or more flipped bits caused by bit rot; when ZFS reads the file, it will automatically detect and correct the corrupted data. The same happens during a scrub operation, which reads all the data and so detects and corrects every corrupted file.

l) Hot spares can be specified when the data pool is created, or added to the data pool later, and are used automatically if drive failure is detected. This minimises the chance of there being two failed drives at the same time, because as soon as a drive fails, it is automatically detected and rebuilt onto the spare drive without user intervention.

m) ZFS data pools can be shared via NFS, SMB/CIFS (Samba) and iSCSI.

n) ZFS has strong mechanisms to help ensure that what you intend to write to disk, is actually what gets written to disk, and read back again later. This is called end-to-end data integrity: http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data
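To make j) and k) a bit more concrete, here is a minimal sketch of the kind of cron job I mean. The pool name 'tank' is made up, and this isn't anything shipped with FreeNAS, just an illustration of the two commands involved:

Code:
#!/usr/bin/env python
# Sketch of a monthly cron job: start a scrub, and separately report any
# pool that is not healthy. 'zpool scrub' returns immediately (the scrub
# runs in the background), so the status check is most useful on a later
# run, once the scrub has finished.
import subprocess

POOL = "tank"                         # placeholder pool name

# Kick off the monthly scrub.
subprocess.call(["zpool", "scrub", POOL])

# 'zpool status -x' prints "all pools are healthy" when there is nothing
# to report; anything else (read/write/checksum errors, degraded vdevs)
# is worth mailing to yourself.
proc = subprocess.Popen(["zpool", "status", "-x"], stdout=subprocess.PIPE)
status = proc.communicate()[0].decode("utf-8", "replace")
if "all pools are healthy" not in status:
    print(status)                     # or feed it to an email script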


- gadgetman - 2010-10-18

osirisjem Wrote:If my UNRAID had a drive that failed, and it is stored in the basement .... how would I know a drive died ?

There is a mod called unMenu which adds a more extensive web GUI and package manager to unRAID. With this installed, you can easily set up email warnings/reports. I have mine set up to warn me via email if there's a drive failure or overheat (mine is set to 40°C), with optional auto-shutdown, and also to send a daily report.


- poofyhairguy - 2010-10-18

osirisjem Wrote:If my UNRAID had a drive that failed, and it is stored in the basement .... how would I know a drive died ?

You check on the web interface. I look at both of mine at least once a week.


- osirisjem - 2010-10-18

poofyhairguy Wrote:You check on the web interface. I look at both of mine at least once a week.

Really?
No instant messages or emails or something?
I want like a fire alarm to go off!


- TugboatBill - 2010-10-18

froggit Wrote:Thanks for that good summary of features beckstown.

I have reviewed your original ZFS section, and made corrections and additions - see below. I am awaiting confirmation that the info in section e) is correct and will update this if it's incorrect:

...<SNIP>

Are you sure you didn't miss anything?


- gadgetman - 2010-10-18

After reading more about this, I'm even more convinced that those who advocate ZFS over unRAID for strictly media-collecting use ('replaceable' ripped/downloaded media, not your one-of-a-kind home/work movies) are either unfamiliar with unRAID or they're employing protections against theoretical catastrophes with a very remote chance of occurring.

Bitrot: very hard to find real stories about it. The theoretical occurrence is about one bit error per 12 TB transferred (manufacturers' data, HDD spec sheets), and there's a >99% chance it lands somewhere other than a file's header, so the file stays viewable with at most a sub-1/24th-of-a-second glitch (one frame at 24 fps) in a minute part of the screen.

Multiple drive failures (> number of parity drives): you should monitor SMART info for early warnings to avoid drive failures due to age (use email auto-warnings), and get drives from different manufacturing batches to avoid multiple drives failing together due to manufacturing defects. Worst comes to worst, you will lose data only on the failed drives, not your whole RAID array like on ZFS.

Performance issues: unless you're running your media server to feed a small motel, simultaneous playback of full HD streams shouldn't be a problem. These systems are generally optimized for write-once-read-many use, so you should avoid doing file processing directly on the NAS anyway.

Protection against accidental file deletion: this should be implemented as a 'recycle bin' on the Samba VFS, so you have fine-grained control over every file to check/restore/flush. ZFS snapshots are the wrong tool for this.
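Roughly like this, for anyone who hasn't used it - a sketch only, with a made-up share name and path, using Samba's vfs_recycle module (check your Samba version's docs before copying it):

Code:
[media]
   path = /mnt/media
   read only = no
   # Deleted files are moved into a per-share wastebasket instead of
   # being removed, so individual files can be restored or flushed.
   vfs objects = recycle
   recycle:repository = .recycle
   recycle:keeptree = yes
   recycle:versions = yes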

Growth: after your initial 'library seeding', media files are usually acquired/collected in small batches. With unRAID you can add one drive at a time, so your latest drive purchase is always the biggest size you can afford at the best value (cost per GB), since drive prices keep falling and capacities keep growing.

Example: one year of buying a 1 TB drive every 2 months, starting off with 3 drives in January...

Unraid
Jan: 2TB data + 1TB parity = 2TB usable
Mar: add 1TB = 3TB usable
May: add 1TB = 4TB usable
Jul: add 1TB = 5 TB usable
Sep: add 1TB = 6TB usable
Nov: add 1TB = 7TB usable
Jan: add 1TB = 8TB usable

Zfs
Jan: 2TB data + 1TB parity = 2TB usable
Mar: buy 1TB = wait...
May: buy 1TB = wait...
Jul: add 2TB data + 1TB parity = 4TB usable
Sep: buy 1TB = wait...
Nov: buy 1TB = wait...
Jan: add 2TB data + 1TB parity = 6TB usable

Of course, you can buy all the drives in advance and keep them spinning as empty space, but by the time you actually make use of them, there are bigger, faster and cheaper drives out there.


- froggit - 2010-10-18

TugboatBill Wrote:Are you sure you didn't miss anything?

It's quite possible - hee hee Wink


- froggit - 2010-10-19

gadgetman Wrote:After reading more about this, I'm even more convinced that those who advocate ZFS over unRAID for strictly media-collecting use ('replaceable' ripped/downloaded media, not your one-of-a-kind home/work movies) are either unfamiliar with unRAID or they're employing protections against theoretical catastrophes with a very remote chance of occurring.

Personally, I'm unfamiliar with unRAID because I don't use it. I understand the way it works, though, from the descriptions here - i.e. it doesn't stripe, data just gets stored on a drive, when a drive gets full you add another one, and a single parity drive stores all your parity data and must be as big as or larger than your biggest data drive, etc.

And as I've said before, I can see the advantage of using any-sized drives because it enables you to make use of any drives you have, or add different-sized drives to the mix later on as you acquire more drives.

However, and this is a personal thing, I never want to waste time re-ripping stuff again and re-identifying movies so that they can be scraped accurately for XBMC's library mode. And with single parity, as unRAID uses, there is quite a good chance that I would at some point end up doing just that, and I'm not prepared to. Therefore I would rather spend a little more money on extra parity and backups. It's that simple.

I understand that many unRAID users here don't share my aversion to loss, or think it won't happen, or if it does happen they will either be able to recover the failed drive, or failing that they say they don't mind re-ripping the lost movies (how do you know which ones?).

Quote:Bitrot: very hard to find real stories about it. The theoretical occurrence is about one bit error per 12 TB transferred (manufacturers' data, HDD spec sheets), and there's a >99% chance it lands somewhere other than a file's header, so the file stays viewable with at most a sub-1/24th-of-a-second glitch (one frame at 24 fps) in a minute part of the screen.

If you're talking about bit errors during data transfers, you're probably talking about silent data corruption rather than bit rot. Anyway, here are a couple of interesting links on silent data corruption:
http://www.enterprisestorageforum.com/technology/features/article.php/3704666/Keeping-Silent-About-Silent-Data-Corruption.htm
http://www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191

Regarding operational failures and latent defects, which include bit rot, the reason you probably can't find many stories about them is that before tools like ZFS scrub came along, most users had no easy means of detecting corrupted data and were therefore usually unaware of the problem. Here are a couple of interesting links:
http://portal.acm.org/citation.cfm?id=1317394.1317403
http://portal.acm.org/ft_gateway.cfm?id=1317403&type=pdf&coll=GUIDE&dl=GUIDE&CFID=109327480&CFTOKEN=70952282

Quote:Multiple drive failures (> number of parity drives): you should monitor SMART info for early warnings to avoid drive failures due to age (use email auto-warnings), and get drives from different manufacturing batches to avoid multiple drives failing together due to manufacturing defects. Worst comes to worst, you will lose data only on the failed drives, not your whole RAID array like on ZFS.

SMART data is generally not considered very reliable for predicting failure, so unless you have no other means of being aware of your drives' state, it is probably best not to place too much faith in it. See here, and search for SMART:
http://portal.acm.org/ft_gateway.cfm?id=1317403&type=pdf&coll=GUIDE&dl=GUIDE&CFID=109327480&CFTOKEN=70952282
http://labs.google.com/papers/disk_failures.pdf

Google's paper above, entitled 'Failure Trends in a Large Disk Drive Population', states:
"Our analysis identifies several parameters from the drive’s self monitoring facility (SMART) that correlate highly with failures.
Despite this high correlation, we conclude that models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures."


Quote:Performance issues: unless you're running your media server to feed a small motel, simultaneous playback of full HD streams shouldn't be a problem. These systems are generally optimized for write-once-read-many use, so you should avoid doing file processing directly on the NAS anyway.

I agree. Performance should be fine unless multiple HD streams are played, which is probably not very likely.

Quote:Protection against accidental file deletion: this should be implemented as a 'recycle bin' on the Samba VFS, so you have fine-grained control over every file to check/restore/flush. ZFS snapshots are the wrong tool for this.

I don't know why you say snapshots are the wrong tool for this, as they work perfectly for guarding against accidental deletion. Could you explain?
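For what it's worth, this is roughly how I'd use them for that purpose - a sketch only, with an invented dataset name, run daily from cron:

Code:
#!/usr/bin/env python
# Sketch of a daily snapshot-rotation job (dataset name is made up).
# Deleted files remain readable under .zfs/snapshot/<name>/ until the
# snapshot that still references them is destroyed.
import datetime
import subprocess

DATASET = "tank/media"
KEEP_DAYS = 30
today = datetime.date.today()

# Take today's snapshot, e.g. tank/media@2010-10-19
subprocess.call(["zfs", "snapshot", "%s@%s" % (DATASET, today.isoformat())])

# List this dataset's snapshots and destroy any older than KEEP_DAYS.
proc = subprocess.Popen(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET],
    stdout=subprocess.PIPE)
for name in proc.communicate()[0].decode("utf-8", "replace").split():
    stamp = name.partition("@")[2]
    try:
        year, month, day = [int(p) for p in stamp.split("-")]
        age = (today - datetime.date(year, month, day)).days
    except ValueError:
        continue                      # not one of our dated snapshots
    if age > KEEP_DAYS:
        subprocess.call(["zfs", "destroy", name])

To recover a single file you just copy it back out of .zfs/snapshot, and for a mass deletion you can roll the whole dataset back to the last snapshot.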

Quote:Growth: ... <snip>

Personally, I plan my storage before buying it - i.e. how much do I need now, and how much do I anticipate needing in 2 or 3 years' time. Then I add a percentage for error, then add on the amount of redundancy I want, and finally I look for the best capacity/price ratio and buy what I need. So your buying model wouldn't apply for me.

However, as I said earlier, I do see the ability of unRAID to add drives ad hoc as a definite advantage, and you can't do that with ZFS so easily, as we've already stated.


- beckstown - 2010-10-19

froggit Wrote:Thanks for that good summary of features beckstown.

I have reviewed your original ZFS section, and made corrections and additions - see below. I am awaiting confirmation that the info in section e) is correct and will update this if it's incorrect:

snip

Hey Froggit,

thanks for the feedback Smile

I will change my post and add your info. However, at the moment I am very busy and I don't think I will have time before next Friday to thoroughly read your post and incorporate it into mine. You know, my post was already a monster, and I was wondering if anybody really wanted to read that wall of text. So I will try to take the most important points from your post which really highlight differences, because what you said about sharing (NFS, SMB/CIFS) and power management being able to spin down drives when not in use are features offered by both systems, and I really wanted to focus a bit more on the differences in my post.

But thank you very much for your post and the information about striping. That is something which I obviously did not understand correctly before Laugh


- jvdb - 2010-10-19

gadgetman Wrote:After reading more about this, I'm even more convinced that those who advocate ZFS over unRAID for strictly media-collecting use ('replaceable' ripped/downloaded media, not your one-of-a-kind home/work movies) are either unfamiliar with unRAID or they're employing protections against theoretical catastrophes with a very remote chance of occurring.

I think it's unwise to make assumptions about other people's reasoning. We all have different experience and knowledge that prompts different decisions. I have seen bit rot on arrays that I maintain--degraded mirrors on 3ware arrays. Granted this has only happened a couple of times, and on drives that see much more use than the ones in my home server. The funny part is that I'm not all that concerned about losing my whole array. The really important stuff is backed up offline/offsite, the rest is replaceable--in fact much of it I would probably choose not to replace.

For me it comes down to this: I can have a free commercial grade solution -or- pay for a consumer grade solution.

Quote:you should monitor SMART info for early warnings to avoid drive failures due to age

Once again, I wouldn't rely on SMART.

http://labs.google.com/papers/disk_failures.pdf

Quote:Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. This result suggests that SMART models are more useful in predicting trends for large aggregate populations than for individual components.



- kaiser423 - 2010-10-19

jvdb Wrote:I think it's unwise to make assumptions about other people's reasoning. We all have different experience and knowledge that prompts different decisions. I have seen bit rot on arrays that I maintain--degraded mirrors on 3ware arrays. Granted this has only happened a couple of times, and on drives that see much more use than the ones in my home server. The funny part is that I'm not all that concerned about losing my whole array. The really important stuff is backed up offline/offsite, the rest is replaceable--in fact much of it I would probably choose not to replace.

...<SNIP>

Exactly, neither unRAID nor ZFS is the one great solution to everyone's problems. Each has its own problem space that it addresses.

Hard drives do a very, very good job of detecting and correcting bit rot (they use a similar checksumming system at the firmware level), and will probably continue to do so. So bit rot is largely mitigated by the drives themselves correcting for it (and bits do get flipped quite often at the HDD level, which is why SMART can have such a hard time determining when something is going bad). But I live at high altitude with lots of lightning, and lots of RF tests nearby, so I see bit rot at work somewhat often. ZFS adds another level of protection to that chain, but I mainly use it at home because it's also just a really, really nice, robust, fast RAID implementation.

I bought a little NAS off Craigslist for $50. It's actually a very fast, nice Intel NAS. I can fit 4 HDDs in it. I have no idea why I'd use unRAID over FreeNAS/ZFS/btrfs. I can't add hard drives (well, it does have external SATA, but that's no fun), so unRAID has no appeal.

If I had a tower that I could stick 12 HDDs in, I'd probably go unRAID, or simple JBOD plus a backup drive somewhere, or just smartly use partitions and LVM to create an unRAID-like setup.

Different solutions for different jobs.

The funny part is that unRAID will probably migrate to btrfs as its underlying file system, which is largely an evolution of ZFS, because it just makes sense for them to do so. They'll get checksum redundancy and all types of really neat features. Heck, it might even be such a good upgrade that they'll charge ya again for it Wink