SSD Controllers and File Systems

ACM Queue carries an interesting article on the effects of deduplication on file system reliability.

The main observation is that the flash disk controller deduplicates blocks to avoid writing the same block twice to the disk. This is beneficial because it saves space and also reduces the number of write operations to the flash, directly improving its lifespan.
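To make the mechanism concrete, here is a minimal sketch of a content-addressed block layer, an assumed, simplified model of what a deduplicating controller does (the class name and layout are hypothetical, not any vendor's actual design). It also shows why two logical copies can end up backed by a single physical block:

```python
import hashlib

class DedupBlockStore:
    """Toy model of a deduplicating flash translation layer.

    Logical block addresses (LBAs) map to a content hash; blocks with
    identical contents share one physical slot, so a duplicate write
    costs no extra flash space or wear.
    """
    def __init__(self):
        self.logical = {}    # LBA -> content hash
        self.physical = {}   # content hash -> block data

    def write(self, lba: int, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.physical.setdefault(digest, data)  # stored at most once
        self.logical[lba] = digest

    def read(self, lba: int) -> bytes:
        return self.physical[self.logical[lba]]

store = DedupBlockStore()
store.write(0, b"superblock metadata")
store.write(1000, b"superblock metadata")  # redundant copy: deduplicated away
print(len(store.physical))                 # 1 -- only one physical copy exists
# If that single physical block goes bad, reads of LBA 0 *and* LBA 1000
# both return corrupt data, even though the file system wrote two copies.
```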

The glitch: file systems store redundant copies of the superblock and other metadata blocks on the disk for reliability, so that if one copy goes bad, another copy can still be read. With flash controllers doing deduplication, only one physical copy is present on the disk. Which means if that one copy goes bad, all logical copies go bad. Which is bad.

One possible solution to this problem would be to embed something in each redundant copy that makes its bytes unique, say a per-copy sequence number, so the controller sees distinct blocks and cannot deduplicate them. This makes file system operations a bit slower, but is still achievable, in my opinion, as the sketch below shows.
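Here is a rough sketch of that idea, assuming a hypothetical on-disk layout where each superblock copy carries a 4-byte copy index in its header. Because each copy now differs by at least those bytes, a controller that deduplicates by hashing whole blocks sees N distinct blocks instead of N identical ones:

```python
import hashlib
import struct

BLOCK_SIZE = 4096  # assumed block size

def make_superblock_copy(payload: bytes, copy_index: int) -> bytes:
    """Build one on-disk copy of the superblock (hypothetical layout).

    The per-copy index is embedded in the block header, so every
    redundant copy has distinct bytes and hashes differently,
    defeating whole-block deduplication in the controller.
    """
    header = struct.pack("<I", copy_index)   # 4-byte copy number
    body = header + payload
    return body.ljust(BLOCK_SIZE, b"\x00")   # pad to the block size

payload = b"superblock metadata"             # stand-in for the real metadata

copies = [make_superblock_copy(payload, i) for i in range(4)]

# Identical payload, but every copy hashes differently -> no dedup
for i, blk in enumerate(copies):
    print(i, hashlib.sha256(blk).hexdigest()[:16])
```

The cost is the extra bookkeeping: on reads and consistency checks, the file system must strip or ignore the per-copy field before comparing copies, which is the slight slowdown mentioned above.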

So, hardware-based deduplication has its own share of issues that need to be tackled at the file system layer.
