Thursday, October 18, 2007

Good De-Duplication Questions

An anonymous reader posted the following comments on the De-Duplication post a few days ago, and I thought they were good enough to share on the main page so everyone can read them. In the future, I hope more of you will post with your name so we can give credit where credit is due. If you are currently using a De-Duplication product, we would love to hear from you.

I have several issues with Backup and DeDupe (most of which are TSM related).

First off, why are people retaining so much data within TSM (i.e., why are retention periods increasing)? TSM is supposed to be used in response to a data loss event: data is lost through hardware failure, logical failure, or human error, and we turn to TSM to recover it. But an increasing number of people are using it as a filing cabinet, placing infinite retention on data, and I don't think TSM was truly designed for that. That is more of a data management function, akin to HSM and archiving. Yes, TSM has archiving, but its functionality is pretty weak; it really needs to be married to an application that can do better indexing and classification to make it powerful.

So if the data you are storing within TSM cannot truly be used to support a data recovery function, why keep it? Are you really going to restore a file from 180 days ago because someone suddenly discovered they deleted something four months back that they now need? I haven't seen much of that; such occurrences are typically rare, yet the outlay to stay consistent with such a policy can be expensive. And forget about just the cost of the media; there's much more to it than that.

DeDupe becomes more efficient when you retain more data in the backups, but more versions mean a bigger TSM DB, which often means you have to spawn another TSM server to keep things well maintained.
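To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python. It is purely illustrative: the front-end size, change rate, dedupe ratio, file counts, and per-object database cost are all assumptions rather than measurements from any real environment, but it shows how longer retention grows both the deduplicated pool and the TSM DB that has to track every version.

```python
# Illustrative sketch only -- none of these numbers come from a real TSM server.
# It estimates how longer retention grows both the stored data (with and
# without dedupe) and the TSM database that has to track every version.

front_end_tb = 10.0          # primary data protected (assumed)
daily_change_rate = 0.02     # 2% of the data changes per day (assumed)
dedupe_ratio = 4.0           # assumed 4:1 reduction on the back-end store
objects_per_tb = 2_000_000   # assumed average file count per TB
db_bytes_per_object = 600    # rough TSM DB cost per tracked version (assumed)

for retention_days in (30, 90, 180):
    changed_tb = front_end_tb * daily_change_rate * retention_days
    raw_stored_tb = front_end_tb + changed_tb            # progressive incremental
    deduped_tb = raw_stored_tb / dedupe_ratio
    tracked_objects = objects_per_tb * (front_end_tb + changed_tb)
    db_gb = tracked_objects * db_bytes_per_object / 1e9
    print(f"{retention_days:3d} days retention: "
          f"raw {raw_stored_tb:6.1f} TB, deduped {deduped_tb:5.1f} TB, "
          f"TSM DB ~{db_gb:5.1f} GB")
```

Under these assumed numbers the deduped pool stays manageable as retention grows, but the TSM DB keeps growing with the version count regardless, which is exactly the pressure that pushes people toward another TSM server.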

In TSM land we're very conscious of the TSM DB. It's the heart of the system, and we go to great lengths to improve its performance and protect it. In the event that it does become corrupt, we can roll it back using a TSM DB backup and the reuse delay setting. The DeDupe engine must also have an index/DB of its own, so what do we do if that becomes corrupt? And if it does, how do we ensure we can get it synced up with TSM again?
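Conceptually, checking that the two catalogs still agree comes down to comparing what TSM thinks it stored against what the dedupe index still references. The Python sketch below is hypothetical, not a vendor tool: the object-ID lists and the reconcile helper are invented for illustration, and how you would actually extract those lists depends entirely on the TSM server and the dedupe product in question.

```python
# Hypothetical consistency check: compare the object IDs TSM believes it
# stored against the references the dedupe store's index still holds, and
# flag anything present on only one side. Both input lists are made up here.

def reconcile(tsm_object_ids, dedupe_object_ids):
    tsm = set(tsm_object_ids)
    dedupe = set(dedupe_object_ids)
    missing_on_dedupe = tsm - dedupe    # TSM expects these; a restore would fail
    orphaned_on_dedupe = dedupe - tsm   # store holds data TSM no longer tracks
    return missing_on_dedupe, orphaned_on_dedupe

# Toy example with fabricated IDs
missing, orphaned = reconcile(
    tsm_object_ids=["obj-1001", "obj-1002", "obj-1003"],
    dedupe_object_ids=["obj-1001", "obj-1003", "obj-9999"],
)
print("Restores at risk for:", sorted(missing))
print("Reclaimable orphans :", sorted(orphaned))
```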

How well will DeDupe work when data is reclaimed? TSM rebuilds aggregates during reclamation, so how much work is that for the DeDupe engine, and what will the I/O pattern look like on the back-end storage?
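As a very rough illustration of what a single reclamation pass asks of the back-end store, the sketch below assumes a sequential-access disk volume size and a reclamation threshold (both invented numbers); the point is simply that the still-valid data has to be read off the old volume and written to the new one, which the dedupe engine sees as both a rehydrate and a re-deduplicate.

```python
# Rough, assumption-laden sketch of the data moved by one reclamation pass.
volume_size_gb = 100       # sequential-access disk volume size (assumed)
reclaim_threshold = 0.60   # reclaim once 60% of the volume is reclaimable space
valid_fraction = 1.0 - reclaim_threshold

logical_moved_gb = volume_size_gb * valid_fraction   # valid data TSM must move
backend_io_gb = logical_moved_gb * 2                 # read old volume + write new one

print(f"Valid data moved per volume                       : {logical_moved_gb:.0f} GB")
print(f"Back-end I/O for the dedupe store (read + rewrite): {backend_io_gb:.0f} GB")
```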

How does this work in terms of recovery, both operationally and in a disaster? Single file restores: probably great. Recovery of lots of files: probably not too bad; when recovering lots of small files the client is typically the bottleneck, so I'm not sure the dedupe engine would impact it much. What about recovery of a large DB? That one I am more skeptical of. We can get great performance from either tape or disk, and potentially the best performance from tape, provided we can get enough mount points and the client isn't bottlenecked in some way. But what if the data is deduped on disk? Will it stream from disk, or will we get more random I/O patterns? If it's 10TB that needs to be recovered, that still equates to 10TB that has to be pushed through TSM, even if it's been deduped to 2TB on the physical disk behind the dedupe engine.
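A quick back-of-the-envelope calculation makes the point about logical versus physical size. The throughput figures in the Python sketch below are assumptions for illustration, not benchmarks of any product; what matters is that the physical (deduped) footprint never enters the math, because the full logical size still has to be rehydrated and pushed through TSM.

```python
# Restore time is governed by the logical size moved through TSM, not the
# deduped footprint on disk. All throughput numbers below are assumptions.

logical_tb = 10.0   # size of the database as the application sees it
physical_tb = 2.0   # assumed footprint on the dedupe store (never used below)

assumed_throughputs_mb_s = {
    "streaming tape, enough mount points":   300,
    "deduped disk, mostly sequential reads": 250,
    "deduped disk, fragmented/random reads":  80,
}

for scenario, mb_per_s in assumed_throughputs_mb_s.items():
    hours = logical_tb * 1024 * 1024 / mb_per_s / 3600
    print(f"{scenario:38s}: ~{hours:4.1f} h to move {logical_tb:.0f} TB logical")
```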

What about DR, where you want to recover multiple clients at the same time? Good storage pool design can alleviate some of the issues with tape contention, and disk may offer some advantages because the media supports concurrent access (though bear in mind that TSM may not, depending on how you have configured it). If that disk is deduped, though, you potentially have fewer spindles at your disposal. That could mean more I/O contention and perhaps more difficulty streaming data.
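The spindle-contention worry can also be put into numbers with another small, assumption-heavy sketch: give each tape restore its own drive, and make all the disk restores share one aggregate figure for the deduped pool. The drive and pool throughputs here are invented for the example.

```python
# Assumed figures: each tape client gets a dedicated drive; all disk clients
# share one deduped pool whose aggregate throughput is fixed under load.

clients = 8
tape_drive_mb_s = 200              # per dedicated drive (assumed)
deduped_pool_aggregate_mb_s = 800  # whole pool under concurrent restores (assumed)

per_client_tape_mb_s = tape_drive_mb_s
per_client_disk_mb_s = deduped_pool_aggregate_mb_s / clients

print(f"Per-client throughput, tape (dedicated drive): {per_client_tape_mb_s:.0f} MB/s")
print(f"Per-client throughput, shared deduped disk   : {per_client_disk_mb_s:.0f} MB/s")
```

Under these assumptions the tape restores scale with the number of drives you can mount, while the deduped disk restores divide a single aggregate, which is the contention the comment is pointing at.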
