Tuesday, October 09, 2007

Data DeDuplication - Been There Done That!

I just got off a pretty good NetApp webcast covering their VTL and FAS solutions. One of the items they discussed was the data deduplication feature in their NAS product. When the IBM rep spoke up they discussed TSM's progressive-incremental backup model, and I find it interesting to contrast TSM's approach with deduplication, the fast-growing feature of disk-based storage. Deduplication really does save TONS of space with the competing backup tools, since they usually follow the FULL+INC model and so back up files even when they haven't changed. Deduplication saves them room by removing those duplicate, unchanged files, but that only shows how superior TSM is: it doesn't require that kind of wasted processing in the first place. It would be interesting to see how much space is saved on redundant OS files, but that is still minor compared to the weekly full process that wastes so much space.
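
Here's a rough back-of-the-envelope sketch of the difference (all of the sizes, change rates and retention figures below are made-up illustrations, not measurements):

```python
# Rough comparison of stored backup data under a weekly FULL + daily
# incremental scheme versus TSM-style progressive (incremental-forever)
# backup. All numbers are hypothetical and the model is deliberately simple.

front_end_tb = 10.0        # total client data protected (TB) - assumption
daily_change_rate = 0.02   # fraction of data changing each day - assumption
retention_weeks = 4        # weeks of backups kept - assumption

# FULL+INC: one full copy per week plus six daily incrementals,
# all retained for the whole window.
full_inc_tb = retention_weeks * (front_end_tb + 6 * front_end_tb * daily_change_rate)

# Progressive incremental: one baseline copy, then only changed files
# every day, with no periodic re-send of unchanged data.
progressive_tb = front_end_tb + retention_weeks * 7 * front_end_tb * daily_change_rate

print(f"FULL+INC stores    ~{full_inc_tb:.1f} TB")
print(f"Progressive stores ~{progressive_tb:.1f} TB")
print(f"Difference         ~{full_inc_tb - progressive_tb:.1f} TB of re-sent, unchanged data "
      "(the part dedupe mostly wins back)")
```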

This brings us to the next item: disk-based backup. It is definitely going to grow over time, but costs will have to come down before it can fully replace tape. The two issues I see with disk-only backups are DRM/portability and capacity/cost. If you cannot afford duplicate sites with the data mirrored, you are left using tape for offsite storage. Portability can also be an issue with disk. For example, we are migrating some servers from one data center to another and used the export/import feature; we have also moved TSM tapes from one site to another and rebuilt the TSM environment there. Doing that with disk is more time consuming: you either need the same disk solution at the other end plus the network capacity to mirror the data (slow over a thin connection), or you have to move the whole hardware solution. Tape in this scenario is a lot easier to deal with.

When it comes to capacity vs. cost, there is a definite difference that will keep many on tape for years to come. Many customers want long-term retention of their data, say 30+ days for inactive files and TDP backups (sometimes longer for e-mail and SARBOX data). So what is the cost of that kind of retention (into the PB range) on disk compared to tape? Currently it's no contest: tape wins the cost-vs-capacity comparison, though hopefully that will change someday. So if any of you have disk-based or VTL solutions, chime in; I'd like to hear what you have to say and how it's worked for you.
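
And a similarly rough sketch of the capacity-vs-cost question. The per-TB prices and dedupe ratio below are placeholders only, not quotes; plug in your own numbers, since the point is the shape of the comparison rather than the dollar figures:

```python
# Back-of-envelope cost comparison for long-term retention on disk vs tape.
# Every figure here is an illustrative placeholder, not real pricing.

retained_pb = 1.0                 # long-term retention to keep (PB) - assumption
retained_tb = retained_pb * 1024

disk_cost_per_tb = 1000.0         # placeholder: disk incl. controller/power/floor space
tape_cost_per_tb = 100.0          # placeholder: media plus amortized drives/library
dedupe_ratio = 5.0                # assumed dedupe ratio on the disk/VTL target

disk_total = retained_tb / dedupe_ratio * disk_cost_per_tb
tape_total = retained_tb * tape_cost_per_tb

print(f"Disk (after {dedupe_ratio:.0f}:1 dedupe): ${disk_total:,.0f}")
print(f"Tape (no dedupe):          ${tape_total:,.0f}")
```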

5 comments:

  1. I have several issues with backup and DeDupe (most of which are TSM related).

    First off, why are people retaining so much data within TSM (i.e. why does the retention period keep increasing)? TSM is supposed to be used in response to a data loss event: data is lost through hardware failure, logical failure or human error, and we turn to TSM to recover it. But an increasing number of people are using it as a filing cabinet, placing infinite retention on data. I don't think TSM was truly designed for that; it's more of a data management function akin to HSM and archiving. Yes, TSM has archiving, but I think it's pretty weak in terms of functionality; it really needs to be married to an application that can do better indexing and classification to make it powerful.

    So if the data you are storing within TSM cannot truly be used to support a data recovery function, why keep it? Are you really going to restore a file from 180 days ago because someone suddenly discovered that they deleted a file four months ago that they now need? I haven't seen much of that; occurrences are typically rare, yet the outlay to stay consistent on such a policy can be expensive. Forget about just the cost of the media - there's much more to it than that.

    DeDupe becomes more efficient the more data you retain in your backups, but more versions = a bigger TSM DB, which often means you have to spawn another TSM server to keep things well maintained.

    In TSM land we're very conscious of the TSM DB. It's the heart of the system, and we go to great lengths to improve its performance and protect it. In the event that it does become corrupt, we can roll it back using a TSM DB backup and reuse delay. The DeDupe engine must also have an index/DB of its own - what do we do if that becomes corrupt? And if it does, how do we ensure we can get it synced up with TSM again?

    How well will DeDupe work when data is reclaimed? TSM rebuilds aggregates during reclamation, so how much work is that for the DeDupe engine, and what is the I/O pattern going to look like on the back-end storage?

    How does this work in terms of recovery, both operationally and in a disaster? Single-file restores: probably great. Recovery of lots of files: probably not too bad - when recovering lots of small files the client is typically the bottleneck, and I'm not sure the dedupe engine would change that much. What about recovery of a large DB? That one I'm more skeptical about. We can get great performance from either tape or disk - potentially the best from tape, provided we can get enough mount points and the client isn't bottlenecked in some way. But what if it's deduped on disk? Will the data stream off the disk, or will we get more random I/O patterns? If it's a 10TB database that needs to be recovered, that still equates to 10TB that has to be pushed through TSM, even if it's been deduped down to 2TB on the physical disk behind the dedupe engine (there's a rough sketch of that arithmetic after the comments).

    What about DR, where you want to recover multiple clients at the same time? Good storage pool design can alleviate some of the issues with tape contention, and disk may offer some advantages because the media supports concurrent access (bear in mind that TSM may not, depending on how you've configured it). If that disk is deduped, though, you potentially have fewer spindles at your disposal, which could mean more I/O contention and more difficulty streaming data.

  2. Hey Chad, think of what Data DeDuplication could mean for TSM disk based storage pools.

    300 clients all with the same copy of C:\WINDOWS files (for example) storing into a disk storage pool with Data DeDuplication = Single Instance Store on your staging pools.

  3. Yeah, I agree Geoffrey - between that and DBs constantly doing fulls, block-level DeDupe could save tons of space. The thing that kills me is how much the other backup tools already inflate the need for DeDupe through their use of fulls + incrementals.

  4. The c:\windows example sounds like a big savings in theory, but I think if you added up the backup occupancy of all your \\server\c$ filespaces and took that as a percentage of total occupancy, it would be less impressive. My average c:\windows directory is less than 4G. Across 200 servers that's 800Gig. If I could dedupe that at 20:1, I'd go from storing 800Gig to 40Gig, a saving of 760Gig. But if the total onsite occupancy is 50TB, the overall saving is less than 1/50th of the capacity (the arithmetic is spelled out in a sketch after the comments). Anything is good I suppose?

    I think that most shops could achieve better savings just by putting more thought into retention policies or by deleting data from TSM that nobody cares about anymore.

    Chad is correct: DeDupe is going to be most beneficial where the backup tool forces periodic full copies. For TSM, that could (but won't necessarily) be DB backups, NDMP backups and perhaps archives (especially for people with poor data management skills who archive the same files over and over again - oh wait, that's what the NetBackup guys do every quarter). The DB backup one is interesting: what happens if you do a defrag/re-org of the DB? The actual data doesn't change, but the structure on disk could change quite dramatically.

  5. The problem with deleting is, of course, that people actually have to care about what data they keep. And these days there are more and more regulations demanding that data be kept for years on end, including e-mail.

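A rough sketch of the restore arithmetic raised in the first comment: the bytes TSM has to push during a restore are the logical, un-deduped bytes, so the dedupe ratio on the back-end disk doesn't shrink the restore window. The sizes and throughput below are hypothetical:

```python
# Restore time depends on the logical bytes TSM moves to the client,
# not on the physical (deduped) bytes behind the dedupe engine.
# All figures are hypothetical.

logical_tb = 10.0       # size of the database being restored
deduped_tb = 2.0        # what it occupies on the back-end disk (irrelevant to the client)
throughput_mb_s = 300.0 # assumed sustained restore throughput

seconds = logical_tb * 1024 * 1024 / throughput_mb_s
print(f"~{seconds / 3600:.1f} hours to push {logical_tb:.0f} TB through TSM, "
      f"even though only {deduped_tb:.0f} TB sits on the physical disk")
```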
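
And the occupancy arithmetic from the fourth comment, spelled out with the same figures the commenter used (200 servers, ~4G of c:\windows each, 20:1 dedupe, 50TB total occupancy):

```python
# How much a C:\WINDOWS-only dedupe saves relative to total onsite
# occupancy, using the commenter's example figures.

servers = 200
windows_dir_gb = 4.0        # average c:\windows size per server
dedupe_ratio = 20.0         # assumed 20:1 on that data
total_occupancy_tb = 50.0   # total onsite backup occupancy

raw_gb = servers * windows_dir_gb   # 800 GB of c:\windows data
stored_gb = raw_gb / dedupe_ratio   # 40 GB after dedupe
saved_gb = raw_gb - stored_gb       # 760 GB saved

pct = saved_gb / (total_occupancy_tb * 1024) * 100
print(f"Saved {saved_gb:.0f} GB, about {pct:.1f}% of {total_occupancy_tb:.0f} TB total occupancy")
```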