Monday, November 27, 2006

GPFS Revisited

Well, I am still having issues with GPFS. It turns out that mmbackup won't work with a filesystem of this size either, and a chat with IBM support was not encouraging. Here is what one of our system admins found out:

The problem was eventually resolved by the IBM GPFS developers. It turns out they never expected their filesystem to be used in this configuration (i.e., 100,000,000+ inodes on a 200GB filesystem). While the filesystem was down, we tried multiple times to copy the data off to a different disk. Due to the sheer number of files on the filesystem, every attempt failed. For instance, I found the following commands would have taken weeks to complete:

# cd $src; find . -depth -print | cpio -pamd $dest
# cd $src; tar cf - . | (cd $dest; tar xf -)
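
One thing we never got to try, and I'll stress this is only a sketch: since a single serial walk is the bottleneck, you could fan out one copy per top-level directory with GNU xargs -P (the per-directory split and the count of 8 parallel jobs are my assumptions, not tested figures):

# cd $src
# ls -1 | xargs -P 8 -I{} sh -c "find '{}' -depth -print | cpio -pamd $dest"

Whether that actually helps depends on the disks keeping up rather than the metadata scan; with this many inodes it may still crawl.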


Even with the snapshot, I don't think TSM is going to be able to solve this one. This will probably need to be done at the EMC level, where a bit-level copy can be made.
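
The appeal of going at it from the array side is that a raw block copy never touches the inodes at all. Roughly (and again, only a sketch: the device names are placeholders, and the filesystem would need to be unmounted or the snapshot quiesced first):

# umount /gpfs1
# dd if=/dev/emcpowera of=/dev/emcpowerb bs=1M

On the EMC side the equivalent would be a TimeFinder/BCV mirror split, which does the same thing inside the array.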

So GPFS is not all it was thought to be. Pass it along, and make sure you avoid GPFS for applications that will produce large numbers of files.

Monday, November 06, 2006

Looking For Contributors

If you work in a large TSM environment and have run into issues or tasks that you think would help others, please let me know. I am looking for contributors to keep this blog current and make it more open. Send me an e-mail with contribution ideas (you'll need more than one) and what type of environment you work in.

CDP For Unix?

Has anyone heard when (if ever) Continuous Data Protection for Files will be available on the Unix platform? I could really use this feature with my GPFS system. Since the application creates hundreds of metadata files daily and is proprietary (hence no TDP support), I am getting killed by the backup window: each volume already has in excess of 4 million files, and incrementals take close to 48 hours to finish. Has anyone heard anything at symposiums or seminars?
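
In the meantime, one workaround I've been looking at (just a sketch; the /gpfs1/data01 paths and the two-way split are made-up examples) is the VIRTUALMOUNTPOINT option in dsm.sys, which makes TSM treat each subtree as its own filespace so the incrementals can run in parallel:

* dsm.sys server stanza additions
VIRTUALMOUNTPOINT /gpfs1/data01
VIRTUALMOUNTPOINT /gpfs1/data02

# dsmc incremental /gpfs1/data01 &
# dsmc incremental /gpfs1/data02 &
# wait

It doesn't cut the scan work, it just spreads it across processes, so it's no substitute for real CDP or journal-based backup on Unix.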