Saturday, October 27, 2007

ACSLS Shared Library DRM Checkout

I was recently asked by a reader to show how I check out my DRM tapes in a shared ACSLS library. If you are not aware, ACSLS does not really support true DRM checkout within a shared library environment. What actually happens when you run the checkout is that ACSLS tries to check the tapes out one at a time, prompting for a request response before checking out the next tape. I found this out when we implemented our STK L5510 library and the checkout process broke. So I went searching for some kind of answer and found APAR IC45537, which states the problem has been known since March of 2005. IBM has not resolved the issue within TSM but has relied on a local fix instead. So I decided to post the script I created here, to help out those of you who have an ACSLS library, want to share it between multiple TSM instances, and don't want to deal with Gresham's software.

Here is a basic rundown of what the script does.

  • On each library client, DRM checks the tapes out with the REMOVE=NO option and creates a file listing the tapes.
  • On the library manager, the tapes are checked back into the library.
  • The library manager then checks out every library client's DRM tapes.
I know it sounds complex, but it's not. It also is not perfect, so you will need to keep on top of it, but don't worry, it's not so bad that it's a headache. Some explanation is required: in my environment I have 7 TSM instances, and the first is just a library controller. This is why the following script starts at TSMSERV2 and stops the loop after TSMSERV7. Also note that the blog software tends to eat the pipe symbol, so make sure there is a | before each grep statement. You will also notice that the script uses a TSM macro. Create it by putting the following commands into a macro file called move_drmedia.mac.

move drm * so=dbb wherestate=mo tostate=vault remove=no CMD=&VOL CMDFILE=/usr/tivoli/tsm/client/ba/bin/Vol_List APPEND=YES WAIT=YES
move drm * so=dbb wherestate=vaultr tostate=onsiter
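For context on the first macro line: with CMD=&VOL and CMDFILE=/usr/tivoli/tsm/client/ba/bin/Vol_List APPEND=YES, the server substitutes each processed volume's name for &VOL and appends the resulting line to the command file. After a run, Vol_List is simply one volume label per line, something like this (the labels here are made up):

```
AB0001L3
AB0002L3
AB0003L3
```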

=-=-=-=-= Below Starts Checkout Script =-=-=-=-=


#!/bin/ksh
#
# DRM checkout for TSM library clients sharing an ACSLS library.
# NOTE: the original post did not show where the report files or the
# mail recipient are defined; the settings below are assumptions --
# adjust OFFSITE, RETRIEVE, and HN for your site.
OFFSITE=/tmp/offsite_tapes.rpt
RETRIEVE=/tmp/retrieve_tapes.rpt
# HN should be set to your mail recipient; tape_rpt is a local mail alias.

cd /usr/tivoli/tsm/client/ba/bin

ADSMID=`cat /usr/local/scripts/ADSMID`
ADSMPASS=`cat /usr/local/scripts/ADSMPASS`

# Save yesterday's volume list and start a fresh one.
cp /usr/tivoli/tsm/client/ba/bin/Vol_List /usr/tivoli/tsm/client/ba/bin/Vol_List.bak
cat /dev/null > /usr/tivoli/tsm/client/ba/bin/Vol_List

printf "Use this list to determine tapes that are to go offsite. Report any discrepancies to the Recovery Services Team.\n\n" > $OFFSITE
printf " \n\n" >> $OFFSITE
printf "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n" >> $OFFSITE
printf " Tapes to be sent offsite\n\n" >> $OFFSITE
printf " Current as of: `date`\n\n" >> $OFFSITE
printf "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n" >> $OFFSITE

printf "Use this list to determine tapes that are to come back onsite from Iron Mountain for reuse. Report any discrepancies to the Recovery Services Team.\n\n" > $RETRIEVE
printf " \n\n" >> $RETRIEVE
printf "********************************************************\n\n" >> $RETRIEVE
printf " Tapes to be brought back onsite from Iron Mountain\n" >> $RETRIEVE
printf " and placed back into TSM library for scratch.\n\n" >> $RETRIEVE
printf " Current as of: `date`\n\n" >> $RETRIEVE
printf "********************************************************\n\n" >> $RETRIEVE

# Loop over the library clients TSMSERV2 through TSMSERV7
# (TSMSERV1 is the library manager).
I=2
while [ $I -lt 8 ]
do
    S=TSMSERV$I

    dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=$S -dataonly=yes "select volume_name from drmedia where state='MOUNTABLE' " | grep 'L[0-3]' >> $OFFSITE

    dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=$S -dataonly=yes "select volume_name from drmedia where state='VAULTRETRIEVE' " | grep 'L[0-3]' >> $RETRIEVE

    # Check this client's tapes out (REMOVE=NO) and append them to Vol_List.
    dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=$S 'macro move_drmedia.mac'

    sleep 120

    I=$(( $I + 1 ))
done

# On the library manager, check all the volumes back in, then check the
# combined list out of the library in one pass.
dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=TSMSERV1 'CHECKIN LIBVOL TSMLIB search=yes stat=private checklabel=no vollist=FILE:/usr/tivoli/tsm/client/ba/bin/Vol_List'

sleep 180

dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=TSMSERV1 'CHECKOUT LIBVOL TSMLIB vollist=FILE:/usr/tivoli/tsm/client/ba/bin/Vol_List'

mail -s "TDC Daily Tape Checkout" $HN tape_rpt < $OFFSITE

mail -s "TDC Daily Tape Return" $HN tape_rpt < $RETRIEVE
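As a quick sanity check of the filter stage: the grep 'L[0-3]' in the select pipelines keeps only volume labels carrying an LTO media suffix (L1, L2, L3) and drops anything else, such as cleaning cartridges. A minimal demonstration with made-up labels:

```shell
# Keep only LTO cartridge labels (generation suffix L0-L3), as the
# script's grep stage does; CLN001 (a cleaning cartridge) is dropped.
printf 'AB0001L3\nCLN001\nAB0002L2\n' | grep 'L[0-3]'
```

This prints AB0001L3 and AB0002L2, one per line, which is exactly the shape the report files and Vol_List expect.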

Friday, October 26, 2007

Monthly Backups

Many people I have talked with have a requirement to keep a month-end backup for an extended period of time. This is where backupsets can be very helpful, but what do you do when backupsets just won't cut it? I have a requirement to keep some month-end data for extended periods, while all other backups are kept for 30 days. How do I accomplish this? The answer is a shell script. Before the enhanced scheduler came about in TSM 5.3, the only way to switch the backup to a different type at the end/beginning of the month was to have a script handle it, or to control it manually. So here is a shell script that looks at the day of the month, and if it is the 1st it runs the backup under an alternate nodename. In this example the nodename is the default nodename with -MONTHLY added to the end. We also made a dsm.opt file called dsm.opt.monthly that references the stanza in dsm.sys with the alternate name. Schedule this script as a daily command script for the node and it will run the daily incremental under the default nodename every day except the 1st of the month. I hope this is useful for some of you. Click Read More to see the script.

#!/bin/ksh
#set -x
export PATH=$PATH:.:/etc:/usr/sbin:/bin:/usr/local/lib:
export LIBPATH=$LIBPATH:/usr/odcs/bin
export DSM_LOG=/opt/tivoli/tsm/client/ba/bin
export DSM_DIR=/opt/tivoli/tsm/client/ba/bin

# BK_SCRIPT_LOGFILE must be set to the script's log file; the original
# post did not show where it is defined.

if [ -f $BK_SCRIPT_LOGFILE ]; then
    printf "`date` - Starting Backup\n" >> $BK_SCRIPT_LOGFILE
fi

# Use the monthly opt file on the 1st for monthly backups that are
# kept for an extended period of time.
case `date +%d` in
    01) export DSM_CONFIG=/opt/tivoli/tsm/client/ba/bin/dsm.opt.monthly ;;
     *) export DSM_CONFIG=/opt/tivoli/tsm/client/ba/bin/dsm.opt ;;
esac

# Back up the filesystems
printf "`date` - Starting the backup of the filesystems.\n" >> $BK_SCRIPT_LOGFILE

/opt/tivoli/tsm/client/ba/bin/dsmc incr >> $BK_SCRIPT_LOGFILE
INCRCODE=$?
# /opt/tivoli/tsm/client/ba/bin/dsmc q fi >> $BK_SCRIPT_LOGFILE

# dsmc return codes 0, 4, and 8 all indicate the backup ran; anything
# else is treated as a failure.
if [[ $INCRCODE -ne 0 && $INCRCODE -ne 4 && $INCRCODE -ne 8 ]]; then
    printf "`date` - Incremental backup failed - return code: $INCRCODE\n" >> $BK_SCRIPT_LOGFILE
    exit 1111
fi

printf "`date` - Incremental backup successful - return code: $INCRCODE\n" >> $BK_SCRIPT_LOGFILE
printf "`date` - Backup has completed successfully\n" >> $BK_SCRIPT_LOGFILE
exit 0
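For reference, here is a sketch of what the dsm.sys stanzas and the two option files might look like for the dual-nodename setup described above. Every server name, address, and node name here is an example, not taken from the original post:

```
* dsm.sys -- one stanza per nodename (example values throughout)
SErvername  tsm_daily
   COMMMethod        TCPip
   TCPServeraddress  tsmserver.example.com
   NODename          MYHOST

SErvername  tsm_monthly
   COMMMethod        TCPip
   TCPServeraddress  tsmserver.example.com
   NODename          MYHOST-MONTHLY

* dsm.opt -- the default daily options file
SErvername  tsm_daily

* dsm.opt.monthly -- selected by the script on the 1st
SErvername  tsm_monthly
```

Pointing DSM_CONFIG at dsm.opt.monthly makes dsmc log in as MYHOST-MONTHLY, so the month-end data lands under a node whose policy domain or copy group can carry the longer retention.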

Wednesday, October 24, 2007

SpectraLogic Library Review

We recently acquired a Spectra Logic T950 library for one of our data centers, and I thought I'd let you all know how it's been performing. Since I was with IBM previously, my only experience was with STK hardware (which we were trying to push out the customers' door ASAP) and IBM hardware. I can say I was no fan of the STK L700, and being an IBM'er at the time I touted the 3584s and 3494s like they walked on water (they don't). We frequently had to have maintenance done on our IBM libraries. Was it due to the fact that we were collocating over 2500 clients and mounts were through the roof? Probably!

The data center I worked at with IBM had a secondary library room that housed the 8 libraries (some were for mainframes) and tape shelves. The room was running out of floor space to accommodate another library or expansions to existing ones. This is where SpectraLogic has IBM and the competition beat, hands down. A single T950 frame can house up to 24 drives and a max of 950 tapes with a frame H 78.77 in, W 30.63 in, D 43.21 in (H 200.1 cm, W 77.8 cm, D 109.8 cm). That's a little taller but less deep than an IBM TS3500 frame at 70.9"H x 30.8"W x 47.7"D (1800 mm x 782 mm x 1212 mm), with the IBM L frame handling a max of only 12 drives and 287 tapes.

How does SpectraLogic get such great density? They go vertical, with a twist. SpectraLogic libraries use "TeraPacks," 10-tape chassis that hold the tapes so the barcodes are vertical, not horizontal like IBM and most other libraries. When the library needs a tape, its robot removes the TeraPack and then the gripper mechanism grabs the tape. I can hardly tell whether it adds more than a second or two to the mount time, but even if it does, the density gain negates the ever-so-slightly increased mount time.

I could go on and on and mention every little thing that I like about this library, but one of the coolest features is the ability to add SATA RXT portable RAID media, making the T950 VTL capable. The RXT media (which stands for RAID eXchangeable TeraPack) fits in the SpectraLogic half-inch tape drive openings and is composed of multiple SATA disks sealed in an enclosure capable of taking rugged handling, with built-in shock-dampening technology. The RXT packs currently range in size from 1TB to 2TB, but I'm sure you will see larger sizes in the very near future. The RXT media is compatible with all major backup applications and operating systems.

The final item that sealed the deal was of course price. This library came in at a great price point, lower than IBM and Sun by quite a bit. This, added up with all the other features/benefits it offers in expandability, made it a win/win. "So, how has it performed?" You ask. Well, so far it has performed above my expectations. I have shed my IBM favoritism and seen it for what it was "stubbornness". I would highly recommend considering SpectraLogic the next time you seek to buy new or refresh old equipment. They definitely have the features everyone is looking for available in their libraries, and with data center space at a premium you can count on the T950 to give you the capacity you need in less space than competing libraries.

The mount time is a little more of an issue than I first thought. Because the TeraPack has to be removed, the tape grabbed and placed in the drive, and then the TeraPack replaced, mount times are quite a bit longer when you have a VERY busy queue. I think SpectraLogic needs to speed this process up somehow; if they can, the overall library density is great. It would be cool if they could utilize a tool that grouped all scratch tapes together, along with the tapes that are most frequently mounted, almost like a tape slot reclamation. It might speed things along when, say, 5 systems are all waiting for scratch. I need to research this a little more, and I'll see if I can get feedback from SpectraLogic.

NetApp Drive Definitions

I always forget the NetApp drive definitions, so I thought I would post the example I found here so I can recall it later. If any of you wondered which definition TSM uses, or would like to know why, here is your answer:

rst4l - rewind device, format is: H Format 30 GB
nrst4l - no rewind device, format is: H Format 30 GB
urst4l - unload/reload device, format is: H Format 30 GB
rst4m - rewind device, format is: H Format 30 GB
nrst4m - no rewind device, format is: H Format 30 GB
urst4m - unload/reload device, format is: H Format 30 GB
rst4h - rewind device, format is: H Format 30 GB
nrst4h - no rewind device, format is: H Format 30 GB
urst4h - unload/reload device, format is: H Format 30 GB
rst4a - rewind device, format is: H Format 60 GB comp
nrst4a - no rewind device, format is: H Format 60 GB comp
urst4a - unload/reload device, format is: H Format 60 GB comp

The one in red is the definition TSM uses, since it rewinds the tape but leaves the unload/reload to TSM and not the device itself.
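Where these device names actually show up is in the drive path definition for NDMP operations. A sketch of the server command (the datamover, drive, and library names are hypothetical, and rst4a is assumed here based on the "rewind, no unload" description above; substitute your own):

```
/* All names are examples; the point is that the rewind device
   special file (rst...) is what goes in the DEVICE= parameter. */
DEFINE PATH NASMOVER1 NASDRIVE1 SRCTYPE=DATAMOVER DESTTYPE=DRIVE LIBRARY=NASLIB DEVICE=rst4a
```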

Friday, October 19, 2007

FilesX Xpress Restore & TSM

If you click the title of this post you can read the news article that explains this new tool, which has been validated to work with TSM. It provides a block-level disk backup of application data, allowing TSM to work with the Xpress Restore repository and enhancing the speed of backups. This is one tool I would like more information on, so I will be looking into how it works with TSM.

Thursday, October 18, 2007

Good De-Duplication Questions

An anonymous reader posted the following comments about the De-Duplication post a few days ago, which I thought were good enough to post on the main page for everyone to read. In the future I hope more of you will post with your name so we can give credit where credit is due. If you are currently using a De-Duplication product, we would love to hear from you.

I have several issues with Backup and DeDupe (most of which are TSM related).

First off, why are people retaining so much data within TSM (i.e. the retention period is increasing)? TSM is something that is supposed to be used in response to a data loss event. In other words, data is lost through hardware failure, logical failure, or human error, and we turn to TSM to recover it.. but an increasing number of people are using it as a filing cabinet, placing infinite retention on data. I don't think TSM was truly designed to do this. I see this as more of a data management function akin to HSM and archiving.. Yes, TSM has archiving, but I think it's pretty weak in terms of functionality; it really needs to be married to an application that can do better indexing and classification in order to make it powerful.

So.. if the data you are storing within TSM cannot truly be used to support a data recovery function, then why keep it? Are you really going to restore a file from 180 days ago because someone suddenly discovered that they deleted a file 4 months ago that they now need? I haven't seen much of that; occurrences are typically rare.. yet the outlay to stay consistent on such a policy can be expensive. Forget about just the cost of the media - there's much more to it than that.

DeDupe becomes more efficient when you retain more data in the backups, but more versions = a bigger TSM DB, which often means that you have to spawn another TSM server to keep things well maintained.

In TSM land we're very conscious of the TSM DB.. It's the heart of the system, and we go to great lengths to improve its performance and protect it. In the event that it does become corrupt, we can roll it back using a TSM DB backup and reuse delay. The DeDupe engine must also have an index/DB.. what do we do if it becomes corrupt? If it does, how do we ensure that we can get it synched up with TSM again?

How well will DeDupe work when data is reclaimed? TSM rebuilds aggregates when data is reclaimed, so how much work is that for the DeDupe engine, and what's the I/O pattern going to be like on the back-end storage?

How does this work in terms of recovery, both operationally and in a disaster? Single file restores, probably great. Recovery of lots of files, probably not too bad.. when recovering lots of small files the client is typically the bottleneck.. not sure that the dedupe engine would impact it much. What about recovery of a large DB? This one I am more skeptical of. We can get great performance from both tape and disk.. potentially the best performance from tape, provided that we can get enough mount points and the client isn't bottlenecked in some way. But what if it's deduped on disk.. will the data stream from disk, or will we get more random I/O patterns? If it's 10TB that needs to be recovered, I think that still equates to 10TB that needs to be pushed through TSM, even if it's been deduped to 2TB on the physical disk behind the dedupe engine.

What about DR, where you want to recover multiple clients at the same time? Good storage pool design can alleviate some of the issues with tape contention; disk may offer some advantages because the media supports concurrent access (but bear in mind that TSM may not, depending on how you configured it).. If that disk is deduped, though, then potentially you have fewer spindles at your disposal. That could mean more I/O contention and perhaps more difficulty streaming data.

Tivoli's October "Valuable Support Info" E-mail

Are you subscribing to IBM/Tivoli's Valuable Support Information e-mail? If not, take the time to subscribe, because the links, notices, and information are worth the few minutes it takes. For those of you not subscribed yet, here it is in its entirety.

Welcome to the IBM Tivoli Storage Manager (TSM) technical support information update. This communication is designed to help you derive maximum value from your TSM software by providing the most up-to-date technical information, answers to frequently asked questions, and links to other key information. Please take a moment to read through the materials provided below. We are sure that you will find answers to many of your questions. This month's mailing has five main sections:

  1. Frequently Encountered Situations and FAQs
  2. News & Technical Flashes
  3. Recent & Important Downloads
  4. Upcoming Events & Live Technical Trainings
  5. Problem-Solving Resources on our web site
You are receiving this notification because you are one of the IBM Tivoli Storage Manager customers who have called for technical support in the past year.

To Unsubscribe: Please reply to this email and change the Subject field to 'unsubscribe.'
To Subscribe: Please reply to this email and change the Subject field to 'subscribe.'

Frequently Encountered Situations and FAQs

Title: TSM Journal Based Backup FAQ

Title: Library fails with ANR8840E and ANR8418E

Title: Replacing a damaged primary storage pool volume

Title: Windows 2003 Volume Shadow Copy Service (VSS) Hotfixes for Systemstate Backup

Title: IBM Tivoli Storage Manager Administration Center Requirements

Title: Overview - Tivoli Storage Manager Supported Operating Systems

Title: Upgrade Instructions - Tivoli Storage Manager (TSM) 5.2 to 5.4 or 5.2 to 5.3

Title: MustGather: Read First for Tivoli Storage Manager Products

Title: Guideline For Selecting The Appropriate IBM Tape Device Driver for Windows 2000

Title: IBM Ultrium Generation 4 (LTO-4) drive and drive encryption support

News & Technical Flashes

Title: URGENT Actions Required: Changes to Daylight Saving Time will affect IBM Tivoli Storage Manager Administrative Interfaces

Title: Backupsets should not be used to store Data Protection client or API client data

Title: Data loss may occur on TSM 5.4 servers when performing off-site reclamation of a copy storage pool and active-data pools are defined

Title: Two security vulnerabilities exist in the IBM Tivoli Storage Manager (TSM) client

Title: Featured Documents for Tivoli Storage Manager
Recent & Important Downloads

Title: Tivoli APAR notification page

Title: Recommended TSM Client and Server Fixes

Upcoming Events & Live Technical Trainings

Tivoli's Support Technical Exchange web seminars allow you to participate in live discussions on topics such as deployment, troubleshooting tips, common issues, problem-solving resources, and other Support and Services recommendations.

To see the connection information for these seminars, as well as ALL Tivoli products just use the main link below.


Mark your calendars: here are the current listings for the FREE seminars through December 2007.

10/18/07 TSM Backup Considerations with Nseries
10/23/07 Overview, setup, and usage of NDMP operations
10/23/07 Understanding TSM HSM for Windows

11/06/07 LVSA / Open File Support
11/15/07 Tivoli Storage Manager Update
11/15/07 Ask the Experts call in session: TSM devices for Windows
11/20/07 Installing the Integrated Solutions Console (ISC) and Administration Center
11/29/07 Tuning Disk for use with Tivoli Storage Manager (TSM)

12/06/07 Exploiting Disk Technology with Tivoli Storage Manager

Help us choose future topics for our calls!
If you have ideas for future discussion topics for TSM, or need to report problems with viewing content, please send feedback to:

Problem-Solving Resources Online

Visit the IBM Tivoli Storage Manager product support page
IBM Education Assistant for TSM

Short, task-based audio and visual presentations on pertinent topics.

Free IBM Support Assistant product plug-in for TSM
A desktop tool to aid in your TSM administration.

Self help support info for new Tivoli Storage Manager Administrators

Complimentary TSM Concepts Poster & Complimentary Online Self-Paced Training Classes

Participate in the ADSM/L and other forums:

Wednesday, October 17, 2007

Backupsets & TCopy

OK, I am going to throw a question out into the TSM world and see what you people think. I was asked at work whether a Backupset could be copied using the UNIX tcopy command. I thought about it, and since Backupsets are independent of the TSM server, I would think, in theory, it should work. The only issue I can see is needing to define the copy to TSM, since it would have no record of it. The benefit would be that I would not need to mount the tapes all over again to regenerate the Backupset. The reason I ask is that a customer wanted two copies of a full backup of a server to archive forever. Anyone ever tried it? Any theories? They are all welcome!
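In theory the sequence would look something like this. This is an untested sketch: the tape device names are examples, and the DEFINE BACKUPSET parameters (node name, backupset name, device class, volume label) are all assumptions you would fill in for your environment.

```shell
# Duplicate the backupset tape block-for-block (AIX device names assumed).
# tcopy reads every record from the source tape and writes it to the target.
tcopy /dev/rmt0 /dev/rmt1

# Then catalog the copy on the server so TSM can restore from it.
# DEFINE BACKUPSET registers an existing backupset volume; every name
# below is hypothetical.
dsmadmc -id=admin -password=secret \
  "define backupset MYNODE monthly_full devclass=LTOCLASS volumes=COPY01 description='tcopy duplicate'"
```

The open question would be whether the copy's labels satisfy TSM when it reads the volume back, which is exactly what I'd want someone to test.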

New! Free training for TSM v5.4 ISC

If you use the ISC (even though it wastes more time than it saves), you can get some free training from IBM. The training covers:

  • Installing the Admin Center
  • Using the Admin Center
  • Managing ISC users and TSM admins
If you click on this post's title it will take you to the page, or you can click here to be taken directly to the IBM Education Assistant webpage.

TSM 5.1 Flash!

This flash was posted by IBM on Sept. 25, 2007:

Customers running IBM Tivoli Storage Manager (TSM) 5.1.6 Windows, HP, Sun, AIX and Linux and using devices other than 3570, 3590, or IBM LTO may have problems with the automatic labeling of tapes by the TSM server.

You can click on the title of this post to see the IBM page.

Tuesday, October 16, 2007

Table Of Contents Added

I added a Table Of Contents widget on the left toolbar so you can easily list all the posts since the site began. The TOC is sortable by clicking on the header of each column. I hope it adds to your experience with the site. If there is something you would like to see on the site, let me know and I'll see what I can do to accommodate your request.

Monday, October 15, 2007

TSM Client Install Help

I found a good set of How-To's and documentation from IBM while looking for client overviews of installing the TSM client on UNIX platforms. It is an IBM Education Assistant covering installation, migration, and upgrading. It's always been part of the Infocenter but I never noticed it. Duh! There is a nice slide show walk-through covering installation of the TSM client on different platforms. Also some PDF's and even a Silent Install how-to. The main page can be found here and the installation help can be found here.

Tuesday, October 09, 2007

Data DeDuplication - Been There Done That!

I just got off a pretty good NetApp webcast covering their VTL and FAS solutions. One of the items they discussed was the data deduplication feature of their NAS product. When the IBM rep spoke up, they discussed TSM's progressive incremental backup methodology, and I find it interesting to contrast TSM's process with deduplication, the growing feature in disk-based storage. The feature really helps save TONS of space for the competing backup tools, since they usually follow the FULL+INC model, causing them to back up files even when they haven't changed. Here deduplication saves them room by removing the duplicate unchanged files, but this shows how superior TSM is, in that it doesn't require this kind of wasted processing. What would be interesting is to see how much space is saved on redundant OS files, but that is still minor compared to the weekly full process that wastes so much space.

This brings us to the next item: disk-based backup. This is definitely going to grow over time, but costs are going to have to come down for it to fully replace tape. The two issues I see with disk-only backups are DRM/portability and capacity/cost. If you cannot afford duplicate sites with the data mirrored, then you are left having to use a tape solution for offsite storage. Portability can also be an issue with disk. For example, we are migrating some servers from one data center to another, and we used the export/import feature; we have also moved TSM tapes from one site to another and rebuilt the TSM environment. To do this with disk is more time consuming: you would need the same disk solution and the network capacity to mirror the data (slow on a thin connection), or you would have to move the whole hardware solution. Tape in this scenario is a lot easier to deal with. When it comes to capacity vs. cost, there is a definite difference that will keep many on tape for years to come. Many customers want long-term retention of their data, say 30+ days for inactive files and TDP backups (sometimes longer for e-mail and SARBOX data). So what is the cost comparison for that type of disk retention (into the PB range) compared to tape? Currently it's no contest; tape wins in the cost vs. capacity realm, but hopefully that can someday change. So if any of you have disk-based or VTL solutions, chime in. I'd like to hear what you have to say and how it's worked for you.