Wednesday, December 23, 2009

Update: Weird Volhist Records (Part Two)

IBM did get back to me about my library manager/library client volume history devclass issue and I found it interesting how development handled it. Initially IBM support said the following:

Our server development has confirmed with our design, it is possible that the remote volhist entry can be different. Development believe this is their design. However, the existing document did not document this clearly. They agreed to open a Doc. APAR to properly document this for command query volhist.

In other words, they felt no action was needed other than documenting this possible occurrence in the TSM Admin and Reference guides. So I asked that it be escalated, since it is definitely a flaw, and finally heard back from support with the following:

They agree what you observed or reported in this case is incorrect even though it does not cause any function lose. They have agreed to take the APAR IC65048 as a defect ( instead of Document ). However, this APAR would take a big code change to "fix" this issue and after first evaluate, they will not be able to deliver a fix in the service stream. They request to open a DCR ( design change record ) so development can make this change on a release boundary so that there is sufficient testing for a code change.
So it looks like IBM will eventually fix this. Thanks to IBM support for helping me get development to at least go beyond the Doc. APAR.

Monday, December 07, 2009

Data Loss Top 10

Ontrack, which handles data recovery, compiled a list of the strangest reasons data was lost. You can read their list here. I remember when I was teaching TSM, students shared some of the craziest stories of data loss (many were disgruntled employee stories).

One story I heard from a student occurred when a manager was foolish enough to fire an employee but required him to work the rest of the week if he wanted his pay. So the employee worked the rest of the week, and on his way out (unescorted) he went into the server room and proceeded to urinate into the mainframe. Supposedly he shorted out the mainframe and caused all kinds of havoc. If you have a story you'd like to share, feel free to leave it in the comments section.

Thursday, October 29, 2009

Update: TSM 2009 Symposium

I received a response to my query on the cost of the 2009 TSM Symposium presentations USB stick. The following will provide you with the needed purchase information:

The USB-stick with all symposium presentations and materials in PDF-format is
available for 50 EUR plus shipping costs (Germany 3 EUR, European Union 5 EUR,
US 8 EUR).


Payment is accepted only by Paypal. Please transfer the respective amount by
using the Paypal service (http://www.paypal.com/) to
nc-kallecl(at)netcologne.de. Add your post address as optional message for the
payment or mail it to tsm2009@uni-koeln.de.

PayPal will make the necessary conversion from dollars to euros; expect to pay around $86 at current conversion rates. If you'd like to check the conversion rates, try Google's currency conversion tool.

Wednesday, October 21, 2009

TSM 2009 Symposium

I went looking for the presentations given at the 2009 TSM Symposium and discovered that the presentations will be provided on a USB stick for around 50 Euro plus shipping. Previously the presentations had been posted for the community to access, but it looks like we won't have that opportunity going forward. I didn't see anything about U.S. cost and will post an update when I find out if/how we can purchase the symposium presentations. I would recommend you talk your company into purchasing it, since I have always found the presentations to be very informative, and with 6.1 out it might help you with any planned migration/upgrade.

Thursday, October 08, 2009

Tivoli User Community

I had the opportunity this week to attend the Arizona Tivoli User Group and gain some valuable information on TSM Fastback, TSM 6.1, and Tivoli Storage Virtualization Management. They covered these products in the morning session and had a hands-on demonstration of the TSM 6.1 Admin interface and reporting features in the afternoon. Even better, they offered a free certification exam, so I got caught up on my TSM certification. Not every group will be able to afford free certifications, but it is definitely worth your time to check out your local or regional group for contacts and information. To find the Tivoli User Group in your area, check out the Global Tivoli User Community website. If there is no group within your area, you can still join the global group, attend webinars, and participate in online activities.

Thursday, October 01, 2009

Update: Weird Volhist Records

Upon further investigation I found that my other shared library environment, which has multiple NAS devices defined to the library manager, is also showing the NAS device class for all LTO-3 volumes used by the library clients. If anyone has a NAS device attached to a shared library environment, please compare your library manager's volhist devclass entries to those of your library clients and tell me if you see the same situation. The crazy thing is, it doesn't seem to keep the library clients from using their tapes or acquiring scratch. My concern is that I am slowly losing the use of tapes that should be returned to scratch but, due to the volume history entries, cannot be released. I am wondering if this is a bug in TSM that could be affecting others.

Wednesday, September 30, 2009

Weird Volhist Records

I am reviewing the Volume History file on my ACSLS library controller, and it shows volumes as REMOTE and owned by the library clients, but the weird thing is it shows a DEVCLASS that is not even defined on the library clients. Why would TSM do this? I have a device class called IBM-LTO-3 on all my TSM instances (8 total, including the library manager), and on the library manager there is a device class called NAS-DEV-CL. I see tons of tape volumes using the NAS-DEV-CL devclass in the library manager's volume history, but not on the other instances (since they don't even have the NAS-DEV-CL devclass defined). Why or how would this occur? I tried an audit library from one of my library clients, but it is not completing (I've had issues with audit library commands from library clients before). Any ideas?

Here is an example of what I see:

tsm: PD-703-S-AITSM-1>select * from volhistory where volume_name='L40202'

DATE_TIME: 2009-08-19 03:33:07.000000
UNIQUE: 0
TYPE: REMOTE
BACKUP_SERIES:
BACKUP_OPERATION:
VOLUME_SEQ:
DEVCLASS: NAS-DEV-CL
VOLUME_NAME: L40202
LOCATION: PD703-UAX007
COMMAND:


When I query the volume on the library client I see:

Volume Name: L40202
Storage Pool Name: NP-STD-TAPE
Device Class Name: IBM-LTO-3
Estimated Capacity: 1.6 T
Scaled Capacity Applied:
Pct Util: 0.4
Volume Status: Filling
Access: Read-Only
Pct. Reclaimable Space: 26.1
Scratch Volume?: Yes
In Error State?: No
Number of Writable Sides: 1
Number of Times Mounted: 14
Write Pass Number: 1
Approx. Date Last Written: 08/26/09 11:01:12
Approx. Date Last Read: 09/22/09 11:19:00
Date Became Pending:
Number of Write Errors: 0
Number of Read Errors: 0
Volume Location:
Volume is MVS Lanfree Capable : No
Last Update by (administrator): CSMALL
Last Update Date/Time: 09/14/09 13:14:21
Begin Reclaim Period:
End Reclaim Period:
Drive Encryption Key Manager: None


So the question is: how did this volume history record on the library manager end up with a devclass that doesn't exist on the library client? It doesn't seem to affect the volumes or daily processing, but I am worried it's not freeing up the tapes to return to scratch status when they are reclaimed.
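If you want to check a library manager for the same symptom, a select along these lines (an untested sketch; the column names come from the volhistory output above) will list every REMOTE entry along with the device class it was recorded under:

select volume_name, devclass, location from volhistory where type='REMOTE' order by devclass

Any devclass showing up here that a library client doesn't actually have defined would indicate the same mismatch I'm seeing.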

Thursday, September 17, 2009

Personal Backups

A friend of mine lost a bunch of family pictures and videos when his hard drives died (he lost both at the same time). As backup people we are all well aware that the average home computer user does not have an easy and safe way to back up all their data. Oh, sure, they can buy a USB drive, but those can also be destroyed in a fire or flood, or even stolen. There are some online options, but I wanted to ask those of you out there about your experiences with them and any suggestions. I currently have 200+GB I need protected, so what are the safest options...and no, I won't be buying a tape drive. ;-)

Thursday, August 20, 2009

Scalar i2000 Frustrations

So for any Scalar i2000 users out there, why is it that at every D.R. test I have to reboot the library to get the robot to be discovered correctly? Is there an inherent issue with the internal switch? I have done 3 D.R. tests with i2000s, and each time I have had to get the D.R. site people to reboot the darn thing before I could detect the robot. What's up with that? How do those of you who use them daily like them? I get a bad taste in my mouth when I hear I have to use one. At least it had IBM drives in it.

Tuesday, August 11, 2009

We found this bug

We found this bug in Hungary. Click for details!

Thursday, August 06, 2009

TSM 5.5 to 6.1 Video

If you don't already subscribe to IBM's TSM Information Update & Storage Newsletter, then you might not be aware of the following video IBM has posted to their website. They have provided a video tutorial on upgrading TSM 5.5 to 6.1. Check it out here.

Thursday, July 23, 2009

TSM Server Scripts Sleep Option

I have been frustrated by the lack of a sleep command in TSM server scripts for a long time, and just today a co-worker sent me this link, which I think sums up how we've had to work around it within TSM server scripts. The WAIT=YES option is only good for commands that allow it, and it is not exactly what many of us need in certain situations. Since this document's modified date shows as 7-23-2009, I take it there is no respite from the issue in TSM 6.1. Basically, IBM recommends you break the script in two, and where you need the sleep, the first script defines a one-time schedule that runs the second script X minutes in the future. It does work. I've used this process on some scripts, but you'd think they would have added a sleep option by now. So I assume when it's absolutely needed many of you resort to a regular shell script to execute tasks; any alternate processes? I'd like to hear about it.
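To sketch that break-in-two workaround (all script and schedule names here are made up, and the commands are just placeholders; adjust the delay and contents for your environment), the first script finishes its work and then defines a one-time administrative schedule that runs the second script a few minutes later:

def script PART1 'backup stgpool DISKPOOL COPYPOOL wait=yes'
upd script PART1 'def schedule RUN-PART2 type=administrative cmd="run PART2" active=yes starttime=NOW+00:10 perunits=onetime' line=2

def script PART2 'move drmedia * wherestate=mountable tostate=vault'

The ten-minute offset stands in for the sleep. TSM won't clean up after you, so you'll probably also want to delete the one-time schedule once it has fired.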

Friday, July 17, 2009

TSM's VTL DeDupe Reclaim Problem

So a friend of mine sent me this link to an article from Scott Waterhouse's "The Backup Blog" that discusses a known issue with TSM not reclaiming VTL tapes when dedupe is in use. I would recommend you check it out if you are using dedupe in your environment, since it looks like it affects TSM no matter which dedupe product is used with a Virtual Tape Library. The problem is solved with release 5.5.2 or higher.

Tuesday, July 07, 2009

Still Looking For TSM Admins In Boulder, CO

Hello,
My name is Arjun and I'm a recruiter at Artech.

Artech has an urgent contract for one of our direct clients:

Job Title: TSM Administrator
Location: BOULDER, CO
Job Description:
Required Skill: TSM support

If you are qualified, available, interested, planning to make a change, or know of a friend who might have the required qualifications and interest, please call me ASAP at (973) 993-9383 Ext.3319, even if we have spoken recently about a different position. If you do respond via e-mail please include a daytime phone number so I can reach you. In considering candidates, time is of the essence, so please respond ASAP.

Artech is a global IT Consulting company with over 30 Fortune 500 customers. You may visit our website at http://www.artechinfo.com/ to learn more about us.

Thank you.
Sincerely yours,
Arjun Dheer
(973) 993-9383 Ext.3319
arjun_dheer@artechinfo.com

Monday, June 15, 2009

Rosetta Stone for UNIX

Know AIX? Ever wonder what that simple command in AIX is on Solaris or HP/UX? Need to know what the command for tcp/ip management is in Linux compared to OS/X? Now you can! Check out this helpful website Rosetta Stone for UNIX. My boss passed this along a while ago and I thought I had passed it along to the community, but I cannot remember if I did. So, here is a great website for UNIX admins and those learning UNIX to help you transition from one vendor to another.

Friday, June 12, 2009

TSM Checkout and ACSLS Question

At Infocrossing we have a tape operations group that handles physically opening the CAP, inserting returned tapes, and removing tapes for offsite. Because of this I didn't realize (until I had to handle the checkout myself) that TSM checks out my tapes by filling the CAP and pausing, waiting for the operations people to take the tapes out, then resuming the eject. OK, normal process, except I have two 40-tape CAP doors. When an eject has more than 40 tapes, I figured TSM would use the next CAP door if I left the specific CAP identifier out of the checkout command, but it does not use the alternate CAP. Both CAPs are set to a priority higher than zero, so I find it frustrating that TSM only uses one CAP door. ACSLS itself allows you to specify 0,0,* and will check out to all available CAPs above priority zero, but TSM does not accept the *. So what am I missing here? Is it possible with TSM to use both CAP doors and not have the checkout pause for the one CAP to be emptied? I didn't see anything suggesting REMOVE=BULK behaves any differently from REMOVE=YES. I searched Google but didn't find what I was looking for, and didn't find anything on ADSM.org, so any ideas? Here's an example of my command:

CHECKOUT LIBVOLUME LIBRARY1 REMOVE=YES VOLLIST=FILE:/tsm/logs/chkout.list

Whether I specify a CAP door or not, it only uses one of the two.
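One untested idea, sketched purely from the ACSLS behavior described above: let TSM remove the volumes from its inventory without ejecting them, then drive the physical eject from the ACSLS cmd_proc, where the 0,0,* form is accepted and all CAPs above priority zero get used:

CHECKOUT LIBVOLUME LIBRARY1 CHECKLABEL=NO REMOVE=NO VOLLIST=FILE:/tsm/logs/chkout.list

then, from the ACSLS cmd_proc (the volsers here are placeholders):

eject 0,0,* L40201 L40202 L40203

The obvious downside is having to keep the TSM checkout list and the ACSLS eject list in sync.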

Thursday, June 04, 2009

Windows 2003 Volume Shadow Copy Service (VSS) Hotfixes

If you are in need of VSS hotfixes, check out this page from IBM. I plan to add it to the link list...I might start a list of links to patches and updates, but that could be difficult to keep current. I know this is an older reference, but I needed the fixes recently, and it seems the VSS issues don't go away; they just keep coming back with each patch of Windows.

Tuesday, June 02, 2009

Need a job? Willing to move to Boulder, CO?

So with the state of the economy and many people out of work, it's good to know IBM is hiring. IBM is looking for solid TSM candidates for various accounts; the only caveat is that the positions require you to work from the Boulder, CO IBM facility. So if you are willing to relocate, consider contacting a friendly IT recruiter from Artech or CDI and let the hiring begin!

Saturday, May 30, 2009

TSMExpert Changes

So I have recently made numerous changes to the site and hope you all like them. I was finally able to get the inline comment piece working and made changes to the categories list and template. What I have realized is the following:


  1. Blogger is somewhat restrictive when your site outgrows normal blogging

  2. WordPress.com is inadequate in its customization to use as an alternative

  3. I need to consider a host that offers WordPress for more functionality
    (GoDaddy! probably)

  4. I need to learn XHTML and CSS

  5. This took too much of my spare time
    (I have 4 kids, I don't have spare time!)


I hope you like the changes, and if you have any suggestions or comments, leave them and I will address them shortly. Also, the contributors list will be back up when I figure out how they changed the widget.

Friday, May 22, 2009

Calculating Active Data

I was recently asked to calculate the amount of active data in TSM storage for file system backups, not TDPs, and had some interesting results. If you search "TSM active data" in Google, the first result is this IBM support doc that explains how to calculate active data to help size an Active Data Storage Pool. IBM recommends using the EXPORT NODE command with PREVIEW=YES to determine the amount of active data. In theory this should work well, but for TSM to process the request it has to analyze the backups table and who knows what else to gather the data. I have 10 instances I needed to gather the information from, and they all vary in TSM DB size and the amount of managed data stored. My smallest DB is a new instance at 25GB and my largest is 155GB, and size did not matter when it came to how fast the information was calculated. The TSM instance with the largest DB completed the task in over two days (YES, TWO DAYS!). Two TSM instances were still running the EXPORT NODE query after THREE DAYS, and they had moderate to large sized DBs.

So what caused this problem? It all comes down to the number of files TSM has to inspect. The two instances that never completed the query have large numbers of Windows nodes and the most registered nodes overall. These two instances seemed to be crawling through the process: where they should have calculated into the ten to twenty TB range, as the next largest instance did after just over two days, the problem two were still in the 6 to 7 TB range and increasing slowly. My only explanation (and this is a guess) is that Windows servers tend to have hundreds of thousands, if not millions, of files, and TSM gets bogged down trying to inspect them all. I didn't notice a performance impact, but IBM claims it is a resource-intensive task and should be run during non-peak hours. How can you do that when it runs 24 hours or more?
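If you want a rough advance warning of which instances will crawl, the occupancy table carries a per-node file count, so a query along these lines (an untested sketch) shows where the object counts are concentrated before you kick off any EXPORT NODE preview:

select node_name, sum(num_files) as total_files from occupancy group by node_name order by 2 desc

Nodes at the top of that list with hundreds of thousands or millions of files are the likely culprits.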

Finally, after three days and no end in sight, I canceled the processes and now have to figure out some other way to calculate the amount of active data stored. I could calculate it (i.e. guesstimate) by summing the amount of space used per file space.

Example:

select cast(sum(capacity*(pct_util/100)) AS decimal (18,2)) As Total_Used_Space from filespaces where node_name in (select node_name from nodes where domain_name like '%STD%') 

The problem is that this will not be accurate and will probably cause me to oversize any Active Data Pool I create. Now, that's not a horrible thing (more space is always better than too little), but the whole process seems too time-consuming for something that on the surface should be fairly easy to calculate. This is where I hope the new DB2 database can help, but until someone has it up and running and can try this process, we can't say whether there is a reasonable way to find the total active data within larger TSM instances.


Addendum

Are you wondering why I used the '%STD%' filter in the select statement above? The team here at Infocrossing separates file system backups and application (TDP) backups into different domains using a standardized naming process. This is great because it also allowed us to run the EXPORT NODE command using wildcards to pick the specific domains to include in the export query.

EXPORT NODE * DOMAINS=*STD* FILEDATA=ALLACTIVE PREVIEW=YES

I highly recommend you follow a similar process when creating domains, and even schedules, to make it easier to process groups of nodes. For example, you could create a WIN-STD-DOM domain or, like we do for our TDPs, a WIN-APP-DOM domain. These are just examples, but they can make life easier.

Thursday, May 21, 2009

New Look!

I am trying a new look for the site. I made the change to resolve some problems with commenting and items that broke over the course of the last year. I could have used the old template but thought I would try something new. Let me know if you like it. If not I can always go back to the clean and simple template I was using. If anyone has a suggestion for look and feel leave a comment. I love new ideas.

Thursday, May 14, 2009

TSM 6.1 Upgrade - Need To Know!

So in researching the TSM 5.5 to 6.1 upgrade, I have come across a number of issues that should have been compiled into a complete list to keep admins informed. So here goes.

Things to know about TSM 6.1:
  • Although IBM states that the 6.1 DB should be about the same size as the 5.5 DB, the TSM community is reporting that as much as 4x the space is required
  • It does not support RAW volumes for the TSM DB and Log
  • It has added archive logging to the existing log process (i.e. more disk space required to run)
  • It cannot generate or define backupsets at this time
  • It does not support NAS or Backupset TOC's at this time
  • It will not allow a DB upgrade if TOC's or Backupsets exist on the current TSM server
  • NQR (No Query Restore) is now slower due to how 6.1 processes the data before sending it to the client
I have been hearing of upgrades taking an extremely long time, so be aware that the source TSM instance has to be down for the duration when upgrading across the LAN or on the same server. Even with the media method, your source instance has to be down to perform the DB extraction, since 6.1 cannot use a standard TSM DB backup.


Tuesday, May 05, 2009

TSM 6.1 Upgrade - FYI

For those of you looking to upgrade your current TSM instances to 6.1 take note of this issue with upgrading the DB.

At this time a database containing backup sets or tables of contents (TOCs) cannot be upgraded to V6.1. The database upgrade utilities check for defined backup sets and existing TOCs. If either exists, the upgrade stops and a message is issued saying that the upgrade is not possible at the time. In addition, any operation on a V6.1 server that tries to create or load a TOC fails.

When support is restored by a future V6.1 fix pack, the database upgrade and all backup set and TOC operations will be fully enabled.


I haven't heard whether this will be fixed in the first patch of 6.1, but keep it in mind when considering upgrading your system rather than starting from scratch.
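Before attempting the upgrade, it's easy to check whether you have any defined backup sets that would block it; on the 5.5 server just run:

query backupset

If anything comes back, you'll need to deal with those backup sets first. Checking for existing TOCs is less straightforward, so this only covers half of the restriction.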

Friday, April 24, 2009

TSM 6.1 Debian Client Request

Well TSM 6.1 has officially been out for about a month now and I have already received a request for a Debian client. Maybe Harry can help out in that area. Anyone want to try their hand at putting one together?

Wednesday, April 22, 2009

ACSLS Checkout Script Update

I modified the ACSLS checkout script after an admin at BYU pointed out that it doesn't work when the library clients reside on separate servers. It was changed because of how TSM processes the CMDFILE= option in my MOVE DRM macro; plus, if your TSM server names vary, the while loop is not as easy to set up. So here are the new script and macro. I switched it to use an array for the server names for easier processing when names don't follow a similar convention. Also, you might need to modify the grep if your volumes don't start with L0 through L4; if your volsers are in a different range, just modify the grep so it will work for you.

<---------------------MOVE DRM MACRO------------------------>

move drm * so=dbb wherestate=mo tostate=vault remove=no WAIT=YES
move drm * so=dbb wherestate=vaultr tostate=onsiter

<------------------------Cut Below-------------------------->

#!/bin/ksh

cd /usr/tivoli/tsm/client/ba/bin

ADSMID=`cat /usr/local/scripts/ADSMID`
ADSMPASS=`cat /usr/local/scripts/ADSMPASS`

OFFSITE=/usr/tivoli/tsm/client/ba/bin/OFFSITE/offsite.txt
DRVOLS=/usr/tivoli/tsm/client/ba/bin/OFFSITE/Vol_List
RETRIEVE=/usr/tivoli/tsm/client/ba/bin/OFFSITE/retrieve.txt

set -A SERV TSM-1 TSM-2 TSM-3 TSM-4

cp $OFFSITE "$OFFSITE.BAK"

mv /usr/tivoli/tsm/client/ba/bin/OFFSITE/Vol_List /usr/tivoli/tsm/client/ba/bin/OFFSITE/Vol_List.bak

printf "Use this list to determine tapes that are to go offsite report any discrepancies to the Recovery Services Team.\n\n" > $OFFSITE
printf " \n\n" >> $OFFSITE
printf "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n" >> $OFFSITE
printf " The following tapes will checkout from the library\n\n" >> $OFFSITE
printf " and should be sent offsite\n\n" >> $OFFSITE
printf " Current as of: `date`\n\n" >> $OFFSITE
printf "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n" >> $OFFSITE

printf "Use this list to determine tapes that are to come back onsite from Iron Mountain for reuse. Report any discrepancies to the Recovery Service Team.\n\n" > $RETRIEVE
printf " \n\n" >> $RETRIEVE
printf "********************************************************\n\n" >> $RETRIEVE
printf " Tapes to be brought back onsite from Iron Mountain\n" >> $RETRIEVE
printf " and placed back into TSM library for scratch.\n\n" >> $RETRIEVE
printf " Current as of: `date`\n\n" >> $RETRIEVE
printf "********************************************************\n\n" >> $RETRIEVE

i=0
while [ $i -lt 4 ]
do
dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=${SERV[$i]} -dataonly=yes "select volume_name from drmedia where state='MOUNTABLE' and volume_name not in (select volume_name from drives where volume_name is not NULL) " | grep "L[0-4]" >> $DRVOLS

dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=${SERV[$i]} -dataonly=yes "select volume_name from drmedia where state='VAULTRETRIEVE' " | grep "L[0-4]" >> $RETRIEVE

dsmadmc -id=$ADSMID -password=$ADSMPASS -servername=${SERV[$i]} 'macro /usr/tivoli/tsm/client/ba/bin/move_drmedia.mac'

sleep 120

i=$(( $i + 1 ))
done

# checkin runs against the library manager; fill in its servername
dsmadmc -id=$ADSMID -password=$ADSMPASS -servername= "CHECKIN LIBVOL search=yes stat=private checklabel=barcode vollist=FILE:$DRVOLS"

sleep 180

# checkout also runs against the library manager; fill in its servername
dsmadmc -id=$ADSMID -password=$ADSMPASS -servername= "CHECKOUT LIBVOL checkl=no remove=bulk vollist=FILE:$DRVOLS"

cat $DRVOLS | sort >> $OFFSITE

mail -s "Daily Tape Rotation - Tapes to be sent to Iron Mountain" tape_rpt < $OFFSITE
mail -s "Daily Tape Return" tape_rpt < $RETRIEVE

Friday, April 17, 2009

TSM Admin Needed In Bahrain!

I could use some help here in Bahrain.
Need some certified Tivoli people ASAP.

Please answer all the questions below as best you can upfront as it will save us both time and get you submitted quicker to the MGR/Client.

Sometimes the client will cover expenses and sometimes they won't, so give me a rate for both below.

THIS HELPS ME SPEED UP THE PROCESS!!!

CANDIDATE:
POSITION:
YOUR CURRENT LOCATION:
CONTACT #'s :
EMAIL:
RATE ALL INCLUSIVE NO EXPENSES COVERED (Corp to Corp or 1099) :
RATE + EXPENSES:
CITIZENSHIP:
IF YOUR AN H-1 PROVIDE/(EMAIL & PHONE NUMBERS) OF THE ACTUAL COMPANY WHO CURRENTLY HOLDS YOUR H-1, AND WHO I SHOULD ASK FOR ABOUT YOUR SERVICES?
Does the person have any Pending offers? If so, when do you need an answer by from me?
How many years does this person have left on his H-1 in the US before it expires?
AVAILABILITY:

Please tell me how many years you have with EACH Skill listed in the job as best you can and rank yourself with each skill from 1 to 5--meaning advanced/expert as the top end.


SUMMARY OF WHY YOU FIT THIS ROLE , BE SPECIFIC:


Forrest Murphy
Sr Information Technology/Engineering Recruiter
ERP-Oracle, SAP, Peoplesoft, JDE
972-607-9897 Direct Dial
214-207-2478 Cell
972-496-2398 Fax
850-205-2185 Fax for Corp
509-277-2478 Email Fax
fmurphy@csifl.com
IM Address: fwnynj@yahoo.com

Consulting Solutions International
3512 Maclay Blvd, South
Tallahassee, FL 32312
www.csifl.com

Tuesday, April 14, 2009

New TSM Management Tool

I have been informed of a new TSM management tool called TSM Studio. It looks pretty slick, but I won't lie, I haven't had a chance to use it yet. If you are looking for a TSM tool check it out. I plan on posting a quick overview of all the TSM tools in the near future. If you have experience with any TSM tool (i.e. TSMManager, TSM Explorer, TSM Studio, AutoVault, Servergraph, Aptare, Bocada, or EMC's DPA) and are willing to write up a review of the product, let me know and we'll publish it here for others to use when deciding on a tool for their own environment.

Don't Miss - TSM V6.1 DB Upgrade Webcast

If you are looking to move to TSM V6.1 in the near future don't miss this webcast on April 28th at 11:00am EST.  This is part one in what I believe is a two part webcast (could be more than two). You can either click on the title of this post or find the sign up here.

Friday, March 27, 2009

TSM 6.1 Docs Online

I found this link listed on ADSM.org and thought I would pass it along. Here is a link that will take you to the TSM 6.1 documentation. Supposedly the upgrade doc alone is over 200 pages. Not sure if any of you are "dying" to try 6.1, but I am planning on testing it soon.

Friday, March 20, 2009

IBM Considers Buying Sun!

Did IQs just drop substantially around the world? Go sell crazy somewhere else! Why would IBM even consider buying Sun? Definitely not for their server technology; it's gotta be for their software portfolio. Most people are not aware that IBM makes more money from its software division than from hardware. (At least it used to...I still think it does.) But still, I gotta think Sun would be better served by a buyout from HP or Cisco...maybe Dell, but Dell seems uninterested from what I have read.

Thursday, March 12, 2009

Question About Active Data Pools

Geoffrey Huntley recently asked me in what situation an Active Data Pool would be useful. To be honest, I couldn't think of a good one. Since I don't use an Active Data Pool, I thought I would throw the question out to our readers to get some feedback on how you might be using ADPs. My biggest reason for not using them is the PIT restore issue. You can read my full gripe here. Basically, if TSM won't utilize it when doing a PIT restore, then what's the benefit? I'd say more than half, if not 75%, of my restores are from older dates.

Active Data Pool - What's The Point?

To go along with the ADP story I have moved this older post up for easy reference.

With the release of TSM 5.4 Tivoli has added the ability to create an active data storage pool to allow for faster restore times. We have been looking into using them at work when I stumbled upon this interesting factoid in the description of the active data pools limitations:

  1. The server will not attempt to retrieve client files from an active-data pool during a point-in-time restore. Point-in-time restores require both active and inactive file versions. Active-data pools contain only active file versions. For optimal efficiency during point-in-time restores and to avoid switching between active-data pools and primary or copy storage pools, the server retrieves both active and inactive versions from the same storage pool and volumes.
So my question is: why don't they allow the client to restore all the active data first and then restore the inactive, or why didn't they implement a multi-session restore process when they added active-data pools to the product, thereby speeding up the restore process? With the number of P-I-T restores I do, this issue makes the whole active-data pool concept useless, not just for me but I'm sure for many of you.

Thursday, February 26, 2009

TSM 6.1 Technical Overview Presentation

IBM gave a Technical Overview of TSM 6.1 today and I thought I would post the links to the presentation for everyone. To go to the IBM presentation page click here.

To replay the presentation you can click here.

Tuesday, February 17, 2009

Who Did What?

This weekend I had an issue with a backup failing on a client using LAN-Free. When checking the TSM actlog the error I kept seeing was:

ANR9999D (Session: 265500, Origin: OD0BG-UAX001-STA) mmsshr.c(3874): ThreadId<13> Unable to obtain model type for '/dev/rmt23', rc = 46(SESSION: 265500)

So I had a good idea that the client and the TSM Storage Agent were having trouble communicating. I knew this because I had seen this error before when the TSM client and TSM Storage Agent were not at compatible release levels. Checking the TSM client, I found it at 5.4.0.0 and the Storage Agent at 5.3.4.0. It turns out an admin had upgraded the client without knowing about the Storage Agent, causing the backup failures. This is an example of what can happen when someone tries to elbow their way into another group's area without the knowledge to do it right.
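A quick way to spot this kind of mismatch is to compare the client level TSM has recorded for the node against your storage agent levels. The nodes table carries the version fields, so something along these lines works (an untested sketch; YOUR_NODE is a placeholder):

select node_name, client_version, client_release, client_level, client_sublevel from nodes where node_name='YOUR_NODE'

Then compare the result against what your storage agents report before anyone upgrades a LAN-Free client.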

Wednesday, January 14, 2009

ART: Restore Testing software for TSM

We have a new product that does something unusual.

ART (Automated Restore Testing for TSM) test-restores a random sample of files. And it does this for every node at your site, automatically, on a schedule.

We don't think anyone has ever done this kind of comprehensive restore testing before.

ART has uncovered problems at every customer site it has tested. The problems are usually operational, and often easy to fix.

If you'd like to give the free trial a whirl (it's full-featured but limited to 20% of your nodes), go to www.tsmworks.com/download. We appreciate feedback from TSM experts out there.
Thanks!

Wednesday, January 07, 2009

TSM & File System Support

First off, I hope everyone had a good holiday season. Now that we can focus on work again, I wanted to discuss the topic of file system types. I just had an incident where a Solaris server had ZFS in use for some newer file systems. The admins had added them without consulting us, and we didn't catch it because TSM didn't even attempt to back them up. Our client level was 5.4.1.0, and ZFS support was added with the 5.4.1.2 update. Once I updated the client, the file systems were backed up successfully and show the correct format. We did see that one file system was returning a type of UNKNOWN, and that should have alerted us, but we were not receiving errors or failures on the backup of the server in question.

So here is the question: how do you keep something like this from happening in the future as new, more bleeding-edge file system types are added? Obviously you need your Unix admins to work with you whenever they add a newer file system type, but if they don't alert you, and TSM doesn't report failures, how would you know? It's bound to happen as the Linux community adds newer, more robust file systems. Other than staying as current as possible with my TSM client levels (which won't always be the fix), what would you suggest?
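One partial safety net is a scheduled query that flags any filespace whose type TSM doesn't recognize (an untested sketch using the standard filespaces table):

select node_name, filespace_name, filespace_type from filespaces where filespace_type='UNKNOWN'

It wouldn't have caught the ZFS file systems TSM skipped entirely, but it would have surfaced the UNKNOWN one before we stumbled on it.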