TSM Topics Feed

Saturday, May 30, 2009

TSMExpert Changes

So I have recently made numerous changes to the site and hope you all like them. I finally got the inline comment piece working and updated the categories list and template. Here is what I have realized:


  1. Blogger is somewhat restrictive when your site outgrows normal blogging

  2. WordPress.com is too limited in its customization to use as an alternative

  3. I need to consider a host that offers WordPress for more functionality
    (GoDaddy! probably)

  4. I need to learn XHTML and CSS

  5. This took too much of my spare time
    (I have 4 kids, I don't have spare time!)


I hope you like the changes, and if you have any suggestions or comments, please share them. Also, the contributors list will be back up once I figure out how they changed the widget.

Friday, May 22, 2009

Calculating Active Data

I was recently asked to calculate the amount of active data in TSM storage for file system backups (not TDPs) and had some interesting results. If you search "TSM active data" in Google, the first result is an IBM support doc that explains how to calculate active data to help size an active-data storage pool. IBM recommends that you use the EXPORT NODE command with PREVIEW=YES to determine the amount of active data. In theory this should work well, but to process the request TSM has to analyze the backups table and who knows what else to gather the data. I had 10 instances to gather the information from, all varying in TSM DB size and amount of managed data stored. My smallest DB is a new instance at 25GB and my largest is 155GB, and DB size had no bearing on how fast the information was calculated. The TSM instance with the largest DB completed the task in over two days (YES, TWO DAYS!). Two TSM instances were still running the EXPORT NODE query after THREE DAYS, and they had moderate to large sized DBs.

So what caused this problem? It all comes down to the number of files TSM has to inspect. The two instances that never completed the query have large numbers of Windows nodes and the most registered nodes overall. They seemed to crawl through the process: where they should have calculated into the ten-to-twenty TB range, as the next largest instance did after just over two days, the problem two were still in the 6 to 7 TB range and increasing slowly. My only explanation (and this is a guess) is that Windows servers tend to have hundreds of thousands, if not millions, of files, and TSM gets bogged down trying to inspect them all. I didn't notice a performance impact, but IBM claims it is a resource-intensive task that should be run during non-peak hours. How can you do that when it runs 24 hours or more?

Finally, after three days with no end in sight, I canceled the processes and now have to figure out some other way to calculate the amount of active data these nodes have stored. I could calculate (i.e. guesstimate) it by summing the amount of space used per file space.

Example:

select cast(sum(capacity*(pct_util/100)) AS decimal (18,2)) As Total_Used_Space from filespaces where node_name in (select node_name from nodes where domain_name like '%STD%') 
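If you want just the raw number without the usual command-line clutter, dsmadmc's -dataonly option helps. A sketch (the admin ID and password here are placeholders; adjust for your environment):

Example:

dsmadmc -id=admin -password=secret -dataonly=yes "select cast(sum(capacity*(pct_util/100)) AS decimal(18,2)) As Total_Used_Space from filespaces where node_name in (select node_name from nodes where domain_name like '%STD%')"

That makes it easy to script the same query across all 10 instances rather than logging in to each one.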

The problem is that this will not be accurate and will probably cause me to oversize any active-data pool I create. Now that's not a horrible thing (more space is always better than too little), but the whole process seems too time-consuming for something that, on the surface, should be fairly easy to calculate. This is where I hope the new DB2 database can help, but until someone has it up and running and can try this process, we can't say whether there is a reasonable way to find the total active data in larger TSM instances.


Addendum

Are you wondering why I used the '%STD%' filter in the select statement above? The team here at Infocrossing separates file system backups and application (TDP) backups into different domains using a standardized naming process. This is great because it also allowed us to run the EXPORT NODE command using wildcards for the specific domains to include in the export query.

EXPORT NODE * DOMAINS=*STD* FILEDATA=ALLACTIVE PREVIEW=YES

I highly recommend you follow a similar process when creating domains, and even schedules, to make it easier to process groups of nodes. You could, for example, create a WIN-STD-DOM domain for standard Windows backups and, as we do for our TDPs, a WIN-APP-DOM domain for application backups. These are just examples, but they can make life easier.
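For anyone starting fresh, the setup is just the standard DEFINE DOMAIN and REGISTER NODE commands. The names, descriptions, and password below are only examples:

Example:

DEFINE DOMAIN WIN-STD-DOM DESCRIPTION="Windows file system backups"
DEFINE DOMAIN WIN-APP-DOM DESCRIPTION="Windows TDP application backups"
REGISTER NODE WINSRV01 password DOMAIN=WIN-STD-DOM

Once nodes are registered this way, DOMAINS=*STD* in the EXPORT NODE command picks up every standard file system domain in one shot.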

Thursday, May 21, 2009

New Look!

I am trying a new look for the site. I made the change to resolve some problems with commenting and items that broke over the course of the last year. I could have used the old template but thought I would try something new. Let me know if you like it. If not I can always go back to the clean and simple template I was using. If anyone has a suggestion for look and feel leave a comment. I love new ideas.

Thursday, May 14, 2009

TSM 6.1 Upgrade - Need To Know!

So in researching the TSM 5.5 to 6.1 upgrade, I have come across a number of issues that should be compiled into a single list to keep admins informed. So here goes.

Things to know about TSM 6.1:
  • Although IBM states the 6.1 DB should be the same size as the 5.5 DB, the TSM community is reporting that as much as 4x the space is required
  • It does not support raw volumes for the TSM DB and log
  • It has added archive logging to the existing log process (i.e. more disk space is required to run)
  • It cannot generate or define backup sets at this time
  • It does not support NAS or backup set TOCs at this time
  • It will not allow a DB upgrade if TOCs or backup sets exist on the current TSM server
  • NQR (No Query Restore) is now slower due to how 6.1 processes the data before sending it to the client
I have been hearing of upgrades taking an extremely long time, so be aware that the source TSM instance has to be down for the duration of the upgrade when doing it across the LAN or on the same server. Even with the media method, the source instance has to be down to perform the DB extraction, since 6.1 cannot use a standard TSM DB backup.
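You can check for the backup set and TOC blockers yourself before scheduling an upgrade. A rough sketch (I'm assuming the backupsets table is queryable on your 5.5 server; verify the output against your own instance):

Example:

select count(*) from backupsets
query backupset

If both come back empty, the backup set/TOC restriction should not stop your upgrade; if not, you'll need to let those backup sets expire or delete them first.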


Tuesday, May 05, 2009

TSM 6.1 Upgrade - FYI

For those of you looking to upgrade your current TSM instances to 6.1, take note of this issue with upgrading the DB.

At this time a database containing backup sets or tables of contents (TOCs) cannot be upgraded to V6.1. The database upgrade utilities check for defined backup sets and existing TOCs. If either exists, the upgrade stops and a message is issued saying that the upgrade is not possible at the time. In addition, any operation on a V6.1 server that tries to create or load a TOC fails.

When support is restored by a future V6.1 fix pack, the database upgrade and all backup set and TOC operations will be fully enabled.


I haven't heard whether this will be fixed in the first patch of 6.1, but keep it in mind when deciding whether to upgrade your system or start from scratch.