Got this info from Dmitry Dukhov, creator of TSMExplorer:
The registration procedure for obtaining a free license, and TSMExplorer for TSM version 5, are available at
http://www.s-iberia.com/download.html
Sunday, December 08, 2013
Tuesday, October 22, 2013
Archive Report
Where I work we have a process that generates a mksysb on a bi-monthly schedule and then archives it to TSM. Recently an attempt to use an archived mksysb revealed that the mksysb process sometimes fails to create a valid file, yet the result is still archived to TSM. So the other AIX admins asked me to generate a report showing how much data was archived and on what date. Now, I would have told them it was impossible if they had asked for data from the backups table, but our archives table is not nearly as large, so I gave it a go.
The first problem was determining the best table(s) to use. I could use the summary table, but it doesn't tell me which schedule ran, and some of these UNIX servers have archive schedules other than the mksysb process. The idea I came up with was to query the contents table and join it with the archives table on the object_id field. Here's an example of the command:
select a.node_name, a.filespace_name, a.object_id, cast((b.file_size/1048576) as decimal(9,2)) as SIZE_MB, cast((a.archive_date) as date) as ARCHIVE from archives a, contents b where a.node_name=b.node_name and a.filespace_name='/mksysb_apitsm' and a.filespace_name=b.filespace_name and a.object_id=b.object_id and a.node_name like 'USA%'
This select takes at least 20 hours to run across 6 TSM servers. I suppose I should be happy it returns at all, but TSM's database is DB2! It should be much faster, so I am wondering if I can clean up the query or add something that would help it use the indexes more effectively. I am considering dropping the "like" and just matching node_name between the two tables. Would putting the node_name match first and the object_id match second be faster? Would I be better off running it directly against DB2? Suggestions appreciated.
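A side note on post-processing: once the select output is captured comma-delimited (for example with dsmadmc's -COMMA and -DATAONLY=YES options), the per-node/per-date totals the other admins asked for fall out of a short awk pass. A sketch, where the filename and the exact column order are assumptions based on the select above:

```shell
#!/bin/sh
# Sketch: total the archived MB per node and date from the select
# output, assuming it was captured comma-delimited into
# archive_report.csv with the columns
# NODE_NAME,FILESPACE_NAME,OBJECT_ID,SIZE_MB,ARCHIVE_DATE.
summarize_archives() {
  awk -F',' '
    { total[$1 "," $5] += $4 }                 # key: node,date
    END { for (k in total) printf "%s,%.2f\n", k, total[k] }
  ' "$1" | sort
}
```

Usage would be something like: summarize_archives archive_report.csv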
Monday, August 12, 2013
TSM Command Processing Tip
I constantly have to run a large list of commands, and sometimes I just don't want to deal with running them through a shell script. So what's the best way to run a list of commands without TSM prompting for a YES/NO on each one? I could run a batch of commands with the -NOPROMPT option from an admin command line, but sometimes that's more work than I want to deal with. There has to be a better way. The simple answer is to define the TSM server to itself and use that server name to route the command when you run it. Here's an example: I have to delete empty volumes from storage pools rather than wait for the one-day reuse delay.
select 'ustsm07:del vol', cast((volume_name) as char(8)) as VOLNAME from volumes where pct_utilized=0 and devclass_name <> 'DISK'
RESULTS:
Unnamed[1] VOLNAME
---------------- ---------
ustsm07:del vol K00525
ustsm07:del vol K00526
ustsm07:del vol J00789
ustsm07:del vol J00197
ustsm07:del vol J00303
ustsm07:del vol J01172
ustsm07:del vol J01233
ustsm07:del vol J00850
ustsm07:del vol J00861
ustsm07:del vol K00018
ustsm07:del vol J01613
ustsm07:del vol J01624
ustsm07:del vol J01671
ustsm07:del vol J01687
ustsm07:del vol K00116
ustsm07:del vol K00130
ustsm07:del vol K00340
ustsm07:del vol K00348
tsm: USTSM07>USTSM07:del vol K00525
ANR1699I Resolved USTSM07 to 1 server(s) - issuing command DEL VOL K00525 against server(s).
ANR1687I Output for command 'DEL VOL K00525' issued against server USTSM07 follows:
ANR2208I Volume K00525 deleted from storage pool TAPE_A.
ANR1688I Output for command 'DEL VOL K00525' issued against server USTSM07 completed.
ANR1694I Server USTSM07 processed command 'DEL VOL K00525' and completed successfully.
ANR1697I Command 'DEL VOL K00525' processed by 1 server(s): 1 successful, 0 with warnings, and 0 with errors.
So I copy the data and paste it into my command line, and because I am using server command routing (even to the same server I am on), TSM does not prompt for confirmation. So make sure you have defined your TSM servers to themselves so you can take advantage of this simple feature. Also note that TSM won't delete a tape with data on it, so I leave the DISCARD=YES option off and only empty tapes are deleted.
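If pasting gets old, the same trick works as a macro: save the select output to a file, keep only the routed command lines, and run the result with the MACRO command from the admin command line. A sketch, where the filename and the hard-coded ustsm07 prefix are assumptions matching the example above:

```shell
#!/bin/sh
# Sketch: pull just the routed "del vol" lines out of the saved
# select output (delvol.out is an assumed filename), dropping the
# column header and separator lines.
build_delvol_macro() {
  # keep only lines that start with the server:command prefix
  grep '^ustsm07:del vol' "$1"
}
```

Then something like: build_delvol_macro delvol.out > delvol.mac, followed by "macro delvol.mac" in the admin session — command routing still suppresses the confirmation prompts.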
Labels:
Administration,
command routing,
commands,
noprompt
Wednesday, July 31, 2013
IBM P7 Strange Behaviour
We have a P7 frame with 4 LPARs that are used as TSM storage agents, from which snapshots of our SAP DBs are mounted for backup. They had always had great performance until one LPAR had a bad HBA that phoned home and was replaced. After the replacement, backup performance dropped dramatically from 800MB/s to 150MB/s, and overall performance of the server would drastically drop as well. When the DB requiring backup is over 25TB that is a huge hit, and we could not find the root cause.
At first IBM said our Hitachi disk was the problem. We eliminated that right away, so we then replaced the new HBA, checked our fiber, and checked the GBIC, and nothing fixed the situation. During the first week I asked the IBM service technician if we could possibly have a bad drawer or slot, and he emphatically said "No! If you did you would have errors all over the place." So we checked firmware, moved cards within the frame (again), and double-checked the fiber; by then we were going into the third week. I kept asking if something could be wrong with the drawer/slots and kept getting the same answer. The reason I suggested it was previous experience: I have seen hardware go bad without going totally "out."
After exhausting everything other than the slots, IBM finally replaced them. Voila! Backup speeds went back to normal and the system degradation during backups disappeared. So the slots/drawer were the issue. No errors pointing to a slot/drawer hardware problem ever occurred, but something caused the slots to degrade performance. It took almost a month to resolve, and I wouldn't say IBM support was very thorough; at times they tried to push the problem off to other vendors (i.e. Hitachi). I can only suggest that in the future you trust your instincts and push the CEs to follow down every avenue. My headache is over, but now the RCA begins.
Friday, May 31, 2013
TSM 7
I recently attended an IBM technical briefing on various storage-related topics, including TSM. While it was under NDA, I can say that some of the items we discussed showed promise. I'll be able to discuss more after IBM Pulse this month, but what I can say is that the new Admin Center is pretty slick. It has some nice features and will finally make up for the folly that was the ISC. IBM stressed that they are listening to users and taking their requests and suggestions to try to develop a tool everyone will find useful. That was surprising news, seeing as how the majority of people complained about the ISC and it took 7+ years to finally get a replacement. I will say this in defense of the TSM developers: a lot of the ISC push came from above, and they were somewhat forced into that fiasco. The TSM 7 DB will scale larger and handle more objects, and they are really ramping up the capabilities of the client deployment module. More info to come in the next couple of weeks.
One item that did come up was the issue of Export and Backup Set tapes being unencrypted from TSM due to the key issue. What I suggested was that they allow TSM servers to backup each others keys and also utilize them so Exports and Backup-Sets could be encrypted, but still shared between TSM servers. Hope they find some way to add that capability.
We also had a ProtecTIER review, and it has a lot of promise. I know I have been a Data Domain fanboy for some time. While I didn't see anything that integrates ProtecTIER dedupe with TSM directly, it did show some nice growth capabilities. I'm excited to see how well it works, but I'm fighting a study that shows tape is still the more cost-effective backup solution.
I'll post more once PULSE is complete (mid-June) so stay tuned!
Labels:
Admin Interface,
Backupset,
De-Duplication,
Export,
ProtecTIER,
TSM
Monday, March 25, 2013
Cleaning tape cycles CLI
mtlib -l -q L | grep 3592
#IBM 3584, A.K.A. IBM TS3500#
/opt/java6/bin/java -jar TS3500CLI.jar -a -viewCleaningCartridges -u -p | awk -F',' '{total=total+$9;}END{print total}'
Insert it into your own morning TSM report script! ;-)
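And since the whole point is a morning report, the awk total can feed a small threshold check. A sketch; the function name and the default threshold of 5 remaining cycles are my own assumptions:

```shell
#!/bin/sh
# Warn when the total remaining cleaning cycles falls below a
# threshold (the default of 5 is an assumption; adjust to taste).
check_cleans() {
  total=$1
  threshold=${2:-5}
  if [ "$total" -lt "$threshold" ]; then
    echo "WARNING: only $total cleaning cycles left"
  else
    echo "OK: $total cleaning cycles left"
  fi
}
```

Pipe the awk total from the command above straight in, e.g. check_cleans "$(/opt/java6/bin/java -jar TS3500CLI.jar ... | awk ...)".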
Wednesday, March 06, 2013
Data Domain vs. Protectier
Where I am currently employed we are looking to replace our 3592-based library with a deduplication solution. Currently the higher-ups are leaning towards IBM ProtecTIER without having thoroughly investigated any other options. Having used Data Domain at my previous employer, I was somewhat concerned that ProtecTIER would be a bad fit for our environment. I have had some run-ins with people who have used ProtecTIER, and when you compare them with people who have used Data Domain (including myself), you immediately see the difference in how they talk about the two products. So I was hoping to find a good write-up with in-depth details comparing the two solutions, and it took a blogger like me to provide a great comparison. If you would like a good overview of how Data Domain and ProtecTIER stack up against one another in technology and performance, check out the following link. It's very informative, and it solidifies why I would prefer Data Domain.
Deduplication: Data Domain Vs. ProtecTIER Performance
One item that was not covered was the NFS capabilities of both. While I used VTL functionality with Data Domain, I was a HUGE NFS proponent. You can save a lot of money over a TSM TDP + LAN-Free solution by using NFS with 10Gb Ethernet for your DB backups (since IBM's licensing costs are still questionable). When I first explored ProtecTIER it did not yet have NFS capabilities, so I'd like to see an NFS performance comparison between the two products.
Labels:
Backup,
Data Domain,
De-Duplication,
Disk,
Performance,
ProtecTIER,
VTL
Thursday, January 31, 2013
Tivoli Storage Manager Next Administrative Interface / Beta Programme
It'll be called: Operations Center (sounds a bit like Admin Center ;-)
I hope they didn't just change the design!
Thursday, January 24, 2013
Upgrading A TSM 5.5 Library Manager to 6.x
I just helped (sort of) perform an upgrade of two TSM library managers from TSM 5.5 to TSM 6.2. First off, I'd like to say that the full upgrade process was really not worth the time it took. Our library manager had a 1GB DB and contained no client data. When the library controller contains no client data, you can easily move from 5.x to 6.x without all the headaches of a DB upgrade through the extract-and-insert process (which took an hour to complete once we started the insert). Here are the basic steps to easily upgrade a TSM library manager:
NOTE: This only works if you do not perform ANY backups (Client or NAS) to the library manager.
- Backup the TSM library manager DB
- Backup Volhist and Devconfig
- Copy all define statements from devconfig into a TSM macro
- Uninstall TSM 5.5
- Install TSM 6.x
- Follow the steps to create a new TSM 6.x server
- Start the TSM 6.x server
- Run the macro to redefine all the servers, devclasses, libraries, drives, and paths
- Check-in the tapes to the library
- Run audit library from each of the library clients
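The "copy all define statements from devconfig" step above can be sketched as a one-line filter; the filenames are assumptions, and real devconfig files can wrap long definitions, so review the macro before running it on the new 6.x server:

```shell
#!/bin/sh
# Sketch: pull the DEFINE lines (servers, devclasses, libraries,
# drives, paths) out of a saved devconfig file into a macro that
# can be run against the new TSM 6.x server.
make_redefine_macro() {
  grep -i '^define ' "$1"
}
```

Usage would be something like: make_redefine_macro devconfig.out > redefine.mac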
Wednesday, January 09, 2013
Do Large Corporations Need Tape?
I am dealing with a situation where I have gone from a tapeless TSM environment to the standard TSM tape model, and I have to wonder: why would you use tape when you have multiple data centers? If you have multiple data centers, why not back up to disk and replicate that disk to a disk solution at the alternate DC? I did this with Data Domains and it made life so much easier. Multiple DR tests showed it was efficient and successful; of course this also utilized deduplication, so disk usage and costs didn't get out of hand. So I ask: why is any large corporation still using tape?