
Wednesday, January 16, 2019

Q STATUS UPDATE!

So it turns out that TSM calculates Front-End usage automatically when you install Ops Center or have SET STATUSMONITOR ON. I had installed Operations Center when I installed the latest SP and wasn't familiar with the front-end licensing model until now. So lucky me, it shows the data our management wants without me having to go to each application server and gather it manually. Note that it is supposed to show the total in MB, but from the command line it appears as if SP didn't calculate it correctly. If you go into Ops Center and check licensing, it shows:

Front-End Capacity (MB): 148.21 

According to the Front-End licensing model, this instance's ACTIVE DATA is actually 148 TB. If you want to know the overall ACTIVE DATA the SP instance is protecting, look at the Front-End numbers. When you do a Q STATUS, if it's not reporting in TB then it is probably experiencing the bug IBM needs to correct, but the value itself is likely right, just in TB rather than MB. Ops Center will also tell you how many nodes are not reporting, and you can select that and view the list to see if any should be reporting.
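If you want to verify the collection on your own instance, the quick check from the admin command line looks like this (run SET STATUSMONITOR ON only if QUERY MONITORSETTINGS shows monitoring is off):

query monitorsettings
set statusmonitor on
q status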

Monday, January 14, 2019

Q STATUS Question

The company I work for was recently acquired by another company that is a heavy NetBackup user. They bill not based on overall occupancy but on the "Front End Usage" of their servers. I don't know how that works for accurate billing, but it's what they do, and now we are being asked to provide "Front End Capacity" numbers so they can compare. The problem is ADSM/TSM/SP has always had issues with reporting "active only" data. Just recently I was working with a select IBM provided to pull the active data for file systems, but they also stated we had to go to the individual application servers to pull DB data. Not fun. So when I recently looked at all the detail of Q STATUS, I noticed the following near the bottom.

                 SUR Occupancy (TB): 703                
            SUR Occupancy Date/Time: 2019-01-07, 10:23:18
            Front-End Capacity (MB): 148.21             
             Front-End Client Count: 408                   
            Front-End Capacity Date: 2019-01-07, 10:23:18
                   Product Offering: IBM Spectrum Protect

According to IBM these last few statements are:

SUR Occupancy (TB)
If you have an IBM Spectrum Protect Suite (SUR) license, this field specifies the SUR occupancy on the server. The SUR occupancy is the amount of space that is used to store data that is managed by the IBM Spectrum Protect products that are included in the SUR bundle.
SUR Occupancy Date/Time
Specifies the date and time when SUR occupancy data was last collected.
Front-End Capacity (MB)
Specifies the amount of primary data that is reported as being backed up by clients. Clients include applications, virtual machines, and systems. This value is used for the front-end licensing model.
Front-End Client Count
Specifies the number of clients that reported capacity usage based on the front-end licensing model.
Front-End Capacity Date
Specifies the date and time when front-end capacity data was last collected.
Product Offering
Specifies a product offering.

So what exactly is the SUR license, and does the EE license cover it? How is the data generated? My server reports front-end data at 148 MB, but if it is actually TB then that would fit more closely with overall capacity, as front-end would be about 21% of overall occupancy. IBM obviously has some function to gather front-end numbers, so why do they also have us going down the rabbit hole of gathering the application data from the clients?

Thursday, December 13, 2018

Active Data Report

So a coworker and I have been hounding IBM for a way to generate an active data report for management. For various reasons they want to be able to see how much data a server would require for rebuild/restore and also to gauge any growth. As many of you might be aware, this is something the TSM/Spectrum Protect community has been requesting for some time. So IBM created a Perl script which, when analyzed, issues the following SQL select:

(I've added continuation dashes to make the select more readable)

select -
(sum(bk.bfsize )/1048576) as front_end_size_mega_byte, -
count(bk.bfsize ) as number_of_objects -
from -
 backups b, backup_objects bk -
where -
 b.state='ACTIVE_VERSION' -
 and -
 b.object_id=bk.objid -
 and -
 b.filespace_id in -
   ( select f.filespace_id -
     from filespaces f -
     where -
      b.node_name=f.node_name -
     and -
      f.filespace_id=b.filespace_id -
     and -
      f.filespace_type not like 'API:%' -
     and -
      f.filespace_type not like 'TDP%' -
   ) -
   and -
    b.node_name in -
   ( select node_name -
     from nodes -
     where repl_mode not in('RECEIVE','SYNCRECEIVE') -
    )

You will notice, however, that the select excludes node replicas and any API or TDP data. This is purely for file system backups. Supposedly they have something for TDP/API backups but I have not worked with it yet. I will post it as soon as I have a chance to review it and make it readable.

NOTE: This select can take a considerable amount of time to run. I have seen it take upwards of 10 minutes to complete even when run outside our backup window. Be patient!
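If you plan to run it regularly, the select can be wrapped in a server script the same way I do with the QDRV/QMNT macros further down this page. ACTIVEDATA is just a name I picked, and this assumes your server exposes backup_objects to the admin select interface the way mine does; the SQL is the same select, compacted:

DEFINE SCRIPT  ACTIVEDATA DESC="Front-end active data report (file systems only)"
UPDATE SCRIPT  ACTIVEDATA "select sum(bk.bfsize)/1048576 as front_end_size_mega_byte, -"
UPDATE SCRIPT  ACTIVEDATA " count(bk.bfsize) as number_of_objects -"
UPDATE SCRIPT  ACTIVEDATA " from backups b, backup_objects bk -"
UPDATE SCRIPT  ACTIVEDATA " where b.state='ACTIVE_VERSION' and b.object_id=bk.objid -"
UPDATE SCRIPT  ACTIVEDATA " and b.filespace_id in (select f.filespace_id from filespaces f -"
UPDATE SCRIPT  ACTIVEDATA "  where b.node_name=f.node_name and f.filespace_id=b.filespace_id -"
UPDATE SCRIPT  ACTIVEDATA "  and f.filespace_type not like 'API:%' and f.filespace_type not like 'TDP%') -"
UPDATE SCRIPT  ACTIVEDATA " and b.node_name in (select node_name from nodes -"
UPDATE SCRIPT  ACTIVEDATA "  where repl_mode not in ('RECEIVE','SYNCRECEIVE'))"

After that, RUN ACTIVEDATA produces the report on demand.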

Friday, July 27, 2018

DB2 Select Example In TSM/SP

While trying to create a specific report that a manager requested I realized I needed something more powerful. Well now that TSM/Spectrum Protect runs on DB2 we have all the power we need. Here is an example of generating a report with an Inner and Outer Join.

====================================
Inner and Outer Join Example
====================================

Part 1 - Returns the columns from the two subselects below (each treated as a temporary table)

Select -
 varchar(o.node_name,45) ENTITY, -
 cast(s.SUM_AFFECTED as dec(12,0)) AFFECTED, -
 cast(s.SUM_BYTES/1073741824 as decimal(12,2)) GB_BACKED_UP, -
 cast(o.SUM_NUM_FILES as dec(15,0)) TOTAL_FILES_STORED, -
 cast(o.SUM_LOGICAL_MB/1024 as dec(12,2)) GB_TSM_OCCUPANCY -
 from -


Part 2 - Gather two columns from occupancy

select node_name, -
    sum(NUM_FILES) SUM_NUM_FILES, -
    sum(LOGICAL_MB) SUM_LOGICAL_MB -
  from occupancy -
    group by node_name


Part 3 - Gather three columns from summary for BACKUP and ARCHIVES for last 30 days

select varchar(entity,45) ENTITY, -
    sum(bytes/1073741824) SUM_BYTES, -
    sum(affected) SUM_AFFECTED -
  from summary  -
   where -
    end_time>=(current_timestamp - 30 days) -
    and activity in ('BACKUP','ARCHIVE') -
   group by entity


Putting it all together, the full select looks like this:

select -
 varchar(o.node_name,45) ENTITY, -
 cast(s.SUM_AFFECTED as dec(12,0)) AFFECTED, -
 cast(s.SUM_BYTES as decimal(12,2)) GB_BACKED_UP, -
 cast(o.SUM_NUM_FILES as dec(15,0)) TOTAL_FILES_STORED, -
 cast(o.SUM_LOGICAL_MB/1024 as dec(12,2)) GB_TSM_OCCUPANCY -
 from -
 (select node_name, -
    sum(NUM_FILES) SUM_NUM_FILES, -
    sum(LOGICAL_MB) SUM_LOGICAL_MB -
  from occupancy -
    group by node_name) o, -
 (select entity, -
    sum(bytes/1073741824) SUM_BYTES, -
    sum(affected) SUM_AFFECTED -
  from summary  -
   where -
    end_time>=(current_timestamp - 30 days) -
    and activity in ('BACKUP','ARCHIVE') -
   group by entity) s -
 where ENTITY in (select node_name from nodes where domain_name<>'MN-STD-POL-DOM') -
  and o.node_name=s.entity -
  order by ENTITY

The generated report shows the number of files and GB backed up over the last 30 days, along with the total files stored in TSM and each node's current total occupancy in GB.

ENTITY              AFFECTED        GB_BACKED_UP      TOTAL_FILES_STORED      GB_TSM_OCCUPANCY
----------   ---------------     ---------------     -------------------     -----------------
DORPORA05                920            47513.00                    8354              73263.60
IXPFLDNAS              60792               27.00                15443947               9434.27
ILDNASJ                12680                5.00                 1972680                715.60
ILDNASK                    8                0.00                   83004                197.48
ILDNASL                10739                4.00                 1649434                574.51
ILDNASM                45448               25.00                 1221367               1722.03
ILDNASM                 4386                7.00                 1122826                557.64
ILDNASN                    5                0.00                   99769                 46.52
ILDNASO                 1688                0.00                  459683                138.16
ILDNASP                 9369               11.00                 1402332               1623.92
ILDNASR                28787               29.00                  462479                457.15
ILDNASS                    5                0.00                   36771                 17.51
ILDNAST                 6329               21.00                  982728               1038.08
ILDNASU                 6848               19.00                 1548369                671.68
ILDNASV                18197               48.00                  999905               1333.92
ILDNASW                    5                0.00                  245366                116.89
RSD1W030S              59979                7.00                 5207060                547.23
RSD1W031S               3444                2.00                 5070450                473.15
RSD1W241S               6879               47.00                  517358               1782.92
RSD1W242S               6478               51.00                  995006               2109.45
RSD1W266S               4377                7.00                  492810                184.69
RSD1W267S               5700                3.00                  328391                100.95
RSD1W320C              40881               12.00                 2692541                576.50
RSD1W341C               8194               55.00                  456567               1120.11
RSD1W342C              17003                7.00                  625487                281.84
RSD1W343C               4628                2.00                  298699                 79.63
RSD1W504SV              1941               12.00                   42444                551.35
RSD1W505SV              1079              105.00                   38524               4033.06
RSD1W5067S               683               64.00                   22389               1343.82
RSD1W506SV               489                0.00                   14349                  4.53
RSD1W508SV              2662               31.00                   79340                699.12
RSD1W509SV              4017               74.00                  139073               1617.35
RSD1W510SV               698              544.00                   18086               9161.23
RSD1W511SV              1365              295.00                   65303               6588.06

Wednesday, November 22, 2017

Spectrum Protect 8.1 New DB Functions Supported - UPDATE to the UPDATE!

UPDATE (11/22/2017):

I have found another interesting select capability I was previously unaware of that builds on the LOCATE work below. We have an interesting host naming convention where I work, and we use the hostname as the NODE NAME when defining servers to TSM/SP. It is as follows:

<servername>.<datacenterID>.<OS_type>.<customer>.<domain.net>
Example:   CQSDBS01.US01.ULX.BS.US32118.NET
The issue I have is with trying to separate out the data in a report by customer. Of course each customer has their own domain, but I wanted a report broken out by the customer ID, something like this:
Customer      Occupancy GB
----------    ------------
UDC2                 38144
903                  33358
CFG                  30321
ULT01                 1942
MAQ                    639
MSM                   1557
MX                       9
MX3                      0
ELO                   6741
GUS                      4
BFG                  11234
VS                     166
PMTX                    80
POS                  14148
SAM                   8031
SUX                  18562
The problem was not with breaking out the customer ID, because the LOCATE select worked perfectly for extracting the customer IDs with their various lengths. The issue was how to sum each customer's data without hitting an error forcing me to use GROUP BY, which caused more headaches because I could not figure out how to group by the NODE_NAME from which I was extracting the customer ID. So I googled GROUP BY in DB2 with SUBSTR, since I was using a SUBSTR to extract the customer IDs. As it turns out, the GROUP BY is more flexible than I originally thought. Check this out:

Select -                           
 VARCHAR(SUBSTR(NODE_NAME, -        
 LOCATE('.',NODE_NAME, -           
 LOCATE('.', NODE_NAME, -          
 LOCATE('.', NODE_NAME)+1)+1)+1, - 
 LOCATE('.',NODE_NAME, -           
 LOCATE('.',NODE_NAME, -           
 LOCATE('.', NODE_NAME, -          
 LOCATE('.', NODE_NAME)+1)+1)+1)- -
 LOCATE('.',NODE_NAME, -           
 LOCATE('.', NODE_NAME, -          
 LOCATE('.', NODE_NAME)+1)+1)-1),8) -
 as CUSTOMER, -                    
 sum(TOTAL_MB)/1024 AS TOTAL_GB -   
from auditocc -                    
 where node_name like '%.%.%.%.%' - 
group by VARCHAR(SUBSTR(NODE_NAME, -
 LOCATE('.',NODE_NAME, -           
 LOCATE('.', NODE_NAME, -          
 LOCATE('.', NODE_NAME)+1)+1)+1, - 
 LOCATE('.',NODE_NAME, -           
 LOCATE('.',NODE_NAME, -           
 LOCATE('.', NODE_NAME, -          
 LOCATE('.', NODE_NAME)+1)+1)+1)- -
 LOCATE('.',NODE_NAME, -           
 LOCATE('.', NODE_NAME, -          
 LOCATE('.', NODE_NAME)+1)+1)-1),8) 

In DB2 I could actually group by the SUBSTR of the value I was extracting, even with it being complex with the LOCATE options in use. MIND BLOWN! 
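If the nested LOCATEs above make the idea hard to see, here is the same principle as a minimal sketch, grouping on a plain fixed-length SUBSTR instead:

select substr(node_name,1,3) as PREFIX, -
 sum(TOTAL_MB)/1024 as TOTAL_GB -
from auditocc -
group by substr(node_name,1,3)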


UPDATE (5/9/2017):

I posted on StackOverflow (a great developer/scripting/DB Q&A site) asking how I could get similar output with the functions available in versions prior to DB2 11, and a user was able to help me out with a script that returns the same data as the one below, although it's a little more complicated. So, as you know, the SUBSTR function works like this:


Select SUBSTR(FILESPACE_NAME, <START POSITION>,<LENGTH>)
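For example, with hypothetical fixed values for start and length (11 is just a made-up length):

Select SUBSTR(FILESPACE_NAME, 1, 11) as FS_PREFIX from filespaces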

The issue with the earlier DB2 versions is that I could only get the LOCATE function to play nice with my script. I was previously only able to figure out a select that got close:
Select -                                                                        
 SUBSTR(FILESPACE_NAME, LOCATE('\',FILESPACE_NAME, LOCATE('\',FILESPACE_NAME, - 
 LOCATE('\', FILESPACE_NAME, -                                                  
 LOCATE('\', FILESPACE_NAME)+1)+1)-1)+1) -                                      
 as FSNAME -                                                                    
from filespaces -                                                               
 where node_name='TEST-AP-DAG'
Which output this:

FSNAME                                                           
-----------------------------------------------------------------
TEST-MB-08\6bc391ef-a370-49a1-8f05-b1bed9e5ad55                  
Mailbox Database 1143943276\55ac1670-efc9-4301-ac5c-beb5cd2d77cb 
TEST-MB-05\11dedbd4-1757-45c8-8991-f0f7134ef210                  

I could not figure out a way to get the length. So a kind reader at StackOverflow helped me understand how to get it: subtract the location of the 3rd '\' from the location of the 4th '\', then subtract 1. This gives the correct length and allows the length to vary rather than be fixed.
Select -
 SUBSTR(FILESPACE_NAME, -
 LOCATE('\',FILESPACE_NAME, -
 LOCATE('\', FILESPACE_NAME, -
 LOCATE('\', FILESPACE_NAME)+1)+1)+1, -
 LOCATE('\',FILESPACE_NAME, -
 LOCATE('\',FILESPACE_NAME, -
 LOCATE('\', FILESPACE_NAME, -
 LOCATE('\', FILESPACE_NAME)+1)+1)+1)- -
 LOCATE('\',FILESPACE_NAME, -
 LOCATE('\', FILESPACE_NAME, -
 LOCATE('\', FILESPACE_NAME)+1)+1)-1) -
 as FSNAME -
from filespaces -
 where node_name='TEST-AP-DAG'
Here it is again, annotated against the SUBSTR template:
Select SUBSTR(FILESPACE_NAME, <START POSITION>, <LENGTH>)
Select -
 SUBSTR(FILESPACE_NAME, LOCATE('\',FILESPACE_NAME, LOCATE('\', 
 FILESPACE_NAME, LOCATE('\', FILESPACE_NAME)+1)+1)+1, -
 LOCATE('\',FILESPACE_NAME, LOCATE('\',FILESPACE_NAME, LOCATE('\', 
 FILESPACE_NAME,  LOCATE('\', FILESPACE_NAME)+1)+1)+1)-  -
 LOCATE('\',FILESPACE_NAME, LOCATE('\', FILESPACE_NAME,  -
 LOCATE('\', FILESPACE_NAME)+1)+1)-1) -
 as FSNAME -
from FILESPACES -
 where - 
  NODE_NAME='TEST-AP-DAG'
So the final script looks like this:
Select -
 VARCHAR(NODE_NAME,12) as NODE, -
 VARCHAR(SUBSTR(FILESPACE_NAME, -
  LOCATE('\',FILESPACE_NAME, -
  LOCATE('\', FILESPACE_NAME, -
  LOCATE('\', FILESPACE_NAME)+1)+1)+1, -
  LOCATE('\',FILESPACE_NAME, -
  LOCATE('\',FILESPACE_NAME, -
  LOCATE('\', FILESPACE_NAME, -
  LOCATE('\', FILESPACE_NAME)+1)+1)+1)- -
  LOCATE('\',FILESPACE_NAME, -
  LOCATE('\', FILESPACE_NAME, -
  LOCATE('\', FILESPACE_NAME)+1)+1)-1),30) -
  as Exch_DB, -
 date(backup_end) AS LAST_GOOD_BACKUP -
from filespaces -
 where - 
  node_name='TEST-AP-DAG' -
 order by Exch_DB, backup_end asc
The results are exactly like the script that only works with TSM 8.1.


NODE              EXCH_DB                          LAST_GOOD_BACKUP
-------------     ------------------------------- -----------------
TEST-AP-DAG       \TEST-MB-01                            2017-05-08
TEST-AP-DAG       \TEST-MB-02                            2017-05-08
TEST-AP-DAG       \TEST-MB-03                            2017-05-08
TEST-AP-DAG       \TEST-MB-04                            2017-05-08
TEST-AP-DAG       \TEST-MB-05                            2017-05-08
TEST-AP-DAG       \TEST-MB-TEMP                          2017-05-08
TEST-AP-DAG       \MISC-MB-TEMP                          2017-05-08
TEST-AP-DAG       \Mailbox Database 1128394465           2017-05-08
TEST-AP-DAG       \Mailbox Database 1397248650           2017-05-08



ORIGINAL POST (4/20/17):
I upgraded some of my 6.3.4 servers to TSM 8.1 recently and have found the newer DB2 version (v11.1) offers some added functions for SQL queries. I have been trying to build a report to track an Exchange DAG server's backups. The problem is that the easiest way to track the backups of the DBs is to view the last completed backup for each of the node's file spaces. That's the easy part. The hard part is that the file space name is huge.

TEST-AP-DAG\Microsoft Exchange Writer\{76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}\TEST-MB-05\0e41e645-9acf-4b80-bc85-d606e04fe4d8

TEST-AP-DAG\Microsoft Exchange Writer\{76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}\Mailbox Database 1223544393\2bc06db2-1966-4fd7-9545-f667102b0b7d

So how do you extract the DB name, in this case TEST-MB-05 and Mailbox Database 1223544393, from the file space name when the length of the DB name changes? After some investigation and trial and error I found that the LOCATE_IN_STRING function works in v8.1. Whether it works in 7.x (DB2 v10.5) I can't say, since I don't have a 7.x server to run it against. If any of you out there try this and it works, let me know. So here's the script. I think you can read it and see what I am doing. If not, let me know in the comments and I'll explain...

tsm: TSMSERV>q script DAG-EXCH-RPT f=raw

select -
 VARCHAR(NODE_NAME,12) as NODE, -
 varchar(substring(filespace_name, LOCATE_IN_STRING(filespace_name, '\', 1, 3), -
 LOCATE_IN_STRING(filespace_name, '\', 1, 4) -  LOCATE_IN_STRING(filespace_name, '\', 1, 3)),30) as Exch_DB, -
 date(backup_end) AS LAST_GOOD_BACKUP -
from FILESPACES -
 where -
  node_name='TEST-AP-DAG' -
 order by Exch_DB, backup_end asc


Here is sample output:

NODE              EXCH_DB                          LAST_GOOD_BACKUP
-------------     ------------------------------- -----------------
TEST-AP-DAG       \TEST-MB-01                            2017-04-18
TEST-AP-DAG       \TEST-MB-02                            2017-04-18
TEST-AP-DAG       \TEST-MB-03                            2017-04-18
TEST-AP-DAG       \TEST-MB-04                            2017-04-18
TEST-AP-DAG       \TEST-MB-05                            2017-04-18
TEST-AP-DAG       \TEST-MB-TEMP                          2017-04-18
TEST-AP-DAG       \MISC-MB-TEMP                          2017-04-18
TEST-AP-DAG       \Mailbox Database 1128394465           2017-04-18
TEST-AP-DAG       \Mailbox Database 1397248650           2017-04-18

If another function would work better feel free to correct the script and leave it for everyone in the comments. Now to go and find more functions I can use to manipulate my data!
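In that spirit, one candidate I want to try next is REGEXP_SUBSTR, which DB2 11.1 documents as a built-in scalar function. This is an untested sketch (I have not confirmed the TSM select parser passes regex functions through), but it should pull the third backslash-delimited segment in a single call:

select varchar(node_name,12) as NODE, -
 varchar(regexp_substr(filespace_name, '\\[^\\]+', 1, 3),30) as Exch_DB, -
 date(backup_end) as LAST_GOOD_BACKUP -
from filespaces -
 where node_name='TEST-AP-DAG' -
 order by Exch_DB, backup_end asc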

Wednesday, April 08, 2015

Q MOUNT Research

So in my previous post I spoke of a "better Q MOUNT" using a script I wrote. The problem is the Q MOUNT command does not reference an accessible TSM table. It appears to gather the information on the mount, with the associated process or session, from various tables (as seen with Q MOUNT F=D). So I became more intrigued with how TSM gathers the info and decided to bypass TSM altogether and research on the DB2 side. Well, out of 779 tables in the TSM 7.1 DB2 database I did not see any with MOUNT in the name (you can find a list of all the TSM 7.1 DB tables here). There are definitely some tables in the list that are hidden from TSM or unused, but I queried a number of them looking for any with fields (columns) that correlated to the Q MOUNT command. The closest I came was the MMS_DRIVES table, which has the following fields.

$ db2 "select char(COLNAME,24) as COLNAME,char(TYPENAME,24)as DATATYPE,LENGTH,SCALE from syscat.columns where tabname='MMS_DRIVES' and tabschema='TSMDB1'"

COLNAME                  DATATYPE                 LENGTH      SCALE
------------------------ ------------------------ ----------- ------
ACSDRVID                 VARCHAR                          126      0
CLEANFREQ                INTEGER                            4      0
DEVICE                   VARCHAR                           65      0
DEVTYPE                  INTEGER                            4      0
DEV_INQ                  SMALLINT                           2      0
DRIVENAME                VARCHAR                           31      0
ELEMENT                  INTEGER                            4      0
INQUIRY                  VARCHAR                         1025      0
KBYTES_PROC              BIGINT                             8      0
KBYTES_PROC_HI           INTEGER                            4      0
LIBNAME                  VARCHAR                           31      0
ONLINE                   INTEGER                            4      0
OWNER                    VARCHAR                           65      0
OWNERVOL                 VARCHAR                         1025      0
RD_FORMAT                BIGINT                             8      0
SERIAL                   VARCHAR                           65      0
UPDATE_DATE              TIMESTAMP                         10      6
UPDATOR                  VARCHAR                           65      0
WR_FORMAT                BIGINT                             8      0
WWN                      VARCHAR                           17      0


There should be a table that shows the drive, what tape is mounted, what process or session it's assigned to, and who owns it, just like a Q MOUNT F=D. I haven't found it yet.

NOTE: Of the 779 tables I listed, a more refined search shows 154 tables that correspond to what we seem to be able to access from the TSM admin command line.
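For reference, the table hunt itself is just a query against the DB2 catalog from the instance owner's shell, along these lines:

$ db2 "select char(TABNAME,40) as TABNAME from syscat.tables where tabschema='TSMDB1' and tabname like '%MOUNT%'"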

Tuesday, March 24, 2015

A Better Q MOUNT?

I was playing around with my QDRV script, which I feel gives a better showing than Q DRIVE, and realized that I could technically produce a better Q MOUNT display than the built-in TSM command provides. FYI: unlike the Q MOUNT command, my QMNT script shows the device definition for the current DRIVE_OWNER, such as a storage agent or library client. I have provided both my scripts for you; let me know what you think. Suggestions are welcome.

QDRV Script

select cast((library_name)as char(15)) as LIBRARY_NAME, -
cast((DRIVE_NAME)as char(16)) as DRIVE_NAME, -
cast((drive_state)as char(10)) as DRIVE_STATE, -
cast((volume_name)as char(8)) as VOL_NAME, cast((online)as char(10)) as ONLINE, -
cast((ALLOCATED_TO)as char(20)) as DRV_OWNER from drives order by library_name, drive_name

QDRV Macro

DEFINE SCRIPT  QDRV DESC="Show tape drive status"
UPDATE SCRIPT  QDRV "select cast((library_name)as char(15)) as LIBRARY_NAME, -"
UPDATE SCRIPT  QDRV "cast((DRIVE_NAME)as char(16)) as DRIVE_NAME, -"
UPDATE SCRIPT  QDRV "cast((drive_state)as char(10)) as DRIVE_STATE, -"
UPDATE SCRIPT  QDRV "cast((volume_name)as char(8)) as VOL_NAME, cast((online)as char(10)) as ONLINE, -"
UPDATE SCRIPT  QDRV "cast((ALLOCATED_TO)as char(20)) as DRV_OWNER from drives order by library_name, drive_name"

QMNT Script

select varchar(a.library_name,15) as LIB_NAME, -
cast((a.DRIVE_NAME)as char(16)) as DRIVE_NAME, -
cast((a.drive_state)as char(10)) as DRIVE_STATE, -
cast((a.volume_name)as char(8)) as VOL_NAME, -
varchar(b.device,12) as device, cast((a.online)as char(10)) as ONLINE, -
cast((a.ALLOCATED_TO)as char(20)) as DRV_OWNER from drives a, paths b -
where a.library_name=b.library_name and a.drive_name=b.destination_name -
and b.source_name=a.ALLOCATED_TO order by a.drive_name

QMNT Macro

DEFINE SCRIPT  QMNT DESC="Show tape mount status"
UPDATE SCRIPT  QMNT "select varchar(a.library_name,15) as LIB_NAME, -"
UPDATE SCRIPT  QMNT "cast((a.DRIVE_NAME)as char(16)) as DRIVE_NAME, -"
UPDATE SCRIPT  QMNT "cast((a.drive_state)as char(10)) as DRIVE_STATE, -"
UPDATE SCRIPT  QMNT "cast((a.volume_name)as char(8)) as VOL_NAME, -"
UPDATE SCRIPT  QMNT "varchar(b.device,12) as device, cast((a.online)as char(10)) as ONLINE, -"
UPDATE SCRIPT  QMNT "cast((a.ALLOCATED_TO)as char(20)) as DRV_OWNER from drives a, paths b -"
UPDATE SCRIPT  QMNT "where a.library_name=b.library_name and a.drive_name=b.destination_name -"
UPDATE SCRIPT  QMNT "and b.source_name=a.ALLOCATED_TO order by a.drive_name"
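Once defined, either one runs like any other server script:

run qdrv
run qmnt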

Monday, January 12, 2015

SQL: CASE and CONCAT

So I was trying to build a better report of TSM client levels, replacing the crappy Windows OS level with the correct version using CASE, but I was worried whether CASE would work with two fields being concatenated. Well, it does, and quite well. The only issue is that if the platform_name is longer than the varchar setting, you will receive a warning at the end of the select (the select runs successfully but will truncate any results for platform_name, which is easily fixed).

select case -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 5.00' then 'WinNT 2000' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 5.02' then 'WinNT 2003' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.00' then 'WinNT 2008' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.01' then 'WinNT 2008 R2' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.02' then 'WinNT 2012' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.03' then 'WinNT 2012 R2' -
  else varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) -
end -
  AS platform_name, -
cast(client_version as char(1)) || '.' || cast(client_release as char(1)) || '.' || cast(client_level as char(1)) || '.' || cast(client_sublevel as char(1)) as TSM_Version, -
count(distinct tcp_name) AS COUNT -
from nodes -
 where LASTACC_TIME>(CURRENT_TIMESTAMP - 70 DAYS) -
 and node_name like '%SU%' -
group by platform_name, client_os_level, client_version, client_release, client_level, client_sublevel


The results were exactly what I wanted.

PLATFORM_NAME          TSM_VERSION           COUNT
------------------     -----------     -----------
SUN SOLARIS 5.9        5.2.2.0                   4
WinNT 2000             5.3.0.0                   1
WinNT 2000             5.3.6.0                   1
WinNT 2003             5.3.0.0                   4
WinNT 2003             5.3.2.0                   6
WinNT 2003             5.3.4.0                   6
WinNT 2003             5.4.0.2                   3
WinNT 2003             5.4.1.4                   2
WinNT 2003             5.4.2.0                   2
WinNT 2003             5.4.3.0                   2
WinNT 2003             5.5.0.4                   8
WinNT 2003             5.5.1.0                   1
WinNT 2003             5.5.2.0                   1
WinNT 2003             5.5.3.0                   2
WinNT 2003             6.1.3.0                   1
WinNT 2008             5.5.0.4                   1
WinNT 2008 R2          6.1.4.0                   3
WinNT 2008 R2          6.2.4.0                   2
WinNT 2008 R2          6.3.0.0                   2
WinNT 2012             6.4.1.0                   1

Tuesday, October 22, 2013

Archive Report

Where I work we have a process that bi-monthly generates a mksysb and then archives it to TSM. A recent attempt to use an archived mksysb revealed that the mksysb process sometimes does not create a valid file, yet it is still archived to TSM. So the other AIX admins asked me to generate a report showing the amount of data that was archived and the date it occurred. Now, I would have told them it was impossible if they had asked for data from the backups table, but our archives table is not as large, so I gave it a go.

The first problem was determining the best table(s) to use. I could use the summary table, but it doesn't tell me which schedule ran, and some of these UNIX servers have archive schedules other than the mksysb process. The idea I came up with was to query the contents table and join it with the archives table on the object_id field. Here's an example of the command:

select a.node_name, a.filespace_name, a.object_id, cast((b.file_size/1048576) as decimal(9,2)) AS SIZE_MB, cast((a.ARCHIVE_DATE) as date) as ARCHIVE from archives a, contents b where a.node_name=b.node_name and a.filespace_name='/mksysb_apitsm' and a.filespace_name=b.filespace_name and a.object_id=b.object_id and a.node_name like 'USA%'

This select takes at least 20 hours to run across 6 TSM servers. I guess I should be happy it returns at all, but TSM runs on DB2 now! It should be a lot faster, so I am wondering if I could clean up the script or add something that would let it use the indexes better. I am considering dropping the "like" and just matching node_name between the two tables. Would putting the node_name match first and then matching object_id be faster? Would I be better off running it straight out of DB2? Suggestions appreciated.
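For anyone experimenting along with me, this is the direction I'm leaning: the same query written as an explicit join with the LIKE dropped. This is an untested sketch, and I can't yet say whether DB2 builds a better access plan for it:

select a.node_name, a.filespace_name, a.object_id, -
 cast(b.file_size/1048576 as decimal(9,2)) as SIZE_MB, -
 cast(a.archive_date as date) as ARCHIVE -
from archives a join contents b -
 on a.node_name=b.node_name -
 and a.filespace_name=b.filespace_name -
 and a.object_id=b.object_id -
where a.filespace_name='/mksysb_apitsm'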