Using the Universal Agent Script Data Provider
Note that this does not actually delete the metafile from the metafiles directory; it just removes the entry from the KUMPCNFG file.
Situations
With the script data now being collected, situations can be created to report problems with the cluster. Part 2 of this tip will go through the process of creating a situation that detects problems with cluster resources.
cd %CANDLEHOME%\TMAITM6
kumpcon delete windows_cluster.mdl
If the change involved a modification to an attribute or attribute group, the version of the application will automatically be incremented, e.g. hostname:WINDOWS_CLUSTER01 will become hostname:WINDOWS_CLUSTER02. The previous version of the application will now be greyed out in the TEP; see here for information about how to remove obsolete applications from the TEP.
Deleting a metafile
To stop an application from running, the corresponding metafile needs to be deleted.
kumpcon refresh
Modifying the metafile
During the development process it is likely that you will need to make changes to the metafile. Once a change has been made to a metafile, it needs to be refreshed before the changes are activated.
The import process adds the metafile to the %CANDLEHOME%\TMAITM6\work\KUMPCNFG file so that it is automatically started the next time the Agent starts.
Assuming the import was successful, you should now see that there is a Navigator change pending in the TEP.
By drilling into the Universal Agent, you should now see a Navigator item called hostname:WINDOWS_CLUSTER01, where WINDOWS_CLUSTER is the name of our application, as defined in the metafile, and 01 is the version of the metafile.
Under that should be items representing the three attribute groups: GROUP_STATUS, RESOURCE_STATUS and NODE_STATUS. A default report workspace will be assigned to each attribute group showing the data returned by the script.
cd %CANDLEHOME%\TMAITM6
kumpcon import windows_cluster.mdl
As long as you created the metafile in the metafiles directory, there is no need to specify the full path to the metafile. The default location for metafiles is controlled by the KUMP_META_PATH variable defined in %CANDLEHOME%\TMAITM6\KUMENV.
Assuming the file contains no syntax errors, one of the last lines of output should be:
KUMPV000I Validation completed successfully
At this point, you can select the Import option (by typing import) that will import the metafile so that the application is activated. Alternatively you can cancel and perform the import at a later date.
Importing the metafile
Once a metafile has been successfully validated, it needs to be imported to activate it. Again, this is done via the kumpcon command.
cd %CANDLEHOME%\TMAITM6
kumpcon validate windows_cluster.mdl
Note that we are running the same cluster_status.pl script for each attribute group but are passing a different argument each time.
Validating the metafile
Before we load the metafile it is good practice to validate it to make sure that there are no syntax errors. This can be done via the kumpcon command.
//APPL Windows_Cluster
//NAME Group_Status K 300 AddTimeStamp Interval=180
//SOURCE SCRIPT c:\perl\bin\perl c:\cluster_status.pl "group"
//ATTRIBUTES ','
Group D 128 KEY
Node D 16
Status Z 255
//NAME Node_Status K 300 AddTimeStamp Interval=180
//SOURCE SCRIPT c:\perl\bin\perl c:\cluster_status.pl "node"
//ATTRIBUTES ','
Node D 20 KEY
NodeID N 3
Status Z 255
//NAME Resource_Status K 300 AddTimeStamp Interval=180
//SOURCE SCRIPT c:\perl\bin\perl c:\cluster_status.pl "resource"
//ATTRIBUTES ','
Resource D 128 KEY
Group D 128
Node D 16
Status Z 255
The ',' indicates that a comma should be used as the delimiter for the data returned by the script.
This creates an attribute called Group, which is of type D, i.e. DisplayString (a series of characters), with a maximum expected size of 128 characters. The KEY parameter sets this attribute as the unique key in the attribute group, and as such it will be used to correlate the data collected. As the information we are collecting is state based, e.g. a resource is either online or offline, we only want to see the current status displayed in the TEP, as opposed to a new data row at each sample point.
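The effect of a keyed attribute group can be illustrated with a small Python sketch (purely illustrative, not Universal Agent internals): a row whose key matches an existing row replaces it, rather than accumulating as a new row.

```python
def apply_sample(table: dict, rows: list, key: str) -> dict:
    """Merge a sampled set of rows into the current view.

    Because the attribute group is keyed, a row whose KEY attribute
    matches an existing row replaces it instead of being appended,
    so only the current status of each group is retained.
    """
    for row in rows:
        table[row[key]] = row
    return table

view = {}
apply_sample(view, [{"Group": "Cluster Group", "Status": "Online"}], "Group")
apply_sample(view, [{"Group": "Cluster Group", "Status": "Offline"}], "Group")
# view now holds a single row for "Cluster Group", with the latest status
```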
The data type Z takes all the remaining data from the source, up to the maximum size.
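How the delimiter and the Z type interact can be sketched in Python (an illustration of the parsing rule, not the Agent's actual code): attributes before the Z field are split on the delimiter, and the Z field absorbs whatever remains, commas included.

```python
def parse_row(row: str, attrs: list, delim: str = ",") -> dict:
    """Split one delimited data row according to attribute types.

    attrs is a list of (name, type) pairs; an attribute of type 'Z'
    consumes all remaining data, so embedded delimiters survive.
    """
    values = {}
    rest = row
    for name, typ in attrs:
        if typ == "Z":
            values[name] = rest
            break
        field, _, rest = rest.partition(delim)
        values[name] = field
    return values

row = parse_row("Cluster Group,NODE1,Failed, resource offline",
                [("Group", "D"), ("Node", "D"), ("Status", "Z")])
# row["Status"] keeps the whole remainder, comma and all
```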
We’ll now repeat the exercise to create attribute groups for the resource and node data.
The complete metafile now looks like:
//ATTRIBUTES ','
Group D 128 KEY
Node D 16
Status Z 255
The SCRIPT parameter identifies the form of the data source:
c:\perl\bin\perl is the script interpreter, required in this instance as we are running a perl script.
c:\cluster_status.pl is the script that is passed to the interpreter.
"group" is an argument passed to the c:\cluster_status.pl script.
Lastly we need to define the Attributes:
//SOURCE SCRIPT c:\perl\bin\perl c:\cluster_status.pl "group"
The K parameter identifies this attribute group as keyed, which means that the data will be correlated using one of the subsequent attribute definitions as the key.
The 300 parameter is the TTL (Time To Live) of the sampled data. It defines how long the data will be available for report viewing and situation evaluation. It is recommended that the TTL is set to a figure greater than the sampling interval to ensure that the Agent always has a sample of data. If not specified, a default of 300 is applied.
The AddTimeStamp parameter results in an automatically created attribute that contains a timestamp of when the data was collected.
The Interval=180 parameter identifies the sampling interval in seconds, i.e. how often our script should be run.
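For reference, ITM generally stores timestamps in a 16-character "candle" layout, CYYMMDDHHMMSSmmm, where C is a century indicator. Assuming the attribute created by AddTimeStamp follows the same convention (an assumption of this sketch), it can be reproduced in Python like this:

```python
from datetime import datetime

def candle_timestamp(dt: datetime) -> str:
    """Format a datetime in the ITM 'candle' timestamp layout
    CYYMMDDHHMMSSmmm (century flag, date/time, milliseconds).
    The flag is 1 for years 2000-2099 (a simplification)."""
    century = (dt.year // 100) - 19
    return f"{century}{dt:%y%m%d%H%M%S}{dt.microsecond // 1000:03d}"

candle_timestamp(datetime(2006, 3, 15, 9, 30, 0))  # '1060315093000000'
```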
Now we’ll define the source of the data:
//NAME Group_Status K 300 AddTimeStamp Interval=180
Then we’ll define an Attribute Group that will be used to collect information about the cluster groups:
//APPL Windows_Cluster
The perl script that re-formats the cluster.exe output is available here.
Applications and Metafiles
To start collecting data with the ASFS data provider, we need to create an Application. The application is defined by a metafile. A metafile typically identifies the name of the application, the attribute groups and attributes that the collected data is mapped to. The metafile also defines the source of the data, which in this instance will be the script or command to be run.
Metafiles are stored on the Agent in the %CANDLEHOME%\TMAITM6\metafiles directory.
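The relationship between an application, its attribute groups and its attributes can be made concrete with a small generator sketch in Python (illustrative only; the hard-coded interpreter path and the 300/180 timings simply mirror the metafile built in this tip):

```python
def make_metafile(appl: str, groups: dict) -> str:
    """Build a minimal SCRP metafile: one //APPL statement, then
    //NAME, //SOURCE and //ATTRIBUTES sections per attribute group.
    The script argument is derived from the group name for brevity."""
    lines = [f"//APPL {appl}"]
    for name, attrs in groups.items():
        arg = name.split("_")[0].lower()
        lines.append(f"//NAME {name} K 300 AddTimeStamp Interval=180")
        lines.append(f'//SOURCE SCRIPT c:\\perl\\bin\\perl c:\\cluster_status.pl "{arg}"')
        lines.append("//ATTRIBUTES ','")
        lines.extend(attrs)
    return "\n".join(lines)

mdl = make_metafile("Windows_Cluster",
                    {"Group_Status": ["Group D 128 KEY", "Node D 16", "Status Z 255"]})
```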
Building the Metafile
Now that we have the script that will gather the cluster status information, we need to create the metafile that defines our new application and its corresponding attribute groups and attributes.
The convention is that metafiles should be named with a .mdl extension, therefore we will create a metafile called windows_cluster.mdl in %CANDLEHOME%\TMAITM6\metafiles.
The first step is to define the Application Name, which we will call Windows_Cluster:
C:\>cluster_status.pl group
Disk Group 1,NODE1,Online
Cluster Group,NODE1,Online
A Very Long Named Group,NODE1,Online
The most significant problem with the output returned by the cluster command is that there is no fixed field delimitation. The fields are separated by non-tab white space, the length of which is determined by the length of the previous field. As can be seen in the last row returned, the fields are separated by a single space in order to accommodate the long Group name. The delimitation problem is compounded by the fact that it is valid for Group names to include spaces. As attribute delimiters only support fixed strings (no regular expressions allowed here unfortunately), defining a delimiter that is valid for all instances appears to be impossible.
To overcome this, we will use a perl script to run the cluster commands and to reformat the output using a fixed delimiter, as can be seen below:
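The perl script itself is linked elsewhere in this tip; as an illustration of the approach, an equivalent sketch in Python exploits the fact that node names and status values never contain spaces, so each row can be split from the right and the remainder treated as the group name:

```python
def reformat_cluster_output(raw: str) -> list:
    """Convert the fixed-width `cluster group` output into
    comma-delimited rows.  Node and Status never contain spaces,
    so splitting from the right leaves the (possibly space-laden)
    group name intact."""
    rows = []
    for line in raw.splitlines():
        line = line.rstrip()
        # Skip blanks and the header/underline lines; matching the
        # literal "Group" header is a simplification for this sample.
        if not line or line.startswith(("Listing", "Group", "---")):
            continue
        group, node, status = line.rsplit(None, 2)
        rows.append(f"{group},{node},{status}")
    return rows

sample = """Listing status for all available resource groups:

Group                Node            Status
-------------------- --------------- ------
Cluster Group        NODE1           Online
A Very Long Named Group NODE1 Online"""
reformat_cluster_output(sample)
# ['Cluster Group,NODE1,Online', 'A Very Long Named Group,NODE1,Online']
```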
FILTER={SCAN(0,Listing status)}
The first problem is that the command returns header data, which will need to be stripped out.
Fortunately, attribute definitions can include a filter specification that can be used to include or exclude data.
For example, to filter out the first line returned by the command, we could add the following filter definition to the group attribute:
C:\>cluster group

Listing status for all available resource groups:

Group                Node            Status
-------------------- --------------- ------
Cluster Group        NODE1           Online
Disk Group 1         NODE2           Online
A Very Long Named Group NODE1 Online
The Universal Agent needs to be recycled to activate any changes made to the KUMENV file.
Determining the cluster status
Windows 2000 provides a command line utility called cluster.exe that can be used to view cluster group, node and resource information. We will use this command to determine the state of the cluster and the underlying resources.
While the command gives us all the information we require to monitor the cluster, the format of the output does cause a few problems, as shown by the following example.
KUMA_STARTUP_DP=ASFS,SNMP
Multiple data providers should be separated by commas e.g. to start both the ASFS and SNMP data providers the syntax would be:
KUMA_STARTUP_DP=SCRP
Introduction
The Universal Agent script (SCRP or ASFS) data provider is analogous to the Classic Distributed Monitoring String or Numeric script monitor and the CustomScript Resource Model in IBM Tivoli Monitoring 5.1. As the name suggests it provides the capability of using external commands and scripts to gather data about a particular component or service. The data captured can then be displayed in the TEP, referenced by situations and recorded in the data warehouse for future analysis in the same way as data collected by the standard Agents.
In this tip, we will use the Universal Agent script data provider to monitor the status of a Windows 2000 server cluster. For the purposes of the rest of this tip it is assumed that the Universal Agent has been installed on one of the nodes in the cluster. All commands mentioned should be executed on the computer that the Agent is running on.
Enabling the Script Data Provider
The data providers enabled when the Universal Agent starts are controlled by the KUMA_STARTUP_DP variable in the file %CANDLEHOME%\TMAITM6\KUMENV. By default the ASFS data provider, which is a consolidation of the API, SOCKET, FILE and SCRP data providers, is enabled at start-up. As we are looking to use the SCRP data provider, this is sufficient for our purposes; however, we could alternatively remove the ASFS provider and enable the SCRP provider by making the following change to the KUMENV file: