IBM Tivoli Analytics for Service Performance

With the explosion of Big Data, the field of Analytics has boomed in recent years. One discipline in this field that is gaining prominence and momentum is IT Operations Analytics and, in particular, predictive analytics. A presentation at the recent Gartner IT Infrastructure & Operations Management Summit forecast that adoption of IT Operations Analytics as a central component of IT infrastructure and service monitoring architectures will increase rapidly amongst Global 2000 companies, eventually accounting for 10% of all expenditure in the IT Operations Management market.

Whilst IBM have had a presence in the BI-orientated analytics market for some time, predominantly through their SPSS platform, they have been slow to put together an offering targeted specifically at operations. The recent open beta release of IBM Tivoli Analytics for Service Performance marks their entrance into this market and is the first step in closing the gap with competitors such as Netuitive, who have been active in this space for a number of years.

The IBM Tivoli Analytics for Service Performance (TASP) product is built on InfoSphere Streams technology, which provides the framework for acquiring and analysing metric data. Metric data can be sourced from either a database or flat files, providing plenty of scope for integration with a variety of monitoring tools. The TASP analytics engine discovers relationships between the metrics and learns the values of those metrics that should be expected during “normal” operation. Any deviation from that “normal” profile results in an anomaly alert being generated.
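
If you're wondering what that sort of baselining looks like in practice, here's a toy Python sketch of the general idea: learn a “normal” profile for a metric, then flag anything that strays too far from it. To be clear, this is just my own illustration of the concept, not the algorithm TASP actually uses under the covers.

    # Toy illustration of baseline-based anomaly detection. NOT TASP's
    # actual algorithm, just the general idea of learning a "normal"
    # profile per metric and flagging deviations from it.
    import statistics

    def learn_baseline(history):
        """Learn a simple per-metric baseline (mean and standard deviation)."""
        return statistics.mean(history), statistics.stdev(history)

    def is_anomaly(value, baseline, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the baseline."""
        mean, stdev = baseline
        if stdev == 0:
            return value != mean
        return abs(value - mean) / stdev > threshold

    # CPU utilisation samples collected during "normal" operation
    training = [22, 25, 21, 24, 23, 26, 22, 25, 24, 23]
    baseline = learn_baseline(training)

    for sample in (24, 27, 68):
        if is_anomaly(sample, baseline):
            print("anomaly alert: CPU utilisation at %d%%" % sample)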

The visualisation of metric data and detected anomalies is provided via a portlet in the Tivoli Integrated Portal. TASP primarily uses Netcool/OMNIbus as the destination for anomaly alerts and offers a number of Web GUI alert tools to provide an in-context launch capability. This re-use of other IBM-owned technology seems to be par for the course these days, and whilst I understand the rationale for it, you do sometimes wonder if somebody has just had a rummage around the spare parts bin and stuck various bits together with the code equivalent of gaffer tape. That would be an unfair accusation in this instance, given the quality of the underlying components. That said, coupling multiple components in this fashion can present a challenge from a support perspective.

For the purposes of the beta I’ve installed TASP and configured it to pull metric data from one of the ITM implementations in our office (via raw data stored in the WAREHOUS database).
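
For anyone wanting to poke around in that raw data themselves, the snippet below is roughly how you might pull a handful of samples back out of WAREHOUS for a quick sanity check, using IBM's ibm_db driver for DB2. The hostname, credentials and the "Linux_CPU" table and column names are placeholders; the actual table names depend on which agents and attribute groups you have configured for warehousing.

    # Sketch of reading raw metric samples from the ITM warehouse (DB2).
    # Connection details and table/column names below are placeholders.
    import ibm_db

    conn = ibm_db.connect(
        "DATABASE=WAREHOUS;HOSTNAME=tdw-host;PORT=50000;"
        "PROTOCOL=TCPIP;UID=itmuser;PWD=secret;", "", "")

    sql = ('SELECT "Timestamp", "Busy_CPU" FROM ITMUSER."Linux_CPU" '
           'ORDER BY "Timestamp" FETCH FIRST 10 ROWS ONLY')

    stmt = ibm_db.exec_immediate(conn, sql)
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print(row["Timestamp"], row["Busy_CPU"])
        row = ibm_db.fetch_assoc(stmt)

    ibm_db.close(conn)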

[Image: TASP]

The analytics engine requires 4 weeks’ worth of data to complete the training/learning process before anomaly detection will start in earnest, so it’ll be a little while before I see any results. I’ll blog again in a few weeks with an update.

In the meantime, if you are interested in the beta, the following links may be of interest:

  • Wiki (including access to the beta code)
  • Forum

Author: Ant Mico