
IBM Pulse 2008 - Review

I spent Monday through Wednesday at IBM Pulse in Orlando. It was a good show, but quite a few of the sessions were full by the time I arrived, which was frustrating since none of them was offered more than once. The morning sessions were mostly pie in the sky and not very useful to me. I did get to spend a lot of time with senior people in engineering, architecture, and acquisitions/strategy, and I met people I knew from online or from other dealings with IBM. Overall, the show was a good use of my time, and I enjoyed it.

Here are some of my highlights:

  • ITM 6.2.1 improvements including agentless capabilities and such.
  • New reporting framework based on BIRT which will be rolling forward.
  • The new UI that is being pushed across products, on display in TBSM 4.2.
  • Hearing about what other customers are up to (mostly bad decisions from what I've seen).
  • Affirmation of ITNM (Precision) as a best-of-breed tool, with an excellent roadmap.

Some things which are bad and make no sense:

  • Focus on manufacturing (due to MRO).
  • Pushing the MRO platform as IT, when it's clearly not ready for prime time.
  • Pushing TPM as datacenter automation. TPM is a worst-of-breed product in every way, years behind HP SAS and BladeLogic, and it will never catch up.
  • Allowing people to abuse Omnibus by turning it into a monitoring tool, and not a MOM or event manager.
  • Lack of clarity in the Webtop vs. TBSM strategy (many customers seem to build their BSM views on Webtop).
  • ITCAM is total junk, and is years behind competing products from HP (Mercury).


Anonymous said…
Hi Jonah,

I was wondering if you could elaborate more on these two points:

"ITM 6.2.1 improvements including agentless capabilities and such."

"Allowing people to abuse Omnibus by turning it into a monitoring tool, and not a MOM or event manager."

I didn't get a chance to go to Pulse. I work for a national telco up north, and we use the Netcool SSM agents for server monitoring; I know we'll eventually have to migrate over to ITM. We've also been using Omnibus as our MOM for the past six or so years.


Unknown said…
They are adding a host of new features in 6.2.1, including the ability to use an ITM agent to monitor another host agentlessly. We currently use SiteScope for this, and it would be good for IBM to introduce agentless capabilities of its own, since it has no agentless product today.
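"Agentless" here just means the monitoring server reaches out to the target over the network, with nothing installed on the target itself. A minimal sketch of the idea as a TCP service check (the function name and parameters are mine, not ITM's or SiteScope's):

```python
import socket

def check_tcp_service(host: str, port: int, timeout: float = 3.0) -> bool:
    """Agentless availability check: try to open a TCP connection to a
    service port on the remote host. Nothing runs on the target."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Real agentless monitors layer richer checks (SSH commands, WMI, SNMP polls) on the same principle: the collector does the work remotely.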

The SSM/ISM and other Netcool "probes" are not scalable and are not a good idea to use extensively. Talk to any major Netcool customer (pre-IBM, as we are) and they will explain what a bad idea this is.

Aside from the scalability issue, you don't want a MOM to have any contact with endpoints. A lower-level manager should always pre-filter events first; this allows for better control and scale. If you are in fact putting Netcool Omnibus probes on endpoints, then Omnibus is just your event management layer; it's not a true MOM by design. We even pre-filter all SNMP/syslog in the datacenter before forwarding to Omnibus.
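The pre-filtering layer described above can be sketched roughly as follows. This is a toy illustration of the principle, not Omnibus rules-file syntax; the event fields, severity scale, and suppressed types are all hypothetical:

```python
def prefilter(events, min_severity=3, suppressed_types=("linkUp", "heartbeat")):
    """Drop noise at a lower-level manager so only actionable events
    are forwarded upstream to the MOM."""
    seen = set()
    forwarded = []
    for ev in events:
        if ev["severity"] < min_severity:       # drop informational noise
            continue
        if ev["type"] in suppressed_types:      # drop known-noisy event types
            continue
        key = (ev["node"], ev["type"])          # suppress duplicates in this batch
        if key in seen:
            continue
        seen.add(key)
        forwarded.append(ev)
    return forwarded
```

The point is where this runs: at the collection tier, close to the devices, so the MOM only ever sees the filtered stream.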
Anonymous said…
Hi Jonah.
Can you comment a bit on the following:
>ITCAM is total junk, and is years behind competing products from HP (Mercury).
1. Which ITCAM do you mean (there are many "ITCAM for..." products)?
2. What is the big advantage of HP/Mercury here?
Unknown said…
No problem. Specifically I'm talking about RT and ISM. They are quite far behind what HP and many other vendors offer, which is a proper real-user and blended approach to web monitoring.

If we add in the "for Web" and "for WebSphere" products, you will find much more advanced offerings from CA (Wily), HP Diagnostics, and Precise Software (Symantec i3), which I believe to be the market leader.

HP/Mercury has a much more integrated and sophisticated toolset, allowing for complex measurement and distributed polling.

Compuware is another company offering very good solutions in this space.
Unknown said…
Hello,

In your statement "No problem. Specifically I'm talking about RT and ISM," what do you mean by ISM, and what is its weakness compared to HP's BAC products?
Unknown said…
The ITCAM for RT product is very far behind most of the competition. ITCAM in general is quite a basic product, which doesn't seem to compete well.

The ISM probes were built by Netcool to have some way to do active checks with their products. It's not a good idea for a MOM such as Netcool to do any active probing or polling. IBM has stated it will be discontinuing the ISM probes, which is the right thing to do.

We have been using Netcool products for over six years at my company, and we would never have allowed the use of ISMs, since they violate a proper MOM architecture. Using ISMs also causes problems with pre-filtering before events are inserted into Omnibus; without that pre-filtering ability you will run into scalability issues or general event clutter.
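The event-clutter point comes down to deduplication: Omnibus collapses repeated events that share an Identifier into a single row and increments a Tally count. A rough sketch of that behavior (field names borrowed from Omnibus, the implementation itself entirely hypothetical):

```python
def deduplicate(events):
    """Collapse events sharing an Identifier into one row, counting
    repeats in 'tally' and tracking the most recent occurrence."""
    table = {}
    order = []  # preserve first-seen order of identifiers
    for ev in events:
        ident = ev["identifier"]
        if ident in table:
            row = table[ident]
            row["tally"] += 1
            row["last_occurrence"] = ev["time"]
        else:
            table[ident] = {**ev, "tally": 1, "last_occurrence": ev["time"]}
            order.append(ident)
    return [table[i] for i in order]
```

Without a filtering tier constructing sane identifiers before insertion, every raw probe event lands as its own row and the console fills with clutter.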
