Industry Insights: Gartner Survey Analysis of Critical Dimensions in APM

Gartner analyst Cameron Haight published Survey Analysis: End-User Experience Monitoring Is the Critical Dimension for Enterprise APM Consumers (Gartner clients only) over the summer, surveying users about their APM implementations. There are several key areas where the data is quite telling.

Users' main concerns

Based on this Gartner survey, and on my own experience, users of APM tools indicate three primary issues:
  • APM tools are too expensive
This is a valid concern; APM is not inexpensive. But compared to the cost of APM seven years ago, license costs have fallen over 75%. A typical APM tool used to cost $15,000 to $20,000 per server; today we are looking at pricing closer to $4,000. These are list prices, which we know people do not necessarily pay. Additionally, the time to value, implementation time, and services spend that used to be typical are now a fraction of what they once were. I would estimate the cost of deploying and maintaining a modern APM tool is 10% of the work of a legacy tool (see the sketch after this list). This savings is not only a hard dollar amount but also results in faster ROI and a shorter time to value.
  • APM tools are not well integrated
While this complaint is certainly the norm in the industry, at AppDynamics our unified monitoring approach addresses this concern: a single user interface and a single installation for the platform, leveraged across the monitoring capabilities. Additionally, by building integrations and fostering an open ecosystem, connecting systems together has become significantly easier than it was in the past.
  • Tools are too complex
Deployment and usability have been major issues with these tools. At AppDynamics, we address this with a modern web-based UI, easy deployment, and straightforward platform administration, coupled with a much smaller footprint for our controller (the monitoring server we require), which takes fewer resources to deploy (or just use SaaS and you don't have to deal with the controller at all).
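
To make the pricing and effort arithmetic above concrete, here is a minimal sketch in Python. It assumes the list prices cited in the first bullet and my rough 10% deployment-effort estimate; the legacy services figure is a hypothetical placeholder for illustration, not a vendor quote.

    # Illustrative cost comparison using the list prices cited above.
    # All inputs are assumptions from the post, not vendor quotes.

    legacy_license = (15_000 + 20_000) / 2   # midpoint of legacy per-server list price
    modern_license = 4_000                   # modern per-server list price

    license_drop = 1 - modern_license / legacy_license
    print(f"License cost drop: {license_drop:.0%}")   # ~77%, i.e. "over 75%"

    # Deployment and maintenance effort: assume a modern tool takes ~10%
    # of the work of a legacy tool. The legacy services cost per server
    # below is a hypothetical figure for illustration only.
    legacy_services = 10_000
    modern_services = 0.10 * legacy_services
    print(f"Estimated modern services cost per server: ${modern_services:,.0f}")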

Breadth of APM implementation

25% of the 207 respondents say that APM is deployed on 25% or more of their applications. I would have guessed this number to be much smaller. The question was not very clear about whether it targeted critical applications or all applications. Users also have misconceptions about what APM is: when a server monitoring product can capture metrics about applications or application health, that does not make it an APM tool. Similarly, older synthetic monitoring tools often pass for APM, when they do nothing to monitor the performance of real users or transactions.

APM Buyers

APM tools are bought by many different people in most organizations; we see everything from single-user buying to departmental buying to enterprise-wide buying. For many of the reasons outlined above, the enterprise-wide motions tend to happen over time as APM gains traction. Most APM tools five-plus years ago were sold by landing big enterprise deals, but today most land as smaller deals and expand across enterprises. There have been many swings of the pendulum between suites of tools, mini-suites of tools, and best of breed. We have been in a best-of-breed cycle for the last five years, likely due to the level of importance technology plays within businesses as they digitally transform.
Based on the Gartner survey data, buyers opt for best of breed 59% of the time, which seems surprisingly low given my conversations and the fact that suite providers are significantly behind the market leaders in APM. The only explanation is that some respondents may not currently have active APM initiatives, or do not understand what is and is not APM. Haight goes on to state: “Enterprise APM consumers should deploy best-of-breed approaches as skills and finances dictate, but make sure to account for the potentially higher costs of integration.” This is another area where large suite vendors sell on PowerPoint, but upon implementation the integration proves extremely challenging. Integration within suites of tools is incredibly expensive and complex to deploy and maintain; most organizations require a heavy services outlay to implement and maintain integrations, even with suites. These large suite vendors project the illusion of well-integrated tooling, but based on firsthand knowledge this is not the case.
The buyers break down as follows, which matches what I saw during my time as an analyst and my time at AppDynamics.
  • IT Operations = 67%
  • App Support = 11%
  • App Dev = 8%
  • Other teams = 15%
The App Support function is often part of IT Operations, so differentiating between these two teams is often difficult.
Users also express concern about the ability of legacy APM solutions to handle the large transformational shifts occurring through digital transformation. Once again, many vendors selling legacy tooling now message around digital initiatives, when their tooling is not suited to accomplish these things.
The ways software is designed, implemented, and analyzed are also shifting. The Gartner survey points out that the shifts most concerning to users include IoT, cloud (SaaS), mobile, and microservices. There is clearly a large market opportunity for visibility in these areas, especially SaaS and IoT, where APM vendors have limited ability to support these environments today. Although AppDynamics has many great customer use cases in microservices, mobile, IoT, and cloud, APM vendors still lack the technology for deep monitoring of SaaS, and they face limitations when instrumenting IoT devices built on microcontroller architectures. These are all areas where our innovation and focus will create new ways to deliver visibility.
The key takeaway is that APM still has a long way to go to reach the penetration it needs, and there are areas where major innovation has yet to occur to handle the new computing platforms, connected devices, and software architectures that will drive the next evolution of computing, data capture, and analytics.
