Market Share or Market Adoption?

End user experience monitoring is the most important dimension of APM. For IT to become business- and user-aligned, it must understand user experience, the user journey, and its customer and user constituency. Gartner’s recent APM survey, “Survey Analysis: End-User Experience Monitoring Is the Critical Dimension for Enterprise APM Consumers” by Cameron Haight, reaches the same conclusion: 46% of survey respondents ranked end user experience monitoring #1. I saw the same pattern across the thousands of end user calls I took as an analyst.


We’ve seen massive shifts in open source over the last decade, driven by highly robust projects backed by dedicated companies and individual contributors. Open source causes issues when trying to judge market opportunity: the analyst firms focus on revenue, but there is a large untracked ecosystem out there.


The best way to analyze market share and market opportunity (and often a vendor’s actual execution) is to look at what is really in use versus what is likely sitting on a shelf collecting dust. Thankfully, today’s user experience products can be measured by crawling the web, which is exactly what companies like Datanyze, Netcraft, BuiltWith, and SimilarTech do.
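The crawling approach these companies use boils down to scanning each page’s script includes for vendor-specific beacon URLs. Here is a minimal sketch of that idea; the filename fragments below are illustrative assumptions (real deployments vary by vendor and version), not an authoritative fingerprint list.

```python
import re

# Illustrative filename/host fragments for a few vendor beacon scripts.
# These are assumptions for the sketch; real detection fingerprints
# need to track each vendor's actual script naming over time.
VENDOR_PATTERNS = {
    "New Relic Browser": re.compile(r"js-agent\.newrelic\.com", re.I),
    "AppDynamics Browser": re.compile(r"adrum", re.I),
    "IBM Tealeaf": re.compile(r"tealeaf", re.I),
}

def detect_vendors(html):
    """Return a sorted list of vendors whose script includes appear in a page."""
    # Pull the src attribute out of each <script> tag.
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.I)
    found = set()
    for src in srcs:
        for vendor, pattern in VENDOR_PATTERNS.items():
            if pattern.search(src):
                found.add(vendor)
    return sorted(found)
```

Run across millions of crawled pages, tallies of these matches are what produce the adoption trends discussed below.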


The best open source resource for technology usage on the web is the HTTP Archive, because it’s backed by leading companies like Google and is increasingly sponsored by APM companies. The data is also publicly available and can be analyzed by anyone. I use this resource often when testing hypotheses about the market, as it provides an easy way to answer questions about technologies in use, trends, and other changes on the web.
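The HTTP Archive data lives in public BigQuery tables, so an adoption count is a single aggregate query over the crawled request URLs. The sketch below builds one such query as a string; the table name and columns (`httparchive.runs.latest_requests`, `pageid`, `url`) and the legacy `CONTAINS` syntax are assumptions about the schema of the time, so check the current dataset layout before running it.

```python
# Sketch of the kind of BigQuery (legacy SQL) query used against the public
# HTTP Archive dataset. Table and column names are assumptions; verify them
# against the current httparchive schema.
def build_adoption_query(url_fragment, table="httparchive.runs.latest_requests"):
    """Count distinct pages whose requests include a vendor's script URL fragment."""
    return (
        "SELECT COUNT(DISTINCT pageid) AS pages "
        "FROM [{table}] "
        "WHERE url CONTAINS '{frag}'"
    ).format(table=table, frag=url_fragment)

# Example: how many crawled pages load the New Relic browser agent?
query = build_adoption_query("js-agent.newrelic.com")
```

Running the same query against each dated crawl table gives the time series behind the charts below.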


This past weekend I decided to do some trend analysis on the “market leading” APM companies to see what the adoption trends are. While many vendors sell a lot of software, much of it is unfortunately shelfware. End users increasingly pay for it via bundles, which are common with vendors like IBM, CA, BMC, and HP.


I began by looking at the JavaScript libraries included on publicly accessible websites. My interest was in analyzing usage of the following products:
  • AppDynamics Browser
  • BMC Truesight EUEM
  • CA Technologies APM (Specifically BRUM)
  • Dynatrace Application Monitoring
  • Dynatrace Gomez Real User Monitoring
  • Dynatrace Ruxit Real User Monitoring
  • HP Diagnostics
  • IBM Tealeaf and APM (which has the same Javascript used for both APM and Marketing use cases)
  • New Relic Browser


CA Technologies
I was unable to find a consistent way CA products are deployed for JavaScript instrumentation. The CA website itself doesn’t even use their own product (that should tell you something). If someone shares the secret here, I can do the analysis. CA still doesn’t have much in terms of SaaS-delivered real user monitoring or transactional APM.


HP Diagnostics
Similar to CA: while there are specific filenames in use for the HP products, none of them appear in the HTTP Archive data. This either means no one is using the products publicly (which wouldn’t surprise me), or they are not deploying them via JavaScript includes, which would be the typical best practice. HP still lacks SaaS-delivered real user monitoring, and its transactional APM is in beta.


HP and CA both have customers using packet analysis for APM, but most continue to move off those platforms because of the large hardware expenditures they require (packet aggregation, switching, and server hardware to do the analysis). Additionally, these technologies don’t work in modern applications (web or mobile), especially those behind a CDN, hosted on public cloud, or running on highly virtualized or container-based infrastructure.


Dynatrace
Dynatrace as a company has its own set of issues with loads of overlapping technologies. Even within the single use case of end user experience monitoring, they have three distinct technologies and product offerings, showing the level of portfolio fragmentation. Where will they invest? Which one is the right choice? The portfolio is an increasing mix of overlapping technologies, which must be corrected if they wish to remain competitive with leading vendors that have a unified strategy (AppDynamics and New Relic).


Here are the charts showing the analysis of the remaining vendors. There is no way to differentiate between paid, trial, and freemium offerings here, so please keep that in mind.




New Relic has built a massive installed base since launching their Browser product in 2013, but you can see momentum slowed throughout 2015. In the last earnings call, New Relic reported only 5,285 customers paying more than $5,000 per year. The base New Relic Browser product costs $2,388 per year for 500,000 page views, which is quite a small site.
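To put that list price in perspective, here is the back-of-the-envelope arithmetic, assuming the 500,000 page views quota is monthly (an assumption about how the tier was metered at the time):

```python
# Figures from the post; the monthly metering of the page view quota
# is an assumption for this sketch.
annual_price = 2388          # USD per year, base New Relic Browser tier
monthly_page_views = 500_000

monthly_price = annual_price / 12                                 # 199.0 USD/month
cost_per_thousand_views = monthly_price / (monthly_page_views / 1000)
```

That works out to roughly $0.40 per thousand page views, which helps explain why only larger sites show up in the paying customer counts.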


If we remove the New Relic numbers we get this graphic.




You’ll first notice that the legacy vendor technologies are all in decline; this includes BMC, Dynatrace Gomez, and IBM Tealeaf. The clear investment is in newer technologies such as AppDynamics, Ruxit (which is new but gaining traction), and some Dynatrace installs.


If anyone wants the queries I used to collect this data, it’s all open data available on Google BigQuery, or you can download the data and load it into your own MySQL database.
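If you go the download route, the same tallies can be reproduced offline. Here is a minimal sketch that counts vendor script hits in exported request rows; the `url` column name and the dict-per-row shape are assumptions about the export format, not a documented schema.

```python
from collections import Counter

# Offline version of the BigQuery tally: count rows whose request URL
# contains each vendor's script fragment. The "url" field name is an
# assumption about the exported data's columns.
def count_vendor_hits(rows, patterns):
    """Tally request rows per vendor, given {vendor: url_fragment} patterns."""
    counts = Counter()
    for row in rows:
        url = row.get("url", "")
        for vendor, fragment in patterns.items():
            if fragment in url:
                counts[vendor] += 1
    return counts
```

Feeding this the rows from a `csv.DictReader` over the downloaded file gives the same per-vendor totals as the hosted query, just slower.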

Please leave comments!

Update: 2/6/16
I added some players to the final graph: Pingdom, Akamai RUM, and SOASTA. SOASTA is accelerating, Pingdom is slowing (similar to what we’re seeing with New Relic), and Akamai is still rather small.

I've also posted the source code for my queries here: https://github.com/jkowall/APM-BigQuery

Comments

Anonymous said…
Very interesting post, thank you! Are the queries publicly available (couldn't find them in bigqueri.es) or can you send them to me?

Thanks, Tobias
jkowall said…
I actually did some additional analysis (added other EUM only vendors) and the result is this graph: https://twitter.com/jkowall/status/695597845215383552

The code is uploaded to my github : https://github.com/jkowall/APM-BigQuery
