Monday, October 10, 2016

That's Too Expensive: The Pricing Battle

Pricing is a tricky beast. I've seen a lot of models out there, and each has pros and cons. First, it depends on who your target customers are. My experience is in enterprise software, which typically has larger transaction sizes and volumes, yet a smaller number of deals. When focusing on SMBs as a target, the models change along with the selling motion. For good reasons, most software companies want to have both models, but I haven't seen many companies able to execute this strategy. You end up with vendors adopting one model and causing fractures in the way tooling is licensed; many times the economics don't equate to good business decisions for the end user or the vendor. Here are the various licensing models I've seen in the IT Operations Management space:
  • Application footprint or infrastructure footprint based pricing
    • Per node, per CPU, per application server, per JVM, per CLR, per runtime
  • User-based pricing
    • Per concurrent user, per named user, per monthly active user, per page view
This model can track the users being monitored (in the case of end-user experience) or the users of the tool itself. For example, in Service Desk it could be the number of help desk agents.
  • Storage-based pricing
    • Per gigabyte consumed, per event consumed
Measurement becomes more challenging in highly dynamic or microservices environments, which raise additional issues around metering specific technologies. Most apps consist of both legacy and modern technologies; hence the value of a solution to manage them differs accordingly. To make the trade-offs concrete, the sketch below compares what the same environment might cost under each family of models.
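Here is a minimal sketch, with entirely made-up list prices, of how differently these models can charge for the same footprint:

```python
# Hypothetical numbers for illustration only; real list prices vary widely.
# Compare what the same monitored estate costs under three common models.

nodes = 200           # monitored hosts
named_users = 50      # tool users (e.g., help desk agents)
gb_per_day = 120      # telemetry ingested daily

price_per_node = 600  # $/node/year (assumed)
price_per_user = 1200 # $/named user/year (assumed)
price_per_gb = 2.50   # $/GB ingested (assumed)

footprint_cost = nodes * price_per_node
user_cost = named_users * price_per_user
storage_cost = gb_per_day * 365 * price_per_gb

for model, cost in [("per node", footprint_cost),
                    ("per named user", user_cost),
                    ("per GB ingested", storage_cost)]:
    print(f"{model:>16}: ${cost:,.0f}/year")
```

Notice that a microservices rollout can triple the node count without changing the user count or data volume much, which is exactly the dynamic that strains footprint-based models.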
Then there are the terms of the license, which can be anywhere from monthly billing to 5-year commitments. These commitments can be for a minimum and/or a burst model. I've had some great discussions with analysts around value-based pricing. Although this is a very loosely defined term, building a pricing model based on the value someone gets from the software sounds perfect, in theory. How many problems are you detecting? Solving? Although this makes sense, calculating the "value" is a guess in most cases. With APM you can determine the amount of time/money/revenue saved, but it's still a challenge to build and measure clear ROI; a rough sketch of that math follows. Measuring ROI becomes even harder with other, less customer-centric technologies.
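For what it's worth, the value-based calculation usually reduces to something like the sketch below, where every input is an assumption the buyer and vendor will argue about:

```python
# All inputs are assumptions; that is exactly why "value" pricing is a guess.
incidents_avoided_per_year = 24   # problems detected and solved earlier
avg_downtime_minutes_saved = 45   # per incident
revenue_per_minute = 1_500        # dollars lost per minute of downtime
annual_license_cost = 250_000

value = incidents_avoided_per_year * avg_downtime_minutes_saved * revenue_per_minute
roi = (value - annual_license_cost) / annual_license_cost
print(f"Estimated value: ${value:,.0f}  ROI: {roi:.0%}")  # ~$1.6m, ~548%
```

Change any one input by half and the ROI swings wildly, which is why the number rarely survives a negotiation.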

In my career I’ve now seen pricing from three distinct angles; I’m going to summarize what I’ve found in each of my roles. These are personal experiences, and your mileage may vary.

End User

As an end user, I bought tens of millions of dollars’ worth of software over my 17 years as a practitioner. I always tried various tricks to ensure I was getting the best deal possible for my employer. Licensing and pricing are a challenge. How do I pay as little as possible for the best solution for my needs? When can I afford to buy an inferior product to meet my budgetary needs? When should I request more budget to select a technology that will differentiate us as a business? Regardless, the net is that everything is overpriced, and I could never get the pricing low enough to remain satisfied. It didn't matter whether I was paying based on application, infrastructure, consumption, or on-demand pricing. Although my technology providers were my partners, I also felt I needed to extract the most value for the least amount of capital from them to serve our shareholders.

Analyst

As I transitioned over to an analyst role, I learned yet more tricks around pricing and deal negotiation. Most technologies go through a cycle from immaturity to mainstream, and finally into obsolescence. Gartner uses two models to describe this: one is the Hype Cycle, which traces a technology from its trigger through to becoming a productive technology. When coupled with a Market Clock, the full lifecycle becomes visible. Market Clocks address the lifecycle of a market and how technology becomes standardized, commoditized, and eventually replaced. These constructs are useful to both end users and vendors for understanding how technologies mature, and the pricing and competition to expect within a particular technology market. I often gave advice to vendors based on the number of licensing and pricing models I had seen, and what end users were asking for. Clearly, there were always complaints by end users about any pricing model aside from open source. Everyone loves the idea of free software, yet there are many hidden costs to take into account. Which technology providers can deliver results is often more important than the licensing model.

Vendor

I was fortunate enough to run the AppDynamics pricing committee for about six months, and I learned a lot about how to license and price. This was the first time I had studied margins. AppDynamics software is available both on-premises and via SaaS delivery, which makes the model especially interesting. Each delivery model has different margins and costs to consider, and these have to be taken into account when determining a pricing model and discounting. I also learned, first hand, the struggles customers had with our pricing model. The net result is that regardless of how you price a solution, the users complain about the pricing. There is no way to solve this problem that I have found, and it was rather depressing. The opinions and strategies across sales, product management, and marketing are all different, and each group has a perspective that is very challenging to rationalize. I am not sad to have moved along from that responsibility :)

My Take

Every model is flawed; pricing models are inflexible and software costs too much. If you don't bundle products, then quoting and licensing become complex, yet if you bundle, you end up with shelfware. End users want to pay for what they use, and yet they don't want commitments; they want "on-demand" pricing. Without commitments, most vendors have issues predicting revenue or demand. This is often an issue if you are using traditional hosting for the product, which most SaaS companies do to some extent for cost reasons. Software, unlike hardware, doesn't have the same type of fixed cost to deliver; the margins are different. End users and salespeople also want the model to be simple to understand, calculate, and rationalize.

I may share more secrets later, maybe around how to negotiate licenses and different ways to get leverage. Leave your comments here or via Twitter @jkowall on what interests you on this topic.

Tuesday, September 20, 2016

Industry Insights: The Cycle of Innovation: The Rise and Fall of HP Software

I find the cycles in technology fascinating; they are an unfolding lesson in history. Although we believe our industry moves at a rapid pace, there are many macro cycles which occur over decades, and the patterns do not change much. The first and current example is HP Software (with more focus on IT Operations). Let's rewind to the foundational pieces of HP Software, which came from the acquisition of Mercury Interactive in 2006. HP spent $4.5b to purchase Mercury and built a large, well-established business off the platform in both Quality Assurance (QA) and IT Operations. Over time HP failed to invest in what was at one point market dominance in QA and a substantial footprint in ITOps; these once-large market shares eroded as technologies commoditized and buying shifted to best-of-breed. HP's solution set became difficult to implement (even for HP engineers), and ongoing management was hard, requiring consulting and many resources. Having managed this portfolio at scale in the past, I felt this pain first hand. I speak to customers regularly today who use these tools, and none of them are satisfied users. The inability of these technologies to meet buyer demands, and the burden of maintaining them, resulted in significant market share decline. Based on the Gartner market share data (Market Share: All Software Markets, Worldwide, 2015), HP Software's revenues went from $4.4b in 2013 to $3.4b in 2015, showing a major issue in execution.

This kind of decline is highly abnormal within healthy businesses, and may be overstated by the Gartner data, but it creates an opportunity for companies with inexpensive access to capital. Due to low interest rates and the appetite for debt investments, there is ample access to cash. Just yesterday Thoma Bravo raised another $8b fund, which was oversubscribed. Capital supply is outstripping the number of companies available to be acquired and streamlined. The criteria for these acquisitions focus on taking advantage of the massive amount of software maintenance revenue generated by enterprise software businesses. There are examples of companies whose business goal is collecting software maintenance while investing little in innovation.

Analyzing HP's earnings statement filed on September 9th, software margins were only 17% (http://investors.hpe.com/~/media/Files/H/HP-Enterprise-IR/documents/q3-2016-quarterly-results.pdf, page 11), meaning optimization would improve profit. Most of this profit comes from maintenance rather than new product sales or innovation.

Micro Focus is a good fit for this kind of business. Micro Focus has acquired loads of once-innovative assets that have large install bases collecting maintenance revenue. Micro Focus's business includes software assets resulting from the merger with Attachmate, along with newer additions. Much of the product set is a plethora of mainframe software, but it also includes SUSE Linux, NetIQ, and other software graveyards. I still see quite a bit of NetIQ out there, but it's largely legacy install bases, especially in the government vertical. Replacement of these legacy management tools is a challenge; the result is a long tail of software maintenance. These customers pay vast sums of money for maintenance while getting no technology advances.

Analyzing Micro Focus's annual report, maintenance outpaces license sales by over 2x. License sales and maintenance both declined between 2015 and 2016, with maintenance suffering the larger decline. That decline is a result of customers switching providers or technologies and stopping maintenance payments.

Another notable example of privatization by PE was the acquisition of business model innovator SolarWinds. SolarWinds IPOed in 2009 with $116m in annual revenue and had grown the business to $428m of annual revenue before being privatized in 2016. SolarWinds created not only simple and cost-effective tools for engineers, but did so by leveraging only inside sales, driven by aggressive email marketing. SolarWinds figured out this formula long before companies like Marketo (and others) made it easy to build similar models. For many years, SolarWinds had a business advantage in the selling and marketing of its offerings to the technologist buyer. The tools weren't much better, but they were cheaper, easier to try and buy, and the value was there. SolarWinds ran into issues when others were able to replicate its business model, removing some of its advantages. In response, SolarWinds had to expand its portfolio, often too quickly, to meet the demands of Wall Street investors. Several of SolarWinds' acquisitions were not well thought through, and the efficiency of the business suffered. The net margin was over 30% in 2011; by the time the company was privatized in 2016, it was down to 18%. The SolarWinds portfolio today consists of many products which are not tied together and are highly complex to adopt. They still have several good technologies, but using them together provides no advantages. The current situation leaves a lot of room for current PE owners Silver Lake Partners and Thoma Bravo to optimize the spend and the business, which will result in profits for their fund investors. Unfortunately, it will likely not result in much investment in the technologies to keep pace with current market demands.

Similarly, Dynatrace was a highly innovative product; at the time of its first public product in 2006, it focused on a gap in the market with an offering to address performance during the development and testing lifecycle. After expanding from Europe to the US, it was acquired by Compuware in 2011 for $256m and became part of a portfolio of products. After Compuware was acquired by PE in 2014, the firm split off the mainframe business of Compuware (which kept the name) and brought back the Dynatrace name for the Application Performance Monitoring products. In the meantime, the PE firm also took Keynote private and combined it into Dynatrace. Over 14 years of being public, Compuware's revenues went from a peak of $2.23b in 2000 to a low of $720m in 2014, which is less revenue than it made in 1997. Today the Dynatrace portfolio consists of several products which have different backends, user interfaces, and technologies. You are probably noticing a trend here, and it doesn't benefit the end users of these technologies.

A slightly older PE acquisition was the privatization of BMC in 2013. With the help of BMC's new owners, the optimization has taken hold; they are focusing on specific assets. The reduced spending has slowed the revenue decline: from 2013 to 2015, based on the Gartner data, BMC's revenue dropped from $1.87b to $1.84b. BMC's privatization required raising a massive amount of debt, affecting the risk rating of the company's ability to repay. Based on Moody's, over $6.7b in debt was raised (https://www.moodys.com/research/Moodys-downgrades-BMC-to-B3-rates-new-parent-notes-Caa2--PR_296668), increasing investor risk. Similarly, the LinkedIn company data for BMC shows the employee count has been reduced by 7% in the last two years, while the investment in engineering and sales headcount increased 1%. PE firms are cutting staff and optimizing the business.

Dell/EMC is yet another example of companies which have been taken private. Similar stories are playing out at other large corporations, including IBM and CA. Each has a similar story, and the final shoe has not dropped yet around their software businesses. IBM has some successes in software, but its IT Operations software business is in a similarly dire situation to those described above. In my past predictions, I expected CA to be taken private before the end of next year (2017); we'll see how accurate that turns out to be. Although Mike Gregoire has made some drastic improvements since taking the helm in late 2012, CA's portfolio continues to be legacy (heavy on the mainframe) with almost no revenue coming from SaaS subscriptions.

In recently published research by Gartner's Gary Spivak (I&O Leaders Must Actively Respond When Key Vendors Face Activist Investor Pressures, published 29 March 2016, ID G00291934), he states in his strategic planning assumption: "By 2020, eight of the top 12 publicly traded IT operations management (ITOM) vendors will respond to pressure from activist investors to sell all or parts of their businesses." As someone who studies the financial and market side, I'm quite happy to be siding with Gary's perspective on the future of this market.


You can probably spot the cycles I have highlighted. I would expect them to continue, especially if borrowing money remains cheap and accessible capital remains plentiful.

Thanks for the readership, please leave comments here or @jkowall on Twitter.

Tuesday, July 12, 2016

Breaking Down Engineering Investment at Innovative Companies

I’m always looking for good ways to understand how companies invest money internally. When I was a Gartner analyst, I would keep an eye on data from Glassdoor and LinkedIn regularly to try to gauge the trends. I would use them to check my understanding of companies against what I was being told by the companies and end users themselves. I did this by checking content on the sites regularly and recording it to build patterns and trends.

Thankfully, LinkedIn has come to the rescue by releasing the Premium Insights feature on company pages. I’ve been looking at this data, and I’m finding some interesting trends regarding what it uncovers. I’m also going to compare the LinkedIn data with the Gartner data which shows market share and revenue.  

I’m going to have a look at the larger companies in ITOM to compare those which are investing for the future of their customers with those which seem to prioritize other functions. I’m going to look at engineering and sales percentages from LinkedIn, alongside revenue based on the Gartner market data.

| Company | Employee Growth (2 yr) | Avg. Employee Tenure (yrs) | Engineering (%) | Sales (%) | Revenue Change (3 yr, Software) |
|---|---|---|---|---|---|
| Source | LinkedIn | LinkedIn | LinkedIn | LinkedIn | Gartner |
| AppDynamics | 114% | 1.7 | 32% | 28% | 531% |
| AppNeta | 48% | 2.6 | 22% | 32% | 162% |
| BMC | -5% | 7.2 | 10% | 17% | -2% |
| CA Technologies | -13% | 6.5 | 30% | 16% | -10% |
| Dynatrace | 37% | 3.1 | 21% | 23% | N/A |
| HP | -1% | 7.9 | 14% | 9% | -22% |
| IBM | -3% | 8.1 | 16% | 8% | -9% |
| Idera | 18% | 3.1 | 9% | 49% | N/A |
| New Relic | 90% | 1.7 | 27% | 29% | 188% |
| Microsoft | -4% | 6.1 | 25% | 13% | 3% |
| Google | 20% | 3.8 | 32% | 12% | 26% |
| SolarWinds | 11% | 3.1 | 15% | 24% | 52% |
| ServiceNow | 67% | 2.2 | 16% | 20% | 142% |
| Splunk | 81% | 2.2 | 25% | 29% | 116% |
| VMware | 3% | 3.8 | 32% | 15% | 27% |

The reason I decided to analyze turnover is that happy employees shouldn’t be leaving. Then again, if you are in hyper-growth mode, turnover will likely be much higher. The other key number is the investment in engineering. If you are an end user, it’s critical that your vendor is investing in engineering. It seems the sweet spot for engineering is in the mid-twenties, but if you have a large portfolio of products, this could also skew how much innovation is going on. The sketch below shows one way to slice the table along those lines.
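As an illustration (the numbers are copied from the table above, and the thresholds are my own rough cut), a few lines of code can flag the vendors that combine healthy engineering investment with headcount growth:

```python
# (employee growth %, tenure in years, engineering %, sales %) per the table above
vendors = {
    "AppDynamics": (114, 1.7, 32, 28), "AppNeta": (48, 2.6, 22, 32),
    "BMC": (-5, 7.2, 10, 17),          "CA Technologies": (-13, 6.5, 30, 16),
    "Dynatrace": (37, 3.1, 21, 23),    "HP": (-1, 7.9, 14, 9),
    "IBM": (-3, 8.1, 16, 8),           "Idera": (18, 3.1, 9, 49),
    "New Relic": (90, 1.7, 27, 29),    "Microsoft": (-4, 6.1, 25, 13),
    "Google": (20, 3.8, 32, 12),       "SolarWinds": (11, 3.1, 15, 24),
    "ServiceNow": (67, 2.2, 16, 20),   "Splunk": (81, 2.2, 25, 29),
    "VMware": (3, 3.8, 32, 15),
}

# Arbitrary thresholds: engineering in the low twenties or above, and growing.
for name, (growth, tenure, eng, sales) in sorted(vendors.items()):
    if eng >= 21 and growth > 0:
        print(f"{name:<16} growth {growth:>4}%  engineering {eng}%  tenure {tenure} yrs")
```

Unsurprisingly, the list this produces is dominated by the newer, faster-growing vendors rather than the large portfolio companies.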

Disclaimer: This is my personal website and reflects my views and opinions only.  Any comments made on this website by myself or by third parties do not necessarily reflect views or opinions of my employer.

Tuesday, June 21, 2016

Industry Insights: The Challenges Behind New User Interfaces

The future of computing will be bifurcated. On one hand, there will be entirely new models for computing, such as voice, autonomous agents, and bots, with no traditional user interfaces. At the opposite extreme, there will be new user interfaces augmented onto our ‘real’ world, such as Microsoft's holographic computing technologies, along with virtual reality platforms coming to market from Google and Facebook. Bringing these trends to fruition, though, will require overcoming some key technological limitations.

Voice

It’s been slowly happening for a while now: voice recognition will change one of the key interfaces to today’s computing and applications. Apple’s Siri, Google Now, Microsoft Cortana, and the super-hot Amazon Echo, along with their smart agents, are the practical embodiments of a growing trend toward the application of machine learning to voice and data. Andrew Ng, chief scientist at Baidu, says that 99 percent accuracy is the key milestone for speech recognition. Companies like Apple, Google, and Baidu are already above 95 percent and improving. Ng estimates that 50 percent of web searches will be voice-powered by 2019.

Agents

The next natural step for this accurate voice recognition technology is the incorporation of a learning bot that learns all about your life and assists with your tasks, via voice recognition, of course.
These new technologies will require voice recognition access, data access, and interoperability with connected assets. These agents will continually learn, access new data sources, and provide you, as a user, with a significant amount of value. But these great innovations also come with many (sometimes steep) costs. They will generate ever-increasing numbers of API calls, require vast amounts of infrastructure, and demand new levels of scale and management.
Voice represents the new computing interface model, one that many point to as the interaction model of the future. And we are just a few percentage points of accuracy away from the technical prowess to make it ready for prime time. Mary Meeker's recently posted Internet Trends report for 2016 calls out the same trend.

Performance

The performance of these interfaces is key to adoption, but the voice recognition system is often hosted by a third party on another network. Voice-driven applications must send data to one of these providers, get a response, and process it. Delivering results to the user in under 10 seconds? That's quite a high service level, considering most transactions lack visibility from end to end. Transaction tracing, tying the user request through the dependent systems and APIs, will be critical to meeting this response time requirement. The sketch below shows where those hops and timings sit.
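As a minimal sketch of the moving parts (the recognition endpoint, payload, and response shape here are hypothetical), per-hop timing is the first step toward that end-to-end view:

```python
import time
import requests  # third-party HTTP client

# Hypothetical endpoint; every real provider has its own API and auth scheme.
RECOGNIZER_URL = "https://speech.example.com/v1/recognize"
BUDGET_SECONDS = 10.0  # the end-to-end response-time target discussed above

def answer_voice_request(audio_bytes):
    timings = {}
    start = time.monotonic()

    # Hop 1: ship the audio to the third-party recognizer on another network.
    resp = requests.post(RECOGNIZER_URL, data=audio_bytes, timeout=5)
    timings["recognizer"] = time.monotonic() - start

    # Hop 2: act on the transcript (backend lookups, downstream APIs, etc.).
    t = time.monotonic()
    answer = handle_transcript(resp.json().get("transcript", ""))
    timings["backend"] = time.monotonic() - t

    timings["total"] = time.monotonic() - start
    if timings["total"] > BUDGET_SECONDS:
        # In production this is where tracing across systems earns its keep.
        print(f"Response-time budget missed: {timings}")
    return answer

def handle_transcript(text):
    return f"You said: {text}"  # placeholder for the real application logic
```

Even this toy version makes the point: the slowest hop is usually one you don't own, so you need timing data from every segment to know where the budget went.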
Another example is the wild success of Amazon’s Alexa voice services. Amazon's first device, the Echo, is #2 in electronics in Amazon’s store today (June 2016), even after 20 months. In this short amount of time, there have been over 1,000 integrations, known as “Skills.” Some of the most impressive Skills replace existing interfaces. There are useful apps ranging from reference lookups, news and stocks, home automation, travel, and ordering goods and services to, of course, personal and social data. To give a sense of what a Skill involves, a minimal sketch follows.
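On the backend, a Skill is essentially a handler that maps a spoken intent to a JSON speech response; here is a minimal sketch as an AWS Lambda function (the intent name and reply text are invented for illustration):

```python
# Minimal Alexa Skill backend as an AWS Lambda handler (illustrative sketch).

def lambda_handler(event, context):
    request = event.get("request", {})
    if (request.get("type") == "IntentRequest"
            and request.get("intent", {}).get("name") == "GetBalanceIntent"):
        speech = "Your balance is one hundred dollars."  # invented reply
    else:
        speech = "Sorry, I did not understand that."

    # Alexa Skills Kit response envelope: plain-text speech, end the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The simplicity is deceptive: an intent like this will typically fan out to existing backend systems, which is where the integration and performance challenges come in.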
Among the most popular Skills are Capital One’s offerings. Capital One has a dedicated mini-site focused on this functionality. Capital One is one of an elite cadre of ‘traditional’ companies that recognize and embrace the imperative to innovate and leverage new technologies for the benefit of their customers. They’re paving the way with their efforts, example, and important contributions to open source technologies. At the same time, though, rolling out new interaction schemes also brings challenges when integrating existing backend systems with new API-driven functionality such as that required by Alexa. Troubleshooting and ensuring a high-quality user experience needs a proper end-to-end view across multiple systems and technologies. That 10-second threshold is ambitious given the complexity of the systems involved. But as we’ve seen with the traditional web, as consumers adopt and grow comfortable with new technologies, the bar quickly goes higher, never in the opposite direction.