Monday, July 17, 2017

CIO Insights: Gartner CIO Survey Shows Canadian Growth Increasing

As a Canadian, and to celebrate Canada turning 150 years old, I wanted to call out some interesting trend data I recently came across. Having spent a good amount of time visiting customers in Canada in 2017, as I do each year, I have noticed changes in the country, quite possibly due to the changes in government and the political climate, or to change and progress in the private sector.

Canada faces economic challenges today, mostly due to a high reliance on natural resources and currently low market prices. Even with these difficulties, IT budgets in Canada are increasing, surpassing global average growth rates based on the Gartner survey data.


Breaking this down further, only 38% of government respondents projected budgetary growth versus 64% of non-government respondents in Canada; last year those numbers were 16% and 57%, respectively. Based on the survey group, Canada is seeing increased technology investment, and much of it is coming from the private sector rather than the public sector, which is encouraging for the Canadian economy.

I will still be making a couple of trips to Canada in 2017, including speaking at the Full Stack conference in Toronto (http://2017.fsto.co/). I'll be presenting new content, including new thoughts around monitoring and instrumentation, and I'm looking forward to finalizing my presentation over the next couple of months. Happy birthday, Canada; I look forward to seeing the results of these changes. See everyone in Toronto.

Friday, June 30, 2017

Digital Business Operations: The New Mode for IT Operations


Recent conversations I've had with others in the IT Operations space helped me formulate an idea I have been working on for a while concerning the future of Ops. The old mode of operating (call it ITIL-based, mode 1, or whatever you prefer) is not going to be the primary mode of operations in the long term. Senior leadership has acknowledged this and made concerted efforts to change hiring habits, spending, and roadmaps for technology and the delivery of new functionality.

More advanced teams operating in an agile manner, or mode 2, consist of smaller, more integrated teams made up of individuals with different skills. These teams are meeting today's digital business challenges. In most enterprises, they are part of a bimodal strategy, but bridging the gap between mode 1 and mode 2 is something few have solved. I'm personally not sure it is possible, due to cultural and fundamental differences in beliefs and trust. In many organizations, there is a high degree of variability in the level of investment between mode 1 and mode 2, but most leaders agree the future is moving more of the staff towards mode 2 due to business demand. Listening to customers is key.

There is such a fundamental shift occurring between the two modes of operation that we need two terms to explain how these teams operate and what they require from a people, process, and technology perspective. Bimodal is not quite bifurcated enough compared to what is happening in these enterprises today, so I'm coining the terms Digital Business Operations and IT Operations for these two teams. There will be an acceleration of new technologies and capabilities which will further separate the way these teams operate, making unification even more complicated than it is today. Thankfully, there will be better infrastructure abstraction technologies which will allow each of these teams to operate independently (naturally, for root cause we'll need bridges, which is our goal at AppDynamics). Many believe the answer will be the adoption of private and hybrid PaaS technologies, but I find these too complex and rooted in yesterday's problems. A better, lighter-weight approach will emerge, built upon containers and orchestration, making infrastructure abstraction simple versus the complexity we see today in private PaaS.

The changes in infrastructure, application architecture, and management are still confined to pockets of enterprises, and are often experimental in nature. Similarly, digital business and agility must allow for experimentation, but at some point the experiment solidifies into a core business tenet. This solidification is what will occur within Digital Business Operations, resulting in new, more repeatable (or industrialized) ways to handle processes, toolchains, and workflows which today are implemented inconsistently between organizations.


Digital Business Operations requires a fundamental change in specific areas we've taken for granted in mode 1 IT Operations. These include process frameworks such as ITIL, service management (ITSM, ticketing, bug tracking), automation, and configuration management (especially the concept of a CMDB). Each of these is a topic I hope to cover in future posts, where I'll share some of the challenges, ideas as to how vendors may or may not solve these issues, and some insight into what practitioners and first-mover organizations are doing to address these problems.

Friday, June 16, 2017

Instrumentation is Changing... Finally!

I've always been a fan of trying to solve complicated problems. As an end user, I applied various technologies and tools to diagnose some strange ones over the years. Applications have become increasingly decoupled and distributed, requiring monitoring and diagnostics to change significantly. Let's look at a short timeline of the changes which have occurred, why they were necessary, how they helped address a technological shift, and which challenges remain.

| Phase | Why | How Overcome |
|---|---|---|
| Component monitoring | Distributed systems became pervasive | High-end solutions (CA, BMC, HP) became commoditized (SolarWinds) |
| Event correlation | Too many monitoring tools (still applies) created information overload | Enterprises rely on antiquated tools; many have given up |
| Log analytics | Diagnostics too challenging in distributed systems | Splunk unlocked it, but ELK has commoditized it |
| Front-end monitoring | Customer centricity is the key for digital business | Hasn't yet been well adopted; penetration is still under 20% |
| Time-series metrics | Microservices created too many instances, creating too much data | Not yet solved, but open source is now growing up to handle the scale |
| Tracing | Complex and distributed systems are difficult to diagnose | Beginning to evolve |

In each of these phases we've had commercial innovators, and over time they have been replaced by either open source or commoditized commercial solutions which are basic and low cost, typically because there were frankly bigger problems to solve.

I would say that 2010 to 2015 was the era of log analytics. At this point the technology is pervasive; 95% of companies I speak to have a strategy. Most of them use a combination of tools, but inexpensive or free software (not necessarily a trait shared by the underlying hardware or cloud storage) is becoming the norm. Most enterprises are using at least two vendors today for log analytics; typically one of them is Splunk, and very often the other is ELK. When this market was less mature, ELK was more fragmented, but now the solidified Elastic Stack platform which Elastic codified has made it a viable alternative.
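If it helps to picture what actually feeds these pipelines, below is a minimal sketch of emitting structured JSON log lines from Python, the kind of output a Splunk or ELK deployment would typically ingest. The logger name and fields are illustrative choices on my part, not tied to any particular product.

```python
# Minimal sketch: structured (JSON) logging that a log analytics stack
# such as ELK or Splunk can ingest. Field names are illustrative.
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed")  # one JSON document per line, ready for indexing
```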

The era of end-user monitoring peaked in 2015. Everyone was interested, but implementations remain rather small for the use cases of performance monitoring and operational needs. End-user experience is a critical area for today's digital businesses, but it often requires a level of maturity which is a challenge for most buyers. I do expect this to continue growing, but there will always be a gap between end-user monitoring and backend monitoring. Most vendors who have tried to build this as a standalone product have failed to gain traction. A recent example is SOASTA, which had good technology and decent growth but failed to build a self-sustaining business; ultimately SOASTA was sold to Akamai.

2016 was the era of time-series metrics, when that market peaked and we saw a massive number of new technology companies flourish. Examples include increased traction by vendors like Datadog, along with newer entrants including Wavefront, SignalFx, and others. We've just seen the start of M&A in this area, with SolarWinds buying Librato in 2015 and, more recently, VMware buying Wavefront in 2017. We also have the commercial monitoring entrant Circonus, which has begun OEMing its time-series database under the IronDB moniker. The next phase of this market will be the maturation of time-series databases in the open source world; examples include Prometheus, and I'm keeping an eye on InfluxDB as these options mature for monitoring, IoT, and other time-series use cases, especially in making data storage at scale easier. The other missing component is front-ends like Grafana improving significantly with better workflows and easier usage. Ultimately we may see an ELK-like stack emerge in the time-series world, but we'll have to wait and see.
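For a feel of how lightweight the open source side has become, here is a minimal sketch using the Python prometheus_client library to expose metrics that Prometheus can scrape and Grafana can chart. The metric names and port are assumptions for illustration only.

```python
# Minimal sketch: exposing time-series metrics with the open source
# prometheus_client library. Metric names and the port are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("checkout_requests_total", "Total checkout requests")
LATENCY = Histogram("checkout_latency_seconds", "Checkout latency in seconds")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()
        with LATENCY.time():  # records the observed duration as a histogram sample
            time.sleep(random.random() / 10)
```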

Due to the increase in microservices and the resulting Docker containers, orchestration layers are beginning to take hold in many organizations. As a result of these shifts in application architectures, 2017 seems to have more of a focus on tracing. It is great to see tracing take hold, as root-cause isolation in distributed systems requires good tracing (at least until ML matures in a meaningful way). Tracing also allows you to understand impact when you have service outages or degradations. Additionally, tracing can be used to tie together IT and business metrics and data; the trace is essentially the "glue" of monitoring, so to speak. By tagging and tracing you are essentially creating a forensic trail, and although this has yet to be applied within security, it will be! Gartner even began talking about the application to security not long ago in the research note titled "Application Performance Monitoring and Application Security Monitoring Are Converging" (G00324828, Cameron Haight, Neil MacDonald). Detailed tracing is what companies like AppDynamics and Dynatrace do, and it has been the core of their technologies since they were founded on these concepts. These tools solve complex problems faster and do much more than just tracing, but the trace is the glue in the technology. Unfortunately for buyers, these monitoring and diagnostic technologies typically come with a high price tag, but they are not optional for digital businesses.

Today's open source tracing projects require developers to do the work, which is different from commercial APM tools that auto-instrument and support common application frameworks. The open-source tooling is getting better, with standards like OpenTracing and front-ends such as Zipkin continually evolving. The problem is that these technologies still lack the automated capture you see in commercial tools. How often do I trace? What do I trace? When do I trace? If you expect developers to make these decisions all the time, I think there will be issues. Developers don't understand the macro performance picture of how their component fits into the bigger whole, and once a service is written and reused by other applications, it's hard to understand the performance implications. I am interested in, and currently experimenting with, proxy-level tracing to help extend tagging and tracing in areas where you may not want or need heavier agents. I hope to share more on this in a future post.
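To make the manual-instrumentation point concrete, here is a minimal sketch against the OpenTracing Python API. The operation name and tag are hypothetical, and in a real deployment you would register a concrete tracer (Zipkin, Jaeger, etc.) behind this API rather than rely on the default no-op tracer.

```python
# Minimal sketch: manual instrumentation with the OpenTracing Python API.
# The span name and tag are hypothetical; a concrete tracer (Zipkin, Jaeger,
# etc.) must be registered for the data to go anywhere.
import opentracing

tracer = opentracing.global_tracer()  # no-op tracer unless one is registered

def lookup_account(account_id):
    # The developer decides what to trace and which tags to attach;
    # this is exactly the work that commercial APM agents automate.
    with tracer.start_active_span("lookup_account") as scope:
        scope.span.set_tag("account.id", account_id)
        # ... call the database, downstream services, etc. ...
        return {"id": account_id}
```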

The long-term goal, however, is to combine all of these siloed data sets and technologies by using more advanced correlation capabilities, in addition to applying new algorithms and machine learning to the problem, which is currently in its infancy. Over at AppDynamics, we do this based on the trace itself, but we are evolving new capabilities, and doing so in a unified manner that ties back to the trace. Monitoring of digital businesses is going to continue to be an exciting space for quite a long time, requiring constant evolution to keep pace with evolving software and infrastructure architectures.

I'll be giving an updated talk on monitoring and instrumentation which will cover a lot of what is in this article, going into more depth around instrumentation and tracing. I will premiere this new talk this October at the Full Stack Conference, October 23rd and 24th, 2017, in Toronto (http://2017.fsto.co for tickets). I am looking forward to contributing to this great event.

Monday, October 10, 2016

That's Too Expensive: The Pricing Battle

Pricing is a tricky beast. I've seen a lot of models out there, and each has pros and cons. Firstly, it depends on who your target customers are. My experience is in enterprise software, which typically has a larger transaction price and volume, yet the number of deals is smaller. When focusing on SMBs as a target, the models change along with the selling motion. For good reasons, most software companies want to have both models, but I haven't seen many companies able to execute this strategy. You end up with vendors adopting one model and causing fractures in the way tooling is licensed, and many times the economics don't equate to good business decisions for the end user or the vendor. Here are the various licensing models I've seen in the IT Operations Management space:
  • Application footprint or infrastructure footprint based pricing
    • Per node, per CPU, per application server, per JVM, per CLR, per runtime
  • User-based pricing
    • Per concurrent user, per named user, per monthly active user, per page view
This model can track the users being monitored (in the case of end-user experience), or the users using a tool. For example, in Service Desk it could be the number of help desk agents.
  • Storage based pricing
    • Per gigabyte consumed, per event consumed
The measurements become more challenging in highly dynamic or microservices environments, which causes additional issues regarding the usage of specific technologies. Most apps consist of both legacy and modern technologies; hence the value a solution delivers in managing them differs.
Then there are the terms of the license, which can be anywhere from monthly billing to 5-year commitments, and these commitments can be for a minimum and/or a burst model. I've had some great discussions with analysts around value-based pricing. Although this is a very loosely defined term, building a pricing model based on the value someone gets from the software sounds perfect, in theory. How many problems are you detecting? Solving? Although this makes sense, calculating the "value" is a guess in most cases. With APM you can determine the amount of time, money, or revenue saved, but it's still a challenge to build and measure clear ROI. Measuring ROI becomes even harder with other, less customer-centric technologies.
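To show how differently these models can play out, here is a toy comparison in Python. The node counts, data volumes, and rates are made-up numbers for illustration, not anyone's actual price list.

```python
# Toy comparison of two licensing models. All numbers are hypothetical,
# purely to illustrate how the model chosen changes the bill.
nodes = 200                      # assumed monitored hosts
gb_ingested_per_month = 1_500    # assumed log/event volume

per_node_annual = 1_000          # assumed $/node/year
per_gb_monthly = 2.50            # assumed $/GB/month

node_based_cost = nodes * per_node_annual
storage_based_cost = gb_ingested_per_month * per_gb_monthly * 12

print(f"Node-based:    ${node_based_cost:,.0f}/yr")
print(f"Storage-based: ${storage_based_cost:,.0f}/yr")
```

With these made-up numbers the node-based model costs several times more, but double the data volume and the storage-based bill doubles while the node-based bill stays flat; that sensitivity to how the environment grows is exactly what buyers and vendors argue over.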

In my career I’ve now seen pricing from three distinct angles, I’m going to summarize what I’ve found in each of my roles. These are personal experiences, and your mileage may vary.

End User

As an end user, I bought tens of millions of dollars' worth of software over my 17 years as a practitioner. I always tried various tricks to ensure I was getting the best deal possible for my employer. Licensing and pricing are a challenge. How do I pay as little as possible for the best solution for my needs? When can I afford to buy an inferior product to meet my budgetary needs? When should I request more budget to select a technology that will differentiate us as a business? Regardless, the net is that everything is overpriced, and I could never get the pricing low enough to remain satisfied. It didn't matter whether I was paying based on application, infrastructure, consumption, or on-demand pricing. Although my technology providers were my partners, I also felt I needed to extract the most value from them for the least amount of capital to serve our shareholders.

Analyst

As I transitioned over to an analyst role, I learned yet more tricks around pricing and deal negotiation. Most technologies go through a cycle from immaturity to mainstream, and finally into obsolescence. Gartner uses two models to describe this: one is the Hype Cycle, which follows a technology from its trigger through to becoming productive, and when coupled with a Market Clock the full lifecycle becomes visible. Market Clocks address the lifecycle of a market and how technology becomes standardized, commoditized, and eventually replaced. These constructs are useful to both end users and vendors to understand how technologies mature, and the related pricing and competition to be expected within a particular technology market. I often gave advice to vendors, given the number of licensing and pricing models I had seen and what end users were asking for. Clearly, there were always complaints by end users about any pricing model aside from open source. Everyone loves the idea of free software, yet there are many hidden costs to take into account. Which technology providers can deliver results is often more important than the licensing model.

Vendor

I was fortunate enough to run the AppDynamics pricing committee for about six months, and I learned a lot about how to license and price. This experience was the first time I had studied margins. AppDynamics software is available both on-premises and via SaaS delivery, which makes the model especially interesting; each delivery model has different margins and costs to consider, and these have to be taken into account when we determine a pricing model and our discounting. I also learned, first hand, the struggles customers had with our pricing model. The net result is that regardless of how you price a solution, users complain about the pricing. There is no way to solve this problem that I have found, and it was rather depressing. The opinions and strategies across sales, product management, and marketing are all different, and each group has a perspective which is very challenging to rationalize. I am not sad to have moved along from that responsibility :)

My Take

Every model is flawed; pricing models are inflexible and software costs too much. If you don't bundle products, then quoting and licensing become complex, yet if you bundle, you end up with shelfware. End users want to pay for what they use, and yet they don't want commitments; they want "on-demand" pricing. Without commitments, most vendors have issues predicting revenue or demand. This is often an issue if you are using traditional hosting for the product, which most SaaS companies do to some extent for cost reasons. Software, unlike hardware, doesn't have the same type of fixed cost to deliver; the margins are different. End users and salespeople also want the model to be simple to understand, calculate, and rationalize.

I may share more secrets later, maybe around how to negotiate licenses and different ways to get leverage. Leave your comments here or via Twitter (@jkowall) on what interests you about this topic.

Tuesday, September 20, 2016

Industry Insights: The Cycle of Innovation: The Rise and Fall of HP Software

I find the cycles in technology fascinating; they are an unfolding lesson in history. Although we believe our industry moves at a rapid pace, there are many macro cycles which occur over decades, and the patterns do not change much. The first and current example is HP Software (with more focus on IT Operations). Let's rewind to the foundational pieces of HP Software, which came from the acquisition of Mercury Interactive in 2006. HP spent $4.5b to purchase Mercury and built a large, well-established business off the platform in both Quality Assurance (QA) and IT Operations. Over time HP failed to invest in what was at one point market dominance in QA and a substantial footprint in ITOps, and these once large market shares eroded as technologies commoditized and buying shifted to best-of-breed. HP's solution set became difficult to implement (even for HP engineers), and ongoing management was hard, requiring consulting and many resources. Having managed this portfolio at scale in the past, I felt this pain first hand. I speak to customers regularly today who use these tools, and none of them are satisfied users. The inability of these technologies to meet buyer demands, and the burden of maintaining them, resulted in significant market share decline. Based on the Gartner market share data (Market Share: All Software Markets, Worldwide, 2015), HP Software's revenues went from $4.4b in 2013 to $3.4b in 2015, showing a major issue in execution.

This kind of decline is highly abnormal within healthy businesses, and may be overstated by the Gartner data, but it creates an opportunity for companies which have inexpensive access to capital. Due to low interest rates and the appetite for debt investments, there is ample access to cash. Just yesterday Thoma Bravo raised another $8b fund, which was oversubscribed. Capital supply is outstripping the number of companies available to be acquired and streamlined. The criteria for these acquisitions focus on taking advantage of the massive amount of software maintenance generated by enterprise software businesses; there are companies whose business goal is collecting software maintenance while doing little innovation.

Analyzing HP's earnings statement filed on September 9th, the software margins were only 17% (http://investors.hpe.com/~/media/Files/H/HP-Enterprise-IR/documents/q3-2016-quarterly-results.pdf, page 11), meaning optimization would improve profit. Most of this profit comes from maintenance rather than new product sales or innovation.

Micro Focus is a good fit for this kind of business. Micro Focus has acquired loads of once innovative assets that have large install bases collecting maintenance revenue. Its business includes software assets resulting from the merger with Attachmate, along with newer additions. Much of the product set is a plethora of mainframe software, but it also includes SUSE Linux, NetIQ, and other software graveyards. I still see quite a bit of NetIQ out there, but these are large legacy install bases, especially in the government vertical. Replacement of these legacy management tools is a challenge; the result is a long tail of software maintenance. These customers pay vast sums of money for maintenance while getting no technology advances.

Analyzing the Micro Focus annual report, maintenance outpaces license sales by over 2x. License sales and maintenance both declined between 2015 and 2016, with maintenance suffering the larger decline. That decline is a result of customers switching providers or technologies and stopping maintenance payments.

Another notable example of privatization and PE was the acquisition of business model innovator SolarWinds. SolarWinds IPOed in 2009 with $116m in annual revenue and had grown the business to $428m of annual revenue before being privatized in 2016. SolarWinds created not only simple and cost-effective tools for engineers but did so by leveraging only inside sales, driven by aggressive email marketing. SolarWinds figured out this formula long before companies like Marketo (and others) made it easy to build similar models. For many years, SolarWinds had a business advantage in the selling and marketing of its offerings to the technologist buyer. The tools weren't much better, but they were cheaper, easier to try and buy, and the value was there. SolarWinds ran into issues when others were able to replicate its business model, removing some of its advantages. In response, SolarWinds had to expand its portfolio, often too quickly, to meet the demands of Wall Street investors. Several of SolarWinds' acquisitions were not well thought through, and the efficiency of the business suffered. The net margin was over 30% in 2011, and when the company was privatized in 2016, it was down to 18%. The SolarWinds portfolio today consists of many products which are not tied together and are highly complex to adopt. They still have several good technologies, but using them together provides no advantages. The current situation leaves a lot of room for current PE owners Silver Lake Partners and Thoma Bravo to optimize the spend and the business, which will result in profits for their fund investors. Unfortunately, it will likely not result in a lot of investment in the technologies to keep pace with current market demands.

Similarly, Dynatrace was a highly innovative product at the time of its first public release in 2006, focused on a gap in the market with an offering to address performance during the development and testing lifecycle. After expanding from Europe to the US, it was acquired by Compuware in 2011 for $256m and became part of a portfolio of products. After Compuware was acquired by PE in 2014, the firm split off the mainframe business of Compuware (which kept the name) and brought back the Dynatrace name for the Application Performance Monitoring products. In the meantime, the PE firm also took Keynote private and combined it into Dynatrace. Over 14 years of being public, Compuware's revenues went from a peak of $2.23b in 2000 to a low of $720m in 2014, which is less revenue than it made in 1997. Today the Dynatrace portfolio consists of several products which have different backends, user interfaces, and technologies. You are probably noticing a trend here, and it doesn't benefit the end users of these technologies.

A slightly older PE acquisition, the privatization of BMC, occurred in 2013. With the help of BMC's new owners, the optimization has taken hold; they are focusing on specific assets, and the reduced spending has slowed the revenue decline. From 2013 to 2015, based on the Gartner data, BMC's revenue dropped from $1.87b to $1.84b. BMC's privatization required raising a massive amount of debt, affecting the risk rating of the company's ability to repay those debts. Based on Moody's, over $6.7b in debt was raised (https://www.moodys.com/research/Moodys-downgrades-BMC-to-B3-rates-new-parent-notes-Caa2--PR_296668), causing an increase in investor risk. Similarly, when you look at the LinkedIn company data regarding BMC, the employee count has been reduced by 7% in the last two years, while engineering and sales headcount increased only 1%. PE firms are cutting staff and optimizing the business.

Dell/EMC is yet another example of two companies which have been taken private, and similar stories are playing out at other large corporations including IBM and CA. Each has a similar story, and the final shoe has not yet dropped around their software businesses. IBM has had some successes in software, but its IT Operations software business is in a similarly dire situation to what has been described above. Among my past predictions, I expect CA to be taken private before the end of next year (2017); we'll see how accurate that turns out to be. Mike Gregoire has made some drastic improvements since taking the helm in late 2012, yet CA's portfolio continues to be legacy (heavy on the mainframe) with almost no revenue coming from SaaS subscriptions.

In recently published research by Gartner's Gary Spivak (I&O Leaders Must Actively Respond When Key Vendors Face Activist Investor Pressures, published 29 March 2016, ID: G00291934), he states in his strategic planning assumption: "By 2020, eight of the top 12 publicly traded IT operations management (ITOM) vendors will respond to pressure from activist investors to sell all or parts of their businesses." As someone who studies the financial and market side, I'm quite happy to be siding with Gary's perspective on the future of this market.


You can probably spot the cycles I have highlighted. I would expect these to continue, especially if borrowing money remains cheap and accessible capital remains plentiful.

Thanks for the readership, please leave comments here or @jkowall on Twitter.

Tuesday, July 12, 2016

Breaking Down Engineering Investment at Innovative Companies

I'm always looking for good ways to understand how companies invest money internally. When I was a Gartner analyst, I would keep an eye on data from Glassdoor and LinkedIn regularly to try to gauge the trends, and I would use that data to check my understanding of companies against what I was being told by the companies and end users themselves. I always did this by checking content on the sites regularly and recording it to build patterns and trends.

Thankfully, LinkedIn has come to the rescue by releasing the Premium Insights feature on company pages. I’ve been looking at this data, and I’m finding some interesting trends regarding what it uncovers. I’m also going to compare the LinkedIn data with the Gartner data which shows market share and revenue.  

I'm going to have a look at the larger companies in ITOM to compare those which are investing for the future of their customers and those which seem to prioritize other functions. I'm going to look at engineering and sales percentages, along with revenue based on the Gartner market data.

| Company | Employee Growth (2 yr) | Employee Tenure (yrs) | Engineering (%) | Sales (%) | Revenue Change (3 yr, Software) |
|---|---|---|---|---|---|
| Source | LinkedIn | LinkedIn | LinkedIn | LinkedIn | Gartner |
| AppDynamics | 114% | 1.7 | 32% | 28% | 531% |
| AppNeta | 48% | 2.6 | 22% | 32% | 162% |
| BMC | -5% | 7.2 | 10% | 17% | -2% |
| CA Technologies | -13% | 6.5 | 30% | 16% | -10% |
| Dynatrace | 37% | 3.1 | 21% | 23% | N/A |
| HP | -1% | 7.9 | 14% | 9% | -22% |
| IBM | -3% | 8.1 | 16% | 8% | -9% |
| Idera | 18% | 3.1 | 9% | 49% | N/A |
| New Relic | 90% | 1.7 | 27% | 29% | 188% |
| Microsoft | -4% | 6.1 | 25% | 13% | 3% |
| Google | 20% | 3.8 | 32% | 12% | 26% |
| SolarWinds | 11% | 3.1 | 15% | 24% | 52% |
| ServiceNow | 67% | 2.2 | 16% | 20% | 142% |
| Splunk | 81% | 2.2 | 25% | 29% | 116% |
| VMware | 3% | 3.8 | 32% | 15% | 27% |

The reason I decided to analyze turnover is that happy employees shouldn't be leaving; then again, if you are in hyper-growth mode, turnover will likely be much higher. The other key number is the investment in engineering. If you are an end user, it's critical that your vendor is investing in engineering. It seems the sweet spot for engineering is in the mid-twenties as a percentage of headcount, but if you have a large portfolio of products, this could also potentially skew how much innovation is going on.
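If you want to slice this kind of data yourself, here is a minimal sketch using pandas. The rows are pulled from the table above, and the 22% cutoff is just an illustrative stand-in for the mid-twenties sweet spot, not a formal benchmark.

```python
# Minimal sketch: screening vendors by engineering investment with pandas.
# Rows come from the table above; the 22% cutoff is an illustrative
# stand-in for the "mid-twenties sweet spot" discussed in the text.
import pandas as pd

data = pd.DataFrame(
    [
        ("AppDynamics", 114, 32, 531),
        ("New Relic",    90, 27, 188),
        ("Splunk",       81, 25, 116),
        ("BMC",          -5, 10,  -2),
        ("HP",           -1, 14, -22),
        ("IBM",          -3, 16,  -9),
    ],
    columns=["company", "growth_2yr_pct", "engineering_pct", "revenue_3yr_pct"],
)

investing = data[data["engineering_pct"] >= 22].sort_values(
    "revenue_3yr_pct", ascending=False
)
print(investing[["company", "engineering_pct", "revenue_3yr_pct"]])
```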

Disclaimer: This is my personal website and reflects my views and opinions only.  Any comments made on this website by myself or by third parties do not necessarily reflect views or opinions of my employer.