Artificial Intelligence in Digital Operations

Not a week goes by without a vendor claiming to have applied Artificial Intelligence (AI) to running digital businesses (there was a new one this week, though I began writing this beforehand). The list of vendors continues to grow, and when you dig into the technology, the marketing often overstates what qualifies as AI. Let's take a step back and understand what AI is from a computer science perspective.

The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, and perception, as per Wikipedia. The other interesting trend is the AI effect: as we apply models and data to problems and create algorithms to solve them, we move the problem outside the scope of AI. This leads to the term AI being abused to describe much of what we do in technology. Most of the technology itself is not complex when it comes to building self-learning models, yet it is described in mathematical terms.

Users expect AI to do the work for them, providing insights and knowledge they may not possess or would otherwise need additional learning and study to reach. The net effect is that we expect magic from these intelligent machines, hence the term AI. We see this often in our digital assistants, whether the common ones (Alexa, Google Assistant, Siri, or Cortana) or more advanced offerings such as x.ai or zoom.ai. Unfortunately, outside of a couple dozen companies, few are building actual AI. Beyond those powerhouses, AI today means programming that produces outcomes we expect a human to reach, not programming that provides us with additional insight. These weaker AI solutions are often implemented as a decision tree with focused knowledge along that tree. The downside is that any gap in the decision tree or its knowledge becomes an issue the AI can no longer solve, and these expert systems show their seams quickly. The intention is for the AI to adapt, but that has yet to occur in the market.

Humans, on the other hand, can react and derive new, creative decisions based on reasoning, broader knowledge (such as technical fundamentals), and perception. Computers do not have these capabilities unless they are programmed with them, which remains a challenge today. This difference between humans and machines is one reason we do not fully automate heavy machinery when human lives are at risk (flights, military, cruise ships, and others). While people do make mistakes, they can react to the unknown; computers typically cannot.
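To make that failure mode concrete, here is a minimal sketch of such a decision-tree "AI": a hand-built runbook of symptoms and canned advice. The symptoms, branches, and advice below are hypothetical illustrations, not any vendor's implementation; the point is that anything outside the programmed branches leaves the system with nothing to say.

# Hypothetical runbook: fixed branches with fixed advice along each branch.
RUNBOOK = {
    "high_cpu": {
        "gc_time_high": "Tune the JVM heap and garbage collector settings.",
        "gc_time_normal": "Profile hot code paths for inefficient loops.",
    },
    "high_latency": {
        "db_calls_slow": "Check database indexes and connection pool sizing.",
        "db_calls_normal": "Inspect downstream service response times.",
    },
}

def advise(symptom, detail):
    # Walk the fixed tree; any gap means there is no answer at all.
    return RUNBOOK.get(symptom, {}).get(
        detail, "No guidance available - this case was never programmed."
    )

print(advise("high_cpu", "gc_time_high"))     # known branch -> canned advice
print(advise("memory_leak", "heap_growing"))  # gap in the tree -> no answer

A human engineer facing the unknown case would fall back on fundamentals and reasoning; the tree simply stops.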

When it comes to applying "AI" to IT Operations (monitoring and automation are my key focus areas), the industry is immature. For the most part, we've seen textual analysis, anomaly detection, topological analysis, regression analysis, and historical analysis. Some of these can be quite advanced, incorporating supervised learning and neural networks. As more of these capabilities are combined, they can almost seem intelligent and can help humans manage today's scale requirements.
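As an illustration of the anomaly detection piece, here is a minimal sketch that flags a metric sample when it drifts well outside a rolling baseline. The response-time numbers, window size, and threshold are made up for the example; production tools are far more sophisticated, but the core idea is similar.

# Flag samples that sit more than `threshold` standard deviations away
# from the mean of the previous `window` samples.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Illustrative data: steady response times with one obvious spike.
response_ms = [100, 102, 98, 101, 99] * 10 + [450] + [100, 103, 97]
print(list(detect_anomalies(response_ms)))

Even this toy version catches the obvious spike, but like the tools described above it has no idea why the spike happened or what to do about it.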

The downsides are that today's AI solutions fail to incorporate general knowledge of how systems operate, cannot do research or provide advice that was not programmed in, and lack "intelligence" beyond helping humans cope with challenges. Unsupervised learning capabilities are limited today and cannot create newly learned outcomes beyond those which are programmed. These defined rule books consist of expected outcomes which the computer follows, rather than the intelligence required to create new outcomes. Incorporating broader sets of knowledge such as StackOverflow, Quora, vendor knowledge bases, or other great sources of knowledge accessible on the internet would allow computers to derive new outcomes and advice. The use of these rich data sources is in its infancy; once this knowledge becomes integrated and automated, it will provide a massive benefit to those operating complex systems.

As things evolve and real intelligence starts to take hold, this will change, but we are a long way from that point, especially in digital operations today. We have some customers at AppDynamics thinking about the orchestrated enterprise of the future and how these intelligent agents can help them make better and faster business and operational decisions. We are eagerly working with these customers as a trusted partner, and we look forward to hearing your aspirations and ensuring our alignment on the future state of AppDynamics.
