
Industry Insights: The Challenges Behind New User Interfaces

The future of computing will be bifurcated. On one hand, there will be entirely new models of computing such as voice, autonomous agents, and bots, with no traditional user interfaces. On the other extreme, there will be new user interfaces that augment our ‘real’ world, such as Microsoft’s holographic computing technologies, along with the virtual reality platforms coming to market from Google and Facebook. Bringing these trends to fruition, though, will require overcoming some key technological limitations.

Voice

It’s been slowly happening for a while now: voice recognition is changing one of the key interfaces to today’s computing and applications. Apple’s Siri, Google Now, Microsoft Cortana, and the super-hot Amazon Echo, along with their smart agents, are the practical embodiments of a growing trend toward applying machine learning to voice and data. Andrew Ng, chief scientist at Baidu, says that 99 percent accuracy is the key milestone for speech recognition. Companies like Apple, Google, and Baidu are already above 95 percent and improving. Ng estimates that 50 percent of web searches will be voice-powered by 2019.

Agents

The next natural step for this increasingly accurate voice recognition technology is the incorporation of a bot that learns about your life and assists with your tasks, via voice, of course.
These new technologies will require voice recognition access, data access, and interoperability with connected assets. These agents will continually learn, access new data sources, and deliver significant value to you as a user. But these innovations also come with (sometimes steep) costs: they will generate ever-increasing numbers of API calls, demand vast amounts of infrastructure, and require new levels of scale and management.
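To make that cost concrete, here is a minimal Python sketch of how a single voice command can fan out into several downstream API calls; the service names, URLs, and endpoints are hypothetical placeholders, not real APIs.

```python
# Hypothetical sketch: one voice command fanning out into multiple API calls.
# All URLs and endpoints below are illustrative placeholders.
import requests

VOICE_API = "https://speech.example.com/v1/recognize"      # hosted speech-to-text
CALENDAR_API = "https://data.example.com/v1/calendar"      # personal data source
THERMOSTAT_API = "https://home.example.com/v1/thermostat"  # connected device

def handle_voice_command(audio_bytes: bytes, user_token: str) -> str:
    """One user utterance typically triggers several dependent API calls."""
    headers = {"Authorization": f"Bearer {user_token}"}

    # 1. Send the raw audio to a third-party recognition service.
    text = requests.post(VOICE_API, data=audio_bytes, headers=headers).json()["transcript"]

    # 2. Consult a personal data source to add context.
    events = requests.get(CALENDAR_API, headers=headers).json()["events"]

    # 3. Act on a connected asset if the command calls for it.
    if "warmer" in text.lower():
        requests.post(THERMOSTAT_API, json={"delta_f": 2}, headers=headers)

    return f"Done. You have {len(events)} events today."
```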
Voice represents a new computing interface model, one that many point to as the interaction model of the future. And we are just a few percentage points of recognition accuracy away from making it ready for prime time. In her recently posted internet trends report for 2016, Mary Meeker calls out this same shift toward voice.

Performance

The performance of these interfaces is key to adoption, but the voice recognition system is often hosted by a third party on another network. Voice-driven applications must send data to one of these providers, get a response, and process it. Delivering results to the user in under 10 seconds? That’s quite a high service level, considering most transactions lack end-to-end visibility. Transaction tracing, tying the user request through the dependent systems and APIs, will be critical to meeting this response-time requirement.
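As an illustration only (not any particular vendor’s tracing API), a simple sketch of tying each segment of a voice-driven request to a shared trace id and timing it against a latency budget might look like this:

```python
# Illustrative sketch: time each segment of a voice-driven request so a slow
# dependency can be pinpointed against the overall latency budget.
import time
import uuid

LATENCY_BUDGET_S = 10.0

def traced_call(spans, name, fn, *args, **kwargs):
    """Record how long each dependent call takes under a shared trace."""
    start = time.monotonic()
    try:
        return fn(*args, **kwargs)
    finally:
        spans.append({"segment": name, "seconds": round(time.monotonic() - start, 3)})

def handle_request(audio_bytes, recognize, fetch_backend_data, render_response):
    trace_id = str(uuid.uuid4())   # ties all segments to one user request
    spans = []

    text = traced_call(spans, "speech-to-text (3rd party)", recognize, audio_bytes)
    data = traced_call(spans, "backend API", fetch_backend_data, text)
    reply = traced_call(spans, "render response", render_response, data)

    total = sum(s["seconds"] for s in spans)
    if total > LATENCY_BUDGET_S:
        print(f"trace {trace_id}: budget exceeded ({total:.2f}s)", spans)
    return reply
```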
Another example is the wild success of Amazon’s Alexa voice services. Their first device, the Echo, is #2 in electronics in Amazon’s store today (June 2016), even after 20 months on the market. In that short time, there have been over 1,000 integrations, known as “Skills.” Some of the most impressive Skills replace existing interfaces. There are useful apps for everything from reference lookups, news, and stocks to home automation, travel, ordering goods and services, and, of course, personal and social data.
Among the most popular Skills are Capital One’s offerings; Capital One has a dedicated mini-site focused on this functionality. Capital One is one of an elite cadre of ‘traditional’ companies that recognize and embrace the imperative to innovate and leverage new technologies for the benefit of their customers. They’re paving the way with their efforts, their example, and important contributions to open source technologies. At the same time, rolling out new interaction schemes brings challenges when integrating existing backend systems with the new API-driven functionality that Alexa requires. Troubleshooting and ensuring a high-quality user experience demands a proper end-to-end view across multiple systems and technologies. That 10-second threshold is ambitious given the complexity of the systems involved, but as we’ve seen with the traditional web, as consumers adopt and grow comfortable with new technologies the bar only goes higher, never in the opposite direction.
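For context on what that integration looks like, a Skill ultimately resolves to an endpoint the developer must wire into existing backends. Below is a minimal sketch of an AWS Lambda handler following the Alexa Skills Kit’s JSON request/response shape; the GetBalanceIntent name and the get_account_balance() backend call are hypothetical placeholders, not Capital One’s actual implementation.

```python
# Minimal sketch of an AWS Lambda handler for an Alexa Skill, following the
# Alexa Skills Kit JSON request/response shape. The intent name and the
# get_account_balance() backend call are hypothetical placeholders.
def get_account_balance(user_id: str) -> str:
    # Stand-in for a call into an existing backend system.
    return "1,234.56"

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
            request["intent"]["name"] == "GetBalanceIntent":
        user_id = event["session"]["user"]["userId"]
        speech = f"Your balance is {get_account_balance(user_id)} dollars."
    else:
        speech = "Sorry, I didn't understand that."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```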
