
Industry Insights: The Challenges Behind New User Interfaces

The future of computing will be bifurcated. On one hand, there will be entirely new models for computing such as voice, autonomous agents, and bots, with no traditional user interfaces. At the other extreme, there will be new user interfaces that augment our ‘real’ world, such as Microsoft’s holographic computing technologies and the virtual reality platforms coming to market from Google and Facebook. Bringing these trends to fruition, though, will require some key technological limitations to be overcome.

Voice

It’s been slowly happening for a while now: voice recognition is changing one of the key interfaces to today’s computing and applications. Apple’s Siri, Google Now, Microsoft Cortana, and the super-hot Amazon Echo, along with their smart agents, are the practical embodiments of a growing trend toward applying machine learning to voice and data. Andrew Ng, chief scientist at Baidu, says that 99 percent accuracy is the key milestone for speech recognition. Companies like Apple, Google, and Baidu are already above 95 percent and improving, and Ng estimates that 50 percent of web searches will be voice-powered by 2019.
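
To make those accuracy figures concrete, speech recognition quality is commonly scored as word error rate (WER): the edit distance between the recognized transcript and a reference transcript, divided by the number of reference words. The sketch below is illustrative only; the transcripts are invented for the example.

# Minimal sketch: word error rate (WER) as word-level edit distance.
# The example transcripts are made up purely for illustration.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

ref = "what is the weather in seattle today"
hyp = "what is the whether in seattle today"
print(f"WER: {word_error_rate(ref, hyp):.0%}")  # 1 error over 7 words, roughly 86% accuracy

By this measure, the jump from 95 to 99 percent accuracy means cutting the error rate from one word in twenty to one in a hundred, which is why Ng treats it as the milestone.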

Agents

The next natural step for this increasingly accurate voice recognition technology is the incorporation of a bot that learns about your life and assists with your tasks, via voice, of course.
These new technologies will require voice recognition access, data access, and interoperability with connected assets. These agents will continually learn, access new data sources, and provide significant value to you as a user. But these great innovations also come with many (sometimes steep) costs: they will generate ever-increasing numbers of API calls, demand vast amounts of infrastructure, and require new levels of scale and management.
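
To see why the API-call volume grows so quickly, consider a single spoken request that an agent fans out to several backend services. The sketch below is hypothetical; the services are stubs standing in for real data sources, not any particular agent’s implementation.

# Hypothetical sketch: one voice request fans out into many backend API calls.
from concurrent.futures import ThreadPoolExecutor
import time

def call_service(name: str) -> str:
    time.sleep(0.1)                 # stand-in for network latency to one API
    return f"{name}: ok"

def handle_utterance(utterance: str) -> list:
    # A single question ("what does my morning look like?") touches several
    # independent services, each one an API call to provision, scale, and manage.
    services = ["calendar", "weather", "traffic", "news", "smart-home"]
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        return list(pool.map(call_service, services))

print(handle_utterance("what does my morning look like?"))
# Five API calls for one spoken sentence; multiply by millions of users per day.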
Voice represents the new computing interface model, one that many point to as the interaction model of the future, and we are just a few percentage points away from the technical accuracy needed to make it ready for prime time. Mary Meeker’s recently posted Internet Trends report for 2016 makes a similar observation.

Performance

The performance of these interfaces is key to adoption, but often the voice recognition system is hosted by a third party on another network. Voice-driven applications must send data to one of these providers, get a response, and process it. Delivering results to the user in under 10 seconds? That’s quite a high service level, considering most such transactions lack end-to-end visibility. Transaction tracing, tying the user request through the dependent systems and APIs, will be critical to meeting this response-time requirement.
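
On the application side, that looks roughly like the sketch below: one trace ID carried through every segment, with per-segment timings checked against the overall budget. The provider call, timings, and field names are assumptions for illustration, not a specific vendor’s API.

# Illustrative sketch only: budget a 10-second response and carry one trace ID
# across each dependent call so slow segments can be identified.
import time
import uuid

BUDGET_SECONDS = 10.0

def recognize_speech(audio: bytes, trace_id: str) -> str:
    time.sleep(0.4)                      # stand-in for the third-party provider hop
    return "turn on the kitchen lights"

def run_backend_action(text: str, trace_id: str) -> str:
    time.sleep(0.2)                      # stand-in for our own backend work
    return f"done: {text}"

def handle_request(audio: bytes) -> str:
    trace_id = str(uuid.uuid4())         # one ID tied through every segment
    timings = {}
    start = time.monotonic()

    t0 = time.monotonic()
    text = recognize_speech(audio, trace_id)
    timings["speech_provider"] = time.monotonic() - t0

    t0 = time.monotonic()
    result = run_backend_action(text, trace_id)
    timings["backend"] = time.monotonic() - t0

    total = time.monotonic() - start
    print(f"trace={trace_id} segments={timings} total={total:.2f}s "
          f"within_budget={total < BUDGET_SECONDS}")
    return result

handle_request(b"\x00\x01")              # dummy audio bytes

Without that per-segment view, a slow response looks the same whether the delay is in the recognition provider, the network, or your own backend.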
Another example is the wild success of Amazon’s Alexa voice services. The first Alexa device, the Echo, is #2 in electronics in Amazon’s store today (June 2016) even after 20 months on the market. In that short time, there have been over 1,000 integrations, known as “Skills.” Some of the most impressive Skills replace existing interfaces, with useful apps ranging from reference lookups, news, and stocks to home automation, travel, ordering goods and services, and, of course, personal and social data.
Among the most popular Skills are Capital One’s offerings, and Capital One has a dedicated mini-site focused on this functionality. Capital One is one of an elite cadre of ‘traditional’ companies that recognize and embrace the imperative to innovate and to leverage new technologies for the benefit of their customers, paving the way with their efforts, their example, and important contributions to open source technologies. At the same time, though, rolling out new interaction schemes brings challenges when integrating existing backend systems with the new API-driven functionality that Alexa requires. Troubleshooting and ensuring a high-quality user experience needs a proper end-to-end view across multiple systems and technologies. That 10-second threshold is ambitious given the complexity of the systems involved, but as we’ve seen with the traditional web, as consumers adopt and grow comfortable with new technologies, the bar quickly goes higher, never in the opposite direction.
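
For a sense of the integration surface, a Skill is essentially an endpoint that receives a recognized intent as JSON, consults the existing backend, and returns text to be spoken, all within the user’s patience window. The sketch below is a simplified, hypothetical stand-in; it is not Capital One’s code and omits most of the real Alexa request schema.

# Simplified, hypothetical handler for a voice "skill": receive an intent,
# call the existing backend over an API, return text to be spoken.

def get_balance_from_backend(account_id: str) -> float:
    # Stub standing in for the hop into an existing banking backend;
    # in practice this call is where most of the latency and risk lives.
    return 1234.56

def handle_intent(event: dict) -> dict:
    intent = event.get("intent", {})
    if intent.get("name") == "GetBalanceIntent":
        balance = get_balance_from_backend(intent.get("accountId", "demo"))
        speech = f"Your balance is {balance:.2f} dollars."
    else:
        speech = "Sorry, I can't help with that yet."
    return {"outputSpeech": {"type": "PlainText", "text": speech}}

print(handle_intent({"intent": {"name": "GetBalanceIntent", "accountId": "demo"}}))

The hard part is not the handler itself but everything behind the stub: authentication, legacy systems of record, and the tracing needed to know which of them is eating the response-time budget.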
