
Industry Insights: The Challenges Behind New User Interfaces

The future of computing will be bifurcated. On one hand, there will be entirely new models of computing such as voice, autonomous agents, and bots, with no traditional user interfaces at all. At the other extreme, there will be new user interfaces augmented onto our ‘real’ world, such as Microsoft’s holographic computing technologies and the virtual reality platforms coming to market from Google and Facebook. Bringing these trends to fruition, though, will require overcoming some key technological limitations.

Voice

It’s been slowly happening for a while now: voice recognition will change one of the key interfaces to today’s computing and applications. Apple’s Siri, Google Now, Microsoft Cortana, and the super-hot Amazon Echo, along with their smart agents, are the practical embodiments of a growing trend toward applying machine learning to voice and data. Andrew Ng, chief scientist at Baidu, says that 99 percent accuracy is the key milestone for speech recognition. Companies like Apple, Google, and Baidu are already above 95 percent and improving. Ng estimates that 50 percent of web searches will be voice-powered by 2019.

Agents

The next natural step for this accurate voice recognition technology is an agent that learns all about your life and assists with your tasks, via voice, of course.
These new technologies will require voice recognition access, data access, and interoperability with connected assets. These agents will continually learn, access new data sources, and provide significant value to you as a user. But these great innovations also come with many (sometimes steep) costs: they will generate ever-increasing numbers of API calls, demand vast amounts of infrastructure, and require new levels of scale and management.
Voice represents the new computing interface model, one that many point to as the interaction model of the future. We are just a few percentage points of accuracy away from the technical prowess needed to make it ready for prime time. Mary Meeker’s recently published 2016 Internet Trends report makes the same observation.

Performance

The performance of these interfaces is key to adoption, but the voice recognition system is often hosted by a third party on another network. Voice-driven applications must send data to one of these providers, get a response, and process it. Delivering results to the user in under 10 seconds? That’s quite a high service level, considering most transactions lack end-to-end visibility. Transaction tracing that ties the user request through the dependent systems and APIs will be critical to meeting this response-time requirement.
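To make that tracing idea concrete, here is a minimal sketch of timing each hop of a voice-driven request against an overall budget. The speech and backend URLs, the payload shapes, and the use of the 10-second figure as a hard budget are assumptions for illustration, not any particular vendor’s API.

```python
# A minimal sketch of end-to-end timing for a voice-driven request.
# The provider URLs, payloads, and the 10-second budget are assumptions
# for illustration; they are not from any specific vendor's API.
import time
import uuid
import requests

TOTAL_BUDGET_SECONDS = 10.0  # the response-time target discussed above

def handle_voice_request(audio_bytes: bytes) -> dict:
    trace_id = str(uuid.uuid4())      # one ID ties all hops together
    timings = {}
    start = time.monotonic()

    # Hop 1: send audio to a (hypothetical) third-party speech service.
    t0 = time.monotonic()
    stt = requests.post(
        "https://speech.example.com/v1/recognize",   # placeholder URL
        data=audio_bytes,
        headers={"X-Trace-Id": trace_id},
        timeout=5,
    ).json()
    timings["speech_to_text"] = time.monotonic() - t0

    # Hop 2: act on the recognized text via a downstream API.
    t0 = time.monotonic()
    result = requests.get(
        "https://backend.example.com/v1/answer",     # placeholder URL
        params={"q": stt.get("transcript", "")},
        headers={"X-Trace-Id": trace_id},
        timeout=4,
    ).json()
    timings["backend"] = time.monotonic() - t0

    timings["total"] = time.monotonic() - start
    if timings["total"] > TOTAL_BUDGET_SECONDS:
        # In a real system this would feed an APM or tracing backend
        # rather than a print statement.
        print(f"trace {trace_id} exceeded budget: {timings}")
    return {"trace_id": trace_id, "timings": timings, "result": result}
```

An APM or distributed-tracing backend would replace the print statement, but the idea is the same: one trace ID follows the request across every dependent system so a slow hop can be pinpointed.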
Another example is the wild success of Amazon’s Alexa voice service. Its first device, the Echo, is #2 in electronics in Amazon’s store today (June 2016), even after 20 months on the market. In that short time, there have been over 1,000 integrations, known as “Skills.” Some of the most impressive Skills replace existing interfaces, with useful offerings spanning reference lookups, news and stocks, home automation, travel, ordering goods and services, and of course, personal and social data.
Among the most popular Skills are Capital One’s offerings, and Capital One has a dedicated mini-site focused on this functionality. Capital One is one of an elite cadre of ‘traditional’ companies that recognize and embrace the imperative to innovate and leverage new technologies for the benefit of their customers. They’re paving the way with their efforts, their example, and important contributions to open source technologies. At the same time, though, rolling out new interaction schemes brings challenges when integrating existing backend systems with the new API-driven functionality that Alexa requires. Troubleshooting and ensuring a high-quality user experience require a proper end-to-end view across multiple systems and technologies. That 10-second threshold is ambitious given the complexity of the systems involved, but as we’ve seen with the traditional web, as consumers adopt and grow comfortable with new technologies, the bar quickly goes higher and never comes back down.
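For a sense of what that integration looks like, below is a rough sketch of an AWS Lambda-style handler for an Alexa Skill that fronts an existing backend API. The intent name, slot, and backend URL are invented for illustration; only the general request/response envelope follows the Alexa Skills Kit JSON format.

```python
# A rough sketch of the shape of an AWS Lambda handler for an
# Alexa-style Skill that fronts an existing backend API.
# The intent name, slot, and backend URL are invented for illustration.
import time
import requests

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") != "IntentRequest":
        return _speak("Welcome. What would you like to check?", end=False)

    intent = request.get("intent", {})
    if intent.get("name") == "GetBalanceIntent":           # hypothetical intent
        t0 = time.monotonic()
        resp = requests.get(
            "https://backend.example.com/v1/balance",       # placeholder URL
            params={"account": intent.get("slots", {})
                                     .get("Account", {})
                                     .get("value", "")},
            timeout=3,
        )
        latency = time.monotonic() - t0
        # Per-call backend latency is what end-to-end monitoring needs to see.
        print(f"backend latency: {latency:.2f}s")
        return _speak(f"Your balance is {resp.json().get('balance', 'unavailable')}.")

    return _speak("Sorry, I didn't understand that request.")

def _speak(text, end=True):
    # Minimal Alexa-style response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end,
        },
    }
```

Every such Skill adds another API hop between the user’s voice and the answer, which is exactly why per-call latency and end-to-end tracing matter for keeping the experience inside that threshold.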
