Max Eaglen

Talking Heads: Robot Wars or Computer Love?



How voice user interfaces are changing the way we live and work.


With more and more of us using voice-activated devices for everything from turning off the lights to playing our favourite music, how far away are we from finding it the norm to talk to computers rather than humans?


A recent piece of research estimated that 35.6 million U.S. consumers would use a voice-activated device at least once per month in 2017, representing 128.9 percent growth over the previous year[1]. Add to that the fact that more than one-third of millennials (33.5%) will use a virtual assistant this year[2], and there is no doubt that virtual assistants are growing in usage and popularity.


We all know that computers continue to change the way we live, yet is it a dystopian future in which we each have a “digital self” and an “analogue self”? And how much can we learn from our automated friends?


Where we are today.


The way we interact with computers via voice activation is currently a little stilted. Most devices work by listening for a command; once it is given, the user needs to structure their sentence in a way they hope the computer will understand. If the user doesn’t get it right, they may end up waking up to Black Sabbath rather than Black Grape.


This onus on the end user needs to change. Rather than thinking in terms of a visual reference or GUI (Graphical User Interface), voice interface designers need to think about the VUI (Voice User Interface). For a VUI to work well, it needs to understand context and therefore be able to hold a conversation. Understanding context is crucial to a good user experience when communicating with a device, because if a voice assistant doesn’t understand a request, the result is disappointing at best and totally disengaging at worst. The cognitive effort required to construct a sentence your computer might understand currently feels far more demanding than prodding at your phone.
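To make the contrast concrete, here is a minimal sketch of the command-first pattern described above: a hypothetical assistant that only acts when an utterance matches a rigid template, and simply gives up otherwise. The wake word, intents and phrasings are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of a rigid, command-first voice interface (illustrative only).
# The wake word, intent patterns and responses below are hypothetical assumptions.
import re

WAKE_WORD = "computer"

# Each intent only fires if the user phrases the request in the expected shape.
INTENT_PATTERNS = {
    "play_music": re.compile(r"play (?:some )?(?P<artist>.+)", re.IGNORECASE),
    "lights_off": re.compile(r"turn (?:off|out) the lights", re.IGNORECASE),
}

def handle(utterance: str) -> str:
    """Respond to a single utterance; no memory of context between turns."""
    if not utterance.lower().startswith(WAKE_WORD):
        return ""  # The device ignores anything not addressed to it.
    command = utterance[len(WAKE_WORD):].strip(" ,")
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.match(command)
        if match:
            if intent == "play_music":
                return f"Playing {match.group('artist')}."
            if intent == "lights_off":
                return "Turning off the lights."
    # No conversational repair: the burden of rephrasing falls back on the user.
    return "Sorry, I didn't understand that."

if __name__ == "__main__":
    print(handle("Computer, play Black Grape"))          # Playing Black Grape.
    print(handle("Computer, could we have no lights"))   # Sorry, I didn't understand that.
```

A context-aware VUI would instead carry the conversation forward, asking a clarifying question or using what it already knows about the user, rather than forcing the user to rephrase until the template matches.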


Where we will be soon.


However, things are changing: Google recently demonstrated how conversationally aware its AI and voice-activated devices have become. Its recent Duplex demo is astonishing for many reasons, but mainly because it was nearly impossible to tell that the caller was a computer, and the AI was able to understand a non-native English speaker speaking disjointed English. This level of voice and speech AI can currently only handle a limited set of scenarios, but it won’t be long before it has advanced to understand a broad range of contexts. Is this the end of the call centre as we know it?


As AI progresses at an exponential rate, the question will not be whether your computer can understand you but what your computer can teach you. As voice takes over the way we communicate with computers, do we need to develop an entirely new language to speak to them (Google’s AI has already created its own base language)? And if so, do we need to learn from our computers, rather than the other way around?


As computers become ever more intelligent and more adaptive to language than the human brain can ever be, how much will we look to computers to teach us about language, even our own? Have humans developed their language as far as it can go, and do we now need computers to develop it further for us?


Translating foreign languages is a case in point. Real-time translation means we are nearly at the stage where we can talk to our foreign counterparts through our phones instantly. With translation algorithms improving day by day, it will soon be effortless to talk to someone with a different native tongue – a real-world Babel Fish. This will be ground-breaking for global businesses that trade internationally.
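As a rough illustration of the pipeline involved, the sketch below chains the three steps a real-time translator needs: speech-to-text, machine translation and text-to-speech. All three functions are hypothetical stubs under assumed names, not calls to any real service.

```python
# Conceptual sketch of a real-time "Babel Fish" relay (illustrative only).
# transcribe, translate and speak are hypothetical stubs, not a real API.

def transcribe(audio: bytes, language: str) -> str:
    """Hypothetical speech-to-text step."""
    raise NotImplementedError("plug in a speech recognition service here")

def translate(text: str, source: str, target: str) -> str:
    """Hypothetical machine translation step."""
    raise NotImplementedError("plug in a translation service here")

def speak(text: str, language: str) -> bytes:
    """Hypothetical text-to-speech step."""
    raise NotImplementedError("plug in a speech synthesis service here")

def relay(audio: bytes, source: str, target: str) -> bytes:
    """One half of a two-way conversation: hear in one language, reply in another."""
    heard = transcribe(audio, language=source)
    translated = translate(heard, source=source, target=target)
    return speak(translated, language=target)
```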


There might still be some teething problems for the tech companies to iron out when it comes to voice-activated devices, but be assured it won’t take long for these to be fixed. A quick search on YouTube for children talking to Alexa (around 65,400 videos at the time of writing) shows that the future will involve a friendly robot or two around the house. It won’t be long before our children speak to robots more than they do to humans. And here’s the hard one for non-digital natives to get their heads around: they might talk to their friends’ digital assistants more than to the friends themselves, and they may prefer it. To take it one step further, it won’t be long before our digital assistants are talking to each other on our behalf. Back to the digital self and the analogue self.


At Platform, we are seeing huge interest in voice user interfaces within our client work, from triggering workflows via voice to driving sales by providing just-in-time information. From a business perspective, VUIs are an exciting part of the future: they will mean simpler, faster and more efficient ways of working. For many, the debate will be whether to embrace their virtual friend as part of daily life and accept what it can teach them, rather than fear the march of the robots ahead.


Nathan Askew is Technical Director at the Platform Group.






Would you like to know more? Please give us a call.


