Human Computer Interface (HCI) Tech Digest - December 2016

Medical Applications for Human Computer Interfaces

Researchers at the Centre for Sensorimotor Neural Engineering are currently exploring how to restore function in people with spinal cord injuries through brain-controlled spinal stimulation. The approach bypasses the injury: brain activity representing an intention to move is recorded, and the nerves beyond the injury site are stimulated to enable movement of a hand or arm. 
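The control loop can be pictured as a short decode-and-stimulate cycle. The minimal Python sketch below illustrates that idea only; the decoder threshold, feature layout and stimulator call are hypothetical and are not the CSNE system.

```python
# Conceptual sketch (not the CSNE system): a closed-loop "bypass" in which
# decoded movement intention drives stimulation below the injury site.
# All thresholds, feature shapes and device calls here are hypothetical.
import numpy as np

def decode_intention(neural_features: np.ndarray, threshold: float = 0.6) -> bool:
    """Toy decoder: treat the mean of band-power features as 'intention strength'."""
    return float(np.mean(neural_features)) > threshold

def stimulate_peripheral_nerve(amplitude_ma: float) -> None:
    """Placeholder for a call into a (hypothetical) stimulator driver."""
    print(f"stimulate: {amplitude_ma:.1f} mA")

def closed_loop_step(neural_features: np.ndarray) -> None:
    # Record -> decode -> stimulate, bypassing the injured segment of the cord.
    if decode_intention(neural_features):
        stimulate_peripheral_nerve(amplitude_ma=2.0)

# Example: one control cycle with simulated cortical band-power features.
closed_loop_step(np.random.rand(16))
```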
Meanwhile, researchers at Arizona State University are using Myo armbands to teach software to recognise a range of American Sign Language gestures; each recognised gesture is matched with a corresponding word in a database and displayed as text on screen. The Myo armband has an inertial measurement unit for tracking motion and electromyography sensors for muscle sensing, which together can be used to determine finger configurations. In tests the software correctly identified roughly 97% of the gestures.
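As a rough illustration of how such a system might match armband readings to words, here is a minimal Python sketch; the feature layout, classifier choice and vocabulary are assumptions rather than the ASU team's implementation.

```python
# Illustrative only: matching IMU + EMG feature vectors from an armband to
# signs in a word database. Shapes and the classifier are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: each row = [IMU motion features | EMG muscle features]
X_train = np.random.rand(200, 18)          # 10 IMU + 8 EMG features (assumed)
y_train = np.random.randint(0, 20, 200)    # indices into a 20-word vocabulary

vocabulary = [f"word_{i}" for i in range(20)]   # stand-in for the gesture/word database

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

def gesture_to_text(feature_vector: np.ndarray) -> str:
    """Match one gesture's features to the closest known sign and return its word."""
    word_index = int(model.predict(feature_vector.reshape(1, -1))[0])
    return vocabulary[word_index]

print(gesture_to_text(np.random.rand(18)))  # would be displayed as on-screen text
```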
Researchers at the University of Minnesota have recently completed a study demonstrating that participants can move, reach and grasp with a robotic arm by thought alone, using an EEG cap to read their brains’ electrical activity. After a few training sessions learning to control a virtual cursor on a screen, the participants moved on to controlling the robotic arm. Eventually they could use it with high accuracy to pick up an object from a table and place it on a three-tiered shelf in front of them. All eight participants achieved this, with an average success rate of 70%.
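For readers curious how EEG can be turned into movement commands at all, the sketch below shows one common approach: band-power features mapped to a 2-D velocity. The channels, frequency band and mapping are illustrative assumptions, not the Minnesota group's decoder.

```python
# A minimal sketch of the kind of mapping used in non-invasive BCI control:
# sensorimotor band power from EEG channels becomes a 2-D velocity command,
# first for a virtual cursor and later for a robotic arm.
import numpy as np

def band_power(eeg_window: np.ndarray, fs: int, lo: float, hi: float) -> np.ndarray:
    """Per-channel power in [lo, hi] Hz via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(eeg_window.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg_window, axis=1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[:, mask].mean(axis=1)

def control_signal(eeg_window: np.ndarray, fs: int = 250) -> tuple[float, float]:
    """Toy decoder: left/right mu-band asymmetry -> horizontal, overall power -> vertical."""
    mu = band_power(eeg_window, fs, 8.0, 12.0)
    horizontal = float(mu[0] - mu[1])      # e.g. C3 vs C4 (assumed channel order)
    vertical = float(mu.mean() - mu.std())
    return horizontal, vertical

# One 1-second window of 2-channel EEG at 250 Hz (simulated).
vx, vy = control_signal(np.random.randn(2, 250))
print(f"arm/cursor velocity command: ({vx:.3f}, {vy:.3f})")
```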

Brain to speech 

Christian Herff and a team from the Cognitive Systems Lab at Karlsruhe Institute of Technology have been studying ways to decode brain activity into understandable speech, using electrocorticography (ECoG), a technology for monitoring neural activity. Participants who already had electrode grids implanted for epilepsy treatment were asked to read aloud excerpts of text scrolling across a screen. The audio recording of the reading exercise was then aligned with the ECoG recordings, and individual phones (in linguistics, a phone is a distinct speech sound) were correlated with particular electrical patterns in the participants’ brains. By running this data through a speech decoding algorithm, the team were able to decode the words represented by specific sets of electrical signals.
It is unlikely that this technology will become common any time soon, as anyone wanting to use it would first need to have electrodes implanted in their brain.
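The decoding pipeline described above can be sketched roughly as follows; the data shapes, phone inventory and classifier are assumptions for illustration, not the Karlsruhe team's actual system.

```python
# Rough sketch of the decoding idea: ECoG windows time-aligned with the audio
# are labelled with the phone being spoken, a classifier learns phone-specific
# activity patterns, and at test time the predicted phone sequence is read back.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

phones = ["p", "a", "t", "s", "sil"]               # toy phone inventory (assumed)

# Hypothetical aligned training data: one ECoG feature vector per audio-aligned window.
X_train = np.random.rand(500, 64)                  # 64 electrode features (assumed)
y_train = np.random.randint(0, len(phones), 500)   # phone label for each window

decoder = LinearDiscriminantAnalysis().fit(X_train, y_train)

def decode_utterance(ecog_windows: np.ndarray) -> str:
    """Predict one phone per window and join them into a readable string."""
    predicted = decoder.predict(ecog_windows)
    return " ".join(phones[i] for i in predicted)

print(decode_utterance(np.random.rand(6, 64)))     # e.g. "t a sil p a s"
```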

Smart meeting room

A collaboration between Rensselaer Polytechnic Institute and IBM Research, the Cognitive and Immersive Systems Laboratory (CISL) is a prototype ‘smart’ meeting room that, through a collection of sensors and microphones, can hear and ‘see’ the humans in it and, once IBM Watson-powered computers have analysed and sorted the data, respond appropriately. Currently the room can register speech, three gestures, and the position, roles and spatial orientation of the people in the room. These inputs trigger cognitive computing agents, which use the data-sorting and collating CISL architecture to bring relevant information into the room at the right time. 
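Conceptually, the room follows a sense, analyse, respond loop. The sketch below shows that loop as a simple event dispatcher; the event types and handlers are invented for illustration and are not the CISL architecture itself.

```python
# Conceptual 'sense -> analyse -> respond' loop: sensor events (speech, gesture,
# position) are routed to handlers that decide what to bring into the room.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RoomEvent:
    kind: str          # "speech", "gesture" or "position"
    payload: dict      # e.g. {"text": "...", "speaker": "chair"}

class MeetingRoomAgent:
    def __init__(self) -> None:
        self.handlers: Dict[str, List[Callable[[RoomEvent], str]]] = {}

    def on(self, kind: str, handler: Callable[[RoomEvent], str]) -> None:
        self.handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event: RoomEvent) -> List[str]:
        """Run every handler registered for this event type and collect responses."""
        return [h(event) for h in self.handlers.get(event.kind, [])]

room = MeetingRoomAgent()
room.on("speech", lambda e: f"fetching documents about '{e.payload['text']}'")
room.on("gesture", lambda e: f"highlighting the display at {e.payload['target']}")

print(room.dispatch(RoomEvent("speech", {"text": "Q3 budget", "speaker": "chair"})))
```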

Zo, Microsoft’s latest chatbot

Building on its Chinese (Xiaoice) and Japanese (Rinna) chatbots, Microsoft has announced its new social chatbot, Zo. It uses the social content of the Internet (blogs, forums, etc.) to learn appropriately nuanced emotional and intellectual responses to natural speech. Zo has already chatted with over 100,000 people in the USA, and 5% of those conversations lasted more than an hour; the longest continued for 9 hours 53 minutes. Zo is available to chat now on the instant messaging app Kik.

What you lookin’ at: gaze tracking

Microsoft has recently filed a patent for eye gaze tracking technology: a set of capacitive sensors arranged on the lenses of a pair of glasses, which detect eye movement based on the proximity of parts of the eye (e.g. the bulge of the eye or the eyelid). The patent also describes a body electrode attached to the glasses to establish a connection with the user’s body, and the glasses could include a processor running a program to sort the data from the capacitive sensors.
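As an illustration of how such sensor readings could be turned into a gaze estimate, here is a toy Python sketch; the patent describes the hardware arrangement, while the sensor count, geometry and weighting scheme below are assumptions.

```python
# Illustrative only: combining readings from a ring of capacitive sensors on a
# lens into a coarse gaze estimate. Geometry and weighting are assumptions.
import numpy as np

# Assumed: 8 sensors evenly spaced around the lens, angles in radians.
sensor_angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def estimate_gaze(capacitance: np.ndarray) -> tuple[float, float]:
    """Weight each sensor's direction by its reading (the eye's bulge raises
    capacitance on the sensors it is closest to) and return an (x, y) offset."""
    weights = capacitance / capacitance.sum()
    x = float(np.sum(weights * np.cos(sensor_angles)))
    y = float(np.sum(weights * np.sin(sensor_angles)))
    return x, y

# Simulated frame: the eye is turned toward sensor 2.
readings = np.array([1.0, 1.2, 2.5, 1.1, 0.9, 0.8, 0.9, 1.0])
print(estimate_gaze(readings))
```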

Neural mesh shows promise in rats

The Lieber Research Group at Harvard University has been testing an injectable, net-like neural mesh, with the aims of assisting medical treatment and ultimately enhancing human performance via brain-machine interfaces. 
In a recent paper, the researchers reported successful implantation and continuous recording and stimulation of a rat’s brain for at least eight months without repositioning. The mesh approach avoids a key limitation of traditional implants, which need regular repositioning to prevent problems (such as immune system rejection) and have therefore been of limited use for long-term targeted study. 
Using a mesh enabled the researchers to continuously stimulate neurones and observe effects over long periods of time. This could open up further studies mapping and modulating changes associated with learning, ageing and disease. 

I smell apple, you must be pleased to see me: smell emoticons

Three scientists from Zhejiang University, China, and one from the Massachusetts Institute of Technology have been working on creating ‘odour emoticons’, or ‘olfacticons’. The researchers replaced visual emoticons, such as a smiley face, with odours intended to convey the same emotion: in their work the smell for happiness is apple and the smell for jealousy is vinegar (possibly culturally specific, a phrase expressing ‘being jealous’ in Chinese being chi cu, ‘eat vinegar’). In the study they applied the olfacticons in two contexts: online text chatting and receiving voicemails. 
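In software terms the idea amounts to mapping emoticons to odour commands. The toy sketch below shows that mapping; the scent-emitter interface and emoticon strings are hypothetical, and only the apple and vinegar associations come from the study.

```python
# Toy sketch of the olfacticon idea: emoticons in a chat message are mapped to
# odours and sent to a scent emitter. The emitter interface is hypothetical.
EMOTICON_TO_ODOUR = {
    ":)": "apple",      # happiness (from the study)
    ">:(": "vinegar",   # jealousy, cf. Chinese 'chi cu' (from the study)
}

def emit_odour(odour: str) -> None:
    """Placeholder for a command to a (hypothetical) scent-dispensing device."""
    print(f"release odour cartridge: {odour}")

def process_chat_message(message: str) -> None:
    # Scan the message for known emoticons and trigger the matching smell.
    for emoticon, odour in EMOTICON_TO_ODOUR.items():
        if emoticon in message:
            emit_odour(odour)

process_chat_message("Got the job! :)")   # -> release odour cartridge: apple
```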

Google opens the door to third party voice control 

Google recently opened its Google Assistant voice control programme to third party developers and companies. ‘Actions on Google’ allows anyone to build Conversation Actions for Google Home. A Conversation Action is a combination of invocation triggers (‘Hi Google’), dialogues and fulfilment (processing user input and responding, or retrieving the correct information). Google’s opening of its software could mean more direct connectivity between users and third party service bots. Currently this is only available on Google Home, although Google plans to expand it to Pixel, Allo and other platforms.
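The three ingredients named above can be pictured as a simple structure: a trigger phrase, a dialogue prompt and a fulfilment function. The sketch below is a conceptual illustration only, not Google's SDK or response format; the trigger phrase and weather example are invented.

```python
# Conceptual shape of a Conversation Action: trigger -> dialogue -> fulfilment.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ConversationAction:
    invocation_trigger: str                 # phrase that starts the action (hypothetical)
    dialogue_prompt: str                    # question asked of the user
    fulfilment: Callable[[str], str]        # turns the user's answer into a response

def weather_fulfilment(user_input: str) -> str:
    return f"Here is the forecast for {user_input.strip() or 'your area'}."

action = ConversationAction(
    invocation_trigger="talk to example weather bot",
    dialogue_prompt="Which city do you want the weather for?",
    fulfilment=weather_fulfilment,
)

def handle_turn(action: ConversationAction, utterance: str) -> str:
    # The trigger phrase starts the dialogue; anything else goes to fulfilment.
    if utterance.lower() == action.invocation_trigger:
        return action.dialogue_prompt
    return action.fulfilment(utterance)

print(handle_turn(action, "talk to example weather bot"))
print(handle_turn(action, "Zurich"))
```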

BMW unveils new holographic HUD

BMW has displayed its HoloActive Touch technology at the Consumer Electronics Show (CES) 2017. HoloActive Touch is a holographic control system: the display is free-floating, projected into the space in front of the central console. Cameras watch the user’s fingers to detect contact with the hologram, and ultrasound sources provide haptic feedback to confirm the user’s command. 
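The interaction loop amounts to checking a tracked fingertip against the plane of the projected display and confirming ‘contact’ with an ultrasonic pulse. The sketch below illustrates that loop; the geometry, tolerance and function names are assumptions, not BMW’s implementation.

```python
# Toy sketch of the HoloActive-style loop: a tracked fingertip is checked
# against the plane of the free-floating display, and 'contact' triggers an
# ultrasonic haptic pulse plus the associated command.
import numpy as np

DISPLAY_PLANE_Z = 0.30    # metres in front of the console (assumed)
CONTACT_TOLERANCE = 0.01  # how close counts as 'touching' the hologram (assumed)

def haptic_pulse(position_xy: np.ndarray) -> None:
    """Placeholder for focusing the ultrasound array at the fingertip."""
    print(f"haptic pulse at x={position_xy[0]:.2f}, y={position_xy[1]:.2f}")

def process_frame(fingertip_xyz: np.ndarray) -> None:
    # Camera gives a 3-D fingertip position; contact = fingertip near the plane.
    if abs(fingertip_xyz[2] - DISPLAY_PLANE_Z) < CONTACT_TOLERANCE:
        haptic_pulse(fingertip_xyz[:2])
        print("confirm selected control")

process_frame(np.array([0.05, 0.10, 0.302]))   # simulated 'touch' frame
```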
