Human Computer Interface (HCI) Tech Digest - May 2017

Spray-paint human computer interface 

Carnegie Mellon University researchers have developed a spray-paint that can be applied to a surface to turn it into a touchpad. The technology, called Electrick, can be applied to a wide variety of surfaces – furniture, walls, steering wheels, toys and edible jelly were all demonstrated. The spray-paint is electrically conductive; by attaching a series of electrodes around the coated area, the researchers can use electric field tomography (EFT) to sense finger positions on the material. The scientists believe the conductive paint is compatible with common manufacturing methods as well as 3D printing, and that it is also within reach of the hobbyist. The system can detect the location of a finger to within a centimetre.
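
To illustrate the sensing idea, the minimal Python sketch below estimates a touch position as the weighted centroid of per-electrode voltage drops. It is only a toy stand-in for electric field tomography: the electrode layout and readings are hypothetical, and the real Electrick system solves a full tomographic reconstruction rather than taking a simple centroid.

    # Toy touch-position estimate for a conductive coating. A finger touch
    # shunts current away, so electrodes near the touch read lower voltages;
    # the weighted centroid of those drops approximates the touch location.
    import numpy as np

    # Electrode (x, y) positions, in cm, around the edge of a 30 x 20 cm panel.
    electrodes = np.array([
        [0, 0], [15, 0], [30, 0], [30, 10],
        [30, 20], [15, 20], [0, 20], [0, 10],
    ], dtype=float)

    baseline = np.full(len(electrodes), 2.50)   # volts, no touch
    touched = np.array([2.48, 2.31, 2.10, 2.05, 2.12, 2.35, 2.47, 2.46])

    def estimate_touch(baseline, touched, electrodes):
        """Weighted centroid of per-electrode voltage drops."""
        drops = np.clip(baseline - touched, 0.0, None)
        if drops.sum() < 1e-6:
            return None                    # no touch detected
        weights = drops / drops.sum()
        return weights @ electrodes        # estimated (x, y) in cm

    print(estimate_touch(baseline, touched, electrodes))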

Fog screen

Scientists from the University of Sussex, UK, have developed a fog screen. The mid-air display, called MistForm, allows users to interact with 2D and 3D objects in the fog. It uses shape reconstruction and 3D projection algorithms to support user interaction while removing the image distortion caused by moving fog. MistForm is about the size of a 39” TV screen and is formed of layers of fog stabilised by curtains of air. The image is projected onto the fog from above, and the user’s movements are followed by a motion-tracking system. The device could be used in a similar way to VR or touchscreen technologies, as it allows the user to interact with 2D and 3D objects on and in the display.
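
One source of distortion from a moving fog surface is purely geometric: an image projected from above grows in proportion to the projector-to-fog distance, so the rendered image can be rescaled as the fog drifts. The Python sketch below shows only that simple compensation, with hypothetical distances; it is not MistForm’s actual shape-reconstruction or projection algorithm.

    # Scale factor to apply to the rendered image so the projected image
    # keeps a constant physical size as the fog surface moves away from
    # or towards the projector.
    def compensation_scale(reference_distance_m, current_distance_m):
        return reference_distance_m / current_distance_m

    # Calibrated with the fog sheet 1.20 m from the projector, now at 1.35 m:
    print(round(compensation_scale(1.20, 1.35), 3))   # ~0.889, so render smaller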

Emotional chatbot 

A team of Chinese scientists has developed an emotional chatting machine (ECM) that attempts to respond to humans not only at a content level (i.e. the reply is relevant and grammatical) but also at an emotional level (i.e. it expresses emotion consistently). The ECM was preferred to an emotionally neutral chatbot by 61% of human testers. The chatbot was trained on 63,000 Weibo posts that had been manually classified by emotion. The chatbot exhibits emotion in conversation: in response to a user entering ‘Worst day ever’, for example, it might reply ‘Sometimes life just sucks’ (showing disgust) or ‘I’m always here for you’ (showing liking). The team expects the chatbot’s replies to become more nuanced over time.
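
The ECM itself is a neural sequence-to-sequence model, but the conditioning idea, in which the same user message produces different replies for different target emotions, can be pictured with the toy Python sketch below; the emotion categories and replies here are hypothetical hand-written templates, not the model’s output.

    # Toy emotion-conditioned reply selection. The real ECM generates text
    # with a neural network conditioned on an emotion category; here the
    # conditioning is mimicked with fixed templates.
    REPLIES = {
        "disgust": "Sometimes life just sucks.",
        "liking": "I'm always here for you.",
        "happiness": "That's great to hear!",
        "sadness": "I'm sorry, that sounds rough.",
    }

    def reply(user_message, emotion):
        """Return a reply expressing the requested emotion category."""
        return REPLIES.get(emotion, "I see.")   # neutral fallback

    print(reply("Worst day ever", "disgust"))   # -> Sometimes life just sucks.
    print(reply("Worst day ever", "liking"))    # -> I'm always here for you.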

Robots watching humans and learning

OpenAI has demonstrated a robot control system that can learn a task, and then perform it, by watching a human carry out the task in virtual reality. The system has two neural networks: a vision network and an imitation network. The vision network takes camera images as input and outputs a map of the environment, including the objects’ relative positions. The imitation network watches a demonstration, infers the intent of the task, and then achieves that intent starting from a different configuration, so it must generalise rather than simply replay the demonstration. In a block-stacking test, the system was shown to be able to extract the relevant information from the messy data provided by the human demonstration.
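
A minimal sketch of that two-network split is shown below, assuming PyTorch; the layer sizes, input shapes and the way the demonstration is summarised are illustrative guesses rather than OpenAI’s architecture.

    # Vision network: camera image -> estimated object positions.
    # Imitation network: demonstration trajectory + current state -> action.
    import torch
    import torch.nn as nn

    class VisionNet(nn.Module):
        """Camera image -> (x, y, z) estimates for n_objects objects."""
        def __init__(self, n_objects=4):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(32, n_objects * 3)

        def forward(self, image):                # image: (B, 3, H, W)
            return self.head(self.conv(image))   # (B, n_objects * 3)

    class ImitationNet(nn.Module):
        """Infer the task intent from a demonstration, then pick an action."""
        def __init__(self, state_dim=12, action_dim=7):
            super().__init__()
            self.demo_encoder = nn.GRU(state_dim, 64, batch_first=True)
            self.policy = nn.Sequential(
                nn.Linear(64 + state_dim, 64), nn.ReLU(),
                nn.Linear(64, action_dim),
            )

        def forward(self, demo_states, current_state):
            # demo_states: (B, T, state_dim); current_state: (B, state_dim)
            _, intent = self.demo_encoder(demo_states)        # (1, B, 64)
            features = torch.cat([intent.squeeze(0), current_state], dim=-1)
            return self.policy(features)                      # (B, action_dim)

    vision, imitation = VisionNet(), ImitationNet()
    image = torch.randn(1, 3, 64, 64)
    demo = torch.randn(1, 30, 12)       # 30 recorded demonstration states
    state = vision(image)               # current object positions, (1, 12)
    action = imitation(demo, state)
    print(action.shape)                 # torch.Size([1, 7])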

AI helping humans

Scientists at Yale University, USA, have demonstrated that even low-level AI can help humans perform better when carrying out tasks as a group. In a test involving more than 4,000 people, participants worked together to achieve a collective goal. The researchers also added numerous bots programmed with three levels of behavioural randomness – they sometimes deliberately made mistakes. The test showed that the bots reduced the median time for groups to solve problems by 55.6%. There was a further effect from one-upmanship: people whose performance improved thanks to the bots’ assistance became a target for others to aspire to, causing those others’ performance to improve as well.
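
The ‘behavioural randomness’ can be pictured as a bot that usually makes the locally best choice but occasionally makes a deliberate mistake. The Python sketch below shows only that idea; the scoring function and options are hypothetical stand-ins for the actual group task.

    import random

    def bot_choice(options, score, noise=0.1, rng=random):
        """Pick the highest-scoring option, except that with probability
        `noise` the bot deliberately picks at random instead."""
        if rng.random() < noise:
            return rng.choice(options)
        return max(options, key=score)

    # Example: choose a colour that clashes with as few neighbours as possible.
    neighbour_colours = ["red", "red", "green"]
    pick = bot_choice(["red", "green", "blue"],
                      score=lambda c: -neighbour_colours.count(c), noise=0.1)
    print(pick)   # usually 'blue', occasionally a random colour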

Nanometre-thin hologram

Scientists at Australia’s Royal Melbourne Institute of Technology (RMIT) have developed a hologram that is only 25nm thick. It is fabricated using a simple and fast direct laser writing system, which would make it suitable for large-scale uses and mass manufacturing, according to a scientist involved. The scientists were able to shrink the hologram by using a topological insulator material, Sb2Te3, within which the laser light is reflected multiple times, allowing such a thin film to create the illusion of depth. The scientists plan to create a film that can be put on LCD screens to make 3D holographic displays.

Sonar-based controller

Maxus Tech, a Hong Kong-based HCI company, has developed Welle, a device that uses sonar-based gesture control to turn any surface interactive. The small yellow box can be placed on a table, worktop, or desk and projects a sonar field down onto the surface to sense gesture inputs, such as a person running a finger along the table top to turn up the volume on the TV. The device connects to other devices via Bluetooth, has an app for Android and iOS devices, and can be programmed to respond to self-defined gestures. Examples of its uses on the Welle Kickstarter page show it being used to control window blinds, a computer game, a robotically held stylus and a multi-coloured lamp.
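
The ‘self-defined gestures’ feature amounts to mapping recognised gestures onto user-chosen actions. The Python sketch below illustrates that mapping only; the gesture names and class are hypothetical and do not come from Welle’s app or SDK.

    # Hypothetical gesture-to-action dispatch: register a gesture name
    # against a callback, then route recognised gestures to it.
    class GestureMap:
        def __init__(self):
            self._actions = {}

        def register(self, gesture, action):
            self._actions[gesture] = action

        def dispatch(self, gesture):
            action = self._actions.get(gesture)
            if action:
                action()

    gestures = GestureMap()
    gestures.register("swipe_right", lambda: print("TV volume up"))
    gestures.register("circle", lambda: print("Toggle lamp colour"))
    gestures.dispatch("swipe_right")   # -> TV volume up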

Implantable system-on-a-chip

ARM, the chip designer, has announced plans to work with the University of Washington’s Center for Sensorimotor Neural Engineering (CSNE) to develop a system-on-a-chip (SoC) that could be implanted into a human brain. The purpose of the chip would be to enable bi-directional brain-computer interfaces (BBCI) to help people with neurodegenerative disorders. The chip would decode and process neural activity related to movement intention, then send that information to a stimulator implanted in the wearer’s spine, which would stimulate the appropriate muscles to carry out the intended action. ARM says that its Cortex-M0 processor will be used in the CSNE’s SoC.
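
The data flow described above (decode a movement intention on the implanted chip, then forward a stimulation command to the spinal stimulator) might look roughly like the Python sketch below; the decoder, muscle groups and thresholds are invented for illustration and are not the CSNE/ARM design.

    # Toy decode-and-stimulate pipeline: neural features -> inferred
    # movement intention -> command for the spinal stimulator.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StimulationCommand:
        muscle_group: str
        intensity: float               # normalised 0..1

    def decode_intention(neural_features) -> Optional[str]:
        """Map the strongest feature channel to a movement intention."""
        channels = ["grasp", "release", "wrist_flex"]
        peak = max(range(len(neural_features)), key=lambda i: neural_features[i])
        return channels[peak] if neural_features[peak] > 0.5 else None

    def to_command(intention):
        routing = {
            "grasp": StimulationCommand("finger_flexors", 0.8),
            "release": StimulationCommand("finger_extensors", 0.6),
            "wrist_flex": StimulationCommand("wrist_flexors", 0.7),
        }
        return routing[intention]

    features = [0.2, 0.9, 0.1]         # hypothetical decoded band powers
    intention = decode_intention(features)
    if intention:
        print(to_command(intention))   # command passed on to the stimulator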

Putting words into an audio recording’s mouth

Scientists at Princeton University, USA, have created a program that allows the user to alter audio recordings by inserting words and phrases, which then take on the vocal idiosyncrasies of the voice in the recording. The program, called VoCo, displays the spoken audio as a transcript to which words can be added simply by typing. The added words are automatically synthesised in the speaker’s voice. Advanced users can also alter the pitch of the inserted words so that they sync with the sentence’s intonation and speed, making them sound more natural.
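
Conceptually, the editing workflow is: edit the transcript, synthesise the new word in the speaker’s voice, and splice it into the word-aligned audio. The Python sketch below shows only that pipeline, with a placeholder synthesiser; it is not VoCo’s voice-synthesis algorithm.

    # The recording is held as aligned (word, audio) segments; inserting a
    # word into the transcript triggers synthesis and splices the result in.
    def synthesise_in_voice(word, voice_profile):
        """Placeholder: return a silent segment sized roughly to the word."""
        return [0.0] * int(0.08 * voice_profile["sample_rate"] * len(word))

    def insert_word(segments, index, word, voice_profile):
        audio = synthesise_in_voice(word, voice_profile)
        return segments[:index] + [(word, audio)] + segments[index:]

    voice = {"sample_rate": 16000}
    segments = [("I", [0.0] * 800), ("will", [0.0] * 900), ("call", [0.0] * 1000)]
    segments = insert_word(segments, 2, "never", voice)
    print(" ".join(word for word, _ in segments))   # -> I will never call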

Brainwave devices could be used to steal password information

Nitesh Saxena, an associate professor at the University of Alabama at Birmingham, has conducted a study suggesting that the EEG-reading headsets being used for HCI systems could be used by hackers to steal sensitive information such as passwords. The participants’ brainwave associations were learnt as they repeatedly entered random PINs and passwords. After having watched a participant enter about 200 characters, the algorithm could predict which characters the participant entered based on their brainwave activity alone. The scientists claim that this method shortened the odds of guessing a four-digit numerical passcode from one in 10,000 to one in 20, and improved the chances of guessing a six-character letter-based password by around 500,000 times, to one in 500.
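
The learning step amounts to fitting a classifier on EEG feature vectors time-locked to each typed character, then predicting later keystrokes from brainwaves alone. The Python sketch below shows that shape of attack with a generic classifier and synthetic random data; the features and model are not those used in the study, so real accuracy will differ.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_keystrokes, n_features = 200, 32          # ~200 observed characters
    digits = np.array(list("0123456789"))

    # Synthetic stand-ins for per-keystroke EEG features and the typed digit.
    X_train = rng.normal(size=(n_keystrokes, n_features))
    y_train = rng.choice(digits, size=n_keystrokes)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Later, predict which digits were typed from brainwave features alone.
    X_new = rng.normal(size=(4, n_features))    # a four-digit PIN entry
    print(model.predict(X_new))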
