AI Tech Digest - November 2016

Getting lippy: lip-reading AI shows promise

A research team at Oxford University’s Department of Computer Science has developed a new automatic lip-reading system, trained and tested on a set of short, formulaic videos showing a single well-lit, face-on speaker. Even within these constraints the results have been promising, with the LipNet programme correctly reading 93% of test samples, compared with a success rate of just 52% for a human lip-reader. After further development, the research team hopes that LipNet could be used to help people with hearing difficulties.

KO’d by AI

Using Reinforcement Learning (RL), AI systems have learnt to play Atari 2600 games to a level which surpasses that of expert human players. Now, researchers at the Technion (Israel Institute of Technology) are teaching AI to play the more demanding SNES games. The result: the AI beat a human player at ‘Mortal Kombat’, but underperformed in more open-ended games like ‘Wolfenstein’. The RL algorithm the team chose was a Deep Q-Network, which estimates, for the current game state, the long-term reward each available action is expected to yield (in this case, the highest score) and picks the most promising one. The reward signal was also tailored to each game: for the racing game ‘F-Zero’, the speed achieved at any given moment was built into the reward. This was important in overcoming the delay in the reward for completing a whole lap, which lay too many actions in the future for the AI to devise appropriate strategies.
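To make the idea concrete, here is a minimal sketch of the Deep Q-Network loop with an ‘F-Zero’-style speed bonus folded into the reward. The network is collapsed to a single linear layer, and every constant and function name below is an illustrative assumption, not the Technion team’s actual code.

```python
# Minimal sketch of the DQN idea with reward shaping (illustrative only).
import random
import numpy as np

N_ACTIONS = 6      # hypothetical size of the game's action set
STATE_DIM = 128    # hypothetical flattened screen-feature size
GAMMA = 0.99       # discount factor for future reward
EPSILON = 0.1      # exploration rate for epsilon-greedy play

# A single linear layer standing in for the deep convolutional Q-network:
# it maps a state vector to one Q-value per action.
weights = np.random.randn(STATE_DIM, N_ACTIONS) * 0.01

def q_values(state):
    return state @ weights

def choose_action(state):
    # Epsilon-greedy: usually take the action with the highest estimated
    # future reward, occasionally explore at random.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q_values(state)))

def shaped_reward(score_delta, speed):
    # Reward shaping as described above: besides the change in game score,
    # the racing game adds the current speed, so the agent gets feedback
    # long before a lap is completed. The 0.01 scale is an assumption.
    return score_delta + 0.01 * speed

def q_update(state, action, reward, next_state, lr=1e-3):
    # One gradient step toward the Bellman target r + gamma * max_a' Q(s', a').
    target = reward + GAMMA * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    weights[:, action] += lr * td_error * state

# One illustrative step of play and learning.
s = np.random.randn(STATE_DIM)
a = choose_action(s)
r = shaped_reward(score_delta=100, speed=250)
q_update(s, a, r, next_state=np.random.randn(STATE_DIM))
```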

New approach to speed up data extraction

Much of the vast amount of information on the Internet is plain text, and the question is how to extract and categorise data from that text for analysis. Traditional machine learning relied on humans providing a framework of categories for the computer to find and fill with suitable information, training the system on as much data as possible so that it could solve as many problems as possible. In a new study, MIT researchers simplified the model: they trained their computer on minimal data. The system checked this data against a document it was analysing and categorising, and once it had categorised a piece of data, it rated how confident it was that the data was correct. If the rating was low, the computer performed a web search to find further information sources. By repeating this procedure, the computer mimics what we would naturally do: seek out more sources to verify or refute a claim. In a study involving mass-shooting data, the computer had to identify the name of the shooter, the location of the shooting, the number wounded and the number killed. The system was originally supplied with 300 documents and referred to around 10 external articles on average. The result: the new system outperformed previous extraction systems by about 10%.
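The confidence-driven loop can be sketched briefly. In the snippet below, the extractor, confidence model and search step are stand-in stubs (the MIT system used learned classifiers and a real search engine); only the field names and the ten-article budget come from the study as reported above.

```python
# Self-contained sketch of confidence-gated extraction (stubs are illustrative).
import random

FIELDS = ["shooter_name", "location", "num_wounded", "num_killed"]
CONFIDENCE_THRESHOLD = 0.8   # assumed cut-off for "confident enough"
MAX_EXTRA_ARTICLES = 10      # ~10 external articles consulted on average

def extract_fields(document):
    # Stand-in for the learned extractor: pull a candidate value for each
    # field out of the text (trivially faked here with a dict lookup).
    return {field: document.get(field) for field in FIELDS}

def confidence(field, value):
    # Stand-in for the learned confidence score in [0, 1].
    return 0.0 if value is None else random.uniform(0.5, 1.0)

def web_search(field):
    # Stand-in for querying a search engine for a corroborating article.
    return {field: f"value-for-{field}-from-the-web"}

def extract_with_verification(document):
    fields = extract_fields(document)
    for field in FIELDS:
        searches = 0
        # Low confidence triggers a web search, mimicking how a person
        # would look for extra sources to verify or refute a value.
        while (confidence(field, fields[field]) < CONFIDENCE_THRESHOLD
               and searches < MAX_EXTRA_ARTICLES):
            extra = web_search(field)
            if extra.get(field) is not None:
                fields[field] = extra[field]
            searches += 1
    return fields

print(extract_with_verification({"shooter_name": None, "location": "example"}))
```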

Facebook gets arty with deep learning software

‘Style transfer’ is a way to add creative effects to videos which make them look like a painting. Until now, the graphics processing power required meant it could not be done in real time; the video had to be sent to a data centre for processing. Now Facebook, with its app-embedded deep learning software Caffe2go, has made it possible to do this on your phone, and in 50ms, quicker than the blink of an eye. The speed has been achieved in two ways: firstly, by reducing the number of layers used to achieve the final effect, and secondly, by shrinking the image before processing (called ‘pooling’) and then enlarging it again after the effect has been applied (called ‘deconvolution’). Facebook says it plans to integrate Caffe2go into more of its apps and to apply it to other applications.
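The pool-then-stylise-then-deconvolve pipeline can be illustrated with a minimal PyTorch sketch. Caffe2go itself is Facebook’s mobile runtime, not PyTorch, and the deliberately shallow two-layer ‘style’ stack and all sizes below are illustrative assumptions.

```python
# Minimal sketch of the pooling -> stylise -> deconvolution pipeline.
import torch
import torch.nn as nn

class TinyStyleNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Pooling: shrink the frame so the expensive style layers
        # run on a quarter as many pixels.
        self.pool = nn.AvgPool2d(kernel_size=2)
        # A deliberately shallow stack of style layers (the first
        # optimisation: fewer layers than a full style network).
        self.style = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        # Deconvolution: learnably upsample back to the input size.
        self.deconv = nn.ConvTranspose2d(3, 3, kernel_size=2, stride=2)

    def forward(self, frame):
        x = self.pool(frame)    # e.g. 256x256 -> 128x128
        x = self.style(x)       # apply the painterly effect
        return self.deconv(x)   # back up to 256x256

frame = torch.rand(1, 3, 256, 256)   # one RGB video frame
out = TinyStyleNet()(frame)
print(out.shape)                     # torch.Size([1, 3, 256, 256])
```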

GE acquires Bit Stew and Wise.io

American giant General Electric (GE) has acquired Bit Stew Systems and Wise.io (for undisclosed sums) and has released a new suite of Predix applications and services, with the aim of opening the platform up to operators, business analysts and general users. Predix is GE’s operating system for the industrial internet, which aims to extend the cloud out to the edge (the connected devices themselves). The acquisition of Wise.io (whose customers include Pinterest, Twilio and Thumbtack) is a move to strengthen the machine learning and data science capabilities of the Predix platform, while the purchase of Bit Stew (which works with Scottish and Southern Energy, BC Hydro and others) allows for more efficient data ingestion.

Microsoft and OpenAI team-up

Microsoft has announced a partnership with OpenAI, a non-profit AI research organisation co-founded by Elon Musk, Sam Altman, Greg Brockman and Ilya Sutskever. OpenAI will now be able to use Azure, Microsoft’s cloud platform for AI workloads, to create new tools and technologies which are only possible with the storage and compute power a cloud system can provide, as well as Microsoft’s new Nvidia-powered Azure N-series virtual machines, which can run the Microsoft Cognitive Toolkit. The N-series machines are built for compute-intensive workloads, including deep learning, simulations, rendering and the training of neural networks, all of which should benefit OpenAI in its experiments.

Robot teacher teaches from the heart

A team from the Department of Artificial Intelligence in Madrid has developed ARTIE (Affective Robot Tutor Integrated Environment) to assist primary-school children in the classroom. ARTIE, the Madrid team’s software running inside SoftBank’s NAO robot, doesn’t just teach; it also monitors the student’s emotional state and responses to what is being learnt. It does this using sensors and software which detect and analyse keyboard and mouse usage, body language, facial expression and tone of voice. Based on this information, the robot can change its teaching method to better suit the child’s current emotional state, a technique called ‘pedagogical intervention’. Depending on factors such as the child’s competence and motivation, ARTIE will become either a ‘tutor of inspiration’, a ‘coach tutor’, a ‘directive tutor’ or a ‘guide tutor’.
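The kind of rule that could pick a tutor persona from the child’s state is easy to sketch. The thresholds and the mapping from scores to personas below are purely illustrative assumptions, not the Madrid team’s actual logic; only the four persona names come from the article.

```python
# Illustrative sketch of persona selection from competence and motivation.
def choose_tutor(competence: float, motivation: float) -> str:
    """Both inputs are assumed scores in [0, 1] derived from the sensors."""
    if motivation < 0.5 and competence < 0.5:
        return "directive tutor"       # struggling and disengaged: step-by-step
    if motivation < 0.5:
        return "tutor of inspiration"  # able but disengaged: re-motivate
    if competence < 0.5:
        return "coach tutor"           # keen but struggling: scaffold skills
    return "guide tutor"               # competent and motivated: light touch

print(choose_tutor(competence=0.3, motivation=0.8))  # -> "coach tutor"
```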

DeepMind continues to go deeper

Google’s DeepMind has revealed new findings about a deep learning architecture called the ‘Differentiable Neural Computer’ (DNC), which couples a neural network to an external memory it can read from and write to. DNCs are good at learning new information through trial and error, and subsequently retaining the correct answer for recall when needed. Using this type of deep learning, DeepMind has shown that a computer is able to learn the shortest route from one London Underground station to another, or to work out the destination stop from a set of given directions. As an example of more abstract thinking, when shown a family tree the DNC could deduce who someone’s maternal great-uncle was, showing flexibility in conceptual learning and the ability to generalise. Finally, the DNC also showed the ability to work at a task based on reinforcement learning: when the computer was close to a right answer the researchers gave it a higher score than when it was further away. Using this reward method, the computer learnt to rearrange coloured blocks in accordance with instructions, e.g. ‘place the blue block next to the green block’.
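The external read-write memory at the heart of a DNC can be sketched briefly. The snippet below shows content-based addressing with soft, differentiable reads and writes over a tiny assumed memory; the real DNC adds learned gates, usage tracking and temporal links that are omitted here.

```python
# Minimal sketch of DNC-style content-based memory addressing.
import numpy as np

MEMORY_SLOTS, WORD_SIZE = 8, 4
memory = np.zeros((MEMORY_SLOTS, WORD_SIZE))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def address(key):
    # Content-based addressing: attend to slots whose contents are
    # most similar to the key the controller network emits.
    similarity = memory @ key
    return softmax(similarity)

def write(key, word):
    w = address(key)
    # Blend the new word into every slot in proportion to its weight;
    # this soft write is what keeps the whole memory differentiable.
    global memory
    memory = memory * (1 - w[:, None]) + np.outer(w, word)

def read(key):
    return address(key) @ memory  # weighted sum over all slots

write(key=np.ones(WORD_SIZE), word=np.array([1.0, 2.0, 3.0, 4.0]))
print(read(key=np.ones(WORD_SIZE)))
```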

Memristors come to the aid of ANNs

Researchers at the University of Southampton have demonstrated that a nanoscale device called a memristor could be used to power AI systems. The problem with current Artificial Neural Networks (ANNs) is that they lack efficient hardware ‘synapses’, which any high-functioning ANN would need in large numbers. A memristor is an electrical component that limits or regulates the flow of current in a circuit and remembers how much charge has previously flowed through it, retaining that state even after the power is turned off. The researchers believe that this emulates key characteristics of learning synapses. Memristors have been experimentally demonstrated to learn and re-learn input patterns in an unsupervised manner within a probabilistic winner-takes-all network. This has potential for embedded IoT processors, as it enables big data to be processed in real time without any prior knowledge of the data. Their small size and low energy usage also mean that memristors could one day be used within electronic brains, according to the University of Southampton article.
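The unsupervised winner-takes-all scheme itself is simple to sketch in software. In the snippet below, floating-point weights play the role of the memristors’ analogue conductances, and the learning rate, pattern count and network sizes are illustrative assumptions, not the Southampton setup.

```python
# Minimal sketch of unsupervised winner-takes-all learning.
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS, N_NEURONS, LEARNING_RATE = 16, 4, 0.1

# Each column of weights plays the role of one neuron's synapses
# (in hardware, an array of memristive conductances).
weights = rng.random((N_INPUTS, N_NEURONS))
weights /= np.linalg.norm(weights, axis=0)

def present(pattern):
    # The neuron whose synapses best match the input "wins"...
    winner = int(np.argmax(pattern @ weights))
    # ...and only the winner's synapses adapt, nudged toward the
    # pattern, so each neuron comes to represent an input pattern.
    weights[:, winner] += LEARNING_RATE * (pattern - weights[:, winner])
    weights[:, winner] /= np.linalg.norm(weights[:, winner])
    return winner

patterns = rng.random((4, N_INPUTS))
for _ in range(50):                 # repeated presentation = learning
    for p in patterns:
        present(p)

winners = [int(np.argmax(p @ weights)) for p in patterns]
print(winners)   # winners are stable once the patterns are learned
```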
