Artificial Intelligence (AI) Tech Digest - June 2017

Negotiating with AI

Researchers at Facebook Artificial Intelligence Research (FAIR) have developed AI agents capable of negotiating. The agents were set a multi-issue bargaining task: shown a collection of items, they had to agree between themselves on how to divide them. Each agent was assigned its own value for every item – e.g. a hat is worth 3 value points to agent 1. Just as in human negotiation, the agents were not aware of each other’s valuations and had to deduce them from the dialogue. To test the agents further, the FAIR researchers had them negotiate with people online; most people who interacted with a bot did not realise it was an AI agent. FAIR has open-sourced the code for the agents so that developers can improve on it.
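The setup can be sketched in a few lines. The item names, quantities and point values below are illustrative, not FAIR’s actual data:

```python
# Toy sketch of the multi-issue bargaining task described above.
ITEMS = {"book": 3, "hat": 2, "ball": 1}  # item -> quantity on the table

# Each agent has private per-item values that the other agent cannot see.
values_agent1 = {"book": 1, "hat": 3, "ball": 1}
values_agent2 = {"book": 2, "hat": 1, "ball": 2}

def score(allocation, values):
    """Total value an agent derives from the items it receives."""
    return sum(count * values[item] for item, count in allocation.items())

# A proposed deal: agent 1 takes the hats, agent 2 takes everything else.
deal_agent1 = {"book": 0, "hat": 2, "ball": 0}
deal_agent2 = {"book": 3, "hat": 0, "ball": 1}

print(score(deal_agent1, values_agent1))  # agent 1's payoff: 6
print(score(deal_agent2, values_agent2))  # agent 2's payoff: 8
```

Because the valuations are private, each agent can only estimate the other’s payoff from what is said during the negotiation.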

Optical chip for AI processing

Massachusetts Institute of Technology researchers have developed an optical neural network system that could improve the speed and efficiency of some deep learning computations, such as tasks that involve repeated matrix multiplications. The scientists say their chip can optically perform this otherwise electrically executed operation ‘with, in principal, (sic) zero energy, almost instantly’. Using light to carry out these calculations would, in theory, make working AI algorithms much faster while using less than one-thousandth as much energy per operation as a conventional electronic chip.
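The operation in question is ordinary matrix multiplication, the workhorse of a neural network’s forward pass. A dependency-free sketch with toy weights (no nonlinearities, for brevity):

```python
# The core operation the optical chip targets: repeated matrix
# multiplications, as in a multi-layer neural network forward pass.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), both lists of rows."""
    n, p = len(b), len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(n)) for j in range(p)]
            for row in a]

# A toy two-layer "network": x -> W1 -> W2.
x  = [[1.0, 2.0]]
W1 = [[0.5, -1.0], [1.0, 0.5]]
W2 = [[2.0], [1.0]]

h = matmul(x, W1)   # first layer:  [[2.5, 0.0]]
y = matmul(h, W2)   # second layer: [[5.0]]
print(y)
```

Each layer repeats the same multiply-and-accumulate pattern, which is why performing it in optics rather than electronics promises such large energy savings.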

Making AI obedient

DeepMind and OpenAI have been investigating how to let an AI system know what humans do and do not want it to do. The research involved a human without AI expertise teaching a reinforcement learning (RL) agent to perform a complex task. In some of the tasks it took the AI only 30 minutes to learn to do backflips as a simulated robot, following prompts from the non-expert human. The model differed from traditional RL models in that it used a neural network known as a reward predictor. It consists of three parallel processes:

1. An RL agent explores its world.

2. Periodically, short clips of the agent’s behaviour are sent to the human operator who then selects which behaviours are most appropriate to achieving the goal.

3. The human’s choice is used to train the reward predictor which then trains the agent.

In time, the agent learns to maximise its rewards from the predictor by following the human’s preferences. Areas the scientists identified as warranting further investigation include reducing the amount of feedback the agent requires from the human and enabling the human to give feedback in natural language.
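The three processes can be caricatured in a few lines. Everything here is a toy stand-in – a one-dimensional ‘world’, a simulated human, a single-weight predictor – not the actual DeepMind/OpenAI implementation:

```python
import random

random.seed(0)
w = 0.0            # reward predictor: predicted_reward(clip) = w * clip
LEARNING_RATE = 0.1

def human_prefers(clip_a, clip_b):
    """Stand-in for the human operator: the 'goal' is larger clip values."""
    return clip_a if clip_a > clip_b else clip_b

for step in range(200):
    # 1. The RL agent explores its world, producing two behaviour clips.
    clip_a, clip_b = random.uniform(-1, 1), random.uniform(-1, 1)

    # 2. The human operator selects the more goal-appropriate clip.
    preferred = human_prefers(clip_a, clip_b)
    other = clip_a if preferred == clip_b else clip_b

    # 3. The choice trains the reward predictor: nudge w so the preferred
    #    clip is predicted to earn the higher reward.
    if w * preferred <= w * other:
        w += LEARNING_RATE * (preferred - other)

print(w > 0)  # the predictor now ranks clips the way the human does
```

In the real system both the agent and the reward predictor are deep networks, and the human compares short video clips rather than numbers, but the feedback loop has this shape.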

Lifespan-predicting AI

Researchers at the University of Adelaide have developed an ‘AI’ that can predict a person’s lifespan from images of their internal organs. The AI was trained with medical images of the chests of 48 patients. The algorithm was able to predict which patients would die within five years with 69 percent accuracy – said to be comparable to predictions made by clinicians. The algorithm’s most confident predictions were for patients whose images showed signs of chronic diseases such as heart failure and emphysema. The researchers plan to expand the system to analyse tens of thousands of patient medical images.
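For scale, the quoted figure works out at roughly 33 correct calls out of 48 patients. A sketch with entirely hypothetical labels:

```python
# Illustrative only: how an accuracy figure like the one quoted is computed.
# The labels below are invented; True = died within five years.

def accuracy(predictions, outcomes):
    """Fraction of patients whose five-year outcome was predicted correctly."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

outcomes    = [True] * 24 + [False] * 24
predictions = [True] * 17 + [False] * 7 + [False] * 16 + [True] * 8

print(accuracy(predictions, outcomes))  # 33/48 = 0.6875, i.e. ~69 percent
```

With only 48 patients, each prediction moves the accuracy by about two percentage points, which is one reason the researchers want a far larger image set.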

Processor for image recognition neural network

Scientists at the Korea Advanced Institute of Science and Technology (KAIST) have developed a convolutional neural network processor on silicon that is small enough to fit into mobile phones, wearable devices and IoT devices. Convolutional neural networks are commonly used in image recognition systems. The processor consists of multiple parallel processing cores that the scientists claim use only 1/5000th of the power a GPU would require to carry out the same operations. The scientists also built an image sensor that integrates the processor to perform face recognition; they say it achieved 97 percent accuracy while drawing only 0.62 mW of power.

Flo, AI assisted video editing app

NexGear Technology, an Indian start-up, has created Flo – an iPhone app that takes video and auto-edits it. The app uses machine learning and natural language processing to extract the footage the user wants and then edit it. The user can use voice input to tell the app to, for example, ‘make a video story of my dog’. Flo then looks for related videos, ‘picks the best parts’, adds music and a filter, and produces a set of stories to choose from. Flo runs convolutional neural networks locally – i.e. on the phone – in real time to interpret what it is receiving from the phone’s camera.

Relational reasoning in AI

DeepMind, a British AI company, has been conducting research into relational reasoning in AI systems. Relational reasoning – using logic to establish relationships between things – covers activities such as piecing clues together to solve a mystery, running ahead of a ball rolling downhill to stop it, or comparing products when out shopping. To test whether an AI could perform relational reasoning, DeepMind constructed a plug-and-play relation network (RN) module that can be added to existing neural network architectures. The RN takes unstructured input and reasons out the relations between objects within that data. On CLEVR, a standard visual question-answering benchmark, the RN-equipped system achieved 95.5 percent accuracy on questions such as ‘what is next to the small rubber ball?’ about a table of objects. Humans score 92.5 percent on the test, and standard AI systems 68.5 percent. DeepMind also tested the system on a language-based relational reasoning test, in which it scored 95 percent – similar to other current models, although it did score better on induction.
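At its core, the RN applies a pairwise function g to every pair of object representations, sums the results, and passes the total through a readout function f. A toy sketch – in the real module g and f are learned multi-layer perceptrons, whereas the fixed functions below are placeholders:

```python
# Minimal sketch of the relation network idea:
#   RN(O) = f( sum over pairs (o_i, o_j) of g(o_i, o_j) )

# Toy "objects": (x, y, size) feature tuples extracted from a scene.
objects = [(0.1, 0.2, 1.0), (0.4, 0.2, 0.5), (0.9, 0.8, 0.7)]

def g(o_i, o_j):
    """Pairwise relation function (a fixed toy function, not learned)."""
    return [abs(a - b) for a, b in zip(o_i, o_j)]

def f(aggregate):
    """Readout over the aggregated pairwise relations."""
    return sum(aggregate)

# Sum g over all ordered pairs of distinct objects, then apply f.
agg = [0.0] * 3
for i, o_i in enumerate(objects):
    for j, o_j in enumerate(objects):
        if i != j:
            agg = [a + v for a, v in zip(agg, g(o_i, o_j))]

print(f(agg))
```

Because g is shared across all pairs, the module never has to be told which relations matter in advance – it considers every pairing, which is what makes it ‘plug-and-play’ on top of an existing architecture.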

Google Brain open-sources machine learning system

Google’s Google Brain team has released Tensor2Tensor (T2T), an open-source system for training deep learning models in TensorFlow. T2T helps developers create models for machine learning applications such as translation, parsing and image captioning. The release also includes a library of datasets and models to help developers with their deep learning projects.
