Artificial Intelligence (AI) Tech Digest - April 2017

Feeling lucky, punk? One-shot learning in drug research; teaching a baby to shoot

There are many ways to teach an AI system; below are two recent studies.
Stanford University students have been investigating the application of one-shot learning in deep learning systems to help in novel drug discovery. One-shot learning is a method that overcomes a problem deep learning systems traditionally face: the huge amount of input data required for the algorithm to produce a useful output. Because it needs only a relatively small amount of input data, it is suitable for areas like drug design where data is scarce. To map the complex behaviour of molecules into a language an algorithm could process, the researchers represented each molecule as a graph of the connections between its atoms. The algorithm was trained on two data sets: toxicity data for six chemicals, and 21 examples of drug side-effect information. It was then tested on its ability to predict the toxicity of three new chemicals and the side-effects of six new drugs; its predictions were more accurate than random chance. The team believes that one-shot learning methods could be more widely applicable in molecular chemistry research, and is testing the method on chemical compositions for solar cells.
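To make the graph idea concrete, here is a minimal, illustrative Python sketch of a molecule represented as a graph of atoms and bonds. It is a toy stand-in, not the featurisation the Stanford team actually used:

```python
# Toy molecular graph: atoms become nodes, bonds become edges, so a
# molecule can be handed to a graph-based learning algorithm.
from collections import defaultdict

class MoleculeGraph:
    def __init__(self):
        self.atoms = {}                # node id -> element symbol
        self.bonds = defaultdict(set)  # node id -> neighbouring node ids

    def add_atom(self, atom_id, element):
        self.atoms[atom_id] = element

    def add_bond(self, a, b):
        self.bonds[a].add(b)
        self.bonds[b].add(a)

    def neighbours(self, atom_id):
        """Elements bonded to a given atom: the local structure a
        graph-based model would aggregate over."""
        return [self.atoms[n] for n in self.bonds[atom_id]]

# Ethanol's heavy-atom skeleton: C-C-O
ethanol = MoleculeGraph()
for i, el in enumerate(["C", "C", "O"]):
    ethanol.add_atom(i, el)
ethanol.add_bond(0, 1)
ethanol.add_bond(1, 2)
print(ethanol.neighbours(1))  # e.g. ['C', 'O']
```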
Meanwhile at Baidu’s AI Lab, zero-shot learning has been demonstrated by an AI taught a language by a virtual teacher. The ‘virtual baby’ AI was placed in a 2D maze environment, called XWORLD, where it was tasked with navigating by following natural language commands given by a virtual teacher. Continuing the metaphor, the baby starts from complete non-understanding and slowly comes to understand the commands through reinforcement learning. Eventually the baby’s grasp of the language’s syntactic structure gives it a zero-shot learning ability, letting it understand new commands: knowing ‘cut apple with a knife’, for example, it can infer that ‘cut pear with a knife’ involves a similar action on a different object.
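The zero-shot step can be illustrated with a toy Python sketch (not Baidu’s actual model): if the meanings of verbs and objects are grounded separately during training, an unseen combination of known words can still be interpreted by recombining the parts:

```python
# Toy illustration of compositional zero-shot transfer. The groundings
# below are assumed to have been learned via reinforcement learning.
known_actions = {"cut": "use_knife_on", "fetch": "walk_to_and_pick_up"}
known_objects = {"apple": "object_17", "pear": "object_23"}

def interpret(command):
    """Map a two-word command onto learned action and object groundings."""
    verb, obj = command.split()
    if verb in known_actions and obj in known_objects:
        return (known_actions[verb], known_objects[obj])
    raise ValueError("command contains unknown words")

# "cut pear" was never seen as a whole command during training,
# but both words were, so the agent can still act on it.
print(interpret("cut pear"))  # ('use_knife_on', 'object_23')
```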

AI doesn’t listen to instructions when it thinks it has found a better technique… and it learned to play a game following English language instructions

Stanford University researchers have taught an AI agent to navigate the difficult Montezuma’s Revenge game for the Atari 2600 using natural language instructional input. The AI was taught to associate specific English language instructions (such as ‘climb the ladder’) with screenshots of the action being carried out in game. It then practised the game with instructions for what to do in each room, receiving a reward for following commands. To prove that the AI was genuinely learning, the team placed it in a room it had not previously seen; it followed the instructions correctly, showing an ability to generalise what it had learnt. It also learned to ignore instructions when it had discovered a better way to achieve a goal. The team believes that this method could enable closer, more fluent cooperation between humans, AI and robots in the future.
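The reward-for-following-commands idea can be sketched as simple reward shaping; the `matches` function below is a hypothetical stand-in for the learned model that scores whether a screenshot shows an instruction being carried out:

```python
# Hedged sketch of instruction-following reward shaping: the agent gets
# the usual game reward plus a bonus whenever the current frame matches
# the instruction it was given.

def shaped_reward(game_reward, frame, instruction, matches, bonus=1.0):
    """Combine the environment reward with an instruction-following bonus.

    matches(frame, instruction) -> bool is assumed to be a learned model
    scoring whether the frame shows the instruction being carried out.
    """
    return game_reward + (bonus if matches(frame, instruction) else 0.0)

# Toy usage with a trivial stand-in matcher:
matcher = lambda frame, instr: instr in frame  # hypothetical
frame = "frame showing: climb the ladder"
print(shaped_reward(0.0, frame, "climb the ladder", matcher))  # 1.0
```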

Image recognition fooled again

Further to the March 2017 news that image recognition systems can be tricked by simple perturbations, such as subtly changing a pixel’s shading, another study has revealed weaknesses in semantic image segmentation (SIS), in which the AI picks out objects from an image or video. A team of scientists from the University of Freiburg and the Bosch Centre for Artificial Intelligence has demonstrated that SIS systems can be fooled into not perceiving, for example, a group of pedestrians crossing the road who really are there. This blindness is achieved by adding a subtle, universal noise pattern to the image to confuse the AI system. The team behind the work believes this has implications for self-driving AI computer vision systems.
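The attack can be sketched as an optimisation loop: one small noise pattern, trained across many images and clamped to an imperceptible budget, pushes a network’s predictions towards an attacker-chosen output (e.g. “no pedestrians”). The PyTorch sketch below is a generic, simplified version of such an attack, with `model`, `images` and `target_labels` as assumed placeholders rather than the study’s actual setup:

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, images, target_labels, eps=0.03,
                           step=0.005, iters=50):
    """Craft one noise pattern shared across all images.

    images: (N, C, H, W) tensor; target_labels: the (wrong) labels the
    attacker wants the model to predict.
    """
    delta = torch.zeros_like(images[0], requires_grad=True)
    for _ in range(iters):
        loss = 0.0
        for x, y in zip(images, target_labels):
            out = model((x + delta).unsqueeze(0))
            # Minimise loss towards the attacker's target labels
            loss = loss + F.cross_entropy(out, y.unsqueeze(0))
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # step towards the target
            delta.clamp_(-eps, eps)            # keep the noise subtle
            delta.grad.zero_()
    return delta.detach()
```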

AI can write positive or negative sentiment comments

OpenAI’s AI system has demonstrated an ability to detect sentiment in customer-written product reviews on Amazon. Using 82 million Amazon reviews, the team trained the system to predict the next character in a block of text; the training took one month, with the AI processing 12,500 characters per second. The team then noticed that the model’s learnt representation could classify reviews as positive or negative far more efficiently than they had expected. They could also get the AI to write its own convincing sentiment-controlled content. One problem the team recognised was poor performance on longer texts, believed to be due to the model’s limited memory.
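The efficient classification step can be sketched as fitting a simple linear classifier on features taken from the trained language model; `hidden_state_of` below is a hypothetical stand-in for running the model over a review:

```python
# Hedged sketch: summarise each review by the language model's final
# hidden state, then separate positive from negative reviews with a
# sparse linear classifier (the original work used L1-regularised
# logistic regression on such features).
import numpy as np
from sklearn.linear_model import LogisticRegression

def hidden_state_of(review):
    """Hypothetical stand-in for the trained LM's final hidden state."""
    rng = np.random.default_rng(abs(hash(review)) % (2**32))
    return rng.normal(size=64)  # stand-in 64-dim feature vector

reviews = ["loved it", "terrible product", "works great", "broke in a day"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment

X = np.stack([hidden_state_of(r) for r in reviews])
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, labels)
print(clf.predict(X))
```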

AI and cancer

A multi-university team of researchers has developed an algorithm that can recognise cervical cancer. The deep learning system was trained on pictures of lesions from 1,112 patients, 345 of whom had lesions likely to develop into cancer and 767 of whom had non-malignant lesions. The system achieved an accuracy of 85%, which the scientists say is 10% higher than similar systems. The team hopes to continue the work by carrying out trials with the AI system.
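As a rough illustration of how such a system might be set up (not the team’s actual architecture), a binary lesion classifier can be built by fine-tuning a pretrained image network with a two-class head:

```python
# Generic PyTorch sketch of a binary lesion classifier via fine-tuning.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # pre-cancerous vs benign

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Training then iterates over (image, label) batches from the lesion dataset.
```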
Another deep learning system helping to diagnose illness is IBM’s diabetic retinopathy (DR) detector. The algorithm could detect and classify DR-related lesions within 20 seconds, an apparent advance on the time-intensive manual classification process.

Speech replicator

Lyrebird, a University of Montreal spinout, has demonstrated technology that can impersonate another person’s voice from just a one-minute recording. The company has demonstrated mimicked voices of Donald Trump and Barack Obama, with intonations that add emotional tones, such as anger, sympathy or stress, to the generated voice.
