AI Tech Digest - January 2017

AI explains its reasoning

Researchers at the Georgia Institute of Technology in Atlanta, Georgia, have developed a way to enable an AI system to communicate its decision-making process. The researchers trained the AI on transcripts of humans describing their actions while playing the computer game Frogger. The system could then draw on this text when it found itself in a similar situation to explain, for example, why it moved up one square rather than to the left. This was a proof-of-concept study.
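
The article does not describe the system's internals; one simple way to realize this kind of "explanation by example" is to retrieve the human commentary recorded for the most similar game state. The state features, corpus entries and similarity measure below are all invented for illustration — a minimal sketch, not the Georgia Tech system.

```python
# Hypothetical sketch of explanation-by-retrieval (not the Georgia Tech
# system): explain a move by looking up the human commentary attached to
# the most similar recorded game state.

def similarity(state_a, state_b):
    """Count how many features two Frogger-like states share."""
    return sum(1 for k in state_a if state_a.get(k) == state_b.get(k))

# Tiny corpus of (state, human explanation) pairs, standing in for the
# transcribed gameplay commentary.
corpus = [
    ({"car_left": True, "log_ahead": False, "row": 3},
     "I waited because a car was coming from the left."),
    ({"car_left": False, "log_ahead": True, "row": 5},
     "I moved up to jump onto the log."),
    ({"car_left": False, "log_ahead": False, "row": 1},
     "The lane was clear, so I moved up one square."),
]

def explain(current_state):
    """Return the explanation attached to the most similar recorded state."""
    _, best_text = max(corpus, key=lambda pair: similarity(current_state, pair[0]))
    return best_text

print(explain({"car_left": False, "log_ahead": False, "row": 2}))
```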

AI beats human poker players

After a 20-day poker match, Libratus, an AI program developed by Carnegie Mellon University, defeated its four human opponents, winning USD1.5 million worth of chips. The AI was programmed to play heads-up no-limit Texas Hold-’em, which its developers say is a challenging game for AI because it relies on second-guessing one’s opponent, bluffing and other human skills that are hard to capture in an algorithm. The system was updated at the end of each day’s play, allowing it to learn from that day’s hands and adapt its future strategy.
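
The article does not detail Libratus’s algorithms, but game-theoretic poker programs are commonly built on regret minimization. The sketch below shows the core idea — regret matching — in a far simpler game, rock-paper-scissors, against a made-up opponent who over-plays rock; the learner’s average strategy shifts toward the best response (paper). An illustrative toy, not CMU’s code.

```python
import random

# Regret matching in rock-paper-scissors: a toy illustration of the
# regret-minimization family of algorithms used for imperfect-information
# games like poker. Not Libratus itself.

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[my_action][their_action], from my point of view
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly

def train(iterations, rng):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        my_action = rng.choices(range(ACTIONS), weights=strategy)[0]
        # Hypothetical exploitable opponent: rock 50%, paper 30%, scissors 20%.
        opp_action = rng.choices(range(ACTIONS), weights=[5, 3, 2])[0]
        for a in range(ACTIONS):
            # Regret: how much better action a would have done than my action.
            regrets[a] += PAYOFF[a][opp_action] - PAYOFF[my_action][opp_action]
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train(20000, random.Random(0))
print(avg)  # most probability mass should land on paper (index 1)
```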

AI can predict when you will die

Scientists from the MRC London Institute of Medical Sciences (LMS) have been testing machine learning software that can predict when patients with heart failure will die. The researchers used the program to assess the prognosis of 250 patients based on blood tests and MRI scans of their hearts. This data was used to create a 3D model of each patient’s heart, which was combined with the health records of previous patients and analysed by the software so that it learned the warning signs of death from heart failure within five years. The scientists claim that the program predicted which patients would still be alive a year later with 80% accuracy, a figure confirmed by patient follow-ups.
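
The underlying task is supervised classification: mapping measurements to a survival label. Below is a minimal logistic-regression sketch trained on synthetic data, where two made-up features stand in for blood-test and MRI-derived measurements — purely illustrative, not the LMS software or its data.

```python
import math
import random

# Illustrative only: a tiny logistic-regression classifier on synthetic
# 'patients', standing in for software that learns to map measurements
# to a survival label. Not the LMS model.

def predict(weights, bias, x):
    """Probability of survival under the logistic model."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=200):
    """Plain gradient descent on the log-loss."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(weights, bias, x) - y  # gradient w.r.t. z
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

rng = random.Random(42)
# Synthetic patients: survivors (y=1) cluster near (1, 1), non-survivors near (-1, -1).
data = [([rng.gauss(m, 0.5), rng.gauss(m, 0.5)], y)
        for m, y in [(1.0, 1), (-1.0, 0)] for _ in range(100)]
weights, bias = train(data)
accuracy = sum((predict(weights, bias, x) > 0.5) == (y == 1)
               for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```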

Google Brain’s AI software develops its own AI software

Google Brain’s AI research group has made machine learning software that can in turn make other software. The group’s software designed a machine-learning system, which was then given a test measuring how well software can process language. According to the research group, this technology has the potential to take over some of the work currently done by machine-learning experts.
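
The article does not say how the Google Brain system searches for designs; their published work used more sophisticated, learning-based search. The sketch below shows only the general pattern of "software designing software" via simple random search over model configurations, with a toy scoring function standing in for validation accuracy — everything here is an invented illustration.

```python
import random

# Minimal sketch of automated model search: an outer loop proposes model
# configurations and keeps the best-scoring one. The 'score' is a toy
# stand-in for validation accuracy; a real system would train and
# evaluate each candidate model. Not Google Brain's method.

SEARCH_SPACE = {
    "layers": [1, 2, 4, 8],
    "units": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def evaluate(config):
    """Toy objective that happens to reward layers=4, units=64, lr=0.01."""
    score = 3 - abs(config["layers"] - 4) / 2
    score += 3 - abs(config["units"] - 64) / 32
    score += 2 if config["learning_rate"] == 0.01 else 0
    return score

def random_search(trials, rng):
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        s = evaluate(config)
        if s > best_score:
            best_config, best_score = config, s
    return best_config

best = random_search(300, random.Random(0))
print(best)
```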

Stories of AI doing things better than humans

Scientists from Northwestern University have developed an AI program that performs at human levels on a standard intelligence test. The team used the CogSketch platform, which enabled the AI to solve visual problems and understand sketches. The AI was tested on the Raven’s Progressive Matrices intelligence test (developed by psychologist John C. Raven), which measures abstract reasoning, problem solving and pattern identification. The test consists of ‘complete the series’ style questions. The AI was shown to perform better than the average American.

Stanford University scientists have adapted Google’s image recognition software to recognize cancerous skin growths (carcinoma and melanoma). The system was shown 129,450 photographs of skin conditions and told what it was looking at in each one; eventually it learned to spot melanomas by itself. The researchers pitted the system against dermatologists and claim that it was as effective as a dermatologist at identifying cancerous growths. They believe the technology could be incorporated into a smartphone app.
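
Adapting existing image recognition software like this is an instance of transfer learning: reuse a pretrained network as a frozen feature extractor and train only a small classifier on top. In the sketch below the "pretrained network" is a stub, the "lesion images" are random vectors, and the nearest-centroid head is an invented simplification — none of it is the Stanford system.

```python
import random

# Transfer-learning recipe in miniature: a frozen 'pretrained' feature
# extractor (a stub here, not a real network) plus a small classifier
# head trained on labelled examples. Entirely illustrative.

def pretrained_features(image):
    """Stub for a frozen pretrained network; a real system would run the
    image through a convolutional network and return its activations."""
    return [sum(image) / len(image), max(image) - min(image)]

def train_centroids(dataset):
    """Train a nearest-centroid head on top of the frozen features."""
    sums, counts = {}, {}
    for image, label in dataset:
        f = pretrained_features(image)
        s = sums.setdefault(label, [0.0] * len(f))
        sums[label] = [a + b for a, b in zip(s, f)]
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, image):
    f = pretrained_features(image)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(f, centroids[lbl])))

rng = random.Random(1)
# Synthetic 'lesion images': benign ones darker (lower values) than malignant.
dataset = [([rng.uniform(lo, hi) for _ in range(16)], label)
           for lo, hi, label in [(0.0, 0.4, "benign"), (0.5, 1.0, "malignant")]
           for _ in range(50)]
centroids = train_centroids(dataset)
accuracy = sum(classify(centroids, img) == lbl for img, lbl in dataset) / len(dataset)
print(f"accuracy: {accuracy:.2f}")
```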

The demon in machine learning

A team at the University of Maryland (UM) has been investigating how machine learning (ML) outputs can be corrupted. One difficulty with ML software is the so-called black box problem: a lack of clarity about the processes the software goes through to reach its output. This lack of transparency could allow breaches of ML software resulting in, for example, incorrect insurance premium quotes, or criminals evading justice by manipulating the inputs to ML systems. The UM team disclosed five vulnerabilities and revealed common insecure practices across many ML systems. In one demonstration, they took open-source machine learning software and corrupted the input of a human face, causing the image to blur and become unrecognizable. To find where a system had been manipulated, they used a software testing technique based on the American Fuzzy Lop fuzzer, which generates malformed inputs that crash the system.
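
American Fuzzy Lop is a coverage-guided fuzzer for native code; its core loop — mutate an input, run the target, record inputs that crash it — can be sketched in a few lines. The toy parser and its planted bug below are invented for illustration; this is not AFL nor the UM team’s tooling.

```python
import random

# Minimal fuzzing loop: randomly mutate a seed input and record mutants
# that crash the target. Illustrates the idea behind fuzzers like
# American Fuzzy Lop; the buggy toy_parser target is invented.

def toy_parser(data):
    """A deliberately buggy 'image header' parser used as the fuzz target."""
    if len(data) < 4:
        raise ValueError("truncated header")
    width = data[2]
    # Planted bug: division by zero when the width byte is zero.
    return data[3] // width

def mutate(seed, rng):
    """Flip one byte of the seed input to a random value."""
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed, trials, rng):
    crashes = []
    for _ in range(trials):
        candidate = mutate(seed, rng)
        try:
            toy_parser(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

crashes = fuzz(b"\x89IMG", 20000, random.Random(0))
print(f"{len(crashes)} crashing inputs found")
```

A real coverage-guided fuzzer additionally instruments the target and keeps mutants that reach new code paths, which is what makes it effective on deep parsing logic.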

Microsoft acquires Maluuba

Microsoft has acquired Maluuba, a company with experience in natural language processing, deep learning and reinforcement learning technology. Microsoft explains that it is interested in Maluuba’s potential contributions in speech and image recognition to its next developmental step – machine reading and writing (machine literacy). Microsoft envisions an intelligent NLP-driven AI that could intelligently assist workers in finding relevant information, not simply by flagging keywords. The acquisition price was not disclosed.

Baidu releases digital assistant

Baidu has released its digital assistant system Xiaoyu Zaijia – or Little Fish. The assistant works much like the Amazon Echo or Google Home – voice search, playing music, finding local services and so on – but, in a new development, includes a touch screen and camera allowing for video chat and remote home monitoring. Little Fish runs on Baidu’s DuerOS AI platform and can be bought online for USD320.
