Artificial Intelligence Tech Digest - December 2016

Deep learning system can model the future

Researchers from the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated that a deep learning algorithm can model possible futures, generating a short video from a single still image. The algorithm was trained on 2 million videos. After training, it could produce a video roughly a second and a half long showing what happens next in the picture; human participants judged its clips realistic 20 per cent more often than those of a specially designed, but simpler, baseline model.
The process relies on a method called ‘adversarial learning’, which pits two competing neural networks against each other. One network generates video which the other must distinguish from real footage. The generating network soon learns to trick the discriminator, and the fake scenes that the discriminator accepts as real are treated as plausible future scenes.
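As a rough illustration of the adversarial set-up (not the CSAIL system itself, whose architecture and training data are far larger), the sketch below trains a toy generator and discriminator on one-dimensional data using PyTorch; the network sizes and hyperparameters are arbitrary assumptions.

```python
# Minimal sketch of adversarial learning on toy 1-D data (an assumption for
# illustration only; the MIT system's actual architecture is not shown here).
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian that the generator must learn to imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator into answering "real".
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated samples should drift towards the real data's mean of 4.0.
print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())
```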

Watch my lips, latest AI lip-reading research outdoes previous attempts

Following news in November 2016 of LipNet’s lip-reading AI, a project by Google’s DeepMind and the University of Oxford has also shown that an AI system is capable of lip-reading better than a human. The researchers trained the AI by showing it 5,000 hours of TV shows. It was then asked to watch the lips of speakers and provide subtitles for TV shows it had not seen previously. A professional lip-reader was given 200 TV recordings and had an annotation success rate of 12.4 per cent, while the AI accurately annotated 46.8 per cent of all words in the dataset.

Learning to remember

Google-owned company DeepMind has developed an algorithm which reduces machine memory loss. For machines to develop general AI they need to be able to remember previously learned content while still learning novel content; otherwise they suffer from ‘catastrophic forgetting’. To retain previously learnt information, an algorithm’s parameters must remain unchanged, but to learn a new task the parameters must change. DeepMind’s new algorithm uses ‘elastic weight consolidation’, which keeps the important, heavily weighted parameters (the things that should be remembered) close to their old values while allowing the less important parameters (the things that can be forgotten) to be altered when learning a new task. The method was tested on Atari games and, although it outperformed similar amnesia-reducing algorithms, it was not as effective as the currently common approach of training an individual network for each skill set rather than combining them in one network.
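A minimal sketch of the idea behind elastic weight consolidation is given below, assuming a PyTorch model; the names fisher_diag, old_params and the value of lam are illustrative assumptions, not DeepMind’s implementation.

```python
# Sketch of an elastic-weight-consolidation-style penalty (illustrative only).
import torch
import torch.nn as nn

def ewc_penalty(model, old_params, fisher_diag, lam=1000.0):
    """Quadratic penalty that anchors parameters judged important for the old
    task (large importance values) near their old values, while unimportant
    parameters remain free to change when learning the new task."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Toy usage: after training on task A, snapshot the parameters and an
# importance estimate (random here; in practice derived from the old task).
model = nn.Linear(4, 2)
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher_diag = {n: torch.rand_like(p) for n, p in model.named_parameters()}

# While training on task B, the total loss would look like:
#   loss = task_b_loss(model, batch) + ewc_penalty(model, old_params, fisher_diag)
print(ewc_penalty(model, old_params, fisher_diag))  # zero until the parameters move
```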

DeepMind makes its AI training platform open source

Aiming to make AI systems more suitable for the challenges of the real world, DeepMind has also recently opened its training platform to the research community. The DeepMind Lab platform is a first-person 3D environment in which an AI agent solves puzzles, plans actions, and learns to navigate and explore its surroundings. DeepMind has released the platform on GitHub so that anyone can design, create and add new levels for the AI to explore. The company hopes this will lead to research advances in areas such as AI navigation, memory and exploration, and believes that by interacting with an environment in a way similar to humans, the AI will become more natural and more intelligent.
Recently, OpenAI has also made its AI training environment open source.

Fuzzy logic 

Cincinnati-based Psibernetix has recently developed a system in collaboration with the University of Cincinnati to assist in the monitoring of bipolar disorder: LITHIA (lithium intelligent agent). LITHIA uses fuzzy-logic-based language control to predict the effectiveness of lithium treatment in bipolar patients. Psibernetix claims that its system has advantages over other systems in that it is transparent (it can give reasons for its outputs), accurate (the company claims 100 per cent accuracy in predicting symptom remission after treatment) and robust.
Fuzzy logic is generally thought to be closer to the way human thought works than traditional Boolean binary computing. It allows for graded answers: instead of only two possible states, such as tall or short (1 or 0 in binary), a fuzzy logic system also allows answers between these states, for example a ‘tallness’ of 0.43.
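To make the graded-answer idea concrete, here is a small, purely illustrative Python membership function; the height thresholds are arbitrary assumptions and have nothing to do with LITHIA itself.

```python
# Illustrative fuzzy membership function: height maps to a degree of "tallness"
# in [0, 1] rather than a hard tall/short (1/0) cutoff.
def tallness(height_cm, short=160.0, tall=190.0):
    """Linear membership: 0 below `short`, 1 above `tall`, graded in between."""
    if height_cm <= short:
        return 0.0
    if height_cm >= tall:
        return 1.0
    return (height_cm - short) / (tall - short)

print(tallness(173))  # about 0.43 -- a graded answer rather than a binary 1 or 0
```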

Nations come together to discuss autonomous weapons

Eighty-nine nations have agreed to formalize deliberations on lethal autonomous weapons systems by establishing a Group of Governmental Experts. The group will meet in either April or August 2017 and again in November, and is expected to discuss issues including compliance with international human rights law, responsibility and accountability, ethics, the potential for an arms race, and the risks posed by cyber operations.
The announcement came at the United Nations-hosted Fifth Review Conference of the Convention on Conventional Weapons (CCW). The number of countries endorsing a blanket ban rose to 19.

Ethical development of AI recommended by IEEE study

The IEEE has recently published a paper entitled Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems. It encourages those involved in technology to prioritize ethical considerations when creating autonomous and intelligent technologies. The IEEE argues that to gain the maximum benefit for humans and the natural environment from AI and autonomous systems, their creators must align the technology with moral values and ethics. This, it asserts, will in turn allow greater trust between humans and technology. To achieve this the IEEE proposes standardization projects and codes of conduct, educational material, the establishment of new committees, and increased cooperation across industry.

Tencent opens AI Lab

Chinese company Tencent, maker of the WeChat and QQ instant messaging apps, has recently sent representatives to one of AI’s most important events, the Neural Information Processing Systems conference. Back in April 2016 Tencent opened its AI Lab, promising, in its own words, to ‘put AI as a top priority’. The lab currently has 50 scientists, experts and researchers focusing on computer vision, speech recognition, machine learning and natural language processing.
