Artificial Intelligence (AI) Tech Digest - March 2018

Recognising difficult characters with AI-enhanced machine vision

The Codice Ratio project is applying a new optical character recognition (OCR) system to the Vatican Registers, one that can deal with the complexity of mediaeval manuscripts, allowing documents dating back to the 13th century – of which there are huge numbers – to be transcribed and analysed, some of them for the first time in decades, or even centuries.

OCR algorithms designed for present-day texts cannot accurately handle the variation in styles and the use of ligatures and abbreviations found in these old manuscripts. Rather than treating each letter as a separate entity, the new system developed by researchers at Roma Tre University analyses the pen strokes used in complete words, then identifies letters based on grammatical rules and the context of a word in a document. A neural network was trained on a labelled data set so that thousands of documents can be transcribed for research purposes.
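The context step can be illustrated with a toy sketch (all the scores, letters and the word list below are invented for illustration; the actual system is far more sophisticated): given uncertain per-glyph classifier scores, a word-level dictionary lookup picks the most plausible transcription.

```python
# Hypothetical sketch: combine per-glyph classifier scores with a small
# word list to resolve ambiguous mediaeval letter forms by context.

def best_word(candidates, vocabulary):
    """candidates: one dict of {letter: score} per glyph position.
    Returns the vocabulary word with the highest combined score."""
    def score(word):
        if len(word) != len(candidates):
            return float("-inf")
        return sum(pos.get(ch, float("-inf"))
                   for pos, ch in zip(candidates, word))
    return max(vocabulary, key=score)

# Strokes for 'm'/'n' and 'u'/'n' are easily confused; context decides.
candidates = [
    {"d": 0.9},
    {"o": 0.8},
    {"m": 0.5, "n": 0.4},   # ambiguous stroke cluster
    {"u": 0.45, "n": 0.5},  # another ambiguous cluster
    {"s": 0.9},
]
print(best_word(candidates, ["domus", "donns", "damus"]))  # -> domus
```

In effect the letter-level classifier stays uncertain, and a language-level prior resolves the ambiguity – the same division of labour the researchers describe.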

Humans think through the security consequences of AI

A new AI task force has been established by the USA’s Center for a New American Security (CNAS) with the aim of considering and preparing for security challenges related to AI. Potential issues to be studied include the cybersecurity, surveillance, disinformation and defence challenges of AI and machine learning, robots and other autonomous unmanned systems. The task force is composed of specialists with military, academic and industrial backgrounds.

Open-source AI tool for better portraits

Google has made its DeepLab-v3+ code available to developers. The code is a tool based on convolutional neural networks (CNN) that can separate an image into parts so that effects can be applied to specific areas of a photograph. For instance, in Google’s own Pixel camera, the technology is used to create portraits where a face in the foreground is in sharp focus, with the background blurred, in a pleasing ‘bokeh’ effect.

The company says that image segmentation techniques have developed very rapidly in the last few years through the use of deep learning, and that it expects developers to take the approach further by using its code.
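As a rough illustration of how a segmentation mask enables the bokeh effect (a toy NumPy sketch, not Google's DeepLab code – the blur and image here are invented): once a model has labelled the foreground pixels, compositing a sharp foreground over a blurred copy of the image is straightforward.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur via shifted sums (edges wrap; fine for a demo)."""
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (k * k)

def fake_bokeh(img, mask, k=5):
    """Keep foreground (mask == 1) sharp, blur everything else."""
    blurred = box_blur(img, k)
    return np.where(mask.astype(bool), img, blurred)

img = np.arange(100.0).reshape(10, 10)         # toy grayscale image
mask = np.zeros((10, 10))
mask[3:7, 3:7] = 1                             # "subject" in the centre
out = fake_bokeh(img, mask)
```

The hard part – producing the pixel-accurate mask – is exactly what DeepLab-v3+ provides; the compositing itself is simple arithmetic.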

Machine learning can speed up pre-clinical development in pharma research

One area of drug development that is ripe for machine-learning approaches is predicting the effect of potential drug molecules on the body's systems. In the pharma industry this is called ADME/PK (absorption, distribution, metabolism, excretion / pharmacokinetics) screening, and is done through laboratory tests. But machine learning algorithms can predict the ADME/PK properties – and the potency – of potential drugs, reducing the time it takes to get an effective drug to market. Reverie Labs is one start-up that has developed such a predictive tool, trained on known small molecules (it can also use training data sets from its clients). The tool suggests how specific changes to a molecule may improve the drug's properties.
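A hedged sketch of the underlying idea (synthetic data and a plain ridge regression stand in for Reverie Labs' actual models, which are not public): molecular structures are encoded as feature vectors, and a regressor trained on labelled molecules predicts a property for a new, untested one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for molecular fingerprints: 200 "molecules", 64 binary features.
X = rng.integers(0, 2, size=(200, 64)).astype(float)
true_w = rng.normal(size=64)
y = X @ true_w + rng.normal(scale=0.1, size=200)  # synthetic property label

# Ridge regression: w = (X^T X + lam*I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(64), X.T @ y)

# Predict the property of an unseen candidate molecule.
new_mol = rng.integers(0, 2, size=64).astype(float)
predicted = new_mol @ w
```

Real ADME/PK models use far richer molecular representations and non-linear learners, but the workflow – featurise, train on measured compounds, predict for new ones – is the same shape.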

Cardiologist spells out how AI techniques help the profession

An American cardiologist has neatly explained how deep learning approaches – and supervised machine learning – can help his profession by improving quality and the ability to interpret echocardiograms. Cardiovascular Business magazine reports that Randolph Martin told the American College of Cardiology’s annual meeting in Orlando that the computing power and labelled data sets now available mean that cardiologists’ diagnostic ability is increased, for instance by better calculating the likelihood of co-morbidities. As an example, Martin cited results from Edwards Lifesciences, whose CardioCare program has studied 150,000 echocardiograms, screening for a condition called aortic stenosis. It found that 24 per cent of echoes were of inadequate quality. Martin said that this led to incomplete analysis and a variation between what happened in practice and what ought to have happened.

AI-composed music for Alexa Echo

Amazon is offering Alexa Echo users free music composed by an AI engine called DeepMusic. Algorithmic composers and tools have existed for many years (recent examples of commercial offers include those from Amper and JukeDeck), but Amazon’s is interesting because of the company’s scale and reach. DeepMusic for Alexa isn’t offered as a tool to help composers (for instance by reducing the worry that a musical idea won’t sound right when worked up into a complete composition), but simply as something for people to listen to. Amazon’s website says it uses “a collection of audio samples and a deep recurrent neural network,” and that there is no post-production human editing. Reviews on Amazon are generally positive, though some say they can detect a certain robotic quality.

Amazon AI head predicts fortunes for cloud platform companies

A report in Technology Review includes the views of Swami Sivasubramanian, head of Amazon’s AI division, on the enormous potential for cloud-based AI services – which the technology chief describes as “the operating system of the next era of tech”. One reason for his optimism is the way that he has been able to use his own company’s SageMaker cloud service to build a system that can train itself to identify the bears that visit his house at night and distinguish them from other visitors. He next plans to deploy the model on Amazon’s DeepLens, a wireless video camera with deep learning software embedded, so that the image discrimination runs on the device itself. The selling price for DeepLens is expected to be as low as $250. Running ML-based image discrimination on a device means that huge video streams don’t have to traverse networks to reach cloud servers for analysis: something telecom operators will be pleased about. While using AI for image discrimination isn’t ground-breaking, the fact that it’s so accessible and easy to use, with low-cost products and services, is hugely significant.

Of course, Amazon does not have the market to itself; Google and Microsoft are among the companies striving to develop cloud AI services at scale: in machine learning, the more data you can grab for training models of various types, the better. For many applications, it’s a market that will suit the big players.

China’s first smart hospital featuring AI opened in Guangzhou

According to a report in Yangcheng Evening News, a hospital in Guangzhou features AI that makes recommendations to patients before they arrive at the hospital, and helps doctors prescribe medications. It also helps with making appointments and payments. The hospital expects the system to cut the time needed to deal with patient inquiries by half; the report says the system can diagnose around 90% of the illnesses treated at community clinics.

Guangzhou Second Provincial Central Hospital has been working with Tencent and voice recognition firm iFlytek to integrate AI into its systems, and states that it is now using AI within its systems and processes for triage, image diagnosis, intelligent logistics, patient identification, in-hospital navigation and, crucially, payment. It also has ‘intelligent’ robots to provide patient interaction, and an AI doctor’s assistant that can help with diagnosis based on a user’s responses to a chatbot’s questions.

Mimicking human navigation skills

Google’s DeepMind subsidiary has been thinking about how humans learn about navigating – and how that process might be applied to AI. In a research paper the authors describe a virtual city environment built from Google Street View images and used as a game with which to train an AI to navigate to a specified destination using visual observation only. They report that the AI agent learns principles that are applicable in new cities: the neural network is composed of three modules – one of which is not dependent on the city in which it is trained. While the landmarks vary by city, the AI agent does not need to relearn visual representations or behaviours: it remembers how to navigate using the non-location-dependent neural network module. The DeepMind researchers say their work may help scientists understand how the navigation learning process works in humans.
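The modular split can be caricatured in a few lines (the tiny architecture below is invented for illustration and is not DeepMind's actual network): a city-specific encoder feeds a shared, city-independent policy core, and only the encoder needs replacing when the agent moves to a new city.

```python
import numpy as np

rng = np.random.default_rng(1)

class NavAgent:
    """Toy split mirroring the paper's idea: a city-specific encoder
    plus a city-independent policy core that transfers unchanged."""

    def __init__(self, shared_core=None, obs_dim=16, hidden=8, actions=4):
        # City-specific: maps local visual features to a common space.
        self.encoder = rng.normal(size=(obs_dim, hidden)) * 0.1
        # City-independent: reused as-is when moving to a new city.
        self.core = (shared_core if shared_core is not None
                     else rng.normal(size=(hidden, actions)) * 0.1)

    def act(self, obs):
        """Pick an action (e.g. turn left/right, go forward) greedily."""
        return int(np.argmax(np.tanh(obs @ self.encoder) @ self.core))

london = NavAgent()
# Transfer to a new city: a fresh encoder, but the same frozen core.
paris = NavAgent(shared_core=london.core)
assert paris.core is london.core  # the transferable module is shared
```

Only the encoder would be retrained on the new city's imagery; the navigation behaviour encoded in the shared core carries over, which is the transfer effect the researchers report.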
