Media Forensics: A New Way of Thinking
March 1, 2020
Artificial Intelligence (AI) approaches such as Generative Adversarial Networks, or GANs (which pit AI systems against each other to improve their output), and other forms of Deep Learning (DL) are increasingly used to create or edit media. Mainstream business and consumer applications such as Adobe Photoshop are increasingly incorporating these capabilities.
Simon Sherrington, MD
The capabilities of these tools mean it’s harder than ever to spot a fake image, sound recording or video – and that’s a problem for many organisations. What’s needed is a radical rethink, by those who care, of the way that trust in media content is evaluated, one that draws on well-established protocols and analogous problems. Rather than trying to spot a fake by looking more closely at its content (for instance, the edges of visual elements, composition, relationships between pixels in an image, or the attributes of the file such as file size or compression), one must instead look more closely at its context. There are multiple ways this might be done. One relatively simple idea is, of course, to look for evidence that an image or video contravenes fundamental laws of physics, or contains obvious errors, such as photographs of people with extra hands. This approach raises the level of forensic analysis up from the technical detail layer and brings in context from the world, and from culture.
The military have long been specialists at deception (think camouflage) and the assessment of information of variable trustworthiness. The long-established Admiralty or NATO scale categorises information on two axes: reliability of the source, and plausibility of the information itself. This general principle is likely to assume increasing importance outside the military as it becomes harder to use the technical attributes of the content itself for evaluating its authenticity.
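The two-axis Admiralty grading can be sketched in a few lines of code. The letter and number scales below follow the standard Admiralty/NATO code (source reliability A–F, information credibility 1–6); the graded item in the example is invented for illustration:

```python
from dataclasses import dataclass

# Source reliability (A-F) and information credibility (1-6),
# the two axes of the NATO/Admiralty grading system.
RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}
CREDIBILITY = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

@dataclass
class GradedItem:
    description: str
    reliability: str  # A-F: how trustworthy is the source?
    credibility: int  # 1-6: how plausible is the information itself?

    def rating(self) -> str:
        """Return the combined two-character grade, e.g. 'B2'."""
        if self.reliability not in RELIABILITY or self.credibility not in CREDIBILITY:
            raise ValueError("invalid grade")
        return f"{self.reliability}{self.credibility}"

item = GradedItem("video clip from an unverified account", "F", 3)
print(item.rating())  # F3: source unjudgeable, content possibly true
```

The key point the scheme captures is that the two judgements are independent: a completely unknown source (F) can still carry plausible information (3), and vice versa.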
There are analogies to be drawn from industries other than defence. In telecommunications, the ability to know what type of data is crossing an operator’s network is important for purposes such as preventing cyber attacks, charging customers, and routing traffic efficiently. Deep Packet Inspection (DPI) has proved useful as a technique for identifying data type and content. But as more traffic becomes encrypted, and so harder to inspect this way, and as the volume of data traversing networks has grown, making it impractical to evaluate everything in real time, operators have increasingly turned to techniques such as contextual and behavioural analysis. These take into account the source of the traffic and the patterns of traffic from that source (frequency, time of day, routes taken, data profiles and so on). Operators also work more closely with the organisations that generate the largest volumes of encrypted traffic – the content providers – for instance to share information that characterises the traffic and enables the operator to deliver it more efficiently and reliably (which is what the content providers want, after all).
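A toy illustration of behavioural classification, with entirely made-up rules and thresholds: the point is that only flow metadata is consulted – packet sizes, rates, destination – never the encrypted payload itself.

```python
from dataclasses import dataclass

@dataclass
class FlowStats:
    # Metadata that remains observable even when the payload is encrypted.
    avg_packet_bytes: float
    packets_per_second: float
    destination_port: int

def classify_flow(flow: FlowStats) -> str:
    """Guess the traffic type from behavioural metadata alone.

    The thresholds here are illustrative, not taken from any real
    operator's traffic-classification system.
    """
    if (flow.destination_port == 443
            and flow.avg_packet_bytes > 1000
            and flow.packets_per_second > 100):
        return "likely video streaming"
    if flow.avg_packet_bytes < 200 and flow.packets_per_second < 5:
        return "likely keep-alive / messaging"
    return "unclassified"

print(classify_flow(FlowStats(1300, 400, 443)))  # likely video streaming
```

A production system would learn such rules statistically from labelled flows rather than hard-coding them, but the principle – judging content by its context and behaviour – is the same one the article applies to media forensics.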
Security risks in the Internet of Things – particularly around the huge numbers of simple sensors that are becoming connected – have raised concerns too, for experts such as UK security specialist Paul Galwas, who told me “The sensor is intrinsically insecure; you can’t tell if the data coming from it is genuine. But you can seek to understand the behaviour of sensors and their data in order to judge whether the data is real or not.”
Can AI help detect fake media?
Galwas believes that this contextual analysis is where AI can help efforts to detect manipulated media. “The manipulated media will be too good, at the content level – GANs could be used to optimise media content to be practically undetectable as fakes,” he says. And while it will always be difficult to have bullet-proof provenance for media, Galwas believes “you can use AI – most likely Machine Learning approaches, and networks of inference – at a higher level of abstraction to help work out if the contextualising information makes sense.”
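A minimal sketch of what combining contextual signals “at a higher level of abstraction” might look like. The signals and their likelihood ratios below are invented for illustration – they are not drawn from any real system – but the log-odds combination is a standard way to fuse independent pieces of evidence:

```python
import math

# Hypothetical contextual signals, each with an illustrative likelihood
# ratio: how much more likely the observation is if the media is genuine
# than if it is fabricated. All values are assumptions, not measurements.
LIKELIHOOD_RATIOS = {
    "source_has_publication_history": 4.0,
    "metadata_matches_claimed_location": 3.0,
    "independently_corroborated": 10.0,
    "first_seen_on_anonymous_account": 0.2,  # < 1 counts against authenticity
}

def genuine_probability(observations, prior=0.5):
    """Combine independent contextual signals in log-odds space."""
    log_odds = math.log(prior / (1 - prior))
    for signal in observations:
        log_odds += math.log(LIKELIHOOD_RATIOS[signal])
    odds = math.exp(log_odds)
    return odds / (1 + odds)

p = genuine_probability(["metadata_matches_claimed_location",
                         "first_seen_on_anonymous_account"])
print(f"probability genuine: {p:.2f}")
```

Each signal nudges the estimate up or down; no single check is decisive, which is exactly why this approach is harder to defeat than content-level inspection of a well-made fake.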
There is certainly much research work in AI that will be relevant. In the UK, the Alan Turing Institute is working on a project led by Yarin Gal of the University of Oxford on the corruption of machine learning training data and adversarial learning in the context of image classification, and the Institute’s programme on AI for data wrangling looks at understanding available data, integrating data from multiple sources, identifying what’s missing and extracting features for modelling purposes. Nottingham University has a project funded by DSTL (the Defence Science and Technology Laboratory) examining how machine learning algorithms can be fooled, assessing the limitations of machine learning and developing algorithms that are resistant to adversarial attack.
So while AI makes it easier for more people – beyond Hollywood special effects departments – to create “fake” new media, and to manipulate images, audio and video such that it is indistinguishable from “real” media, AI will be able to help in its detection … though not necessarily in ways that one might assume.
Such are the capabilities of AI to help improve the traditional ways of creating manipulated media that there is the potential to disrupt sectors of commerce – as well as presenting a challenge to news organisations and publishers.
One such challenge is spotting a photograph or video where part of the image has been manipulated. This challenge faces news organisations on a regular basis: sensitivity over “fake news” means responsible publishers are on heightened alert to potential manipulation.