Media Forensics: A New Way of Thinking

March 1, 2020

Artificial Intelligence (AI) approaches such as Generative Adversarial Networks, or GANs (which pit AI systems against each other to improve their output), and other forms of Deep Learning (DL) are increasingly used to create or edit media. Mainstream business and consumer applications such as Adobe Photoshop and Pixelmator Pro have tools that use AI techniques to create a realistic fill when an object is removed from an image, and graphics specialist NVIDIA has demonstrated how new images of celebrities can be created by AI trained on publicly available image banks. In the audio domain, services such as Lyrebird are using machine learning approaches to clone individuals’ voices. In early 2018, the emergence of realistic AI-based pornographic videos featuring celebrity face swaps, known as Deepfakes, caused consternation in publishing, security, legal and government circles.

So while AI makes it easier to create “fake” new media, and to manipulate images, audio and video such that it is indistinguishable from “real” media, AI will be able to help in its detection … though not necessarily in ways that one might assume.

The capabilities of these tools mean it’s harder than ever to spot a fake image, sound recording or video – and that’s a problem for many organisations. What’s needed, by those who care, is a radical rethink of the way trust in media content is evaluated, one that draws on well-established protocols and analogous problems. Rather than trying to spot a fake by looking more closely at its content (for instance, the edges of visual elements, composition, the relationships between pixels in an image, or file attributes such as size and compression), one must instead look more closely at its context. There are multiple ways this might be done. One relatively simple idea is, of course, to look for evidence that an image or video contravenes fundamental laws of physics, or contains obvious errors, such as photographs of people with extra hands. This approach raises the level of forensic analysis above the technical detail layer and brings in context from the world, and from culture.
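By way of illustration, here is a minimal Python sketch of what context-level checks might look like. The MediaContext fields and the specific rules are hypothetical, chosen only to show the shift in emphasis from pixels to provenance:

```python
# A minimal sketch of context-level checks on a media item. The fields
# and rules are hypothetical illustrations of "context over content".
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class MediaContext:
    claimed_capture_time: datetime  # when the image is said to have been taken
    first_seen_online: datetime     # earliest known publication
    source_account_age_days: int    # age of the uploading account

def context_flags(ctx: MediaContext) -> List[str]:
    """Return warnings when the story around the media does not hang
    together, independent of any pixel-level analysis."""
    flags = []
    if ctx.first_seen_online < ctx.claimed_capture_time:
        flags.append("published before it was supposedly captured")
    if ctx.source_account_age_days < 7:
        flags.append("uploaded by a very new account")
    return flags

item = MediaContext(
    claimed_capture_time=datetime(2020, 2, 28, 14, 0, tzinfo=timezone.utc),
    first_seen_online=datetime(2020, 2, 27, 9, 0, tzinfo=timezone.utc),
    source_account_age_days=3,
)
print(context_flags(item))
# ['published before it was supposedly captured', 'uploaded by a very new account']
```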

The military have long been specialists at deception (think camouflage) and at the assessment of information of variable trustworthiness. The long-established Admiralty or NATO scale grades information on two axes: the reliability of the source, and the credibility of the information itself. This general principle is likely to assume increasing importance outside the military as it becomes harder to use the technical attributes of content to evaluate its authenticity.
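The scale itself is simple enough to express directly. The sketch below encodes the standard NATO descriptors; the grade function and example usage are illustrative only:

```python
# The Admiralty (NATO) grading system: source reliability (A-F) is
# rated separately from the credibility of the information (1-6).
SOURCE_RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}
INFO_CREDIBILITY = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

def grade(source: str, info: int) -> str:
    """Combine the two independent axes into the conventional
    two-character rating, e.g. 'B2'."""
    assert source in SOURCE_RELIABILITY and info in INFO_CREDIBILITY
    return f"{source}{info}"

rating = grade("B", 2)
print(rating, "-", SOURCE_RELIABILITY["B"], "/", INFO_CREDIBILITY[2])
# B2 - Usually reliable / Probably true
```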

There are analogies to be drawn from industries other than defence. In telecommunications, knowing what type of data is crossing an operator’s network matters for purposes such as preventing cyber attacks, charging customers and routing traffic efficiently. Deep Packet Inspection (DPI) has proved useful as a technique for identifying data type and content. But as more traffic becomes encrypted, and so harder to inspect this way, and as the volume of data traversing networks grows, making it impractical to evaluate everything in real time, operators have increasingly turned to techniques such as contextual and behavioural analysis. These take into account the source of the traffic and the patterns of traffic from that source (frequency, time of day, routes taken, data profiles and so on). Operators also work more closely with the organisations that generate the largest volumes of encrypted traffic – the content providers – for instance to share information that characterises the traffic and enables the operator to deliver it more efficiently and reliably (which is what the content providers want, after all).
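A toy Python sketch may help show what behavioural classification of opaque traffic involves. The features, thresholds and labels below are invented for illustration; real operators use far richer models:

```python
# A minimal sketch of behavioural traffic classification: instead of
# inspecting (possibly encrypted) payloads, a flow is labelled from
# side-channel features alone.
from statistics import mean

def classify_flow(packet_sizes, inter_arrival_ms):
    """Guess a flow's traffic type from packet-size and timing behaviour."""
    avg_size = mean(packet_sizes)
    avg_gap = mean(inter_arrival_ms)
    if avg_size > 1000 and avg_gap < 50:
        return "bulk transfer / video streaming"
    if avg_size < 300 and avg_gap < 100:
        return "interactive (VoIP, gaming)"
    return "background / unknown"

# Large packets arriving rapidly look like streaming, even though the
# payload itself is opaque to DPI.
print(classify_flow(packet_sizes=[1400, 1380, 1420], inter_arrival_ms=[10, 12, 9]))
```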

Security risks in the Internet of Things – particularly around the huge numbers of simple sensors that are becoming connected – have raised concerns too, for experts such as UK security specialist Paul Galwas, who told me “The sensor is intrinsically insecure; you can’t tell if the data coming from it is genuine. But you can seek to understand the behaviour of sensors and their data in order to judge whether the data is real or not.”
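A simple version of that behavioural check can be sketched in a few lines of Python. The rolling window and the 3-sigma threshold are illustrative assumptions, not a prescription:

```python
# A sketch of the behavioural check described above: the sensor cannot
# prove its readings are genuine, but readings that break its own
# established behaviour can be flagged.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag indices whose reading deviates sharply from the sensor's
    recent behaviour (a rolling z-score test)."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A temperature sensor reporting around 20 degrees C suddenly jumps to 80.
stream = [20.0 + 0.1 * (i % 5) for i in range(30)] + [80.0]
print(flag_anomalies(stream))  # [30]
```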

Can AI help detect fake media?

Galwas believes that this contextual analysis is where AI can help efforts to detect manipulated media. “The manipulated media will be too good, at the content level – GANs could be used to optimise media content to be practically undetectable as fakes,” he says. And while it will always be difficult to have bullet-proof provenance for media, Galwas believes “you can use AI – most likely Machine Learning approaches, and networks of inference – at a higher level of abstraction to help work out if the contextualising information makes sense.”
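One toy way to picture such a “network of inference” is naive Bayesian updating over independent contextual signals. In the Python sketch below, the prior and the likelihood ratios are invented for illustration; a real system would learn them from data:

```python
# A toy sketch of contextual inference: several independent context
# checks are combined into a single belief that an item is manipulated,
# via naive Bayes odds updating.
def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio of each observed signal."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 0.05 / 0.95  # assume 5% of items of this kind are fakes

# Each signal's assumed likelihood ratio P(signal | fake) / P(signal | genuine):
signals = {
    "no corroborating source found": 4.0,
    "uploader account created this week": 3.0,
    "metadata consistent with claimed location": 0.5,  # points toward genuine
}

posterior_odds = update_odds(prior_odds, signals.values())
p_fake = posterior_odds / (1.0 + posterior_odds)
print(f"P(manipulated | context) ~ {p_fake:.2f}")  # ~ 0.24
```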

There is certainly much research work in AI that will be relevant. In the UK, the Alan Turing Institute is working on a project led by Yarin Gal of the University of Oxford on the corruption of machine learning training data and adversarial learning in the context of image classification, and the Institute’s programme on AI for data wrangling looks at understanding available data, integrating data from multiple sources, identifying what’s missing and extracting features for modelling purposes. Nottingham University has a project funded by DSTL (the Defence Science and Technology Laboratory) examining ways to fool machine learning algorithms, assessing the limitations of machine learning and developing algorithms that are impervious to adversarial attack.
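To make the adversarial-attack idea concrete, here is a minimal numpy sketch in the spirit of the fast gradient sign method (FGSM). The toy logistic-regression weights and the input are invented; they merely stand in for a real trained model:

```python
# FGSM in miniature: nudge an input along the sign of the loss gradient
# so that a small perturbation flips the model's decision.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # toy "trained" weights
b = 0.1

def predict(x):
    """P(class = 1) under a toy logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.3, 0.2])   # an input confidently scored as class 1

# For true label 1, the gradient of the cross-entropy loss w.r.t. x is
# (p - 1) * w; stepping along its sign pushes the score the wrong way.
eps = 0.5
x_adv = x + eps * np.sign((predict(x) - 1.0) * w)

print(f"clean score: {predict(x):.3f}   adversarial score: {predict(x_adv):.3f}")
# clean score: 0.825   adversarial score: 0.389
```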

So while AI makes it easier for more people – beyond Hollywood special effects departments – to create “fake” new media, and to manipulate images, audio and video such that it is indistinguishable from “real” media, AI will be able to help in its detection … though not necessarily in ways that one might assume.
