Deepfakes: Manipulated video and audio pose threat to elections


A video shows a high-ranking U.S. legislator declaring his support for an overwhelming increase. You react accordingly, because the video looks like him and sounds like him, so surely it must be him.

The term “fake news” is taking a much more literal turn as new technology is making it easier to manipulate the faces and audio in videos. The videos, called deepfakes, can then be posted to any social media site with no indication they are not the real thing.

Edward Delp, director of the Video and Imaging Processing Laboratory at Purdue University, says deepfakes are a growing danger with the next presidential election fast approaching.


“It’s possible that people are going to use fake videos to make fake news and insert these into a political election,” said Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering. “There’s been some evidence of that in other elections throughout the world already.

“We’ve got our election coming up in 2020 and I suspect people will use these. People believe them and that will be the problem.”

The videos pose a danger of swaying the court of public opinion through social media: almost 70 percent of adults say they use social media, usually daily. YouTube boasts even higher numbers, with more than 90 percent of 18- to 24-year-olds using it.

Delp and doctoral student David Güera have worked for two years on video tampering as part of a larger research project on media forensics. Using machine learning techniques based on artificial intelligence, they have created an algorithm that detects deepfakes.

Late last year, Delp and his team’s algorithm won a Defense Advanced Research Projects Agency (DARPA) contest. DARPA is an agency of the U.S. Department of Defense.

“By analyzing the video, the algorithm can see whether or not the face is consistent with the rest of the information in the video,” Delp said. “If it’s inconsistent, we detect these subtle inconsistencies. They can be as small as a few pixels, they can be coloring inconsistencies, they can be different types of distortion.”

“Our system is data driven, so it can look for everything – it can look into anomalies like blinking, it can look for anomalies in illumination,” Güera said, adding the system will continue to get better at detecting deepfakes as they give it more examples to learn from.
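The idea of a data-driven check for illumination anomalies can be illustrated with a toy sketch. This is not the Purdue team's system; the function name, the fixed face box, and the z-score threshold are all illustrative assumptions. It flags frames whose face-region brightness deviates statistically from the rest of the clip, a crude proxy for the lighting inconsistencies a spliced face can introduce:

```python
import numpy as np

def illumination_anomalies(frames, face_box, z_thresh=3.0):
    """Flag frames where the face region's brightness, relative to the
    whole frame, is a statistical outlier across the clip.

    frames   -- array of shape (num_frames, height, width), grayscale
    face_box -- (x0, y0, x1, y1) region assumed to contain the face
    """
    x0, y0, x1, y1 = face_box
    diffs = []
    for frame in frames:
        face = frame[y0:y1, x0:x1]
        # How much brighter (or darker) the face is than its frame.
        diffs.append(face.mean() - frame.mean())
    diffs = np.asarray(diffs)
    # Standardize and flag frames far from the clip-wide behavior.
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
    return np.flatnonzero(np.abs(z) > z_thresh)

# Synthetic demo: 50 flat frames, with frame 25's "face" brightened
# as a stand-in for a lighting-inconsistent swap.
frames = np.full((50, 64, 64), 100.0)
frames[25, 8:24, 8:24] += 40
print(illumination_anomalies(frames, (8, 8, 24, 24)))  # [25]
```

A real detector like the one Delp and Güera describe learns such cues (blinking, illumination, distortion) from labeled examples rather than relying on one hand-picked statistic, which is why more training data keeps improving it.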

The research was presented in November at the 2018 IEEE International Conference on Advanced Video and Signal Based Surveillance.

Deepfakes also can be used to create fake pornographic videos and images, using the faces of celebrities or even children.

Delp said early deepfakes were easier to spot. The techniques couldn’t recreate eye movement well, resulting in videos of a person that didn’t blink. But advances have made the technology better and more available to people.

News organizations and social media sites have concerns about the future of deepfakes. Delp foresees both having tools like his algorithm in the future to determine what video footage is real and what is a deepfake.

“It’s an arms race,” he said. “Their technology is getting better and better, but I like to think that we’ll be able to keep up.”
