
Fake news and social media posts are such a threat to U.S. security that the Defense Department is launching a project to repel "large-scale, automated disinformation attacks," as the top Republican in Congress blocks efforts to protect the integrity of elections.

The Defense Advanced Research Projects Agency wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, and video and audio clips. If successful, the system may, after four years of trials, expand to detect malicious intent and prevent viral fake news from polarizing society.

"A decade ago, today's state-of-the-art would have registered as sci-fi -- that's how fast the improvements have come," said Andrew Grotto at the Center for International Security and Cooperation at Stanford University. "There is no reason to think the pace of innovation will slow any time soon."

U.S. officials have been working on plans to prevent outside hackers from flooding social channels with false information ahead of the 2020 election. The drive has been hindered by Senate Majority Leader Mitch McConnell's refusal to consider election-security legislation. McConnell said in July that "it's very important that we maintain the integrity and security of our elections," but that proposals from the Democratic-led House are too partisan.

President Donald Trump has repeatedly rejected allegations that dubious content on platforms like Facebook, Twitter and Google aided his election win. Hillary Clinton supporters claimed a flood of fake items may have helped sway the results in 2016.

"The risk factor is social media being abused and used to influence the elections," Syracuse University assistant professor of communications Jennifer Grygiel said. "It's really interesting that [the Defense Advanced Research Projects Agency] is trying to create these detection systems, but 'good luck' is what I say. It won't be anywhere near perfect until there is legislative oversight. There's a huge gap and that's a concern."

False news stories and so-called "deepfakes" are increasingly sophisticated, making them more difficult for data-driven software to spot. AI-generated imagery has advanced rapidly in recent years and is now used in Hollywood, the fashion industry and facial recognition systems. Researchers have shown that the generative adversarial networks behind such imagery can also be used to create fake videos.

To highlight the risk of trusting material online, Oscar-winning filmmaker Jordan Peele last year impersonated former President Barack Obama's voice in a fake video that showed Obama praising the villain of the movie Black Panther and making a slur against Trump.

After the 2016 election, Facebook Chief Executive Officer Mark Zuckerberg played down fake news as a challenge for the world's biggest social media platform. He later signaled that he took the problem seriously, letting users flag content and enabling fact-checkers to label disputed stories. Those labels subsequently prevented disputed stories from being turned into paid advertisements, one key avenue for viral promotion.

"Where things get especially scary is the prospect of malicious actors combining different forms of fake content into a seamless platform," Grotto said. "Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis."

By increasing the number of algorithmic checks, the military research agency hopes it can spot fake news with malicious intent before it goes viral.

"A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies," the agency said in its Aug. 23 concept document for the Semantic Forensics program.

Current detection systems are prone to "semantic errors." An example, according to the agency, is software failing to notice mismatched earrings in a fake video or photo. Other indicators, which may be noticed by humans but missed by machines, include odd-looking teeth, messy hair and unusual backgrounds.

The algorithm testing process will include an ability to scan and evaluate 250,000 news articles and 250,000 social media posts, with 5,000 fake items in the mix. The program has three phases over 48 months, initially covering news and social media, before an analysis begins of technical propaganda. The project will also include weeklong "hackathons."

With the Semantic Forensics program scheduled to run four years, the next election will have come and gone before the system is operational.

A Section on 09/01/2019

Print Headline: Agency calls for software to detect, fight fake news
