Digital media has made information of all kinds more accessible, and with that accessibility, the deliberate spread of disinformation has become a pressing issue.
Known by the relatively new term “fake news,” this false information rose to prominence during the 2016 election, with governments and internet companies scrambling to fight it.
Fortunately, students in the UM system are now working to combat fake news by creating programs that can recognize fabricated photos, videos and audio. Students developed verification software as a part of the Student Innovation Competition at the University of Missouri’s Reynolds Journalism Institute (RJI).
“The easier it gets to spread misinformation and lies, the harder it is for the public to know what to trust,” said RJI Director of Innovation Kat Duncan. “We need to be able to prove that photo, video and audio have been tampered with or not [sic] – which means we need better tools that are accessible to everyone.”
Teams that entered the competition were challenged to create these tools. The contest began in September of 2019, and after preliminary judging in December, five teams were selected as finalists. Four of these teams were composed of computer science, computer engineering and communication studies students from UMKC, with the other team coming from Mizzou.
The final competition took place Saturday, Feb. 8 at the Missouri School of Journalism. Each team had 15 minutes to pitch their idea and 10 minutes to answer questions from a panel of judges and the public.
Two UMKC teams, Defakify and FakeLab, placed in the competition. Team Defakify took second place and was awarded $2,500, and team FakeLab took third and won $1,000. The top prize of $10,000 was awarded to Team Deeptector.io of Mizzou.
Team Defakify’s program focused on detecting fabricated videos and photos using deep learning technology: computer algorithms that essentially allow computers to learn in a way similar to humans. For example, by exposing the system to thousands of photographs and videos, each labeled as either fake or authentic, the computer eventually learns to make the distinction itself. Then, when an unlabeled photo or video is run through the program, it can predict whether the material is authentic.
“Humans learn by observing things, and then after we observe something so many times, we start learning about that thing,” said Gharib Gharibi of team Defakify. “And this is how we literally teach the machines to detect fake videos. By showing it large amounts of data that we already know the result to, the computer will learn on its own to solve this problem.”
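The supervised-learning idea Gharibi describes can be sketched in miniature. Production detectors like Defakify's rely on deep neural networks trained on raw pixels; the stand-in below instead trains a simple logistic-regression classifier on two hypothetical per-image feature scores, with invented training data, purely to illustrate how labeled fake/authentic examples let a model learn the decision rule on its own.

```python
import math

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit weights w and bias b by stochastic gradient descent on log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            # Sigmoid turns the linear score into a probability of "fake".
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y  # gradient of log-loss with respect to the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 ("fake") if the learned score is positive, else 0 ("authentic")."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Invented, labeled training data: low scores stand in for authentic media,
# high scores for fabricated media.
samples = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.2), (0.8, 0.9), (0.9, 0.8), (0.7, 0.9)]
labels = [0, 0, 0, 1, 1, 1]

w, b = train(samples, labels)
print(predict(w, b, (0.15, 0.15)))  # an unseen example resembling the authentic cluster
print(predict(w, b, (0.85, 0.90)))  # an unseen example resembling the fake cluster
```

The key point mirrors the quote: the program is never told the rule directly; it infers one from examples whose answers are already known, then applies that rule to new, unlabeled material.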
While technology like this currently exists and is utilized by some big names in tech such as Google, Gharibi said these programs are largely inaccessible to the public, and their application to the news industry is limited. The RJI competition encouraged the cooperation of computer science and communications students to improve on the technology, make it more accessible and consider its application to journalism.
“Our goal is to simply provide a tool that will hopefully be used by tech companies, media houses and people within the journalism field in the near future,” said Lena Otiankouya, a freshman communications major and member of team FakeLab.
When fake news is allowed to spread unchecked, it plagues the news industry by undermining the truth.
“There has been a rapid increase in disinformation since I started journalism in 2006,” said Jason Rosenbaum, a political correspondent at St. Louis Public Radio and one of the industry judges of this year’s RJI competition. “I think the rise of manipulated photos and videos provides more urgency for this kind of tech to come out and assure credibility.”
If utilized properly, verification technology such as that presented at the RJI competition could help stifle the spread of disinformation and ensure a brighter future for media consumption.