This article is republished here with permission from The Conversation. This content is shared here because the topic may interest Snopes readers; it does not, however, represent the work of Snopes fact-checkers or editors.
An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video fabricated using deep learning, a form of artificial intelligence.
Journalists all over the world could soon be using a tool like this. In a few years, everyone might rely on such a tool to root out fake content in their social media feeds.
Journalists and social media platforms also need to figure out how best to warn people about deepfakes when they are detected. Research has shown that people remember the lie but not the fact that it was a lie. Will the same be true for fake videos? Simply putting “Deepfake” in the title might not be enough to counter some kinds of disinformation.
Deepfakes are here to stay. Managing disinformation and protecting the public will be more challenging than ever as artificial intelligence gets more powerful. We are part of a growing research community that is taking on this threat, in which detection is just the first step.
John Sohrawardi, Doctoral Student in Computing and Informational Sciences, Rochester Institute of Technology and Matthew Wright, Professor of Computing Security, Rochester Institute of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.