Yesterday, Meta / Facebook published a blog post about how their AI bot is handling content related to the recent assassination attempt on former President Trump. The bot is flagging images related to the shooting as needing to be fact-checked, and is responding that there was “No real assassination attempt” on the former president.
Now, we all know that’s incorrect. But the AI thinks otherwise.
What’s Going on?
The big question is: why does the AI think this is fake news? The answer is simultaneously super complex and really simple. AI models like the ones Meta uses are stupidly expensive to train, and adding new information to them after training is ridiculously complex and can cause issues with the model.
So, these models often have outdated data.
In this case, the model that Meta is using is not up to date with the Trump shooting. Since the event isn’t in its training data, the AI doesn’t know that it happened yet.
That’s it. The AI doesn’t read the news and doesn’t know about the assassination attempt. That’s why Meta / Facebook is labeling the Trump shooting as fake news and saying it needs to be fact-checked. This isn’t an instance of Meta censoring Trump news; if anything, it’s the inverse.
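The knowledge-cutoff problem described above can be sketched with a toy example. This is purely illustrative and is not Meta’s actual system; the cutoff date, event names, and responses are all made up:

```python
from datetime import date

# Toy illustration: a "model" whose knowledge is frozen at a training cutoff.
# Events after the cutoff simply do not exist from the model's perspective.
TRAINING_CUTOFF = date(2023, 12, 1)  # hypothetical cutoff date

KNOWN_EVENTS = {
    # Only events that happened before the cutoff made it into "training".
    "2022 World Cup final": date(2022, 12, 18),
}

def model_answer(event: str) -> str:
    if event in KNOWN_EVENTS:
        return f"Yes, {event} happened."
    # The model can't tell "fake" apart from "after my cutoff" --
    # which is how a real event ends up described as not having occurred.
    return f"There was no real {event}."

print(model_answer("2022 World Cup final"))
print(model_answer("July 2024 assassination attempt"))
```

The second query shows the failure mode from the article: the event is real, but because it falls outside the frozen training data, the “model” confidently denies it.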
What Meta Says About it:
Meta published a blog post discussing issues related to the handling of political content on their platforms. The post, authored by Joel Kaplan, VP of Global Policy, identified two specific issues. The first issue concerned Meta AI responses about an attempted assassination of former President Trump. The AI initially provided incorrect information or refused to answer questions about the event due to the limitations of large language models in handling rapidly developing real-time topics. Meta has since updated the responses but acknowledged the delay in doing so.
The second issue involved the circulation of a doctored photo of former President Trump with his fist in the air, which incorrectly made it appear as if Secret Service agents were smiling. A fact-check label was correctly applied to the doctored photo. However, because the doctored image and the original photo were so similar, the fact-check label was mistakenly applied to the real photo as well. Meta’s teams worked to quickly correct this mistake.
Concerns for the Future:
So, while the reasons for the Trump shooting being labeled fake news are innocent, the implications it exposes are concerning. As we as a society add more and more AI into our lives, whether willingly or through services we consume like our Meta feeds, outdated models are going to become more and more of an issue.
What if this had been outdated health advice? That could be potentially deadly. What if it was information about a natural disaster or catastrophe that folks were trying to get away from? What if it was a food recall?
Using models with outdated information in scenarios that might be life-threatening can be, well, life-threatening. And this is something we are going to have to fix and address.
For more information, you can view Meta’s blog post about the incident here.