X wants AI bots to write Community Notes
Community Notes has become one of X’s most visible moderation tools.
Elon’s X (formerly Twitter) has spent the last few years hyping up Community Notes, its crowdsourced fact-checking feature, as the solution to misinformation. But now, it wants to hand that job over to... AI chatbots.
As part of a new pilot, X will let AI tools like its in-house Grok or third-party large language models (LLMs) submit Community Notes via API. The pitch is that machines can scan more posts than humans ever could. As X's VP of product Keith Coleman put it, “humans tend to check the high-visibility stuff. But machines could potentially write notes on far more content.”
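In practice, the flow X describes is simple: a bot drafts a note about a post, submits it through the API, and the note lands in the usual human-rating queue. Here's a minimal sketch of what such a submission could look like; the endpoint URL, payload fields, and helper name are hypothetical placeholders, not X's documented API.

```python
import requests

# Hypothetical endpoint, illustrative only, not X's documented API.
API_URL = "https://api.example.com/community-notes/submit"

def submit_ai_note(post_id: str, note_text: str, sources: list[str], token: str) -> dict:
    """Submit an AI-drafted note; it enters the same rating queue as human notes."""
    payload = {
        "post_id": post_id,       # the X post the note adds context to
        "text": note_text,        # the AI-drafted context
        "sources": sources,       # URLs backing the claim
        "author_type": "ai_bot",  # assumed flag so raters know a bot wrote it
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a note ID plus a "needs rating" status
```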
There’s a kind of logic here. Misinformation spreads fast, and bots could help plug the gaps, especially on a platform where traditional moderation has been dialed down. But there’s also a heavy dose of irony. AI tools like Grok and ChatGPT are famous for hallucinating facts, parroting bias, or just making things up. And now they’re supposed to fix misinformation?

To X’s credit, it isn’t pitching full automation. AI-generated notes will go through the same consensus-based vetting process as human ones: a note is only published when contributors who have historically disagreed with each other rate it helpful. And a recent research paper by the Community Notes team frames the project as a “virtuous loop,” where human feedback trains better AI, which in turn proposes better notes.
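For context on what that vetting involves: X has open-sourced the Community Notes ranking algorithm, which uses matrix factorization over the full rating history to find agreement that “bridges” viewpoints. The toy sketch below captures only the core idea, that a note needs support from more than one viewpoint cluster; the grouping, thresholds, and function name are illustrative assumptions, far simpler than the real model.

```python
from collections import defaultdict

# Toy bridging check: a note earns consensus only if raters from
# different viewpoint clusters independently find it helpful.
# The real system infers clusters via matrix factorization; here
# we assume group labels are already known.

def bridged_helpfulness(ratings, min_per_group=2, threshold=0.6):
    """ratings: list of (viewpoint_group, is_helpful) tuples for one note."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    if len(by_group) < 2:
        return False  # no cross-perspective signal yet
    # Every represented group must independently clear the bar.
    for votes in by_group.values():
        if len(votes) < min_per_group:
            return False
        if sum(votes) / len(votes) < threshold:
            return False
    return True

# Helpful across both clusters: publishable.
print(bridged_helpfulness([("A", True), ("A", True), ("B", True), ("B", True)]))    # True
# Only one cluster likes it: held back.
print(bridged_helpfulness([("A", True), ("A", True), ("B", False), ("B", False)]))  # False
```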
But critics aren’t wrong to raise eyebrows. What happens when bots flood the system with half-baked context? Will unpaid human raters actually keep up? In a post-truth internet, does adding more machine-generated “truth” even help?
It matters because the stakes are large. As of May 2024, more than 500,000 people across 70 countries had contributed notes. In 2023 alone, some 37,000 notes were viewed more than 14 billion times, and another 29,000 notes had racked up over 9 billion views by April 2024. The feature has even inspired copycats at Meta, YouTube, and TikTok.
The pilot’s live now, but it’s early days. If it works, AI could become a quiet partner in the moderation process. If it doesn’t? X will have handed the keys to the fact-checking feature it proudly built to the very technology that caused the mess in the first place.
