Notes on Community Notes
Checking up on fact-checking
Unlike many internet users, I spent my earliest days online gravitating toward debunking sites. Not long after I discovered the internet, I discovered Snopes, which at that point was mainly associated with debunking urban legends. I remember thoroughly enjoying their investigations of a great many early web and pre-Internet hoaxes.
As Snopes grew into checking what was later to be termed “fake news,” the site itself became the target of hoaxers who claimed that the debunkers had an agenda beyond just outing the lies. Most of these claims have also been debunked over the years. Snopes isn’t perfect, but it is extremely helpful if you keep two things in mind when you use it and similar sites:
Don’t just read the headline. Read the article.
Click through to look at the primary sources that back up the fact-checker’s claims.
Drawing on my interest in truth and objectivity, I did some early fact-checking of my own. One of my early writing jobs was for a now-defunct outlet called Examiner.com, where I held the title of Atlanta Conservative Examiner. That may come as a surprise to some of my readers who assume that I’m a Democrat because I’m Never Trump, but I have been a conservative my whole life. The fact that I took conservatism seriously and not just as a synonym for “Republican” is why I oppose Trump.
At that point, a lot of the fake news and conspiracy theories floating around the interwebs involved Barack Obama and the Affordable Care Act. I opposed both, but I still felt that the truth mattered, so I debunked several of these conspiracy theories, such as the claims that Obamacare would usher in beheadings (that one got picked up by PolitiFact, which gave me a shout-out) and that the ACA funded a paramilitary secret police force. Living in Texas, I also did my part to debunk the Jade Helm conspiracy theories that Obama was going to invade the southwestern US (which really made no sense given that he already controlled the federal forces based in that part of the country) and detain citizens in concentration camps set up in abandoned Walmarts.
For those who didn’t follow politics or were too young to remember those times, it’s difficult to believe just how crazy Republicans were even before Trump became a candidate. It does explain a lot about why a party that Obama literally drove insane and that was already not living in the real world was ready to rally behind Trump, though. It’s harder to explain why so many Republicans now embrace the very acts that they were sounding the alarm about in those days. (I don’t limit my criticism to the right, but I was a consumer of right-wing media, and those outlets were chock full of conspiracy theories.)
Fast-forward to the present, and the state of fact-checking has changed a lot. After Trump’s 2024 win, social media platforms closed their fact-checking departments and fired the fact-checkers. It looked like the fake news had not only won the election but had also routed efforts even to balance the fakery with the truth.
What replaced the professional fact-checkers was a band of volunteers. The social media companies began allowing select users to write notes that were appended to certain posts that were either outright lies or bent the truth. Needless to say, when I saw notifications that invited me to apply to these fact-checking programs, I signed up.
The response was not quick. I signed up for the fact-checking programs on both the platform formerly known as Twitter and Facebook/Meta back in the spring. I was approved by Threads a few weeks later, and the Twitter approval quickly followed. I was just added to the Facebook program yesterday.
There is a lot of similarity between the programs. In all cases, Community Notes are written by users and approved by other users. On Twitter at least, approval is based not on majority rule but on an algorithm that looks for agreement between users who have disagreed in the past.
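That bridging idea comes from the open-source Community Notes ranking code: ratings are modeled with matrix factorization, and a note is scored by the intercept left over after viewpoint-aligned agreement is explained away, so a note only surfaces if raters who usually disagree both found it helpful. Here is a heavily simplified sketch of that mechanism in Python. The data, hyperparameters, one-dimensional factor, and omission of a global intercept are all illustrative choices of mine, not the production model.

```python
import random

random.seed(0)

# Hypothetical toy data: 8 raters split into two opposing camps.
# Each tuple is (user, note, rating); 1.0 = "helpful", 0.0 = "not helpful".
# The "bridge" note is rated helpful by both camps; the "partisan" note
# is rated helpful only by the first camp.
ratings = []
for u in range(8):
    ratings.append((u, "bridge", 1.0))
    ratings.append((u, "partisan", 1.0 if u < 4 else 0.0))

users = list(range(8))
notes = ["bridge", "partisan"]

# One latent "viewpoint" factor plus an intercept per user and per note.
# Predicted rating = user_intercept + note_intercept + user_factor * note_factor.
uf = {u: random.uniform(-0.1, 0.1) for u in users}
ub = {u: 0.0 for u in users}
nf = {n: random.uniform(-0.1, 0.1) for n in notes}
nb = {n: 0.0 for n in notes}

lr, reg = 0.05, 0.03  # learning rate and L2 regularization strength
for _ in range(2000):
    for u, n, r in ratings:
        err = (ub[u] + nb[n] + uf[u] * nf[n]) - r
        # Stochastic gradient step; regularization pushes agreement that
        # merely tracks viewpoint into the factor term, not the intercept.
        uf_old = uf[u]
        uf[u] -= lr * (err * nf[n] + reg * uf[u])
        nf[n] -= lr * (err * uf_old + reg * nf[n])
        ub[u] -= lr * (err + reg * ub[u])
        nb[n] -= lr * (err + reg * nb[n])

# Only a note whose intercept -- the helpfulness NOT explained by
# viewpoint alignment -- clears a threshold would be shown publicly.
print({n: round(nb[n], 2) for n in notes})
```

In this toy run, the cross-camp "bridge" note ends up with a higher intercept than the "partisan" note, even though both collected plenty of "helpful" ratings, which is exactly the behavior the one-sided-agreement filter is meant to produce.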
For a new Note writer, the first step on all platforms is to start reviewing Notes written by others. This gives the user an opportunity to see what good and bad Note writing looks like. It also gives the algorithm a chance to look at how the writer responds. Twitter gives writers an “impact” score for both reviewing and writing Notes, with a reviewing score of 10 being required before you can start writing.
Across the board, the reviewing process is similar. Users are asked to choose whether a Note is helpful or not. (Twitter also allows “somewhat helpful” as an option.) For either choice, there is a checklist of items to support your choice that includes things like using high-quality sources, being objective and unbiased, adding important context, and being relevant to the post’s claim. For unhelpful Notes, the options include being biased or argumentative, incorrect information, typos, unreliable sources, or missing key points. Notes can also be rejected if they are based on opinion rather than facts or if a note is simply not needed on the post.
Reviewers can also choose to write a Note to address their own views on the post. Sometimes this leads to interesting back-and-forth exchanges behind the scenes. Much of this back-and-forth is between partisans who disagree over both the underlying truth and whether a Note is needed. But disagree carefully on Twitter because there is a limit of one Note per day, and the back-and-forth Notes count toward this limit.
I have also seen some Notes written by AI on Threads. These Notes go through the same review process as Notes written by humans.
Notes can be written on any post, but the platforms do have a list of posts where a Note might be needed. Users can scroll down this list and add a Note if they choose.
So how well does this new system work? That is still being determined. A Spanish fact-checking site found that only 8.3 percent of proposed Notes on Twitter posts ever became visible. That number rises to 15.2 percent when the Note is linked to a verified fact-checking organization.
Sometimes a joke gets through the approval process as well. The Twitter satirist and youth football coaching legend, @3YearLetterman, was famously defended by Note writers and reviewers who declined to correct him due to his status as a Notary Public. Another attempt to “Note” the Coach was beaten back amid allegations that the Note writer financed his waterbed. It generally helps in life to have a sense of humor.
That doesn’t necessarily mean that the process doesn’t work. We may be seeing the algorithm weeding out bad Notes. After all, a lot of users probably have an ax to grind, and that may (it definitely does) show up in the Notes they write.
In my personal experience, I have written 17 Notes for Twitter and 28 for Threads. Of those, zero have been published on Twitter, and six have been published on Threads. (The two Notes that I wrote for Facebook yesterday are also unpublished as of this writing.)
Yeah, I’d say the system is broken. And Twitter’s system seems more broken than others.
The biggest problem seems to be getting Notes reviewed and published. The standards, as currently set, may be too high, or maybe users need more incentive to review Notes and work down the backlog. A more fundamental problem may be that we can’t even agree on what is true.
Similarly, if misinformation is highly technical or requires specialized knowledge to refute, a layman might not be up to the job. Misinformation about vaccines that cites medical studies would be an example.
It seems to me that if a Note isn’t approved relatively quickly, it will probably never be approved. This creates a problem because the misinformation is still out there, even if the corrective Note languishes. Having said that, I remember posting one Note on Threads on a hot topic, and it was published by the next morning.
Crowd-sourced fact-checking is not a bad idea, but it does need improvement. In particular, there needs to be a way to expedite approval of Community Notes. The platforms also need to attract Note writers with expertise in technical areas. Artificial intelligence, in addition to creating fake news, may turn out to be a big part of the solution.
The fake news may be winning the battle, but we can’t let it win the war. The fight for truth on the internet is vital when it comes to preserving our rights and our Republic. In the meantime, try to be a savvy internet user and question sensational stories before you click “like” or share them, even if they tickle your ears and your preconceived biases.
And if you’re already a savvy internet user who can look at issues objectively, we’re looking for a few good fact-checkers.
Today marks 310 days of Donald Trump’s second term, and the Epstein files still have not been released.
SOCIAL MEDIA ACCOUNTS: You can follow us on social media at several different locations. Official Racket News pages include:
Facebook: https://www.facebook.com/NewsRacket
Twitter/X: https://twitter.com/NewsRacket
Our personal accounts on the platform formerly known as Twitter:
David: https://x.com/captainkudzu
Steve: https://x.com/stevengberman
Jay: https://x.com/curmudgeon_NH
Tell your friends about us!



