There are people who will never be happy with anything this or any other responsible newspaper does, and a comment on last week's column was indicative of why.
(I know I shouldn't read the comments, but sometimes it's educative.)
A frequent combatant in the troll wars noted, "The future of news is media like Elon Musk's X and citizen journalists being fact-checked by their peers (Community Notes)."
On the surface, it would seem such an endeavor would be fruitful. The site formerly known as Twitter states, "In the face of misleading information, we aim to create a better informed world so people can engage in healthy public conversation. We work to mitigate detected threats and also empower customers with credible context on important issues."
One of the ways it says it's doing that is with Community Notes, formerly known as Birdwatch. Sarah Perez of TechCrunch wrote in August, on the occasion of the service announcing the "extra context" notes would be removed for those already experienced with Community Notes, that the system has been refined "so the 'wisdom of the crowds,' so to speak, couldn't be easily gamed by someone or a group of people wanting to spread misinformation. The system is not as simple as having a post or fact check upvoted or downvoted for accuracy. If that were the case, brigades of like-minded contributors could team up to promote their own viewpoints.
"Instead, Community Notes uses a 'bridging' algorithm that attempts to find consensus among people who don't usually share the same views. Not everyone can immediately become a contributor to Community Notes, either. They first have to prove they're capable of writing helpful 'notes' by correctly assessing other notes as either Helpful or Not Helpful, which earns them points."
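The core idea of that "bridging" approach can be sketched in a few lines of code. To be clear, this is a simplified illustration under my own assumptions, not X's actual system: the real algorithm infers viewpoints from rating history using matrix factorization, while this toy version assumes each rater carries an explicit, hypothetical ideological label so the logic is easy to see.

```python
# Toy illustration of a "bridging"-style consensus rule -- NOT X's
# actual Community Notes algorithm, which uses matrix factorization
# over rating histories rather than explicit group labels.

from collections import defaultdict

def bridged_helpfulness(ratings, threshold=0.7):
    """ratings: list of (group, is_helpful) tuples.
    A note surfaces only if EVERY group's helpful share clears the
    threshold -- consensus must bridge the divide, so a brigade of
    like-minded raters can't push a note through on its own."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    if len(by_group) < 2:  # need raters from more than one camp
        return False
    return all(sum(votes) / len(votes) >= threshold
               for votes in by_group.values())

# A note praised only by one side never shows, however lopsided the vote:
one_sided = [("left", True)] * 50 + [("right", False)] * 50
# A note both sides find helpful does:
bridged = [("left", True)] * 8 + [("right", True)] * 9 + [("right", False)]

print(bridged_helpfulness(one_sided))  # False
print(bridged_helpfulness(bridged))    # True
```

Notice the catch built into the design: requiring agreement from both camps is exactly what makes the system resistant to brigading, and exactly what makes it slow to publish anything in a polarized environment.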
Sure, I guess that's one way to get your news if you don't trust "mainstream media," but crowdsourcing doesn't really scream "credibility."
So how does Community Notes work in real life?
Not very well, according to a story by Madison Czopek of the Poynter Institute, a nonprofit newsroom that provides fact-checking, media literacy and journalism ethics training to citizens and journalists. The institute's MediaWise Director Alex Mahadevan told a summit of fact-checkers in June that the experiment thus far had been lackluster, with only about 8.5 percent of the notes showing up for regular users. (And that was before the announcement that notes would be removed for some, soooo ...)
"To determine which Community Notes see the light of day," Czopek wrote, "Twitter relies on ratings from other Community Notes users, who can read notes and evaluate whether those notes cite high-quality sources, are easy to understand, provide important context and more. If notes get enough ratings stating that they are helpful, they are more likely to be displayed publicly." However, for a note to become public, it has to be accepted by a consensus across the political spectrum.
Ideological consensus ... sure, that's easy!
Mahadevan noted that getting a "cross-ideological agreement on truth" is virtually impossible in the hyperpartisan environment we now have. Plus, the site uses past behavior to determine political leanings and waits until a similar number of users on the right and the left weigh in on a note.
How that works with people like me who are reading tweets across the spectrum in the course of their work ... well, I'm sure the algorithms would figure that out, just like they've figured out things like me not eating beef because of my IBS. Heck yeah, keep on showing me ads for Omaha Steaks!
One of the biggest problems is that 60 percent of the most-rated notes aren't public, meaning the tweets most in need of a context note don't get one. "So this algorithm that was supposed to solve the problem of biased fact-checkers basically means there is no fact-checking," Mahadevan said. "So crowd-sourced fact-checking in the style the Community Notes wants to do [is] essentially non-existent."
But maybe being engineered to fail is the point.
Valerie Wirtschafter and Sharanya Majumder found a mixed bag in an analysis of Community Notes published in the Journal of Online Trust and Safety. While they found some bright spots, there weren't noticeable declines in engagement with tweets marked with Community Notes, nor were misleading tweets more likely to be deleted.
Crowdsourcing has good potential, but it also can fail miserably (the Internet detectives of the Boston Marathon bombing come to mind, misidentifying missing student Sunil Tripathi as one of the bombers; Tripathi had died by suicide the month before, but his body wasn't found till after the bombing).
For now I'd much rather put my trust in people who've done the research and been fact-checked by layers of editors. People who are willing to stand up for correct information using their own name, and who provide documentation to back up their research.
Pardon me if I trust people who do this for a living more than some armchair Internet sleuths with no culpability for being wrong.
Assistant Editor Brenda Looper is editor of the Voices page. Email her at email@example.com. Read her blog at blooper0223.wordpress.com.