Meta announced today that they're ditching independent fact-checkers in favor of a more "community-oriented" moderation system, much like the one X, formerly Twitter, adopted a few years ago.
While reading CNN's take on the announcement (and man alive do you have to swim through a lot of political slop to get to the meat of what Meta is doing), I came across this gem:
Meta also plans to adjust its automated systems that scan for policy violations, which it says have resulted in “too much content being censored that shouldn’t have been.” The systems will now be focused on checking only for illegal and “high-severity” violations such as terrorism, child sexual exploitation, drugs, fraud and scams. Other concerns will have to be reported by users before the company evaluates them.
I have to laugh at this, because the number of outright scams I've encountered -- and reported -- on Facebook is legion, and I can't recall any of those posts being removed. The best I've been able to do is comment on the posts to warn that they're scams, which only gives them more engagement and leads the original poster to bury or delete my comments -- or report me for spamming them.
The only pattern I can discern is that most of what I've reported are paid or sponsored posts, and since I'm one of the unwashed idiots freeloading on Facebook, their moderators and bots go where the money is.
Whether they follow through on not censoring content "that shouldn't have been [censored]" remains to be seen. I've had posts of my own that fell into that category.