'Harmful content' surges on Meta's social platforms after content moderation policy changes

In January this year, Meta scrapped its fact-checking operations in the United States and shifted to a "Community Notes" model that leaves the verification of misinformation to ordinary users.
The decision was widely seen as an attempt to appease President Donald Trump's new administration. Conservatives, who form the Trump administration's support base, have long criticized fact-checking on Meta and other platforms as a means of curtailing free speech and censoring right-wing content.
Meta also loosened restrictions on topics such as gender and sexual identity. Under the company's updated community guidelines, users are permitted to describe others as "mentally ill" or call them "abnormal" on the basis of their gender or sexual orientation.
The survey covered around 7,000 active users of Meta's Instagram, Facebook and Threads and was conducted after the company's series of policy changes. It found "stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship."
One in six respondents reported experiencing some form of harassment or abuse based on gender or sexual orientation on Meta's platforms, while 66 percent said they had seen harmful content such as hate speech or violent material.
Ninety-two percent of respondents said they were concerned about the increase in harmful content and felt "less protected" than before from being "exposed to or targeted by" such posts.
A further 77 percent said they felt "less safe" than before when freely expressing their thoughts and opinions.
The survey report, published by digital and human rights groups including UltraViolet, GLAAD and All Out, said the policy changes "represent a dramatic reversal of the content moderation standards Meta had built over nearly a decade."
Meta has declined to comment on the survey. [Translation edited by AFPBB News]
[AFP-Jiji] (2025/06/17-18:01)
Rise in 'harmful content' since Meta policy rollbacks: survey

Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the United States and eased moderation policies, a survey showed Monday.
The survey of around 7,000 active users on Instagram, Facebook and Threads comes after the Menlo Park-based company ditched US fact-checkers in January and turned over the task of debunking falsehoods to ordinary users under a model known as Community Notes, popularized by X.
The decision was widely seen as an attempt to appease President Donald Trump's new administration, whose conservative support base has long complained that fact-checking on tech platforms was a way to curtail free speech and censor right-wing content.
Meta also rolled back restrictions around topics such as gender and sexual identity. The tech giant's updated community guidelines said its platforms would permit users to accuse people of mental illness or abnormality based on their gender or sexual orientation.
These policy shifts signified a dramatic reversal of content moderation standards the company had built over nearly a decade, said the survey published by digital and human rights groups including UltraViolet, GLAAD, and All Out.
"Among our survey population of approximately 7,000 active users, we found stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship," the report said.
One in six respondents in the survey reported being the victim of some form of gender-based or sexual violence on Meta platforms, while 66 percent said they had witnessed harmful content such as hateful or violent material.
Ninety-two percent of surveyed users said they were concerned about increasing harmful content and felt less protected from being exposed to or targeted by such material on Meta's platforms.
Seventy-seven percent of respondents described feeling less safe expressing themselves freely.
The company declined to comment on the survey.
In its most recent quarterly report, published in May, Meta insisted that the changes in January had left a minimal impact.
"Following the changes announced in January we've cut enforcement mistakes in the US in half, while during that same time period the low prevalence of violating content on the platform remained largely unchanged for most problem areas," the report said.
But the groups behind the survey insisted that the report did not reflect users' experiences of targeted hate and harassment.
"Social media is not just a place we 'go' anymore. It's a place we live, work, and play. That's why it's more crucial than ever to ensure that all people can safely access these spaces and freely express themselves without fear of retribution," Jenna Sherman, campaign director at UltraViolet, told AFP.
"But after helping to set a standard for content moderation online for nearly a decade, (chief executive) Mark Zuckerberg decided to move his company backwards, abandoning vulnerable users in the process."
"Facebook and Instagram already had an equity problem. Now, it's out of control," Sherman added.
The groups urged Meta to hire an independent third party to formally analyze changes in harmful content resulting from the January policy changes, and to swiftly reinstate the content moderation standards that were previously in place.
The International Fact-Checking Network has previously warned of "devastating consequences" if Meta broadens its policy shift related to fact-checkers beyond US borders to the company's programs covering more than 100 countries.
AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.
