Meta Cracks Down on Unoriginal Facebook Content, Targets Spam and Impersonators
July 14, 2025
Meta has announced new steps to limit the spread and monetization of unoriginal content on Facebook, reinforcing its commitment to platform authenticity. The company says it has already removed around 10 million accounts impersonating major creators and taken action against 500,000 accounts exhibiting spam-like behavior or fake engagement. These enforcement efforts include demoting suspicious comments, limiting the reach of problematic content, and restricting monetization access.
This move aligns with recent changes at YouTube, which also updated its policy on repetitive and mass-produced videos—many of which are now easier to generate using generative AI tools. Meta clarifies that it won’t penalize users who engage with content in creative ways, such as making reaction videos or joining trends. Instead, the crackdown is aimed squarely at accounts that repeatedly repost other users’ original content, often to profit or deceive audiences.
Accounts found abusing the system will lose access to monetization programs and experience decreased content distribution. When Meta detects duplicate videos, it will reduce their visibility and is testing a new feature that links viewers to the original version of the content to ensure credit goes to the rightful creator.
These changes come amid rising frustration among Meta users over automated enforcement errors on Facebook and Instagram. A petition with nearly 30,000 signatures criticizes Meta's lack of human support and its mishandling of account bans, which have particularly affected small businesses and creators. Though Meta hasn't publicly addressed these concerns, the company continues to double down on its integrity efforts.
Meta’s update also subtly acknowledges the growing wave of AI-generated “slop” content—low-quality, mass-produced media assembled from text-to-video tools. While not explicitly banning AI content, Meta encourages creators to avoid “stitching together clips” or relying on watermarks alone when using others’ material. Instead, the platform urges a focus on authentic storytelling and warns against short, low-value videos that lack originality.
The company reiterates that reused content from other platforms remains prohibited, and it advises creators to ensure that video captions are high quality, possibly signaling tighter scrutiny on auto-generated captions from AI tools.
These changes will roll out gradually in the coming months, giving creators time to adapt. Facebook’s Professional Dashboard now includes post-level insights to help users understand why their content may not be reaching audiences. Additionally, creators will be able to see whether they are at risk of penalties in the Support section of their professional profiles.
Meta's broader transparency efforts continue through its quarterly integrity reports. For the first quarter of 2025, Meta estimated that roughly 3% of Facebook's global monthly active users were fake accounts, and it reported taking action on 1 billion such accounts between January and March alone.
As the platform evolves, Meta has also shifted its misinformation policy, stepping away from in-house fact-checking in favor of Community Notes in the U.S.—a feature inspired by X (formerly Twitter), allowing users to collaboratively assess the accuracy of posts.
With these updates, Meta is signaling that originality and trust will be central to content visibility and monetization on Facebook going forward.