John Doe was around 13 years old when he was tricked, blackmailed and threatened by sex traffickers on Snapchat into taking nude photos and videos of himself. Two years later, he learned from his high school classmates that his images were being shared as child sex abuse material on Twitter.
The social networking platform, later renamed X, initially dismissed the family's reports, responding: "We've reviewed the content, and didn't find a violation of our policies, so no action will be taken at this time."
John Doe's family filed multiple reports with Twitter, their local police department and eventually with the US Department of Homeland Security before Twitter removed the sexually graphic material. While it was live, the illegal content racked up over 167,000 views on Twitter, and John Doe experienced "harassment, vicious bullying, and became suicidal," according to the family's initial complaint in its 2021 lawsuit against Twitter.
The creation and distribution of child sex abuse material, known as CSAM, is one of the most extreme dangers that children and teenagers face when using the internet. There's always been a spectrum of potential online harms, going back to the early days of MySpace.
Parents have long been concerned about the lasting psychological effects of screen addiction. Many young people say social media harms their mental health, fostering isolation, anxiety and depression. Researchers have found that teens who spend time scrolling through curated and edited content can develop unrealistic body images and eating disorders. Others may turn to suicide and self-harm or become vulnerable to predators.
Few issues in today's digital age spark as much fiery debate as online safety, regulation and policy for kids and teens. The dilemma is determining who has the primary responsibility and role in safeguarding children: social media companies, the government, parents, educators — or some combination.
At the core of the debate is a belief shared by many: The internet should be safe for its youngest users. But nobody can agree on how, exactly, to make that a reality.
Current programs have loopholes, and proposed legislation and initiatives — especially age-verification laws — are controversial. While policymakers and tech leaders endlessly debate the merits and pitfalls of each potential solution, younger generations, their parents and educators are forced to navigate an ever-changing terrain rife with landmines.
Social media's Big Tobacco moment
One mechanism for change is through the courts. On March 24, a New Mexico jury found Meta liable for misleading users about safety and allowing child exploitation on its platforms, ordering the company to pay $375 million to the state in penalties. The next day, a Los Angeles jury found Google and Meta liable for creating intentionally addictive platforms to hook young users, ordering the two companies to pay a combined $3 million in compensatory damages.
These landmark verdicts mark a significant shift in holding social media platforms accountable for exploiting and endangering young users. Platforms have previously avoided this because they aren't legally liable for the content on their sites, thanks to a provision of the 1996 Communications Decency Act known as Section 230. The verdicts could set a precedent for thousands of cases and lawsuits that challenge social media giants for failing to safeguard teens and kids. A similar social media addiction lawsuit is now moving forward in Massachusetts.
"The jury found that Meta's dishonesty and design features have put kids at risk and the company has a responsibility to mitigate the harm it has caused," New Mexico Attorney General Raúl Torrez told CNET.
Meta CEO Mark Zuckerberg leaves Los Angeles Superior Court after testifying in a landmark trial on social media addiction.
With tech giants in the spotlight, child welfare advocates are calling it Big Tech's "Big Tobacco" moment, inspired by lawsuits that used personal liability legal tactics to show how powerful corporations were knowingly harming their users. In the '90s, tobacco companies were proven to be covering up the health dangers of smoking. Now, Big Tech has to prove it isn't promoting abuse or fueling a mental health crisis.
Meta told CNET in each case that it disagreed with the rulings. Google said it plans to appeal.
Social media platforms generate billions of dollars from their accounts through highly personalized advertising and in-app commerce, along with data harvesting and collection. It's in their interest to keep users scrolling.
The financial impact of these early rulings is small compared with the platforms' massive revenue streams, and it's unclear if ongoing social media lawsuits will ultimately affect Google's or Meta's bottom line. The question is what amount (or degree) of legal and financial pressure will force these companies to implement fundamental changes.
Bans and the chaos of KOSA
Among the many proposed laws and rules around kids and the internet, the most prominent is the Kids Online Safety Act, or KOSA.
The federal bill has gone through many iterations — and garnered a lot of controversy — since it was introduced in 2022 by Sens. Marsha Blackburn, a Tennessee Republican, and Richard Blumenthal, a Connecticut Democrat. The latest versions of the dual Senate and House of Representatives bills were reintroduced in 2025 and are still in committee.
The 2025 Senate version says tech companies have a "duty of care," which obligates online platforms to "exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate" specific harms outlined in the bill, generally concerning mental health and eating disorders, compulsive social media use, violence and bullying, financial harms from deceptive or unfair practices, and sexual abuse and exploitation.
What makes this language significant is that it allows the government (through agencies like the Federal Trade Commission) to bring lawsuits and impose penalties on tech platforms if they don't meet this standard of "reasonable care." Theoretically, these legal repercussions would help keep tech companies in line.
The bill has also exposed tensions in competing views of what's best for minors, and for society at large.
Following the initial introduction of KOSA, the ACLU, the Electronic Frontier Foundation, GLAAD and over 100 other civil society and LGBTQ+ groups signed an open letter opposing the bill and expressing serious concerns around privacy and free speech.
They wrote that the bill's language was "effectively forcing providers to use invasive filtering and monitoring tools; jeopardizing private, secure communications; incentivizing increased data collection on children and adults; and undermining the delivery of essential services to minors by public agencies like schools."
Australia is one country that implemented a ban on social media for teens under 16.
Outside the US, other countries have imposed stringent legal measures. Australia, Spain and Indonesia have all partially or entirely banned social media for teens. But that also comes with risk. Critics have pointed out that internet access and app bans have been weaponized by authoritarian regimes to censor critical speech. In 2020, the Egyptian government blocked news websites and interrupted internet services amid anti-government protests.
Others worry about removing teens' access to social media entirely. The internet, for all its faults, still holds value for teens, particularly in helping reduce social isolation for marginalized groups.
"This blanket ban on social media will deprive tens of millions of young people in Indonesia of meaningful channels for communicating with others, accessing information, developing creativity and expressing themselves," Usman Hamid, Amnesty International's Indonesia executive director, said at the time of the 2026 ban.
Social media companies like Meta have fought against these international bans.
"Any youth safety legislation must put parents in the driver's seat — blanket social media bans don't do that," a Meta spokesperson told CNET. "Instead, they isolate teens from online communities and information, create inconsistent protections across the many apps they use, and they push teens to less regulated corners of the internet that lack age-appropriate guardrails."
Those age-appropriate guardrails often rely on age verification.
Age verification: Friend or foe?
Age verification is one of the most popular solutions proposed by tech companies. Proponents of age verification say social media platforms need to be able to identify younger users in order to protect them. Anyone can lie about their birthday when setting up a new Instagram account.
The European Commission is building an age verification app. States like California, Utah, Louisiana and Ohio have passed laws requiring platforms to verify users' ages. Discord and Roblox announced this year that they're rolling out age verification processes for their gaming platforms. YouTube uses machine-learning-based AI technology to identify young users, which it says disables personalized ads, turns on digital well-being settings and adds safeguards around recommended content.
In the next phase of the New Mexico trial, Torrez and his agency are pushing for the court to order Meta to implement effective age verification. He told CNET that's partly because the company has chosen "to blind itself to users' actual age."
"Meta admits that its products are unsuitable for kids under the age of 13 but has allowed millions of people 12 and under to create accounts," Torrez said.
But age verification systems are unreliable, not least because both kids and adults have found ways to circumvent them. Tech companies don't agree among themselves on who should be responsible for checking users' ages. Meta has said, for example, that it shouldn't be app developers like them; rather, it should be done by parents when setting up their child's device, and enforced by Apple's and Google's app stores.
No matter who does it, age verification technology comes with significant data privacy risks, requiring users to submit personal information, often in the form of driver's licenses or biometrics.
While KOSA doesn't explicitly require platforms to implement age verification processes, tech platforms are likely to do it anyway if they have to comply with the law, said Jenna Leventoff, senior policy counsel at the ACLU. Because most kids don't have IDs, the burden falls on adults — the majority of social media users.
Groups like the EFF point out that age verification systems can be risky and discriminatory. Disproportionately, people who don't have photo IDs belong to historically marginalized groups, including Black and Hispanic people and people with disabilities. Some users, such as older adults, might not be comfortable linking their ID with their account or providing a face scan, which could inadvertently kick adults out of those spaces.
Trusting tech companies with sensitive documents or biometric data can be risky. Disadvantaged and oppressed groups, like transgender people and undocumented immigrants, may not want to share their identity with tech companies, since platforms can store personal data or hand it over to law enforcement.
Biometrics, like face scans, are one way to do age verification.
Facial recognition tech has a well-documented history of discriminating against darker-skinned people, women and transgender people. Newer, AI-powered versions of these systems aren't much better.
Even when age verification is introduced, age-specific experiences may not work as intended. Instagram, for example, has its teen accounts program, which gives teens stronger default privacy settings. But researchers reviewed 47 of the teen account features, and only eight worked as advertised; two out of three safety tools were "ineffective or nonexistent," according to a 2025 report from Meta whistleblower Arturo Béjar and other academic and civil society groups.
"We're often seeing that these companies might have a great policy on paper, but in practice, it's not enforced very well," Haley McNamara, executive director of the National Center on Sexual Exploitation, told me.
A now-fixed lapse let adults message teens they didn't know — a major red flag for grooming — and turn on disappearing messages, which erase evidence of communication. The hidden words feature, a guardrail meant to flag cyberbullying, was "significantly ineffective," the report found. Meta rebutted these claims at the time, with spokesperson Andy Stone calling them "dangerously misleading."
The death of anonymity online: Surveillance and censorship worries
Andy Yen, CEO of the Swiss privacy-forward tech company Proton, put it bluntly: Age verification measures would be "the death of anonymity online." Proton has a horse in this race because it sells VPN subscriptions. People can use VPNs to bypass age gates by connecting to servers in countries where there are no restrictions. But if mass age verification procedures are rolled out, many people may turn to a free VPN, which CNET's VPN experts say is a security nightmare.
The First Amendment protects your right to speak anonymously. Especially when speaking out on controversial issues, like politics, people may not want their identity linked with their speech, for fear of retribution, Leventoff said, from tech companies, law enforcement agencies like Immigration and Customs Enforcement and other entities.
If your identity is linked to your online activity, that could dissuade you from speaking out. Age verification measures can have a "chilling effect" on free speech, Leventoff said. Linking your identity with your online speech makes it easier for tech companies to surveil and censor speech. That affects everyone, including kids and teens.
"There's this misconception in the world that the First Amendment doesn't apply to kids if the goal is to keep them safe, and that's just not true," Leventoff said. "You don't earn your First Amendment rights on a certain birthday. You're born with them."
Age verification measures could infringe on the rights of social media users.
Laws and rules around content restrictions, under the guise of protecting kids, could encourage platforms to take down more speech. If the government passes a law that makes it illegal to show kids content about smoking cigarettes, platforms might remove too much content, including posts with educational material about the dangers of smoking.
The moderation tools social media platforms use can't reliably distinguish between informative content and content that violates their policies. And tech companies are increasingly using AI to do that moderation, making it even easier for protected speech to get swept away by error-prone tech.
Supporters of KOSA, like McNamara, say the latest version of the bill "has nothing to do with speech" and is crafted to avoid giving tech companies the ability to conduct mass surveillance and censor speech. But the core of the First Amendment issues with these bills is that they require the government to make rules for tech companies about what kind of information or speech is acceptable for kids.
"Everyone wants to protect kids," Leventoff said. "But the government deciding what speech is good, what speech is bad, isn't the way to do it."
No straight answers
Every solution for teen safety online attempts to answer the question: What is the most effective way to hold tech companies accountable that will actually result in a safer social media experience? Experts can't agree.
The lack of a coherent plan is troubling, especially because we face many of the same issues with teens and generative AI. A majority of teens (64%) use AI chatbots, Pew Research Center reports. We know these tools can hallucinate and, like social media, give us false information. AI safeguards don't always work as intended. Chatbots can be harmful for people with mental health issues, leading to tragic, fatal results in rare cases. And AI can be addictive in some extreme cases, with "AI psychosis" acting as a kind of chatbot echo chamber.
If we can't figure out how to keep kids safe on social media, how will we keep them safe with newer, less secure AI tools?
There's no silver bullet for these issues. Instead, we're forced into a debate where it seems we have to pick the lesser evil. But the advantage of having child safety discussed in so many different ways — in the courts, in Congress, with tech companies, parents and teachers — is that a combination of solutions will likely emerge.
But tech companies, with all their money, knowledge and power, haven't come up with a better solution to keep kids safe online without infringing on the rights of the rest of us. Maybe if they wanted to, they could.
"Some of the best minds work at these technology companies," McNamara said. "If they spent the time and energy on actually building their platforms to be safe for kids, then we wouldn't have to have this conversation."

