This chapter is part of my book, The Child Safety Protocol. It is published here, free and in full, because the companies it names are updating their practices faster than print can follow.
If you arrived here independently, the context you need is this: the book identifies the five-stage grooming sequence predators use against children (Selection, Access, Trust, Isolation, Control) and builds a layered protection model around that sequence. Chapter 11 covers the online spaces where risk concentrates. This chapter asks a question Chapter 11 deliberately left open: what happens when the space itself is the predator? The rest of the book is available at [link coming soon].
If you’re reading this because you scanned the QR code at the end of Chapter 11, you already have the full context. Keep going.
This chapter is updated as new court filings, internal documents, and regulatory actions surface. Last updated: Saturday, April 11, 2026.
A Different Predator Class
This book has spent its first eleven chapters examining a specific sequence. Selection, access, trust, isolation, control. That sequence describes how an individual predator identifies a child, gains proximity, builds dependency, removes protective relationships, and establishes dominance over the victim. It is the operational framework that every other chapter in this book builds on.
This chapter applies that same sequence to a different kind of actor: the corporation.
Not as a metaphor. As a structural analysis of what the largest technology companies on Earth are doing to children, through documented business practices, on hardware their parents purchased for them.
The companies this chapter addresses are not staffed by people who set out to harm children. That is not the claim. The claim is narrower and more verifiable: their business models require the extraction of behavioral data and the maximization of engagement from every user, and when the user is a child, those requirements produce outcomes that parallel the grooming sequence this book has already described. The intent is profit. The mechanism is manipulation. And the target cannot consent.
The Sequence at Scale
Selection. An individual predator has to find a target. These companies don’t need to. Their selection is total. Every child who touches a device running their software, opens their app, or visits their platform enters the system by default. There is no opt-in. The selection happened at the factory when the operating system was installed, at the app store when the download completed, or at the school when the Chromebook was distributed. For most children, the relationship with these companies began before they could read the terms of service that governed it.
Microsoft holds approximately 72% of the global desktop operating system market. Meta’s internal documents from as recently as 2024 described acquiring new teen users as “mission critical” to Instagram’s success.¹ TikTok’s own research determined that after watching roughly 260 videos, which can take as little as 35 minutes given the brevity of the content, a user is likely to develop habitual use of the platform.² These are not companies that happen to have young users. These are companies whose growth strategies depend on capturing young users during the developmental window when habits form most easily and resistance to persuasion is lowest.
Access. The depth of access these companies have to a child’s inner life exceeds what any individual predator could achieve. An individual predator might see a child for a few hours a week. These platforms interact with them for hours a day, every day, across years.
An operating system like Windows sits below every application. It observes every process, touches every file, and in the case of Microsoft’s Recall feature, can capture a screenshot of everything on screen every few seconds and store it in a searchable local database. Every email read, every conversation had, every document opened, every search entered. The OS doesn’t need to be invited in. It is the room.
The platforms operate one layer up but with comparable depth. Instagram, TikTok, YouTube, and Snapchat don’t just know what a child posts. They know what the child looked at and for how long. They know what made them pause, what made them scroll past, what made them come back. They know the emotional arc of a child’s afternoon mapped in millisecond-resolution engagement data. A leaked 2017 Facebook memo indicated that the company could identify when teenagers felt insecure, worthless, stressed, or in need of reassurance, and could use that information for advertising targeting.³ That capability has only become more refined since.
Trust. The platform presents itself as a tool, a utility, a social space where kids connect with friends. Parents buy devices framed as educational instruments. Schools distribute Chromebooks as learning infrastructure. The trust relationship is established not through personal charm, which is how an individual groomer operates, but through institutional framing. The platform is presented as safe, normal, and necessary by every authority figure in the child’s life. Parents use it. Teachers require it. Peers socialize on it. Opting out means social isolation.
The companies reinforce this trust by building parental control features into the same platforms that collect behavioral data on children. Meta launched Instagram Teen Accounts with built-in protections and expanded them to Facebook and Messenger.²¹ These features create the impression that the company is aligned with the family’s interests. The controls exist to make parents comfortable enough to grant continued access. They do not change the underlying business model, which requires that the child’s attention, behavior, and emotional states be harvested continuously.
Isolation. An individual predator isolates a child from protective relationships. These platforms achieve something structurally similar: they make it difficult for the child to function without the platform.
The architectural entanglement in Microsoft’s telemetry system, where security updates cannot be cleanly separated from behavioral data collection, is one form of this. You cannot protect your device without also participating in the data collection. The platform makes itself necessary and then leverages that necessity.
On the social platforms, the dynamic is different but produces a comparable outcome. A teenager who leaves Instagram or Snapchat doesn’t just lose an app. They lose their social infrastructure. Group chats, event invitations, the connective tissue of their peer relationships: all of it lives on platforms they cannot leave without losing access to the relationships themselves. Internal documents cited in litigation show that Meta researchers described Instagram as functioning like a “drug,” while Snapchat executives acknowledged that habitual users had, in their words, “no room for anything else.”⁹ The platform has positioned itself between the child and their social world. That positioning is a design outcome, not an accident of adoption.
Control. This is where the corporate pattern diverges from the individual predator in timeline but converges in effect. The control is not immediate. It is developmental. It is longitudinal. And by the time it becomes fully operational, the child has aged into an adult whose decision-making environment has been shaped, invisibly, for over a decade.
A company that has continuous behavioral telemetry on a child from age six through eighteen doesn’t just have data. It has a developmental profile. It has a record of how that child’s attention patterns evolved, what they searched for when they were confused or scared, what they engaged with when they were lonely, how their interests and emotional patterns shifted across years. No psychologist, no parent, no teacher has ever had access to that kind of longitudinal behavioral data at that resolution.
By one estimate, ad tech companies had accumulated at least 72 million data points on a child by the age of 13, a figure the researchers themselves described as a substantial underestimate because it excluded trackers from Facebook, YouTube, and other major platforms.¹⁶ The UN Committee on the Rights of the Child has urged member states to prohibit the commercial profiling of children, finding that the practice risks violating children’s rights and may cause significant harm.¹⁷ The urging has not produced prohibition. The profiling continues.
That profile does not expire when the child turns eighteen. It becomes the foundation for ongoing influence calibrated to a depth of knowledge about the individual that the individual themselves does not consciously possess. What ad to surface. What recommendation to make. What emotional state to engage. What friction to remove at exactly the moment the person is most susceptible. The system does not need to understand why it works. It needs only to know that it works, and the longitudinal profile is what tells it so.
This is not a theoretical concern. It is the business model, operating as designed.
What the Internal Documents Show
The evidence that these companies understood the effects of their products on children, and continued without meaningful course correction, is not speculative. It is documented in their own internal communications, often by employees whose recommendations were overridden by growth priorities.
Meta. Unsealed court filings allege that Meta employees proposed multiple ways to mitigate harms to teen users and were repeatedly blocked by executives who feared that safety features would reduce teen engagement or slow user growth.⁴ A Meta researcher who worked in child safety roles from 2017 to 2024 wrote in a 2020 internal email that sexually inappropriate messages were being sent to approximately 500,000 victims per day on the platform in English-language markets alone, adding that the true figure was expected to be higher.⁵ The company did not begin rolling out privacy-by-default features for minors until 2024, seven years after it had internally identified the risks.⁶
In January 2025, the Wall Street Journal reported that Meta’s AI chatbot had engaged in adult sexual role play with Instagram accounts registered to users as young as 13, following a directive to make the chatbot less restrictive, which had resulted in a carveout for romantic role play.⁷
Internal studies had found that 13.5% of teen girls said Instagram made thoughts of suicide worse and 17% said it worsened eating disorders.⁸ The company had the data. It had the internal recommendations. It had proposed solutions from its own safety teams. It prioritized growth.
TikTok. TikTok determined through its own internal research how quickly its algorithm creates dependency: 260 videos, often consumable in under 35 minutes, after which habitual use becomes likely. Separate internal documents showed the company was aware that its features produced what employees described as a constant and irresistible urge to keep opening the app.² An internal report noted that minors lack the executive function necessary to control their screen time.¹⁰ The company understood, in its own language, that its product engaged a neurological vulnerability specific to developing brains.
Six families have filed lawsuits alleging TikTok’s design directed their children, aged 11 to 17, toward dangerous content including choking challenges. All six children died.¹¹ Internal communications quoted in legal filings reiterate that the company’s own assessment was that users lack the executive control function needed to manage their engagement with the platform.
Snapchat. New Mexico’s Attorney General described Snapchat as one of the most harmful purveyors of child sexual abuse material and harm-inducing features on children’s devices, alleging the platform was designed to attract and addict young people while facilitating the distribution of illicit sexual material involving children.¹² The platform’s disappearing-message architecture, the feature that defines its identity, is the same feature that makes it operationally useful for predators. Messages vanish. Evidence vanishes.
An investigation by Jonathan Haidt and colleagues examined multiple court cases involving severe or fatal harm allegedly facilitated by Snapchat’s features. From 2022 through 2025, more than 600 lawsuits specifically named Snap Inc. as a defendant in the multidistrict litigation.¹³ Investigators who searched the deep web identified more than 10,000 records in a single year related to Snapchat and child sexual abuse material, including material involving children under 13.¹³ Snapchat’s Quick Add feature, which suggests new connections based on social proximity, has been documented as a mechanism through which predators identify and contact minors. Snap Map broadcasts a child’s real-time location to anyone on their friend list.
Google/YouTube. Google settled a class action lawsuit in 2025, agreeing to pay $30 million for allegedly continuing to collect personal data from children under 13 on YouTube without parental consent, even after a prior $170 million FTC settlement in 2019 for the same conduct.¹⁴ The pattern is worth noting: the company was found in violation, paid a penalty, agreed to change its practices, and then continued the same practices until the next enforcement action.
Independent research found that advertising cookies associated with behavioral tracking were being set on browsers every time they visited YouTube channels labeled as “made for kids,” even on brand-new browser profiles with no prior history.¹⁵ Google’s stated policies claim to disable ad personalization for minors. The technical implementation, as documented by independent researchers, did not match the stated policy.
The Developmental Profiling Problem
The dimension of this problem that receives the least attention is what happens when behavioral profiling is applied to a developing brain over the course of childhood.
An adult who is profiled by an advertising network has a relatively stable identity. Their preferences, vulnerabilities, and decision-making patterns are largely formed. The profile captures who they are.
A child who is profiled from age six is not a stable target. They are a moving target whose movement is being recorded. The profile captures not who they are but who they are becoming, and it captures the inputs that shaped that becoming. Every search query a ten-year-old makes about a topic they’re confused by. Every late-night scroll through content that speaks to an anxiety they haven’t yet named. Every pattern of engagement that reveals an emerging interest, insecurity, identity question, or emotional need.
Research on the effects of advertising exposure on children has found that frequent exposure correlates with increased materialism and decreased emphasis on intrinsic motivators such as relationships and personal achievement, and that the development of consumer-oriented identities at an early age is associated with compulsive consumption behaviors in adolescence and adulthood.¹⁸ That research was conducted in the context of traditional advertising. The behavioral profiling infrastructure that exists now is orders of magnitude more precise, more persistent, and more personally calibrated than what those researchers were studying.
The child does not know the profile exists. The parent does not know the profile exists. The profile is not a file sitting in a folder. It is a distributed representation across multiple data systems, enriched continuously, used to train recommendation models that determine what the child sees next. The child experiences this as “the algorithm knows me.” That sentence is worth sitting with.
Research has found that exposure to persuasive content activates the brain’s reward system, increasing dopamine release and reinforcing desires for immediate gratification while affecting the development of executive functions like impulse control and delayed gratification.¹⁸ When that exposure is algorithmically optimized and applied across the entirety of a child’s developmental window, the cumulative effect is not a series of individual impressions. It is the shaping of the reward architecture of a developing brain by a system optimized for engagement, not for the child’s wellbeing.
Why This Is a Child Safety Issue, Not Just a Privacy Issue
Privacy advocates have been making the data collection argument for years. This chapter is making a different argument, and the distinction matters because it changes what the appropriate response looks like.
A privacy problem suggests that the solution is better consent mechanisms, clearer policies, and user controls. Those interventions assume a user who can understand the tradeoff and make an informed choice. A child cannot. A child does not have the cognitive development to understand behavioral profiling, cannot meaningfully consent to longitudinal data collection, and cannot anticipate the downstream consequences of a profile built during their most formative years being used to shape their decision-making environment for decades afterward.
A child safety problem suggests that the solution is protection, not disclosure. Just as this book does not propose solving individual grooming by handing children a copy of the predator’s playbook and asking them to consent wisely, it does not propose solving developmental profiling with a more readable privacy policy.
The litigation landscape reflects this reframing. In January 2026, a landmark trial began in Los Angeles to determine whether social media companies deliberately designed their platforms to be addictive for children, contributing to a youth mental health crisis.¹⁹ TikTok settled a case on the day jury selection was to begin, while Meta and YouTube proceeded to trial.¹⁹ The multidistrict litigation against Meta alone included over 1,700 cases as of April 2025.²⁰ State attorneys general from across the political spectrum, from Minnesota to Texas, have filed suits making substantially the same argument: these companies designed products that engage children’s neurological vulnerabilities for profit.¹⁰
The bipartisan nature of the enforcement is worth noting. This is not a partisan issue. Progressive attorneys general and conservative attorneys general are making the same case, using the same internal documents, against the same companies. The business model does not have a political affiliation. It has a revenue requirement.
What Parents Can Actually Do
This chapter’s purpose is not to leave you feeling powerless. It is to ensure that the protective measures in this book are applied with an accurate understanding of the full threat landscape.
The technical layers described in Chapters 5 and 6 are directly relevant. DNS filtering at the network level can reduce telemetry from reaching its destination. Device hardening reduces the data collection surface of the operating system and installed applications. Disabling features like Microsoft’s Recall, tightening Windows telemetry settings, removing advertising IDs from children’s devices, and restricting app permissions are concrete steps that reduce the volume and resolution of the developmental profile being built on your child.
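For parents comfortable scripting those changes, the sketch below shows what the Windows side of that hardening can look like in practice. It is a minimal illustration under stated assumptions, not a definitive procedure: the registry paths and value names (AllowTelemetry, DisabledByGroupPolicy, DisableAIDataAnalysis) are taken from Microsoft’s published Group Policy mappings as I understand them, they shift across Windows versions, and each one should be verified against current documentation before anything is run with administrator rights on a child’s machine.

```python
"""Minimal sketch: apply a few Windows data-collection policies via the registry.

Assumptions (verify before use): the policy paths and value names below match
current Microsoft Group Policy documentation. Requires Windows and an elevated
(administrator) Python prompt.
"""
import winreg

# (registry path under HKEY_LOCAL_MACHINE, value name, DWORD value)
HARDENING_POLICIES = [
    # Restrict diagnostic data / telemetry. 0 is only honored on
    # Enterprise/Education editions; 1 is the practical minimum elsewhere.
    (r"SOFTWARE\Policies\Microsoft\Windows\DataCollection", "AllowTelemetry", 1),
    # Disable the per-device advertising ID used for ad personalization.
    (r"SOFTWARE\Policies\Microsoft\Windows\AdvertisingInfo", "DisabledByGroupPolicy", 1),
    # Turn off Recall snapshot saving (value name assumed from the WindowsAI policy group).
    (r"SOFTWARE\Policies\Microsoft\Windows\WindowsAI", "DisableAIDataAnalysis", 1),
]


def apply_policy(path: str, name: str, value: int) -> None:
    """Create the policy key if it does not exist and write the DWORD value."""
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)


if __name__ == "__main__":
    for path, name, value in HARDENING_POLICIES:
        apply_policy(path, name, value)
        print(rf"Set HKLM\{path}\{name} = {value}")
```

The same settings are reachable through the Settings app or the Group Policy editor; a script only earns its keep if you maintain several machines and want the changes to be repeatable.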
But the technical measures are the outer perimeter. The core protection, as it is throughout this book, is the human layer.
Your child needs to understand, in age-appropriate language, that the apps and platforms they use are not neutral tools. They are businesses whose revenue depends on keeping the child engaged for as long as possible, and the way they achieve this is by learning what the child responds to emotionally and delivering more of it. That is not a conspiracy theory to share with a twelve-year-old. It is media literacy. It is the same category of knowledge as understanding that a commercial is trying to sell them something. The difference is that the commercial lasted thirty seconds and the algorithm runs continuously.
The relationship layer described in Chapter 9 is equally important. A child who has a strong, disclosure-safe relationship with a parent is a child who will tell you when something on a platform makes them feel bad, confused, pressured, or unable to stop. That disclosure is your signal. No monitoring tool will give you the resolution that your child’s own words can provide, and no monitoring tool can substitute for the trust that makes those words possible.
For families with the technical willingness, the most comprehensive response to the OS-level profiling problem is a transition to Linux. That sentence is not written casually. For most families, it is not realistic in the near term. But for families where a parent has the comfort level to make the switch, a Linux machine running a mainstream distribution like Ubuntu or Fedora does not phone home by default, does not build a behavioral profile, does not take screenshots of the screen every few seconds, and does not entangle its security mechanisms with its data collection architecture. It is not without tradeoffs of its own: application compatibility, a learning curve, and the fact that the parent becomes the household’s technical support. But it removes the operating system itself from the list of entities profiling your child, which is a meaningful reduction in exposure.
For everyone else, the practical path is harm reduction within the existing ecosystem. Tighten every setting. Disable every optional data collection feature. Remove advertising IDs. Use the DNS filtering layer to block telemetry domains where possible. Delay your child’s entry onto social platforms as long as is realistically sustainable. And when they do enter those platforms, make sure they enter with the human firewall already built: the knowledge, the language, and the relationship with you that makes them harder to manipulate.
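The DNS filtering layer mentioned above is worth one concrete illustration. The sketch below shows the core idea in a few lines of Python: each lookup is checked against a blocklist of telemetry hosts and answered with a non-routable address if it matches. In practice this belongs on the router or in a filtering resolver such as Pi-hole or NextDNS rather than in a script, and the three Microsoft domains listed are commonly cited examples, not a complete or authoritative list.

```python
"""Minimal sketch of DNS-level telemetry blocking.

The blocklist entries are illustrative assumptions, not a vetted list; real
deployments use a filtering resolver (Pi-hole, NextDNS, router firmware)
with maintained blocklists.
"""
import socket

TELEMETRY_BLOCKLIST = {
    "telemetry.microsoft.com",
    "vortex.data.microsoft.com",
    "settings-win.data.microsoft.com",
}

BLACKHOLE = "0.0.0.0"  # non-routable answer returned for blocked names


def filtered_lookup(hostname: str) -> str:
    """Return the blackhole address for blocked names, otherwise resolve normally."""
    name = hostname.lower().rstrip(".")
    # Block the exact name and any subdomain of a blocked name.
    if any(name == d or name.endswith("." + d) for d in TELEMETRY_BLOCKLIST):
        return BLACKHOLE
    return socket.gethostbyname(name)


if __name__ == "__main__":
    for host in ("vortex.data.microsoft.com", "example.org"):
        print(host, "->", filtered_lookup(host))
```

The point of the sketch is the shape of the control: the block happens before the request ever leaves your network, which is why it works even against applications whose own settings you cannot change.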
What This Means for the Rest of the Book
The companies discussed in this chapter will dispute the characterizations made here. They will point to their teen safety features, their parental controls, their content moderation investments, and their compliance with applicable law. Some of those measures are real and some of them provide marginal benefit. None of them address the structural issue, which is that their business models require the behavioral profiling of every user, including children, and that the resulting profiles constitute a form of ongoing influence over developing minds that no previous generation of children has been subjected to.
The grooming sequence from Chapter 3 was not designed with corporations in mind. But the sequence maps onto this behavior because it describes a universal pattern: identify a target who cannot protect themselves, gain access through mechanisms the target does not control, establish a relationship that prevents the target from questioning the arrangement, create dependency, and then use that dependency to extract value.
The individual predator extracts sexual gratification or control. The corporate actor extracts behavioral data, attention, and revenue. The child at the center of both patterns did not consent to either.
That is the reality this chapter documents. What you do about it is the subject of every other chapter in this book.
The Child Safety Protocol is available at [link]. If this chapter showed you what the problem is, the book shows you what to do about it.



