They Were Never Protecting You. They Were Farming You.
The moral bankruptcy at the heart of surveillance capitalism, and how we got here.
Someone who understood the internet very early told me something that has aged like fine wine while everything around it has curdled:
“Assume everything you put on the internet is available publicly for everyone else to read, no matter your ‘privacy settings.’ That includes your DMs.”
That was the 1990s. AOL. Dial-up. The internet before it had a UX team.
That person was right then. They are still right now. The difference is that three decades of trillion-dollar infrastructure have been built specifically to make you forget it.
The Hook: LinkedIn Just Got Caught
Last week, a German privacy association called Fairlinked e.V. published an investigation they named “BrowserGate.” The findings are worth sitting with.
Every time one of LinkedIn’s one billion users visits the site, a 2.7-megabyte hidden JavaScript bundle silently scans their browser for the presence of over 6,000 specific Chrome extensions. It collects 48 distinct device characteristics: CPU core count, screen resolution, battery status, timezone, audio fingerprint, available memory. It encrypts all of it and transmits it back to LinkedIn’s servers, where it gets attached to your session.
None of this is mentioned in LinkedIn’s privacy policy.
Because LinkedIn requires you to be logged in, this data isn’t attached to an anonymous visitor. It’s attached to you: your real name, your employer, your job title, your professional network. One billion identified people, scanned without their knowledge every single time they open the app.
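Mechanically, neither half of the scan is exotic. Chromium serves any extension resource declared “web accessible” at a predictable URL, so a page can test for an extension’s presence just by trying to fetch one of its files, and most of the device characteristics are one-line reads off standard browser APIs. Here is a minimal sketch of both techniques; the extension ID and resource path are placeholders, not entries from the actual scan list:

```ts
// Sketch of the two techniques the BrowserGate report describes.
// The extension ID and resource path are illustrative placeholders.

// Chromium exposes an extension's web-accessible resources at a fixed URL,
// so fetching one tells you whether the extension is installed.
async function probeExtension(id: string, resource: string): Promise<boolean> {
  try {
    await fetch(`chrome-extension://${id}/${resource}`);
    return true;  // resource resolved: the extension is present
  } catch {
    return false; // fetch rejected: absent, or a browser that blocks probing
  }
}

// Several of the 48 device characteristics are single reads off standard APIs.
async function deviceTraits() {
  const battery = await (navigator as any).getBattery?.(); // Chromium-only
  return {
    cores: navigator.hardwareConcurrency,
    memoryGB: (navigator as any).deviceMemory,              // Chromium-only
    screen: `${screen.width}x${screen.height}@${devicePixelRatio}x`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    charging: battery?.charging,
  };
}
```

Multiply the first function across 6,000-plus extension IDs and you have the scan. The one caveat: it only sees extensions that declare web-accessible resources, and it fails entirely on Firefox, which randomizes extension URLs per profile.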
Here’s where it gets interesting. The scan list isn’t just looking for bots or scrapers. It includes:
509 job search extensions used by 1.4 million people who are quietly exploring other opportunities while their current employer watches their LinkedIn profile
200+ extensions that compete directly with LinkedIn’s own Sales Navigator product, a $1 billion annual revenue line, including Apollo, Lusha, and ZoomInfo. LinkedIn now knows which of its users, and therefore which employers, use competitor tools. That is, in effect, a customer intelligence database on thousands of software companies, harvested without anyone’s knowledge or consent.
Extensions that identify religious practice: Muslim prayer-time tools, faith-based productivity apps
Extensions built for neurodivergent users: tools designed for people with ADHD, dyslexia, and processing differences
Extensions that signal political orientation
Under GDPR, religious beliefs, political opinions, and health conditions are “special category” data, meaning they require explicit consent to process. LinkedIn has no such consent. LinkedIn’s privacy policy doesn’t mention the scan exists.
The scan list grew from 38 extensions in 2017 to over 6,100 by early 2026. That growth didn’t happen organically. It accelerated precisely as the EU’s Digital Markets Act came into force, the regulation designed to force platforms like LinkedIn to open up to third-party tools. LinkedIn’s response to being told to let competitors in was to build a surveillance system to identify every user of those competitors.
They named the script internally: Spectroscopy.
They named it after the science of identifying the composition of matter by analyzing the light it emits.
They knew exactly what they were building.
This Isn’t a Bug. It’s the Business Model.
LinkedIn didn’t do something unusual. LinkedIn did something representative.
To understand why, you have to understand the philosophical architecture that makes this behavior not just possible but inevitable, and the specific people who built the justification framework for it. Because this didn’t emerge from a vacuum. It was constructed, by named individuals, at named institutions, who made choices about whose interests their work would serve.
In the early days of the commercial internet, a question was posed, mostly implicitly, occasionally explicitly, about the nature of human attention and behavior online. The question was: who owns it?
The answer that won was: whoever captures it first.
This isn’t a neutral technical position. It’s a philosophical claim with a specific lineage. It borrowed from the older tradition of resource extraction economics, the idea that unowned resources are inert until human effort transforms them into value. Unfarmed land. Untapped oil. Unmined data.
The framing was deliberate. “Data is the new oil” became the decade’s most repeated business cliché not because it was accurate (oil is fungible; data is not) but because it smuggled in a crucial assumption: that the data existed in a commons waiting to be claimed, rather than belonging to the person it described.
Once you accept the extraction framing, everything else follows with brutal logical consistency:
Consent is friction, not a right
Privacy settings are a pacifier, not a protection
The user is not the customer; the user is the crop (and we can do anything we want to them).
Now let’s talk about the people who handed the architects of this system their tools.
B.J. Fogg, a Stanford psychologist, founded the Persuasive Technology Lab in 1998, later rebranded, with notable timing, as the “Behavior Design Lab.” Fogg coined the term “captology”: computers as persuasive technologies. He spent years systematically mapping how digital systems could be engineered to change what people believe and what they do. His Fogg Behavior Model, which holds that a behavior occurs when motivation, ability, and a trigger converge at the same moment, became the foundational framework for designing compulsion loops into products at industrial scale.
To his credit, Fogg wrote about the ethics of persuasive technology early and often. He warned the FTC in 2006 about where this was heading. He says he wanted his work used for good.
Here’s the problem: his lab’s alumni went directly into the companies that used his frameworks to build the surveillance machinery. His students co-founded Instagram. His lab’s research director went to Facebook. The knowledge transfer from “how to change behavior” to “how to extract maximum engagement and data” was not a corruption of Fogg’s work; it was a direct application of it. The tools didn’t care about intent. They cared about results. And the results were a generation of platforms engineered to override human autonomy at scale.
Then came Nir Eyal, a Fogg protégé and Stanford MBA, who in 2014 published Hooked: How to Build Habit-Forming Products. The book is a detailed operational manual for engineering addiction: variable reward schedules, internal triggers, investment loops, all drawn explicitly from behavioral psychology and gambling mechanics. Eyal acknowledged the ethical dimensions. He included a chapter called “The Morality of Manipulation.” He asked what responsibility product designers have.
His answer, in practice, was a shrug dressed up as nuance.
When critics pointed out that his framework was being used to addict children to social media and harvest behavioral data from vulnerable populations, Eyal argued that regulation was overreach and that self-control was the individual’s responsibility. Critics noted, accurately, that this argument is structurally identical to the one Big Tobacco deployed for decades: we didn’t make you smoke. You chose to.
The tell is in the sequel. After spending a career teaching Silicon Valley how to hook people, Eyal wrote Indistractable, a book about how to free yourself from digital distraction. He built the trap. Then he sold you the map out of it. Both for profit.
Fogg and Eyal are not aberrations. They are the institutional pipeline: academic frameworks, Stanford credentialing, direct alumni placement into the companies that scaled the behavior modification apparatus to a billion users. The ethical caveats were real. They were also irrelevant, because the frameworks didn’t come with enforcement mechanisms, only profit incentives.
The woman who named and most rigorously documented this entire architecture is Shoshana Zuboff, Harvard Business School professor emerita, whose 2019 book The Age of Surveillance Capitalism remains the most comprehensive autopsy of how we got here. Zuboff is not an enabler; she’s the diagnostician. But her work is worth citing because it frames the core claim precisely: surveillance capitalism asserts the unilateral right to take human experience as raw material for prediction products, without asking, without telling, and without sharing the proceeds.
She called it behavioral modification at scale. She called the emerging power structure “instrumentarian,” not totalitarian in the old sense, but something new: control exercised not through coercion but through the engineering of behavior itself.
LinkedIn’s Spectroscopy script is not a policy failure. It is instrumentarian power in its default state.
The Consent Laundering Operation
The machine needed one more component to function: the appearance of consent.
This is where the UX industry earned its darkest chapter.
The “I Agree” button is perhaps the most successful confidence trick in human history. Billions of people have clicked it. Functionally zero percent of them have read what they agreed to. The people who designed those flows knew this. The lawyers who wrote those policies knew this. The executives who signed off on both knew this.
This is not ignorance. It is engineered ignorance, consent manufactured at industrial scale by deliberately making the alternative to agreement invisible, technically complex, or professionally costly.
LinkedIn’s terms of service, like most platform agreements, are written in a register that requires a law degree to parse and run to tens of thousands of words. The design of the consent moment (a button, a scroll, a checkbox) is calibrated to produce compliance, not understanding. Dark patterns do the rest: pre-checked boxes, buried opt-outs, consent flows that require seventeen steps to decline but one click to accept.
This is consent laundering. The legal form of agreement is manufactured while the substantive reality of informed choice is systematically destroyed.
And when the law starts to catch up, when regulators design frameworks like GDPR and the Digital Markets Act to restore some actual meaning to consent, the response is not compliance. It is the Spectroscopy script. It is a 249-page DMA compliance report that mentions “API” 533 times and the internal API running at 163,000 calls per second exactly zero times. It is regulatory theater performed for the Commission while the extraction operation expands behind it.
The Moral Claim They Never Made Explicit
Here’s the part that deserves to be said plainly, because the industry never says it plainly:
Surveillance capitalism rests on the moral claim that your inner life, your attention, your behavior, your relationships, your fears, your beliefs, your health, your political opinions, your job anxiety, is a natural resource that you have no inherent right to withhold from extraction.
That’s the claim. Not privacy settings. Not terms of service. Not “we value your trust.” The operating assumption underneath all of it is that you, the human being using the platform, are the raw material: not the customer, not the user in any meaningful sense, but the input to a process whose output is someone else’s profit.
The people who built this system are not stupid. They understood what they were building. The documentation, the internal naming conventions, the deliberate omissions from privacy policies, these are not the artifacts of negligence. They are the artifacts of a system designed by people who made a choice about whose interests mattered and whose did not.
That choice has a moral name. It’s not “disruption.” It’s not “innovation.” It’s not “the price of free services.”
It’s exploitation. Systematic, scaled, deliberately obscured exploitation of a billion people who were told they were being given something for free.
Nothing is free. You were the product. You were always the product.
The person who told me that in the 1990s was right.
Where This Goes: The 50,000-Foot View
There is no clean ending to this story. Anyone who offers you one is selling something.
The regulatory frameworks exist: GDPR, the DMA, California’s CCPA. The legal theory is mostly sound. The problem is enforcement: captured, under-resourced, and operating on a timeline measured in years while the extraction operates in milliseconds.
What changes the calculus, historically, is not regulation catching up; it’s the cost-benefit structure shifting enough, for the people making decisions, that the behavior changes. That requires consequences that are personal, not corporate. Fines absorbed by quarterly earnings statements are not consequences; they are licensing fees. Personal criminal liability, asset forfeiture, professional disqualification: these are consequences that land on the humans who made the decisions, not on the entity that insulates them.
The second force is technical. The BrowserGate exposure happened because the code was readable. Independent researchers verified it. Brave and Firefox blocked the endpoints. The transparency of the web’s architecture, the fact that you can open developer tools and read the JavaScript, is the last remaining check on behavior that would otherwise be entirely invisible. Efforts to close that transparency, to move surveillance into compiled binaries and hardware enclaves, are the next frontier of this fight.
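You can run a crude version of that check yourself. A few lines pasted into the browser’s devtools console will log the URL of every resource a page requests, which is enough to spot a telemetry endpoint nobody told you about:

```ts
// Paste into the devtools console: logs the URL of every resource the page
// has fetched so far and fetches from now on.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name); // the full request URL
  }
}).observe({ type: "resource", buffered: true });
```

Requests a browser blocks outright, as Brave does with these endpoints, simply never show up.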
The third force is cultural. The early internet instinct (assume nothing is private, behave accordingly) was correct. Thirty years of “trust the platform” eroded it. The people who never lost it are better positioned than they know. The people rebuilding it are asking the right questions.
The question isn’t whether you have something to hide.
The question is whether you consent to being farmed.
What You Can Actually Do
This isn’t a self-help article, and a 50,000-foot view doesn’t come with a ten-step plan. But a few things are just true:
Prefer Firefox or Brave over stock Chrome for anything you care about. Brave already blocks LinkedIn’s Spectroscopy endpoints and the li.protechts.net iframe by default. Firefox blocks extension probing architecturally: it assigns every extension install a random internal URL, so a page has no fixed address to probe. (If you want to block the endpoints yourself, a sketch of a one-rule blocker follows this list.)
Compartmentalize. A dedicated browser profile with no extensions, used for LinkedIn and nothing else, means the scan surfaces nothing meaningful even if it runs.
Treat every platform as a public ledger. The AOL-era wisdom still holds. Your DMs, your job search activity, your private groups: assume they are available to the platform, available to regulators, available to breach, available to sell. Behave accordingly.
Support the people doing the work. Fairlinked e.V. is pursuing legal action. The Electronic Frontier Foundation exists. The organizations pushing for meaningful enforcement are under-resourced by design, because the entities being regulated have essentially unlimited resources to spend on ensuring they stay that way.
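If you want a do-it-yourself version of what Brave does by default, a one-rule browser extension is enough. A minimal sketch, assuming a Manifest V3 manifest.json that declares the declarativeNetRequest permission; the domain comes from the public report, while the rule ID and resource types are my choices:

```ts
// Background service worker for a minimal MV3 extension that blocks the
// li.protechts.net endpoint named in the BrowserGate report.
const dnr = chrome.declarativeNetRequest;

dnr.updateDynamicRules({
  removeRuleIds: [1], // replace any earlier version of this rule
  addRules: [{
    id: 1,
    priority: 1,
    action: { type: dnr.RuleActionType.BLOCK },
    condition: {
      urlFilter: "||li.protechts.net^",
      resourceTypes: [
        dnr.ResourceType.SUB_FRAME,
        dnr.ResourceType.SCRIPT,
        dnr.ResourceType.XMLHTTPREQUEST,
      ],
    },
  }],
});
```

The same block is a one-line filter in uBlock Origin. The point is not the specific tool; it is that the endpoint is public and the defense is trivial once you know to look.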
The surveillance capitalism machine is large, profitable, and deeply entrenched. It will not dismantle itself.
But it does not get to define what’s normal. That part is still up for negotiation.
The BrowserGate investigation and supporting technical documentation are publicly available at browsergate.eu. The extension scan list is searchable. The code is verifiable. Look for yourself.



