
erronis

(23,882 posts)
Wed Apr 1, 2026, 11:48 AM 15 hrs ago

Claude Code source leak reveals how much info Anthropic can hoover up about you and your system

https://www.democraticunderground.com/10143641683

If you loved the data retention of Microsoft Recall, you'll be thrilled with Claude Code

We'd be extraordinarily naive to think that commercial AI technologies aren't hoovering up everything they can. (Fun fact: "hoovering" also refers to a behavior in which a narcissist tries to suck you back into their life, much like a vacuum cleaner sucks up lint.)
Also see: https://www.democraticunderground.com/10143641683#post5

Anthropic's Claude Code lacks the persistent kernel access of a rootkit. But an analysis of its code shows that the agent can exercise far more control over people's computers than even the most clear-eyed reader of contractual terms might suspect. It retains lots of your data and is even willing to hide its authorship from open-source projects that reject AI.

The leak of the company's client source code - details of which have been circulating for many months among those who reverse-engineered the binary - reveals that Claude Code pretty much has the run of any device where it's installed.

Concerns about that came up in court recently in Anthropic's lawsuit against the US Defense Department (Anthropic PBC v. U.S. Department of War et al) for banning the company's AI services following the company's refusal to compromise model safeguards.

As part of its justification for declaring Anthropic a supply chain threat, the US government argued [PDF], there was "substantial risk that Anthropic could attempt to disable its technology or preemptively and surreptitiously alter the behavior of the model in advance or in the middle of ongoing warfighting operations..."

. . .

ThreeNoSeep

(306 posts)
1. That train left the station in the early 2000s
Wed Apr 1, 2026, 01:39 PM 14 hrs ago

AI is not the problem.

Unless you route your traffic through something like the Tor network, clear your cache after every website, and never buy anything online, the ubiquitous "they" already have your browsing history, models of your purchasing habits, your birthdate, friends, family, addresses, and essentially everything a stalker would need.

We must elect people who outlaw the use of private information without explicit permission.
