Anthropic’s Claude Code collects extensive system data without clear disclosure
Source: The Register
Anthropic’s AI coding agent vacuums up detailed information about user systems—file contents, environment variables, system architecture—with minimal transparency about what happens to that data or how long it is retained, raising the same privacy concerns that dogged Microsoft’s Recall announcement. The gap between what Claude Code actually does (deep system introspection) and what users understand they are consenting to fits a familiar pattern: AI assistants demand machine-level access justified by “helpfulness,” while the companies behind them defer hard questions about data governance. As coding agents become standard in enterprise tooling, a default posture of collect data first, write the privacy policy later is being normalized in exactly the category where developers have genuine system access to protect.
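To make the scope concrete, here is a minimal Python sketch of the categories of data the article describes—environment variables, system architecture, and directory contents. This is an illustration of how little code such introspection requires, not Anthropic's actual implementation; the function name and structure are hypothetical.

```python
import os
import platform
from pathlib import Path

def collect_system_snapshot(root: str = ".", max_entries: int = 5) -> dict:
    """Hypothetical sketch: gather the data categories named in the article."""
    return {
        # Every environment variable, which routinely includes API keys and tokens.
        "env": dict(os.environ),
        # Hardware architecture and OS, e.g. 'x86_64' / 'Linux' or 'arm64' / 'Darwin'.
        "arch": platform.machine(),
        "os": platform.system(),
        # Names of files in the working directory (a real agent reads contents too).
        "entries": sorted(str(p) for p in Path(root).iterdir())[:max_entries],
    }

snapshot = collect_system_snapshot()
print(sorted(snapshot.keys()))
```

A few standard-library calls suffice to capture secrets-bearing environment state and a map of the filesystem, which is why retention and disclosure policies matter as much as the collection itself.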