The Hidden Risk Behind Free AI Tools: Why ChatGPT and Other LLMs Are Not Secure

“Computers don’t do privacy.” That argument, mentioned by Sam Altman in a recent interview, captures the risk behind the rise of AI chatbots at work and at home. We type the most sensitive facts of our lives and businesses into systems that feel intimate. But they are, in fact, cloud services with retention policies, logs, and complex data pipelines. Meanwhile, “shadow AI” keeps rising inside companies. Employees quietly paste snippets of code, contracts, and client info into public LLMs to “get it done.”
According to eSecurityPlanet, 77% of employees past confidential data into public AI chatbots. This behavior bypasses company DLP systems entirely. And this process is growing fast.
The illusion of safety is what makes it dangerous. ChatGPT “feels” private. It writes like a person, remembers context, and responds instantly. But are they really safe to use?
AI privacy myth
In an interview with Theo Von, Altman described what he called “a culture of oversharing” around AI. Millions of users, particularly younger ones, have turned to ChatGPT as an emotional support — a “therapist” or “life coach.”
“People talk about the most personal parts of their lives,” Altman said. “With a doctor or lawyer, there’s confidentiality. We haven’t figured that out yet for ChatGPT.”
Trevor Long, a cybersecurity expert interviewed on EFTM, went even further: “Every request you make sits on a server somewhere.” No matter logged in or anonymous, your session data exists. So, it can be breached or simply retained for longer than you realize.
Think of ChatGPT not as a private notepad, but as a public cloud service that just happens to talk like a person. That distinction is crucial.
So, when you speak to ChatGPT, you’re not confiding in a therapist. Your input simply contributes to a dataset.
Shadow AI at work: corporate risk
Organizations are already seeing the cost of oversharing. eSecurityPlanet reports that 2024 was a record year for data exfiltration via generative AI, particularly in the finance, healthcare, and legal sectors. In one case, a U.S. defense contractor discovered employees had uploaded internal threat models into ChatGPT to summarize a report. That single action triggered an overall company data incident review, which cost weeks of investigation.
Most enterprises aren’t ready for this, as a security researcher at Information Age admits. You can’t manage what you can’t see, and right now, shadow AI usage is invisible to traditional DLP systems.
DLP tools are designed to catch patterns like credit card numbers, SSNs, or project codes across email or storage. But when employees paste text into an AI chat, that data leaves the company data encrypted in an HTTPS request to an external endpoint. To internal monitoring, it looks like ordinary web traffic.
Once sent, it can’t be retrieved.
The rise of AI data leaks
In early 2025, CyberPress reported several confirmed data leaks linked to ChatGPT. The most notable involved a prompt-injection trick that pulled bits of other users’ chats from cache. OpenAI fixed it, but the incident showed a core issue: these systems aren’t sealed off from the rest of the web.
Every chat doesn’t just go “you – model – reply.” It moves through several stops: your device, edge caches, the model endpoint, logging systems, and sometimes pipelines used to improve models later. Even if data is “anonymized,” it can be pieced back together when matched with context like writing style or location.
A related Wired report showed another risk: a poisoned document can make ChatGPT reveal information it shouldn’t. “It’s not a breach in the traditional sense,” said Wired’s report, “but a failure of trust boundaries. The model doesn’t distinguish between ‘input’ and ‘instruction.’”The model often can’t tell the difference between “what a user wrote” and “what it should obey as an instruction.”
That’s why this problem is tricky. A malicious input from one person could expose data from someone else. And because models are updated and retrained over time, outside users can’t easily see when (or if) these gaps reappear.
This vulnerability blurs the line between user and attacker. What if a compromised input from one user triggers data exposure from another? With models continuously updated, users can’t know.
OpenAI’s position on data and safety

In its official response to The New York Times, OpenAI states that models are trained on publicly available data and that no private user information is intentionally used for model training without consent.
However, critics point out that “publicly available” often includes scraped web data. It can be republished, copyrighted, or derived from sources users didn’t expect to be used for machine learning.
The problem is that users have no clear visibility into how long prompts are stored. Also, it is unknown what metadata is logged and how the model treats conversational data.
OpenAI’s retention policies state that conversations may be stored for up to 30 days for abuse monitoring, but third-party integrations (such as plugins or APIs) may treat data differently.
In short: clearing your chat history doesn’t necessarily clear your data.
The data economy behind free LLMs
The business model for free AI tools is simple: attract massive user bases, collect behavioral data, refine the product, and monetize insights through enterprise APIs or partnerships.
While these mainstream systems are built on cutting-edge architecture, their value grows with your data. Every prompt refines the model. Every correction strengthens its linguistic memory. So, every user becomes an unpaid contributor.
Many users assume that privacy risks end with data leaks. In reality, AI-driven data creates multi-layered vulnerabilities:
- Corporate espionage: Sensitive project information entered into ChatGPT could be indirectly used through model leakage.
- Compliance violations: Uploading client data or medical information may breach HIPAA, GDPR, or other data-protection laws.
- Reputation damage: Leaked prompts can reveal internal culture, financial data, or personal details about staff.
- Hallucinated advice: Relying on unverified AI outputs for legal, medical, or HR decisions creates downstream liability when those recommendations are wrong.
In other words, every prompt is a potential risk.
Safer alternatives: private, niche AI
Privacy-centric niche AI systems are rising to fill the security gap. JustDone, for example, offers on-device or region-locked AI processing that guarantees zero data retention. Unlike cloud-based chatbots, it doesn’t send user data to remote servers for retraining. JustDone’s AI model, known as “closed-loop inference”, processes prompts locally or within a private cloud instance. Outputs are generated in memory, not logged to persistent storage.
In practice, this means your queries never leave your secure environment. For companies in finance, law, or healthcare, such strict data-handling rules allow them to comply and still use AI to automate their tasks safely.
By comparison, ChatGPT Enterprise claims to offer “no training on customer data”, but relies on the user’s trust that OpenAI’s internal controls prevent accidental retention. Independent auditing of those claims remains limited.
AI data governance is coming
Gartner analysts predict that by 2026, over 70% of Fortune 500 companies will have formal AI usage policies that restrict or ban the use of unvetted public LLMs for internal tasks.
The trend can be compared with the early 2010s “cloud adoption phase,” when organizations moved from ad-hoc use of Dropbox and Google Docs to enterprise-secured storage with audit trails and encryption.
AI will follow the same path — from open tools to controlled environments.
Already, sectors like finance and defense are piloting “AI isolation zones” — private sandboxes where approved models run on sanitized data. This gives teams the benefits of automation without external exposure.
Regulators are stepping in
European data authorities have started investigating the legal basis for storing and processing user prompts under GDPR. Italy’s temporary ban on ChatGPT in 2023 was only the beginning. Regulators are now demanding transparency reports detailing data sources, retention periods, and access controls.
In the U.S., legislators develop AI data provenance standards similar to financial audit frameworks. These would require companies to track and certify every dataset used in training.
But regulation lags behind technology. Until privacy laws catch up, users are responsible for AI usage.
How to use AI securely
Security in the AI era depends on minimizing its impact. Here’s what most of the cybersecurity experts now recommend:
- Never share confidential data in public LLMs. Treat them like social media and assume everything is stored.
- Use enterprise or private AI deployments. JustDone or on-prem instances of open-source models (Mistral, LLaMA) provide control and transparency that keep your workflow safe.
- Scrub outputs before publication. AI text can accidentally include snippets from training data.
- Train staff on AI hygiene. Awareness is the cheapest form of security.
- Demand auditability. Choose vendors that disclose their data-handling practices and retention policies.
Security vs. innovation
Developers face an impossible balance: open systems fuel innovation but negatively affect users. At the same time, closed systems protect privacy but slow feedback loops. The answer lies not in total restriction, but in granular control. It includes encryption by default, opt-in logging, and tiered access to model telemetry.
OpenAI, Anthropic, and Google have the resources to implement these safeguards. However, as history shows, incentives often favor growth over restraint. Until privacy becomes a competitive feature, it will remain a marketing promise.
So, are free LLMs safe?
Free AI tools come with hidden costs. ChatGPT and similar LLMs are extraordinary engines of creativity, but they operate within opaque data ecosystems that were not built for confidentiality.
As Altman said, “We haven’t figured that out yet.”
Until we do, users must protect themselves. That means choosing JustDone as a secure AI alternative that does not learn from or store your data. It doesn’t enforce internal AI policies or treat every prompt as public until proven private. Because in 2025, privacy is not a setting, but a common decision.






