Privacy on Trial: LinkedIn, AI, and the Future of Data Ethics - privAIcy insights Newsletter #005
Your compass through the maze of AI, privacy, and security compliance 🧭
Hello! In this newsletter, we look at the lawsuit against LinkedIn for allegedly using premium users' private messages to train AI. We'll also examine the broader implications for AI governance and trust, plus a roundup of cutting-edge AI tools transforming the legal profession. Let's dive in!
🚨 Class Action Claims LinkedIn Used Private Messages to Train AI
A lawsuit against LinkedIn alleges that the company used premium users' private messages to train AI models, raising significant concerns about privacy and data ethics in AI development. The case has drawn commentary from privacy advocates and industry observers.
Summary of the Case:
The lawsuit accuses LinkedIn of using premium users' private messages without consent to train generative AI models, allegedly violating the Stored Communications Act (SCA) and California’s Unfair Competition Law.
Plaintiffs also claim breach of contract, arguing that LinkedIn violated privacy assurances and confidentiality provisions in its Premium Subscription Agreement.
These actions allegedly exposed sensitive InMail communications—often containing confidential business or professional discussions—to third parties, creating risk of professional harm and privacy violations.
Privacy and Trust in the AI Age
One of the central issues is LinkedIn's alleged failure to obtain user consent, a critical aspect of privacy in the AI age. Premium users often pay a subscription fee expecting enhanced privacy and security, making the alleged breaches particularly troubling.
The case also illustrates the risks of stealth privacy changes. LinkedIn's initial use of an opt-out model—rather than an opt-in approach—highlights unresolved questions about the power imbalance in data-sharing decisions. Is consent buried in terms and conditions truly consent?
The plaintiffs’ claims about jeopardized careers and opportunities emphasize another nuance: professional risks associated with data misuse. LinkedIn InMail is often used for sensitive communications—like recruitment, business deals, or confidential advice. A perception of misuse erodes trust in the platform and raises ethical concerns about the unintended "human cost" of AI.
Legal Precedents and Potential Outcomes
If successful, this case could establish significant legal precedents. Plaintiffs are seeking damages under the SCA, for breach of contract, and under California's Unfair Competition Law. With statutory damages potentially reaching $1,000 per user, LinkedIn could face billions in liability (on top of any reputational damage).
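For a rough sense of scale (purely illustrative, since no class size has been established): if 2 million Premium users were certified as class members, 2,000,000 × $1,000 = $2 billion in statutory damages alone.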
The case also has broader implications for the tech industry. Strict penalties could reshape how companies approach their data practices, while a weak enforcement outcome might encourage greater opacity.
Implications for AI Governance
The case underscores ongoing "gray areas" in AI governance, particularly in transparency, consent, and accountability. Key questions include:
Transparency: Should companies disclose not just what data they use for AI but how it’s utilized? What level of detail suffices for meaningful transparency? When, where, and how should the disclosure be made?
Consent: Is opt-out a sufficient standard, or should opt-in become the norm—and how should this choice be provided?
Accountability: What mechanisms are required to ensure companies comply with ethical AI training practices? What are acceptable methods to audit and validate a company’s claims about AI training datasets?
The U.S. lacks comprehensive federal AI governance frameworks, making this an important case to watch.
What Happens Next
Procedurally, the plaintiffs must first secure class certification by demonstrating that the case meets legal criteria, such as showing that the claims are common to all class members, that the class is sufficiently numerous, and that the named plaintiffs can adequately represent the class. If certified, the case will move to discovery, where evidence is exchanged. Possible outcomes include settlement, summary judgment, or trial.
🤔 Closing Thoughts
This lawsuit reflects the growing tension between rapid technological advancement and traditional privacy frameworks. As AI evolves, so must the legal and industry standards governing its development. The case's outcome could shape AI policy and influence user trust in technology for years to come.
AI in Legal Tech
The legal field has always been rooted in tradition, but the rise of artificial intelligence in legal technology is challenging the status quo. Skepticism is natural, but dismissing AI outright risks overlooking its potential to improve aspects of our profession for the better. Here are some tools I'm cautiously optimistic about (not sponsored — just my current thoughts):
Casetext’s CoCounsel might handle repetitive tasks like research and contract review (but can’t replace human judgment or creativity). This could reduce time spent on the mundane, allowing attorneys to focus on strategy. Approximate cost: $500 per user per month.
ChatGPT Tasks might help lawyers streamline reminders by automating task scheduling, client follow-ups, and key date notifications, freeing up time for strategic work. Approximate cost: $20 per month for Plus plans.
Luminance might assist lawyers with document review, due diligence, and regulatory compliance. It could be particularly helpful with identifying patterns and uncovering risks in corporate transactions and litigation. Approximate cost: pricing starts at around $1,000 per user per month.
NotebookLM might help lawyers organize and analyze information more effectively by summarizing documents, answering queries based on uploaded files, and streamlining insights. Approximate cost: currently free as part of an experimental program.
GCAI might help lawyers tackle legal research, document drafting, and business analysis faster. It seems particularly useful for helping in-house legal teams speed up tasks they don't have the time or resources to slog through manually. Approximate cost: starts at $49 per user per month.
Looking Ahead
Much like the shift from print casebooks to digital databases, or from handwritten filings to e-filing systems, the adoption of AI in legal practice is an evolution, not a revolution.
If you’re a lawyer hesitant about AI, start small. Explore a free trial of an AI-enhanced platform, attend free "Intro to AI" webinars such as those offered by bar associations, or even experiment with ChatGPT for summarizing legal news (while keeping confidentiality and ethical rules in mind).
What’s your take on AI legal research? Are you using AI tools in your practice? Share your experiences or challenges in the comments, and let’s discuss!
Learn More:
American Bar Association: AI Tools for Legal Work: Claude, Gemini, Copilot, and More
TechGC: AI Tools and Legal Implications
Ironclad, Virtual Roundtable: State of AI in Legal
American Bar Association: AI Essentials For Lawyers: What You Need To Know To Protect Your Clients In The Digital Age
The National Law Review: AI in the Legal Profession: Separating Substance from Hype
💡On My Mind
When AI agents make decisions, who shoulders the responsibility? From principal-agent liabilities to unforeseen consequences, does the rise of autonomous AI demand a fundamental rethink of accountability frameworks?
For example, how should legal systems adapt to regulate entities without intentions—agents that operate without mens rea? Should companies be held accountable for their AI's outputs, or should liability extend to developers and users alike? These are no longer theoretical questions but pressing concerns as AI becomes ever more deeply integrated into decision-making processes.
My take? It’s time for more Continuing Legal Education (CLE) events that go beyond summarizing laws with dense PowerPoint slides. Let’s encourage programming that fosters meaningful dialogue, in-the-weeds analysis, and collaborative roundtables with legal, technical, and ethical experts. These conversations could pave the way for actionable solutions in this rapidly evolving space.
📣 AI is rewriting the rulebook for tech contracts, and lawyers need to evolve with it.
Tech contracts need fresh thinking as AI reshapes industries and outgrows outdated, copy/paste SaaS-era templates.
In my latest article, I explore how to:
tailor contracts to specific use cases,
understand the distinct roles of AI providers, and
build flexibility into agreements to address evolving risks.
It was fun writing this, drawing on my experience with both static and adaptive contract frameworks.
What are your thoughts on modernizing AI contracts?
AI Contracts Need New Thinking Beyond Copy-and-Paste SaaS Terms
Raise your hand if you've been personally victimized by opposing counsel clinging to contract language from a decade ago.
🧠 Join me on the privAIcy Podcast!
I’m launching a podcast for legal, privacy, security, and AI governance practitioners who want to dive deep into how we can bridge the gap between regulatory theory and practical, day-to-day implementation.
Want to be a guest and share your insights? Let’s brainstorm, debate, and collaborate on the challenges and opportunities shaping our fields. Interested? Let me know—I’d love to feature your perspective!
📚 Thank you for reading!
I hope you enjoyed reading this issue as much as I enjoyed writing it.