Tech Lawyers: Let's Stop Clinging to 'SaaS Playbooks' in AI Contract Negotiations
Are we negotiating AI contracts like it's 2010?
For years, tech lawyers have leaned on software-as-a-service (SaaS) playbooks as the default approach in contract negotiations. The easiest move in the book? Labeling a preferred term as “industry standard.”
But AI isn’t just SaaS 2.0—and clinging to outdated negotiation tactics does a disservice to our clients, our companies, and the public at large.
And yet, I see it constantly in AI contracts. When an AI vendor wants a favorable term, it’s “standard SaaS.” But when the conversation shifts to intellectual property ownership, suddenly, those defaults disappear.
This matters. If you're signing up for generative AI, you might be agreeing to more than you realize—without fully understanding what you’re giving up. The contracts we negotiate today will shape the future of AI governance.
Are we getting them right?
Who Really Owns AI Outputs?
In traditional SaaS, the thinking was simple: if a platform processes your data, you own what goes in and what comes out. CRM software, document collaboration tools, and analytics platforms generally followed this logic.
Most AI vendors claim the same principle applies—they assure customers that they own AI-generated outputs. At first glance, that sounds reassuring.
But here’s the catch:
AI vendors often disclaim liability by emphasizing that customers own outputs.
Simultaneously, they impose usage restrictions, like banning the customer from using outputs in other AI systems.
Think of it this way: If Cool Company pays for Very Cool Vendor’s AI service to underpin Cool Company’s product, does Cool Company own:
🤔 The data, insights, and content generated by the AI?
🤔 The reports or transcripts created from its input data?
🤔 Any transformed or enriched version of its original data?
The contract may say “yes,” but then limit what Cool Company can do with that data, stipulating that it can’t train, fine-tune, enhance, improve, or leverage any AI technology with it (terms that are themselves vague and broad).
These restrictions undermine the concept of ownership.
The Trap
Most businesses today aren’t developing foundational models. They are:
✔️ Leveraging pre-trained AI models (OpenAI, Anthropic, Google)
✔️ Enhancing AI features with their own data (RAG architectures)
✔️ Procuring third-party AI and integrating it into their products
Yet many AI vendors draft as if every customer were a direct competitor. That leads to overreaching terms that restrict how businesses can use AI-generated outputs, even when those uses pose no actual competitive risk.
Where does this happen? Examples include:
1. Automated Transcription & Summarization
You use an AI tool to transcribe meetings. You’d assume you fully own those transcripts—after all, it’s your data going in. But some AI providers restrict your ability to analyze them with another AI tool or repurpose them in different contexts.
2. Generative AI Content Tools
AI-generated marketing copy? Great. But some vendors limit how you can modify, analyze, or reuse that content beyond their platform.
3. AI-Powered Data Enrichment Services
Say you send customer data to an AI vendor for sentiment analysis. Even if they claim you own the processed results, they might restrict you from using that data with any other AI-powered tool outside their ecosystem.
These restrictions don’t just create legal friction; they also limit how businesses can innovate and operate.
Contract Red Flags
Many AI contracts feel aggressively one-sided in favor of the provider. Here’s what to watch for:
🔴 “You Own Outputs, But…”
If you own AI-generated outputs, ownership should mean full control—not ownership with an asterisk.
🔴 “You Own Inputs, But…”
Make sure your rights aren’t quietly undercut by buried language letting the provider use input data however they see fit. If you own it, why should the AI vendor get a free pass to benchmark, train, or otherwise repurpose it? Are you getting anything meaningful in return? If not, that’s a red flag.
🔴 “We only negotiate if you spend a certain amount…”
Some vendors only allow contract redlines if you pay for an enterprise plan. Don’t get me wrong: I understand the need for “spend thresholds” in B2B software deals; they keep internal deal operations efficient. I’ve worked in those trenches.
But with AI, rigid spend thresholds don’t make sense. The tech and legal landscapes are evolving in real time; if we don’t adjust, we risk locking in outdated or unfair terms.
Also: since when did compliance become a luxury add-on? Let’s say you want to negotiate a redline to bring the contract into alignment with the EU AI Act. If the vendor refuses to entertain it without more $$$ and you’re not getting anything meaningful in return, that’s a red flag.
None of this is to say reasonable protections aren’t fair. AI vendors have the right to safeguard their proprietary technology. It makes sense for them to say:
You can’t reverse engineer our proprietary models.
You can’t fraudulently replicate our core technology.
We need protections for our intellectual property.
But protections should cut both ways.
If AI vendors want strong protections for their technology, they should extend similar respect to their customers' data rights. Anything less isn’t protection—it’s control.
Lawyers, Let’s Do Better
During a recent panel I spoke on, someone asked: What’s the biggest impediment you face as a tech General Counsel when it comes to AI? My first thought was lack of resources—after all, small legal teams are stretched thin. But then I laughed and said: We all know what limited resources feel like. Instead, let’s challenge ourselves: the biggest impediment may be us—the legal profession itself.
Too many lawyers are clinging to outdated notions of what SaaS was. We fall back on “this is standard” instead of questioning whether those standards even apply to AI. SaaS as we know it may be on its way out, and if we don’t start adapting, we’re not just doing a disservice to our own companies—we’re failing the public at large.
Too many contracts feel like they were written in isolation from the realities of AI development and deployment. And that’s because, too often, lawyers are only learning from and discussing AI with other lawyers. Meanwhile, what they need to learn happens elsewhere: at engineering conferences, in product development meetings, in research labs, and on the ground with businesses deploying AI solutions.
That’s why in-house legal teams need more than contract-by-contract thinking. AI governance demands a broader view. Sure, you might win a small negotiation on one contract, but when you zoom out and aggregate those clauses across the industry, the real question emerges: What are we actually building?
If we accept murky clauses around data ownership and lineage today, what precedent are we setting? What happens when legal teams blindly accept restrictive AI terms without considering their long-term implications? Lawyers have an ethical obligation to advocate for their clients, but we also have a duty to the broader legal ecosystem.
Balancing these priorities isn’t easy. We discussed this on the panel: how to hold strong ethical convictions about AI while still meeting our ethical duties to our individual clients (our companies). It’s a tightrope walk.
But the legal profession has walked this line before with each emerging technology it has faced.
We need to get better at doing it in the AI era.
I believe we can.
🧠 Join the privAIcy Podcast
Let’s brainstorm, debate, and collaborate on the challenges and opportunities shaping AI, emerging tech, privacy, and law. Want to be a guest? Complete this short survey:
💬 Let’s Talk
Where have you seen “industry standard” used as a smokescreen in AI contracts? Drop your thoughts in the comments!
📚 Thank you for reading!
I hope you enjoyed it as much as I enjoyed writing it.
-Rachel