Chris Lehane is one of the best in the business at making bad news disappear. Al Gore's press secretary during the Clinton years, Airbnb's chief crisis manager through every regulatory nightmare from here to Brussels, Lehane knows spin. Now he's two years into what may be his most impossible gig yet: as OpenAI's VP of global policy, his job is to convince the world that OpenAI genuinely cares about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that has ever claimed to be different.
I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week: 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI's carefully constructed image. It wasn't easy or entirely successful. Lehane is genuinely good at his job. He's likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.
But good intentions don't mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.
The company's Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already being sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves; OpenAI CEO Sam Altman; characters like Pikachu and Cartman of "South Park"; and dead celebrities like Tupac Shakur.
Asked what drove OpenAI's decision to launch this latest version of Sora with these characters, Lehane offered that Sora is a "general purpose technology" like the printing press, one that democratizes creativity for people without talent or resources. Even he, a self-described creative zero, can make videos now, he said on stage.
What he danced around is that OpenAI initially "let" rights holders opt out of having their work used to train Sora, which is not how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it "evolved" toward an opt-in model. That's not iterating. That's testing how much you can get away with. (By the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with a lot.)
Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, the American legal doctrine that's supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.
Maybe. But I'd recently interviewed Al Gore, Lehane's old boss, and realized that anyone could simply ask ChatGPT about it instead of reading my piece on Trendster. "It's 'iterative,'" I said, "but it's also a substitute."
Lehane listened and dropped his spiel. "We're all going to need to figure this out," he said. "It's really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will." (We're making it up as we go, is what I heard.)
Then there's the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened the adoption of AI to the advent of electricity, saying those who accessed it last are still playing catch-up, yet OpenAI's Stargate project is seemingly targeting some of those same economically challenged places to set up facilities with their attendant and massive appetites for water and electricity.
Asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year, plus 33 nuclear facilities. If democracies want democratic AI, he said, they have to compete. "The optimist in me says this will modernize our energy systems," he said, painting a picture of a re-industrialized America with transformed power grids.
It was inspiring, but it was not an answer to whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of The Notorious B.I.G. It's very much worth noting that video generation is the most energy-intensive AI out there.
There's also a human cost, one made clearer the day before our interview, when Zelda Williams logged onto Instagram to beg strangers to stop sending her AI-generated videos of her late father, Robin Williams. "You're not making art," she wrote. "You're making disgusting, over-processed hotdogs out of the lives of human beings."
When I asked how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. "There is no playbook for this stuff, right?"
Lehane showed vulnerability in some moments, saying he recognizes the "enormous responsibilities that come with" all that OpenAI does.
Whether or not those moments were designed for the audience, I believed him. Indeed, I left Toronto thinking I'd watched a master class in political messaging, with Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn't even agree with. Then news broke that complicated that already complicated picture.
Nathan Calvin, a lawyer who works on AI policy at a nonprofit advocacy group, Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff's deputy to Calvin's house in Washington, D.C., during dinner to serve him a subpoena. The company wanted his private messages with California legislators, college students, and former OpenAI employees.
Calvin says the move was part of OpenAI's intimidation tactics around a new piece of AI regulation, California's SB 53. He says the company weaponized its ongoing legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. Calvin added that he fought OpenAI's opposition to California's SB 53, an AI safety bill, and that when he saw OpenAI claim it had "worked to improve the bill," he "literally laughed out loud." In a social media thread, he went on to call Lehane, specifically, the "master of the political dark arts."
In Washington, that would be a compliment. At a company like OpenAI, whose mission is "to build AI that benefits all of humanity," it sounds like an indictment.
But what matters far more is that even OpenAI's own people are conflicted about what they are becoming.
As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2 was released, expressing their misgivings. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote of Sora 2 that it is "technically amazing but it's premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes."
On Friday, Josh Achiam, OpenAI's head of mission alignment, tweeted something even more remarkable about Calvin's accusation. Prefacing his comments by saying they were "possibly a risk to my whole career," Achiam went on to write of OpenAI: "We can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high."
It's worth pausing to consider that. An OpenAI executive publicly questioning whether his company is becoming "a frightening power instead of a virtuous one" isn't on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.
It's a crystallizing moment, one whose contradictions may only intensify as OpenAI races toward artificial general intelligence. It also has me thinking that the real question isn't whether Chris Lehane can sell OpenAI's mission. It's whether others, including, critically, the other people who work there, still believe in it.





