Why Are Palantir and OpenAI Scared of Alex Bores?
Produced by Annie Galvin
This is an edited transcript of “The Ezra Klein Show.” You can listen to the episode wherever you get your podcasts.
If you are living in New York’s 12th Congressional District, you may have seen these endless attacks on Alex Bores, one of the Democrats running there.
Archival clip of political attack ad: He made hundreds of thousands of dollars building and selling the tech for ICE, enabling ICE and powering their deportations while making bank. Now he’s running from his past. ICE is powered by Bores’s tech.
Yikes. Bores did work for Palantir. The rest of that attack is not what you might call true, but what interests me is who is paying for it: the super PAC Leading the Future and its subsidiary Think Big.
Who funds the super PAC Leading the Future? Well, among their largest donors are the co-founders of OpenAI, Andreessen Horowitz and — wait for it — Palantir.
So why is a co-founder of Palantir, Joe Lonsdale, in this case, funding a super PAC to try to destroy a candidate on the grounds that he once worked for Palantir? The reason is that Leading the Future is a super PAC dedicated to destroying anyone who might regulate the tech industry, in general, or A.I., specifically, in a way these funders don’t like.
And Bores is a member of the New York State Assembly. He co-wrote and passed the RAISE Act, one of the first pieces of A.I. regulation passed in any major state.
Sam Altman, a co-founder of OpenAI — who, it should be said, has been horribly targeted in recent violent attacks by anti-A.I. individuals — was trying to cool down temperatures here. He wrote: “It is important that the democratic process remains more powerful than companies.”
There is a principle here that is much more important than any single congressional seat. You’ll hear it, honestly, if you just listen to A.I. founders talk; they say they believe in it.
But it’s his co-founder, Greg Brockman, who is one of the major donors for Leading the Future, who is trying to make sure the democratic process is subordinate to the companies. He is trying to do it by funding a super PAC that can unleash enough money to crush any legislators who cross them.
Bores, in general, has been a pretty effective legislator. In just over three years at the New York State Assembly, he has passed 30 bills and has been recognized by the Center for Effective Lawmaking as one of the most effective freshmen legislators.
But it’s his ideas on regulating A.I. that particularly interest me, in part because I think they make sense and are worth discussing — things like an A.I. dividend — but in part because I just really do not want to live in the world that Leading the Future is trying to create. A world where, if the A.I. industry hoovers in enough money, they can then destroy anyone who might try to regulate them.
What’s funny about all this is: Alex Bores is not an anti-A.I. kind of guy. I think he gets A.I. pretty well. I think he’s trying to balance its risks and its possibilities.
But if you’re looking for a pure A.I. backlash candidate, he’s not it. And I think that tells you something: that what Leading the Future and super PACs and groups that might emerge like them are actually trying to do is to stop anyone from legislating on A.I.
If the democratic process is actually going to mean something here, ideas are going to have to speak louder than this kind of money. So I wanted to hear what Alex Bores would actually do if given the chance.
Ezra Klein: Alex Bores, welcome to the show.
Alex Bores: Thanks for having me.
I want to begin with your early political memories. How did your politics begin?
Well, it began with something that I wouldn’t necessarily call politics — only in retrospect would I put that phrase on it. But it was with my parents in union fights.
In second grade, my dad and his colleagues were locked out by Disney for fighting for better health care. There were contract disputes for over a year, and Disney wouldn’t budge.
Finally, the workers went on strike. In response, Disney locked them out for three months and cut off their health care benefits, including my dad’s friend’s, who was about to start chemotherapy.
Thankfully, the union stepped in, and they paid for the treatment, and he survived. But my dad would pick me up from second grade and bring me to the picket line, and that was my first experience of people working together for change.
He would put me in front of the Disney Store. We’ve all seen people walk past picket lines — it’s not hard to do. It’s a lot harder to walk past an 8-year-old with a sign that says: Disney is mean to my dad.
So that was my first lesson — that health needs to be universal, but also that the way we win is by working together.
That if you’re one worker, you’re one person, you’re one anything advocating, it’s easy to get crushed. But if you have a union, you have an organization, you have a campaign, you have a movement — well, then you stand a chance.
What did your dad do for Disney?
My dad worked for “Monday Night Football” at the time. He did graphics and videotape and instant replay. He worked in the trucks, eventually became a technical director. But he was one of the people who was actually sending out the signal before it hits your TV.
So you then studied industrial and labor relations at Cornell and got a computer science degree. I’m curious about what those two very different disciplines taught you.
Well, they sound very different, but every day they seem to be more and more intertwined. At the School of Industrial and Labor Relations, I learned economic theory. I learned collective bargaining. I learned how to run campaigns and organizations in ways that actually can change power and win things.
And I learned to stand up for working people and to view a lot of interactions in the world through that lens.
Wait, be specific about that. What did you learn about how to stand up for working people?
Well, my freshman year, we ran a campaign against Nike. Cornell was sponsored by Nike; our athletic teams were sponsored by Nike.
I was part of a group called Cornell Students Against Sweatshops. It was affiliated with USAS, United Students Against Sweatshops.
They taught us how to build a campaign over time. We learned how to be strategic. You start with a clear demand.
In this case, Nike had laid off 1,800 workers in Honduras without giving them the legally mandated severance pay. We argued that the Cornell code of conduct required that Nike be responsible for their subcontractor’s actions, that they make the workers whole.
So we put that into the demand. Then you build up over a period of educating. We’d have teach-ins, we’d have sort of ridiculous actions to grab attention.
We did a “working out for workers’ rights,” where we were in the quad and just playing ’80s music and getting people to ask: Hey, what’s going on? And we’d say: Let me talk to you about what’s going on in Honduras.
Then you build up to more aggressive actions that require a reaction from the administration. We ended up being successful in that campaign. Cornell decided it was going to cut its contracts.
I think something like three weeks after Cornell made that announcement, Nike about-faced, paid the workers all the money they were owed and gave them job training and health care for a year.
You’re telling me about how you learned to do activism in college, which is interesting.
But I want to go a level deeper than that. You’re doing industrial and labor relations. What is the deeper theory or thesis of the relationship between workers and corporations, between labor and capital, that you came out of that with?
There’s so much that’s in contention between workers and capital.
But in the best world, you’re actually working together to grow the economy. Workers are not out there to bankrupt any company. They want the company to grow. There are fights over how you distribute the pie, but theoretically, both want to grow that pie.
Then there are really interesting relationships internationally. One of the things that I discovered was that for so many of the countries where we thought labor conditions were awful, the laws on the books were actually quite good. The question was with enforcement, and if the home countries actually tried to do enforcement, the factories would just up and leave and go somewhere else.
The lever where maybe you could change that is in the countries that are buying most of the goods. So we would apply pressure in the U.S. about holding countries to the standards they had already set up for their workers.
I feel like you’re describing to me the education of a young radical here. You’re walking picket lines at 8 years old; you’re studying industrial and labor relations; running campaigns against corporate malfeasance; skeptical of globalization.
How do you end up at Palantir?
I really wanted to be a lawyer, but every lawyer I spoke to told me not to be a lawyer.
That was my experience, too.
They were like: Take time off in between. Make sure that’s what you want to do.
I went to an economic litigation consulting firm called Cornerstone Research, where we were preparing expert witnesses for trial. We were doing economic modeling and playing with data. But I was interacting with lawyers all the time.
So I was building a skill set but could see what they were doing. I found I really enjoyed the economic modeling. I really enjoyed playing with data.
Also to that ideology, as I’m growing up, I’m a Democrat. I believe that government can and should be a force for good. But that also means we take on the burden of proving it.
I was a young believer in — I probably wouldn’t put it in these terms back then — expanding government capacity and making sure government is actually delivering.
Palantir in 2014, in the Obama administration, was about how we could expand government capacity while protecting privacy and civil liberties. So at the time, it felt like very much the natural fit.
I want to stay in this 2014 moment, because this is a period when there is a lot of optimism that technology is going to solve some very fundamental problems of democracy.
We’re going to have all the civic tech; the interface between citizens and the government is going to be much smoother, much better; these companies are fundamentally good.
Google doesn’t want to be evil. Facebook wants to connect the world. Palantir wants to make your data comprehensible.
I think there’s also an underlying view that the answers to our problems are out there somewhere in these masses of data. And if you can just make the whole thing legible, you could get the answers.
Something poisons pretty quickly after 2014. That really feels like a different ideological moment than we’re in.
What was wrong about that? Or what would you add or change to my rendition of that optimism?
A lot of that is true. The Palantir story that was told to prospective employees — and Alex Karp would do this a lot — was that he most feared fascism. He had just finished being a German philosophy student, and he was most afraid of fascism developing.
Fascism happens when a government fails to provide for its citizens, and they start blaming someone else for it, and people then feed that hunger and that hatred. He couldn’t do anything about the latter, but he could do something about the government failing to deliver.
The reason that he wanted to do Palantir was, after Sept. 11, after this real rise in a feeling of being unsafe, could we build the systems that would allow government to make people feel safe — but build them in such a way that was protecting privacy and civil liberties?
That was the pitch. The fundamental idea was that we were there, in many ways, to stop fascism.
Trump was elected in 2016. That was a weird bit.
With the aggressive support of Peter Thiel, one of the early investors in Palantir. Would you call Peter Thiel a Palantir co-founder?
I think so. I think that’s the phrase that is given.
But Alex Karp was very much fighting for Hillary at the time. And if you look at donations of employees at Palantir, they tell a very skewed story toward the Democrats, as well.
Yes. Silicon Valley is very Democratic in this period.
Absolutely. Absolutely.
You have a lot of Obama administration figures who can’t go to Wall Street anymore — that’s not kosher for a Democrat — but you could go to Silicon Valley.
Trump’s election in 2016, but even more so his re-election in 2024, is a real failure of that mission. To now see leaders of the company and Silicon Valley broadly throwing their lot in with what I think is a fascist regime is a real disappointing switch.
So you’re at Palantir from 2014 to 2019. You start as a data scientist. By the end, you’re one of the people leading the relationship with the government.
Yes. I focused on the federal civilian side.
So what is that work?
That was work with the Department of Justice, with the C.D.C. to track epidemics, with Veterans Affairs to better staff their hospitals and give veterans the care they deserve and need. It was helping a lot of the federal civilian agencies.
How much is what we now think of as A.I. and generative A.I. starting to come into the work you all are doing then?
Not at all. And here’s what I mean by that. Palantir was aggressively anti-A.I. in that period. It believed that data integration was the true source of value and that A.I. was a magic layer that would be applied on top. It was all marketing, and we were doing the real work that was getting data to come together.
Can you describe the differences in those two views? What is data integration versus whatever they thought A.I. was?
A.I., in a very naïve sense — we’ll talk about it in other ways now, and this is before agentic models and all of this — is doing analysis of data, and before you can do the analysis of that data, it needs to be organized in a way that A.I. can make sense of it.
But the actual thing that’s difficult is organizing all your data together. That requires hard work, and there’s no magic to do that yet. The software, plus engineers going on site and doing a lot of that hard work to do the manual hookups, was always going to be the true source of value.
So you’re at Palantir into the Obama administration and into the first Trump administration.
Now, Palantir’s work with the government is a different animal, depending on which government it’s working with.
How does that change?
I was leading the work at the Loretta Lynch and Barack Obama Department of Justice and then, all of a sudden, the Jeff Sessions and Donald Trump D.O.J. The priorities changed pretty drastically.
The work with the banks was probably wrapping up anyway, just because of time. But clearly, there was no more interest in that work.
Our contract had us choose three mutually agreed-upon case types. So I met with the new leadership after the transition — this is early 2017 — and said: What do you want to prioritize? What do you want to work on? And they said: the opioid epidemic. We said: Great, we definitely want to do that work.
They said: violent crime. We said: Cool, as long as it’s not a dog whistle, we’d love to work on that.
And then they said: civil immigration. And I said: We’re not touching that. That’s not the work that we are building this for.
I was empowered as the lead of the project to do that. I had a contract that allowed me to because it was three mutually agreed-upon case types. While I was there and in the D.O.J. project, we didn’t do any of that work. That’s not how the decision went at every customer or in every project.
So Palantir during this period does begin working on immigration with the Trump administration.
I never worked on any of those projects, so I was never cleared on it. But to the best of my understanding, during that time, it was not stopping the Trump administration from using it for immigration.
I don’t think there was a building of features specifically for deportations, but I could be wrong about that. But even the fact that they weren’t going to stop it from being used in that way got a number of employees — myself included — quite upset.
You leave Palantir in 2019. Why?
Separately from me, on a project that I never worked on, Palantir had signed a contract with a department within ICE called H.S.I., Homeland Security Investigations. During the Obama administration, it was focused on anti-human trafficking, anti-drug trafficking, sometimes counterfeiting — things that are not controversial and that everyone would support.
Then, when Trump comes in, in 2017, they try to change the nature of that work. They try to get another part of ICE called E.R.O., Enforcement and Removal Operations — the part that everyone thinks of as ICE — to get access to the software and to use it for deportations.
There were a lot of conversations internally at Palantir about what was actually happening — as employees we couldn’t always see that if we weren’t cleared on the project. A fundamental question came up: Why not write into the contract those same protections that we have elsewhere, where we can say: Don’t use it for deportations?
Eventually, executives made clear to us that they were not going to do that. They were going to renew the contract without putting in those guardrails.
So I made plans to quit.
There was a Bloomberg story that questioned this, clearly coming from somewhere inside Palantir. It says that there was, shortly before you left — I think it said five days before you left — a warning from H.R. about sexually explicit comments you had made to a co-worker.
And then, separately, when you did your exit interview, you said you were actually leaving because you were burned out and there was too much travel.
So I want to take these as pieces. Was there a sexual harassment claim against you at Palantir, and is that why you left?
No and no. This came out of an attack from executives at Palantir who are upset that I am pushing for A.I. regulation and that I’ve called out Palantir’s work in the past. As I told Bloomberg when they reached out, I had expressed my concerns about the work with ICE internally.
I had begun interviewing months and months before I had an offer in hand.
I then retold a story of something that had happened to me on the job. Someone who didn’t like that retelling had talked to human resources. H.R. had one conversation with me where I shared exactly what had happened, and that was the end of it.
There was no file, no letter, none of the things that are claimed in that story. They dropped the matter immediately.
You weren’t disciplined inside the company or something?
No, nothing like that.
This seemed like what the Bloomberg story said, but I wanted to check it. The infraction was a story you told or something you said, not something done with or toward a colleague.
Correct. The story goes into it. Can I retell the story here? It was a paper-goods manufacturer that was talking about the uses of tissues. It sold tissues. The marketing department was talking about how tissues are used. And I retold that example from the presentation on how tissues were being used as an odd thing that had happened while working at the company.
And then the burnout and travel side of it — the argument there is that you’re making this claim that you took a moral stand against the way it’s being used, but actually, you were just kind of tired of working there.
As has been cited in multiple sources, multiple current Palantir employees have backed me up that they heard me talk about ICE and stand up and do all of that. I have no idea what notes Bloomberg took from the exit interview.
I asked to see them. I was told by the Bloomberg reporter she didn’t actually have them, that this had just been told to her by the executives. So they could claim whatever they want on top of the notes that, again, I never saw.
I know what I had said before and during, and that I had brought this up many times. A year after I left, Palantir emailed and called me, begging me to come back. It feels like if there had actually been a real thing there, they probably wouldn’t have done that.
You just heard me be fairly critical about Palantir; I had been before, as well. The executives there didn’t take kindly to that. And the super PAC that’s attacking me is against any regulation on A.I., and this is just another desperate hit by them.
I have been amused that the super PAC that is attacking you — which is partially funded by Joe Lonsdale, a Palantir co-founder — one of its core attacks on you is that you worked at Palantir.
That’s a pretty strong level of political shamelessness.
I would agree. So, I would say, is lying about an employee’s record.
But they are very terrified. They’re very afraid of me in office, and beyond that, they’ve said publicly that they are trying to make an example out of me. They want to beat up on me so bad that when the idea of regulating A.I. comes in the future, politicians run in the opposite direction.
They’re not primarily concerned with what is honorable or what is true. They are concerned with causing pain.
In 2022, you’re elected to the New York State Assembly. In 2025, you passed the RAISE Act, which gets us into the A.I. regulations you’re alluding to. This is one of the first major pieces of A.I. legislation passed by any state in the country.
Before we get into what it does, what was the philosophy behind it? When you were working on that bill, and I know you had co-sponsors on it, what were you all seeing and what were you all trying........