Trump’s Latest Entry Into The AI Policy Battle

When it comes to federal AI regulation, President Donald Trump seems to be dialing back the antagonism and trying to get what he wants through a more conventional route. It’s no secret that Trump wants as little regulation as possible: He’s worked to cut regulations throughout the government, and he has said that regulations can stymie development. Trump has also said the U.S. must dominate global AI and win the race, and he wants to stop state-level efforts to regulate how AI can be used.

Going on the attack hasn’t worked. A provision for a 10-year moratorium on state AI laws was removed from the administration’s signature One Big Beautiful Bill Act last year, and a similar provision was axed from the National Defense Authorization Act. An executive order directing the Justice Department to file lawsuits against states with AI laws on the books hasn’t yet resulted in any litigation—and it also hasn’t stopped state legislatures from talking about the issue. According to Multistate.ai, more than 1,500 AI-related bills are currently being considered in 45 states.

So Trump has tried a more conventional approach: publishing a policy framework. In it, he urges Congress to pass federal AI regulations that would preempt state-level ones. It remains to be seen whether this less confrontational path will work more effectively—or whether the administration will soon return to its usual tactics.

Deepfakes are a common cybersecurity problem enabled by AI. These attacks can be devastating to a company and are difficult to prevent. I spoke with Incode founder and CEO Ricardo Amper about strategies to address the issue. An excerpt from our conversation appears later in this newsletter.

I will be taking a break next week, and Forbes CIO will not be sent on Thursday, April 2. We’ll be back in your inboxes on Thursday, April 9.

This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders. Click here to get it delivered to your inbox every Thursday.

Last week, President Donald Trump released his policy guidelines for federal AI regulation: a document with seven items he would like Congress to address through legislation. Trump has said he wants the government to have a light touch in regulating AI, giving companies free rein to innovate.

Last on the list—but likely the top priority—is creating a federal policy framework to preempt state-specific AI laws, ensuring that companies have only one “minimally burdensome” standard to follow when working on AI anywhere in the country. Trump signed an executive order in December blocking state AI laws, but it doesn’t appear to have had much effect; 45 state legislatures across the nation are debating AI regulation bills during the current session.

The framework also calls for copyright protections for content creators and publishers, asking Congress to enable licensing frameworks for the use of their content by AI and protections against AI-generated digital replicas of their voices and likenesses. It says courts should decide the question of fair use of content for AI training, which is the subject of several pending lawsuits pitting publishers and artists against AI companies.

While AI regulations are important, they may not be a priority for Congress right now. Lawmakers are dealing with several other pressing matters, including the ongoing war in Iran, the partial shutdown of the Department of Homeland Security and Trump’s controversial bill changing voting rules.

Meanwhile, Trump is filling the new President’s Council of Advisors on Science and Technology with AI leaders—including Meta CEO Mark Zuckerberg, Oracle founder Larry Ellison and Nvidia CEO Jensen Huang. The group is charged with advising the president on technology and innovation policy, and many tech CEOs have been policy allies and financial backers of Trump and his initiatives.

ARTIFICIAL INTELLIGENCE

Nvidia isn’t the only company expecting a huge windfall from AI chip sales in the near future. Arm Holdings, which until now has only licensed its intellectual property to chipmakers, announced it will manufacture its first chip—the AGI CPU—which CEO Rene Haas predicted will generate $15 billion in annual revenue.

The move is a seismic shift for Arm, which will now compete with some of the tech companies it licenses to, including Nvidia, Broadcom and Qualcomm, writes Forbes contributor Steve McDowell. The AGI CPU is a data center chip that the company says will meet the processing, energy efficiency and cooling needs of hyperscale centers.

The announcement sent Arm’s stock price soaring, with the stock rising more than 19% this week. But, McDowell notes, many tech companies—including Arm customers—are also supportive. Even though the move essentially makes Arm their supplier, partner and competitor at once, some companies, including Nvidia and Broadcom, hold long-term Arm licenses that insulate them from competitive pressure—and the expansion validates the architecture they’ve already been building on, McDowell writes. Other companies Arm works with, including Ampere Computing and Qualcomm, will now be more direct competitors, and the impact on them remains to be seen.

Bots were once a scourge of the internet. Many of these automated web crawlers sought to artificially inflate pageviews, steal information, spam users, overwork bandwidth, commit advertising fraud and perpetrate cybersecurity attacks. And so IT departments and CISOs created sophisticated strategies to block them.

Today, as the use of AI agents rises, it could be time to rethink those bot strategies. Jim Yu, CEO of search optimization company BrightEdge, shared statistics about AI-related web traffic at the company’s Spark event in New York City earlier this month. The numbers are staggering: AI agent crawler traffic across the internet is growing 150% month over month. Close to half of that traffic is for training AI systems, and 87% of it comes from ChatGPT.

“Most of the time, they’re [CIOs and CMOs] not even aware how much traffic is coming from those agents—which was a surprise to us, too,” Yu told me.

These agents are different from the bots of years ago, he said, and companies should allow them on their sites—a matter of updating infrastructure and expectations across the board. AI agents from ChatGPT and Anthropic’s Claude seem to be looking only for basic content to feed their LLMs, not examining the underlying code or data. They also process information differently than the search engine crawlers of old, and can read and extract useful data from media, including images and video. That means, Yu said, companies need to optimize those types of files as well.

“That’s a new kind of infrastructure that I think most businesses are not yet ready for,” he said.
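For many sites, the first step in letting these agents in is simply updating the robots.txt file that older bot-blocking strategies locked down. As a rough sketch: GPTBot and ClaudeBot are the crawler user-agent names that OpenAI and Anthropic publicly document, while `LegacyScraperBot` and the paths shown are hypothetical placeholders—check each vendor’s documentation for the current names before relying on them.

```
# robots.txt — carve out access for documented AI crawlers
# while keeping older blanket restrictions for everything else.

# OpenAI's documented crawler
User-agent: GPTBot
Allow: /

# Anthropic's documented crawler
User-agent: ClaudeBot
Allow: /

# A hypothetical known-bad scraper stays blocked entirely
User-agent: LegacyScraperBot
Disallow: /

# Default rule for all other bots: block a sensitive area only
User-agent: *
Disallow: /internal/
```

Note that robots.txt is advisory—well-behaved crawlers honor it, but it is not an enforcement mechanism, so rate limiting and bot-management tooling still matter for abusive traffic.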

How To Catch Deepfake Fraud

As AI gets better at mimicking real people—their motions and voices—deepfake fraud is becoming one of the most widespread—and potentially damaging—cybersecurity threats. A recent study from DeepStrike found 85% of businesses reported some form of deepfake fraud in 2025. Ricardo Amper, founder and CEO of identity verification technology company Incode, told me that more than half of the fraud businesses see today is identity deepfakes, and much of it is undetectable to the human eye.

I spoke with Amper about why deepfake fraud is such a vexing problem and what CISOs can do about it. This conversation has been edited for length, clarity and continuity.

Why is deepfake fraud such a big problem?

Amper: It is an almost-zero-cost attack. The cost to create a reliable deepfake and inject it into your phone is negligible.

That’s scary, because it means you can program algorithms or take AI agents and deploy attacks at a scale that we’ve never seen. Before, you would have to have a person with a camera; software would change your voice and your face, and you’d start an interview or a process where you’re trying to verify identity or talk to the help desk. Now we’re very close to a point where AI is so good conversationally that you can deploy that times one million, so the breadth of the attack is something that we’ve never seen.

One successful attack [can be devastating]. If they’re able to successfully impersonate the CEO or someone who has a lot of access to information, the damage they can do is reputational and potentially in the billions of dollars. And that’s so scary because just one of these attacks can do serious damage to a company.

Do you see anything happening in the future that would make deepfakes less of a threat?

Incode is very specific about what we think the solution is. It’s two things: First, gen AI has an improvement rate that is incredibly fast, and the AI that’s defending against AI attacks should be able to learn just as fast—meaning whenever the system sees a specific attack, it should learn, and everybody should get the benefit.

The second is more of a collaboration thing. In what is still a very primitive industry, we have to partner with DMVs to be able to biometrically verify citizens—with their consent: not the picture in the ID, which you can fake, but their face against the face on file with the DMV.

But the best collaboration is if all companies share these insights without breaking privacy. If you understand the behavior of people, then you limit the attack space for bad actors. And that is really the solution.

This issue is something that every CISO is aware of. What do you recommend they do to fight it?

Follow the same type of procedures you would in cybersecurity.

The first thing is you need to scan. You have a database of employees. You have employees logging into video conferencing. Make sure the historic information you have is there, and verify that people are real—that they are who they say they are and that their IDs are also real. I think it’s the equivalent of antivirus scanning, but for identity. Make sure you try to size up the problem, regardless of the [cybersecurity] vendor you choose.

Make sure that when you reset a password, or when you interact with someone from the outside—either an employee who’s completely locked out or someone new—you’re incorporating [deepfake detection] services, which are pretty easy to install. In terms of the quality of how you detect deepfakes: Are you just looking at the ID, or are you actually pinging the DMV or the government? Are you trying to figure it out by yourself, or are you connected to a group of companies who are trying together?

Make sure that you attack the weakest link. [For example, unofficial] “I-9 centers” are a terrible idea. You can pay $500 and then [it will] verify whatever they want.

The broader conversation is that this is not just a CISO threat. It’s a board discussion. The board broadly determines how much insurance to pay for and what the big mitigations are. What are you going to do about risks that are, in theory, low probability but could ruin your company very quickly?

Commercial real estate services and investment firm CBRE hired Anuj Kadyan as its new chief technology and transformation officer, effective May 15. Kadyan will join the company from McKinsey & Company, where he works as a senior partner and co-leader of the technology services practice.

Agricultural machinery firm AGCO promoted Jena Holtberg-Benge to its chief digital & information officer role, effective March 16. Holtberg-Benge previously worked as the company’s vice president of aftersales parts.

Agentic automation leader UiPath promoted Raghu Malpani to the expanded role of chief product and technology officer, effective March 25. Malpani joined the firm in 2023 as chief technology officer, and his new appointment combines leadership of the product and engineering teams.

It doesn’t matter how many AI contracts your enterprise has—shadow AI is likely a problem at your company. Here are ways to determine how serious the issue is and strategies to curb it.

Vibe coding makes it easier to turn your dreams into computer code, but—just like actually writing code—it can burn you out. Here are five steps to take to make sure your vibe coding is productive.

Which tech leader said this week he thinks AGI has been achieved—but was speaking in general terms and not talking about anything specific?

See if you got the answer right here.


© Forbes