
Moving From AI Risk To AI Governance

09.04.2026

AI’s capabilities are multiplying every week, with more possibilities and problems emerging with each new model or application launched. Anthropic’s Claude Mythos model embodies both: the power to detect long-hidden vulnerabilities in code that has been vital to programming for decades, and the ability to find weaknesses that could paralyze important computer systems.

Some of the things Mythos has found are astounding: a nearly three-decade-old bug in OpenBSD, one of the most security-hardened operating systems, that would let anyone crash a machine remotely; a long-overlooked 16-year-old vulnerability in the video encoding library FFmpeg; and multiple places in the Linux kernel where someone could take complete control of a computer.

The disclosure of Claude Mythos’s capabilities, and the tightly controlled way the platform’s cybersecurity preview is being released, reinforces Anthropic’s public image as the moral center of the AI boom. After the company took a principled stand against the Pentagon’s potential uses of AI technology, keeping private a platform that could disrupt the order of online society shows that the company is weighing the broader impact of AI.

For enterprises to work effectively with any AI, they need to develop a governance strategy to ensure their proprietary data is protected and that systems deliver high-quality outputs. I spoke with Andrew Gamino-Cheong, cofounder and CTO of AI governance company Trustible, about how to establish these standards. An excerpt from our conversation is later in this newsletter.

This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders. Click here to get it delivered to your inbox every Thursday.

Anthropic has a new AI cybersecurity model—Claude Mythos—that it is keeping private, only sharing it with a handful of companies that manage critical software through an initiative called Project Glasswing. The platform is a powerful tool for finding cybersecurity issues, and Forbes contributor Jon Markman writes that it identified thousands of zero-day vulnerabilities in critical systems in just a few weeks of testing—hence the company’s decision not to give the public access to it.

“The fallout—for economies, public safety, and national security—could be severe,” an Anthropic blog post states. “Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.”

Markman writes that Claude Mythos will reset cybersecurity as a whole. Powerful AI that helps defenders can make the digital world much more secure, but a tool like this in the hands of bad actors could be devastating. Forbes contributor Paulo Carvão writes that the premium pricing model for companies using Claude Mythos also establishes a vital revenue stream for Anthropic.

ARTIFICIAL INTELLIGENCE

This week, Meta launched Muse Spark, the latest model in its quest for a bigger share of the AI market. The first AI model released under the leadership of Scale AI founder Alexandr Wang, Muse Spark is the beginning of Meta’s overhaul of its AI suite, a step toward what the company describes as “personal superintelligence.” The long-anticipated model was created after Meta’s Llama 4 fell behind chatbot-style AI platforms from OpenAI and Anthropic. Meta took a 49% stake in Scale AI to bring Wang aboard last year.

In a blog post introducing the new model, Meta compares Muse Spark to other leading chatbots in understanding, text reasoning, health advice and agentic tasks—including coding, search and office tasks. Muse Spark has comparable performance across the board, though it remains to be seen how well users will rank it. Meta says it is already working on improvements, including a “contemplating mode” to handle more complex problems and coding improvements.

Since its launch on Wednesday, not much has been said about Muse Spark’s vibe coding prowess, but enterprise use doesn’t seem to be the primary thrust of the initial launch. After all, a Meta social media account—used for Facebook, Threads or Instagram—is now needed to use the tool.

A reality of the workplace in 2026: AI is forcing job cuts. That’s what companies are saying, anyway. A quarter of the 60,620 job cuts reported in career services firm Challenger, Gray & Christmas’s March report were attributed, at least in part, to AI. According to the firm, nearly 100,000 job cuts to date have been blamed on AI, with companies saying that AI makes their operations more efficient and that they need fewer people. Many of these job losses have been in the tech sector, which lost 18,720 jobs in March and more than 52,000 in the first quarter of 2026.

A new study from the Return on AI Institute, the research arm of Scaled Agile, finds that many companies are all too willing to use AI as an excuse to reduce their headcount, Forbes senior contributor Joe McKendrick writes. Companies that act early are cutting headcount at a rate 30 times higher than those waiting to see whether AI pays off, with 60% making staff reductions because of the technology, and nearly as many slowing or freezing new hiring in anticipation of future productivity gains.

The study recommends that businesses be a bit more deliberative—they should make sure they are actually gaining the value and efficiency they expect from AI before eliminating jobs. And one of the best ways to increase the ROI of AI is to give existing employees and company leaders training to use it better—which leads to 23% more value realization.

The latest company to blame AI for layoffs is Oracle, which last week announced plans to cut up to 30,000 jobs, close to a fifth of its global workforce. Forbes contributor Jon Markman writes that new efficiency from AI is part of the story behind those losses, but the cost of AI infrastructure plays a much greater role. Oracle’s aggressive AI infrastructure buildout is projected to cost $50 billion this fiscal year alone, $15 billion more than initially anticipated. The company raised $50 billion in debt and equity to pay for it, but Forbes senior contributor Peter Cohan writes that borrowing costs are rising because banks are becoming less willing to finance these buildouts, so Oracle turned to its salary line item. Markman writes that most of the eliminated roles were in legacy software maintenance, on-premises support and traditional SaaS operations, functions that AI could presumably handle.

How To Establish Solid AI Governance

Governance is a vital part of AI technology in every enterprise, ensuring that applications protect proprietary company data, work responsibly, and deliver accurate and reasonable outcomes that won’t be challenged in court. Andrew Gamino-Cheong, cofounder and CTO of AI governance company Trustible, spoke with me about how to put the best governance strategy in place.

This conversation has been edited for length, clarity and continuity.

How do companies tend to think about AI governance?

Gamino-Cheong: It’s seen as a problem that’s quickly growing. There’s internal complexity around AI governance: there are a number of models and AI systems, and as a system’s been running for a while, you accumulate data that can be an artifact you want to inspect. The external context is also increasing. That is the laws, the regulations, even the best practices for using AI, the way you measure the ROI for these systems.

Those two things combined make the AI governance problem space. We did a recent partnership with the AI Incident Database. It’s the best source of documented instances: organized, annotated instances of where AI’s gone wrong. And that’s what orgs really want to track. It’s like: I deployed a bunch of systems; if somebody else had a problem with one of them, I want to know about that so I can prevent it from happening to me. Can I prevent the lawsuit before it occurs?

That’s what they’re trying to focus on right now. But it’s tough because AI is changing very quickly. There are a lot of different ways you can deploy it: you can build your own model, use a third-party model, use it as an API, use an app that’s wrapped around programs and data, or an agent that’s operating inside your environment.

Maybe 15% to 20% of companies are moving forward and are just going to figure out the governance afterwards. They’re heavily concentrated in Silicon Valley. We're focused on all the orgs who are taking it seriously because even with AI systems you buy, ‘AI snake oil’ is still going to end up with an incident, a lawsuit on the front page. Business leaders are under obligations to mitigate that.
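To make the incident-tracking idea concrete, here is a minimal Python sketch of matching an inventory of deployed systems against a feed of incident records. Every name and field in it, from "DeployedSystem" to "incident_feed", is a hypothetical illustration, not the AI Incident Database’s actual schema or Trustible’s tooling.

```python
from dataclasses import dataclass

@dataclass
class DeployedSystem:
    name: str       # internal system name
    model: str      # underlying foundation model
    use_case: str   # e.g. "clinical note-taking"

# Hypothetical incident records; the real AI Incident Database
# has its own schema, which this sketch does not reproduce.
incident_feed = [
    {"model": "example-model-v2", "use_case": "clinical note-taking",
     "summary": "Transcription errors entered patient records."},
    {"model": "other-model", "use_case": "resume screening",
     "summary": "Disparate selection rates across groups."},
]

def relevant_incidents(systems: list[DeployedSystem], feed: list[dict]) -> list[dict]:
    """Flag incidents that share a model or a use case with anything we run."""
    models = {s.model for s in systems}
    uses = {s.use_case for s in systems}
    return [i for i in feed if i["model"] in models or i["use_case"] in uses]

deployed = [DeployedSystem("scribe", "example-model-v2", "clinical note-taking")]
for hit in relevant_incidents(deployed, incident_feed):
    print(hit["summary"])  # -> Transcription errors entered patient records.
```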

AI technology is constantly evolving, and new AI-related regulations are being drafted and debated all over the world. What should a company do to develop governance that won’t be outdated in a few months?

The first thing we recommend for orgs is to figure out your criteria for the fast track. What is the safe space where you can greenlight projects relatively quickly, with a minimal amount of additional work, and what are the areas that need more focus? If you can define that pretty clearly, you’re able to start spending your effort on the right areas and then get guidance on whether you even have enough documentation.

The number one pain point is that orgs often don’t have good business-case system descriptions with enough information. The biggest risk is that if you don’t collect enough information upfront and a new law comes in, you’re not going to know immediately whether it’s in scope.
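As a rough illustration of what that upfront record and fast-track triage might look like, here is a minimal Python sketch. The fields, the allowed values, and the "fast_track" criteria are hypothetical examples, not Trustible’s methodology or any regulator’s definition of scope.

```python
from dataclasses import dataclass, fields

@dataclass
class UseCaseRecord:
    """A business-case system description captured at intake."""
    name: str
    owner: str
    deployment: str            # "own model" | "third-party API" | "wrapped app" | "agent"
    data_sensitivity: str      # "public" | "internal" | "regulated"
    affects_individuals: bool  # hiring, credit, health decisions, etc.
    jurisdiction: str          # where outputs are used; drives which laws apply

def fast_track(record: UseCaseRecord) -> bool:
    """Greenlight quickly only inside a narrow, pre-defined safe space."""
    return record.data_sensitivity == "public" and not record.affects_individuals

def missing_fields(record: UseCaseRecord) -> list[str]:
    """Gaps here are what keep you from knowing, when a new law
    arrives, whether a system is in scope."""
    return [f.name for f in fields(record) if getattr(record, f.name) in ("", None)]

record = UseCaseRecord(name="support-bot", owner="cx-team",
                       deployment="third-party API", data_sensitivity="public",
                       affects_individuals=False, jurisdiction="US")
print(fast_track(record), missing_fields(record))  # True []
```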

With good governance, you should never go with the first model you can get access to just because it’s the easiest or the cheapest. Governance is about the trade-offs we’re going to make between accuracy and cost.
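A toy version of that trade-off, with made-up model names, accuracy scores and prices: pick the cheapest model that still clears your accuracy bar on your own evaluation set, rather than the first one you got access to.

```python
# Hypothetical candidates: (name, accuracy on your eval set, $ per 1K requests)
candidates = [("model-a", 0.91, 4.00), ("model-b", 0.88, 1.20), ("model-c", 0.82, 0.30)]

def pick(candidates, min_accuracy=0.85):
    """Return the cheapest model that still clears the accuracy bar."""
    eligible = [c for c in candidates if c[1] >= min_accuracy]
    return min(eligible, key=lambda c: c[2]) if eligible else None

print(pick(candidates))  # ('model-b', 0.88, 1.2)
```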

What advice would you give to a CIO about getting governance established and making sure that it’s effective and solid throughout the enterprise?

Every org is struggling with shadow AI. We’ve talked to a couple of hospital systems that initially banned all gen AI, then every doctor started using their own note taker. Now they’re in a worse position than if they had approved a note taker up front. We advocate for creating an experimentation garden. Letting people do some experiments in there is better than having them use AI on their phones. You have to do AI almost out in the open.

There are a lot of people using AI poorly. Very few people are trained on prompt engineering or context engineering. We require that when you use AI, you disclose one of three levels. Level one is almost entirely AI-written; it’s a vibe-coded thing. Level two is where you did 50% of the work yourself: the outline, points to make, structure, tone. Level three is really just copy editing and figuring out what to cut. By setting the expectations and requiring the disclosure, we’re getting people to be a lot more thoughtful about being transparent. People are going to start assuming something was AI generated until proven otherwise, and getting comfortable with disclosing that is the better path.
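Those three levels are simple to encode. This sketch is one hypothetical way an org might tag documents, not Trustible’s internal tooling:

```python
from enum import Enum

class AIDisclosure(Enum):
    # The three levels described above
    LEVEL_1 = "almost entirely AI-written"
    LEVEL_2 = "human outline, structure and tone; AI drafted the rest"
    LEVEL_3 = "human-written; AI used for copy editing and cuts"

def tag(document_title: str, level: AIDisclosure) -> str:
    """Attach the required disclosure to a finished document."""
    return f"{document_title} [AI use: {level.value}]"

print(tag("Q3 roadmap memo", AIDisclosure.LEVEL_2))
# -> Q3 roadmap memo [AI use: human outline, structure and tone; AI drafted the rest]
```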

Documenting your systems well enough upfront will help you prepare for the uncertainty that’s going to happen.

Home improvement retailer Home Depot appointed Franziska Bell as its new executive vice president and chief technology officer, effective April 6. Bell joins the firm from Ford Motor Company, where she was chief data, AI and analytics officer; she has also held leadership roles at BP and Uber.

Financial services and data provider S&P Global selected Firdaus Bhathena as its new executive vice president and chief technology and transformation officer, effective April 27. Bhathena most recently worked as global chief technology officer at FIS Global, and he steps into a newly created role.

Contracting services provider Dycom Industries hired Regina Salazar as its new senior vice president and chief information and digital officer, effective April 6. Salazar most recently worked as senior vice president and chief digital and information officer at Novelis, and she also held leadership roles at the Whirlpool Corporation.

After OpenAI signed the controversial deal with the Pentagon—pushing out Anthropic as it negotiated the ethical terms of the agreement—millions of people quickly uninstalled ChatGPT. Here’s what you should know about user politics and passion when it comes to online tools.

Vibe coding makes it easy to create your own software tools. Here are some ways to avoid SaaS subscriptions with customized systems.

Which big tech company awarded white-hat hackers $17 million last year for finding vulnerabilities in its systems?

See if you got the answer right here.


© Forbes