Editor's Note
This week's stories share a single pressure point: the gap between what AI can now do and the frameworks built to govern it is closing — fast, and from every direction.
Meta's space solar commitment signals that hyperscalers have given up waiting for the grid. The White House blocking Anthropic's Mythos expansion shows governments now treat AI distribution as a national security decision. Goldman's Hong Kong block proves that contract geography has become a live compliance risk. And China's courts have ruled that the cost of automation cannot simply be passed to workers. The technology risk was always visible. The energy, legal, and regulatory risks are the ones catching organisations off guard.
01
Goldman Sachs Blocks Hong Kong Staff From Using Anthropic's Claude
Goldman Sachs has quietly barred all Hong Kong-based employees from accessing Anthropic's Claude, Bloomberg and the Financial Times reported on 29 April. The restriction followed an internal review of the bank's contractual arrangements, conducted in consultation with Anthropic, which concluded that Hong Kong falls outside Claude's supported markets. The block is geography-specific and applies even to Goldman staff visiting Hong Kong from other offices. Other AI tools — including OpenAI's ChatGPT and Google's Gemini — remain available on the bank's internal platform. Anthropic confirmed that Claude has never been officially supported in Hong Kong.
Why it matters: Goldman may have done the legal due diligence that competitors have not. Every multinational financial institution with an enterprise AI contract needs to audit whether geographic licensing terms create compliance exposure — or an unequal AI capability across its global workforce. The next question is whether other major banks follow.
02
OpenAI Missed Its Own Revenue and User Targets in Early 2026, CFO Warns on Compute Costs
The Wall Street Journal reported on 28 April that OpenAI fell short of several monthly internal revenue targets in early 2026 and missed its goal of 1 billion weekly active users by year-end 2025. The report cited market-share losses to Anthropic in coding and enterprise, and to Google's Gemini more broadly, as the primary cause. CFO Sarah Friar privately warned colleagues that OpenAI may struggle to fund future data-centre commitments if revenue growth does not accelerate — a pointed concern given the company's $250 billion Azure compute commitment and a planned IPO at a valuation of approximately $1 trillion. OpenAI and CEO Sam Altman called the report "ridiculous" in a joint statement. On the news, Oracle shares fell 4%, Nvidia and Broadcom dropped 3–4%, and CoreWeave fell more than 5%.
Why it matters: A company with a $250 billion compute commitment and a planned IPO cannot afford sustained revenue underperformance. If the gap between OpenAI's infrastructure spend and its revenue trajectory widens, pricing pressure on enterprise contracts, product roadmap changes, or IPO delays become live risks for every organisation that has built its AI strategy around OpenAI products.
Source: CNBC, 28 April 2026 | Fortune, 28 April 2026
03
Meta Signs Deal to Beam Solar Energy From Space to Power AI Data Centres
On 27 April, Meta announced partnerships with two energy startups to address the power constraints facing its AI infrastructure. The first, with Overview Energy, reserves up to 1 gigawatt of space-based solar — satellites in geosynchronous orbit will collect continuous sunlight and beam it as near-infrared light to existing terrestrial solar farms, enabling round-the-clock electricity generation. The second, with Noon Energy, reserves up to 1 GW and 100 GWh of ultra-long-duration carbon-based storage capable of running for over 100 hours — described by Meta as the largest such commitment in the industry. Meta has already contracted more than 30 GW of clean energy and backs 7.7 GW of nuclear capacity. Both partnerships target 2028 pilot demonstrations.
Why it matters: AI's power problem is now a capital allocation problem. Meta is committing balance sheet to energy technologies that don't yet exist at commercial scale — a signal that the hyperscalers have concluded the grid cannot deliver what their AI roadmaps require. Competitors facing the same constraint will be forced to respond with similar bets or accept an infrastructure disadvantage.
Source: Meta Newsroom, 27 April 2026
04
White House Blocks Anthropic's Plan to Expand Its Most Dangerous AI to 70 More Organisations
The White House told Anthropic on 30 April that it opposes the company's plan to expand access to Mythos — its advanced AI model capable of autonomously finding and exploiting software vulnerabilities — from approximately 50 organisations to 120, according to the Wall Street Journal, confirmed by Bloomberg. Mythos, launched in early April under Project Glasswing, was initially made available only to a small group including Amazon, Google, and JPMorgan. The administration cited two concerns: misuse risk and insufficient compute capacity to serve both government agencies and an expanded user base. The NSA is among agencies currently testing Mythos. The Pentagon separately maintains Anthropic is a national security supply chain risk, following a breakdown over autonomous weapons use. Defence Secretary Pete Hegseth called Anthropic's CEO an "ideological lunatic" during Congressional testimony on 1 May.
Why it matters: This is the first time the US government has formally intervened to block an AI company from expanding access to its own commercial product. Every enterprise using Anthropic models should monitor this closely: a company simultaneously barred by the Pentagon, blocked from expanding its most advanced model, and subject to a possible executive order represents an unusual concentration of regulatory risk for any procurement team to absorb.
Source: Bloomberg, 30 April 2026 | Axios, 1 May 2026
05
China's Courts Rule It Illegal to Fire Workers and Replace Them With AI
A Hangzhou court ruled on 30 April that a technology company's dismissal of a senior employee — whose role had been automated by AI and who was offered a replacement position at a salary cut from 25,000 to 15,000 yuan — constituted illegal termination. The Hangzhou Intermediate People's Court held that AI adoption is a controllable business strategy, not an unavoidable economic disruption, and that the cost of technological transformation cannot be transferred to the employee. The case is among several emerging from Chinese cities, including Beijing, where a data-mapping worker replaced by AI won a similar claim through arbitration last year. The Hangzhou ruling was timed to coincide with China's May 1 Workers' Day.
Why it matters: Any multinational manufacturer or technology company with significant operations in China now faces legal exposure if AI-driven restructuring results in dismissals without retraining or fair redeployment. The ruling creates a direct cost to automation in the world's largest manufacturing economy — and signals a legislative direction that other jurisdictions may follow.
Source: Caixin Global, 30 April 2026 | NPR, 1 May 2026
This Week's AI Tip
Use AI to Stress-Test Your Own Decisions
Most people use AI to gather information; few use it to challenge their own thinking. The devil's advocate prompt changes that: share a decision you're about to make and ask the AI to argue the other side.
It takes two minutes. It forces you to articulate your reasoning. And it surfaces the counterarguments before the room does.
Before
"What are the risks of delaying our AI governance policy?" — you receive a generic list.
After
"I'm planning to delay our AI governance policy until after the EU AI Act deadline. Here is my rationale: [X]. What are the three strongest arguments against this decision?" — you receive a targeted challenge to your specific position.
Use this before any significant decision: a vendor selection, a deployment timeline, a budget commitment. Specific prompts get real answers. Vague ones don't.
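If your team works with an AI model through code rather than a chat window, the "after" pattern above can be templated for reuse. This is a minimal sketch — the function name and wording are illustrative, not part of the tip — that only assembles the prompt string; send it to whichever model your organisation uses.

```python
def devils_advocate_prompt(decision: str, rationale: str, n_arguments: int = 3) -> str:
    """Build a devil's-advocate prompt: state the decision and your
    rationale, then ask the model to argue the other side."""
    return (
        f"I'm planning the following decision: {decision}\n"
        f"Here is my rationale: {rationale}\n"
        f"What are the {n_arguments} strongest arguments against this decision?"
    )

# Example: the governance-policy scenario from the tip above,
# with a hypothetical rationale filled in.
prompt = devils_advocate_prompt(
    decision="delay our AI governance policy until after the EU AI Act deadline",
    rationale="our compliance team is at capacity until Q3",
)
print(prompt)
```

Forcing yourself to write the `rationale` argument is the point: the model can only attack reasoning you have actually stated.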
🌿 Good News · AI Making a Difference
Google Commits $30 Million to Accelerate AI Breakthroughs in Health and Climate Science
Google.org's "AI for Science" Challenge is distributing $30 million to nonprofits, social enterprises, and academic institutions using AI to accelerate scientific discovery. Priority areas include health and life sciences — among them antimicrobial resistance detection and crop disease resistance — and climate resilience. Selected teams will also receive six months of pro bono support from Google AI engineers alongside Google Cloud credits. Applications closed 1 May 2026. Google has run similar impact challenges since 2010; this is its first focused exclusively on AI-accelerated science.
Source: Google.org, April 2026
If you’ve been forwarded this email, you can subscribe here

