73% of Government AI Projects Flagged as Risky. No Law Addresses It.

A blank page where a statute should be on one side, a purchase order that flagged risks in 73% of cases on the other: how AI procurement became governance.

Between September and November 2024, 21 Australian government agencies piloted a new AI assessment tool and found risks in 73% of use cases examined. During the same period, the European Union finalized its AI Act with fines reaching 7% of global revenue, while the United States published a framework that explicitly instructs Congress not to create any new regulatory agencies and urges preemption of state AI laws that “impose undue burdens.” Australia, meanwhile, released a procurement checklist for AI systems.

One of these approaches has produced actual safety data. It is not the one with the biggest fines.

Australia’s Digital Transformation Agency piloted its AI Assurance Framework across 21 volunteer agencies. Impact assessments flagged risks in 73% of AI use cases examined: risks that close to two-thirds of participants said existing processes would not have caught. Meanwhile, most EU Member States have yet to designate an enforcement contact for the AI Act. Senate Majority Leader John Thune conceded the US still needs to “figure out how to do this in a way that addresses the concerns that a lot of our members have about not trampling states’ rights.”

Three democracies, three governance strategies; and the country without a single AI statute is the only one producing operational safety data. A year ago none of these positions existed. The speed of divergence is itself a finding.

AI Procurement as Governance

“Procurement is an important turning point where planning meets implementation,” said Lucy Poole, Deputy CEO of Australia’s Digital Transformation Agency (ARNnet). Note the key word: implementation. Not regulation, not enforcement: a purchase order with a checklist attached.

Between September and November 2024, 21 agencies volunteered to pilot the DTA’s impact assessment tool, roughly one in five.(https://www.dta.gov.au/media-releases/ai-policy-overhauled-new-impact-assessment-tool-and-procurement-guidance) According to the pilot report, approximately 90% found the guidance helpful, and 70% reported the questions were clear enough to complete without specialist training.(https://www.digital.gov.au/policy/ai/ai-assurance-framework-pilot-report/findings-recommendations) That second figure matters most: a risk assessment tool that requires a specialist to operate will never scale beyond compliance teams; it becomes another audit rather than a decision-making instrument. By December 2025, the DTA’s AI Plan for the Australian Public Service moved from pilot to operational policy.

Critically, participating agencies self-selected low-risk, less complex AI use cases for the pilot. Facial recognition in border control, automated welfare eligibility, predictive sentencing: none made the cut. That means the 73% risk flag rate is a floor, not a ceiling.(https://www.digital.gov.au/policy/ai/ai-assurance-framework-pilot-report/findings-recommendations) It describes the safest slice of government AI deployment, and even that slice contained previously invisible risks.

Twelve weeks. No legislation. No enforcement body. No fines. One checklist and a purchase order, and Australia became the only democracy with operational AI safety data.

Laws Without Enforcers, Frameworks Without Teeth

On paper, the EU AI Act carries the most severe penalty structure in AI governance. Article 99 authorizes fines reaching 7% of worldwide annual turnover for prohibited practices, 3% for other infringements, and 1% for incorrect information supplied to regulators. A vendor with €10 billion in global revenue faces maximum exposure of €700 million: a graduated penalty structure that assumes someone is grading the work.(https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-99)
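As a quick sketch, the tiered exposure works out as follows (the €10 billion vendor is the article’s hypothetical, not a real company):

```python
turnover = 10e9  # hypothetical vendor with €10B worldwide annual turnover

# Article 99 maximum fines, expressed as a share of worldwide turnover
tiers = {
    "prohibited practices": 0.07,
    "other infringements": 0.03,
    "incorrect information": 0.01,
}

# Maximum exposure per infringement category for this vendor
exposure = {name: turnover * rate for name, rate in tiers.items()}
# Top tier caps at roughly €700M
```

The point of the graduated structure is that exposure scales with revenue, which is exactly why it presumes a regulator exists to apply it.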

Penalties without enforcers are press releases.

Each member state needed to designate competent national authorities by August 2, 2025, but only 8 of 27 have complied seven months later. Researchers at the European Parliamentary Research Service warn the resulting gaps are “creating enforcement challenges across the single market”: diplomatic phrasing for a penalty regime that currently covers less than a third of EU territory. Extrapolate the compliance rate:

8 states ÷ 7 months = 1.14 states/month → 19 remaining ÷ 1.14 ≈ 16.7 more months → full compliance by approximately July 2027

At current pace, the EU will need roughly two and a half years from the AI Act entering force to stand up enforcement across all member states, and establishing enforcers is merely a prerequisite for producing safety data. Australia generated operational risk data in twelve weeks: a speed advantage of roughly an order of magnitude, before the EU has identified its first risk.

€700 million × (19 non-compliant states / 27 total) = approximately €493 million in authorized penalties with no enforcement mechanism behind them

(https://epthinktank.eu/2026/03/18/enforcement-of-the-ai-act/)
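Both projections above can be reproduced in a few lines, assuming (as the article does) that the designation pace of the first seven months holds unchanged:

```python
# Linear extrapolation of enforcement-authority designations.
compliant, total, months_elapsed = 8, 27, 7

pace = compliant / months_elapsed              # ≈ 1.14 states/month
months_remaining = (total - compliant) / pace  # ≈ 16.6 more months to full compliance

# Authorized penalties currently without an enforcer behind them:
# the €700M maximum weighted by the non-compliant share of states.
max_fine = 700e6
unbacked = max_fine * (total - compliant) / total  # ≈ €493M
```

A linear pace is the simplest assumption available; if late-designating states cluster near the deadline of political pressure, the real date could land earlier.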

EPRS analysis frames this as an enforcement logistics problem. But data from Australia’s pilot indicates a deeper flaw: the EU built a penalty architecture before building a risk identification system. You cannot fine vendors for harms you have not measured, and no EU member state has yet produced safety data comparable to what Australia’s checklist for AI systems generated in twelve weeks.

Across the Atlantic, the White House framework proposes the inverse. Rather than penalties it cannot enforce, it offers regulatory sandboxes with exemptions lasting up to 10 years while seeking preemption of state AI laws that might fill the federal gap. “Another chance for tech companies to launch harmful products with no accountability,” said Brad Carson of Americans for Responsible Innovation (Roll Call). Carson’s organization lobbies for binding rules, which colors the criticism, but a framework that preempts state enforcement without building federal capacity produces a governance vacuum, not a strategy.

What the DTA’s pilot report and the EPRS enforcement analysis reveal is what this analysis identifies as the Procurement Shortcut: government purchasing power delivers governance outcomes faster than legislation because it requires a purchase order, not a parliamentary majority. The shortcut works. It works faster than anything else on offer.

Now measure how far it reaches.

Where the Shortcut Breaks

Consider what the DTA framework actually governs. Not AI products. Not vendor engineering practices. Not algorithmic behavior in deployment. It governs a transaction: one purchase by one government buyer.

Australia’s Commonwealth public service comprises over 100 departments and agencies. Twenty-one volunteered for the pilot, roughly one in five. Even if the operational policy eventually achieves universal adoption across every federal entity, government procurement represents a fraction of economy-wide AI deployment in any country. Private hospitals, insurers, employers, lenders, landlords, social media platforms: the AI systems Australians are most likely to encounter sit entirely outside the procurement perimeter.

That is the turn. That feature, the absence of legislation, is inseparable from its structural limitation: no jurisdiction beyond the buyer.

A vendor that fails an impact assessment faces no fine, no regulatory action, no public record. It loses the contract. It remains free to sell the identical product, unassessed and unmodified, to hospitals, schools, and private companies across Australia or anywhere else. The predictable second-order consequence: vendors will bifurcate offerings, maintaining assessment-ready products for government contracts and cheaper, unaudited versions for everyone else, widening the safety gap AI procurement aimed to close.

Products that triggered safety flags in most government assessments reach private-sector buyers with no assessment at all. Analysis of the UK’s voluntary copyright framework exposed the identical dynamic: opt-in governance protects sophisticated institutional participants while leaving individuals and smaller organizations without recourse.

Defenders argue lighter regulation avoids stifling innovation, a position the Center for Data Innovation advanced by characterizing the White House framework as avoiding “alarmism” (Roll Call). That characterization underestimates the structural gap. Light regulation and zero regulation produce identical outcomes for anyone encountering AI outside a government contract. Both camps in the American debate, those demanding binding rules and those seeking regulatory restraint, assume the argument is between more law and less law.

It is actually between buyers who can demand safety documentation and everyone who cannot.

Three Models, One Verdict

Multiply the 73% risk flag rate by the approximately 67% invisibility rate, the share of participants who said existing governance would have missed the flagged risks entirely (Roll Call).

73% × 67% ≈ 49%

Nearly half of government AI deployments carry risks that standard governance processes would never surface, and that figure derives from self-selected low-risk pilot cases, making it a conservative floor. If the risk flag rate scales even modestly with deployment complexity (say 73% for low-risk, 85% for medium, 95% for high), a government AI portfolio weighted 50/30/20 across tiers yields a blended rate of approximately 81%. The invisible-risk share rises accordingly:

81% × 67% ≈ 54%

Meaning more than half of a representative portfolio harbors risks no standard process would catch.
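The blended-rate arithmetic, with the hypothetical tier rates and 50/30/20 weights above, as a quick check:

```python
# Hypothetical risk-flag rates by complexity tier and portfolio weights
# (illustrative assumptions from the scenario above, not pilot data).
rates   = {"low": 0.73, "medium": 0.85, "high": 0.95}
weights = {"low": 0.50, "medium": 0.30, "high": 0.20}

blended = sum(rates[t] * weights[t] for t in rates)  # ≈ 0.81

invisible = 0.67               # share of flagged risks existing processes miss
undetected = blended * invisible  # ≈ 0.54 of the portfolio
```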

What that 49–55% costs: organizations deploying AI without structured assessments face project failure remediation that frequently exceeds $7.2 million per incident. A government agency running 20 AI use cases without impact assessments can expect roughly 10 to harbor undetected risks. At a conservative 20% materialization rate, two failures per procurement cycle translate to approximately $14.4 million in avoidable cost, before any regulatory penalty applies. Running an impact assessment takes a procurement officer an estimated three days, roughly $2,400 in staff time at typical government pay scales. That is a 6,000:1 return on a three-day checklist.
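The cost model, using the article’s own figures (the $7.2M incident cost and $2,400 assessment cost are its estimates, not audited numbers):

```python
use_cases = 20
undetected_rate = 0.49     # derived above: 73% flag rate × 67% invisibility
materialization = 0.20     # conservative share of undetected risks that fail
cost_per_incident = 7.2e6  # remediation cost per failure (article's estimate)
assessment_cost = 2400     # ~3 days of procurement-officer time

at_risk = round(use_cases * undetected_rate)  # ≈ 10 use cases
failures = at_risk * materialization          # ≈ 2 per procurement cycle
avoidable = failures * cost_per_incident      # ≈ $14.4M
roi = avoidable / assessment_cost             # ≈ 6,000:1
```

Note the ratio compares the cycle’s avoidable cost against a single assessment; assessing all 20 use cases would cost $48,000, still a return of roughly 300:1.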

| Dimension | Australia | EU | US |
| --- | --- | --- | --- |
| Legal authority | Contract terms | Binding regulation | Non-binding guidance |
| Maximum penalty | Loss of contract | Up to 7% global turnover | None |
| Enforcement status | Operational since Dec 2025 | 8/27 states compliant (full compliance ~July 2027) | No enforcement body |
| Time to first safety data | 12 weeks | 7+ months and counting (zero data) | No timeline |
| Safety data produced | 73% risk flag rate | None | None |
| Citizens protected | Government users only | All EU residents (in theory) | None specified |

Before the next AI decision crosses any desk, apply this:

Governance Gap = (AI use cases in portfolio) × 0.49

Score > 3 → undetected AI risks likely exceed current governance capacity.

Score > 10 → impact assessments should precede contract renewal, regardless of jurisdiction.
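The rule of thumb translates to a few lines of Python (a sketch; the 0.49 multiplier and both thresholds are the article’s, not a standard):

```python
def governance_gap(ai_use_cases: int, undetected_rate: float = 0.49) -> float:
    """Expected number of use cases carrying risks standard governance misses."""
    return ai_use_cases * undetected_rate

def triage(score: float) -> str:
    """Map a governance-gap score to the article's recommended action."""
    if score > 10:
        return "run impact assessments before contract renewal"
    if score > 3:
        return "undetected risks likely exceed current governance capacity"
    return "within current governance capacity"
```

A 20-use-case portfolio scores 9.8: past the first threshold, just short of the second.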

For procurement officers: run the gap calculation against active contracts now. For department heads: adopt Australia’s public assessment framework as a template; the checklist is publicly available and requires no enabling legislation. For vendors: budget for assessment compliance now; Australia’s guidance is already operational, and EU high-risk AI requirements activate by August 2027. For citizens: procurement governance will never reach beyond government transactions; closing that gap requires the legislation three democracies are still struggling to deliver.

Final decisions on AI governance will not come from a court or a legislature. They will come from an AI procurement officer opening a spreadsheet, checking a risk score, and deciding whether to sign. For every other democracy, the question is no longer whether to govern AI; it is whether to wait for a law or start with a purchase order.
