Beyond Silicon Valley: Building AI Governance on the Fair Go Principle — Aubrey Blanche at AI Engineer Melbourne 2026
AI governance looks the same everywhere because the AI industry looks the same everywhere: concentrated in Silicon Valley, shaped by American values, and exported globally as though those values are universal.
But they're not. The assumptions baked into American AI development—individual liberty over collective welfare, technological solutionism as the default response to social problems, winner-takes-all competition as inevitable—don't map cleanly onto cultures with different histories and principles. Australia's "fair go" ethos, rooted in egalitarianism and collective responsibility, stands in genuine tension with the individualism and disruption narratives that dominate AI discourse in the US.
What does responsible AI look like when you take that seriously? Not as a cosmetic adjustment to American frameworks, but as a genuine rethinking built on Australian principles?
This is Aubrey Blanche's question. As a recovering American now based in Australia, she's spent years thinking about how organisational culture shapes technology. Her work at The Mathpath—an unusual name that captures her identity as both mathematician and empath—focuses on embedding equitable design into AI systems from the start, not as an afterthought.
The difference matters. When you build AI governance on "individual liberty," the framework optimises for choice and innovation, but it tolerates inequality as the cost of those goods. When you build on "fair go," the framework optimises for equitable access and collective welfare. The outcomes look different.
Consider a simple case: algorithmic hiring. A Silicon Valley approach might emphasise individual merit, assuming that once algorithmic bias is exposed, competition will correct it. An Australian approach rooted in the fair go principle asks: how do we ensure that AI doesn't entrench existing inequalities? How do we build trust through demonstrated fairness rather than just correcting individual errors?
The difference is structural, not surface-level. It's the difference between asking "Is this legal?" and asking "Is this fair to everyone it affects, regardless of their starting position?" It's the difference between optimising for innovation velocity and optimising for ensuring no one gets left behind.
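To make that structural difference concrete, here is a minimal sketch of what "fair to everyone it affects" might look like as a check you can actually run: a group-level selection-rate audit in the spirit of the four-fifths rule, rather than case-by-case error correction. The function names, threshold, and data below are illustrative assumptions, not anything from Blanche's talk.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Return the selection (hire) rate for each demographic group.

    `decisions` is an iterable of (group, selected) pairs, where
    `selected` is True if the screening system advanced the candidate.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def fair_go_audit(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the best-off group's rate (the 'four-fifths' rule, used here
    purely as an illustration).

    The unit of analysis is the group, not the individual decision:
    the question is whether the system is fair to everyone it affects.
    """
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative data, not from any real hiring system.
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 25 + [("B", False)] * 75)
print(fair_go_audit(decisions))  # {'B': 0.625}: group B is flagged
```

The design choice is the point: the audit's unit of analysis is the group rather than the individual decision, which is the fair go framing expressed as a runnable check.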
There's a deeper issue here about cultural imperialism in technology. American tech companies have enormous reach. They build products used globally. Their values, embedded in algorithms and design decisions, shape how people around the world experience technology. If those values reflect Silicon Valley rather than the communities using the technology, you've exported a particular worldview as though it's universal.
Blanche's work on distinctly Australian responsibility frameworks takes this head-on. What does it mean to build AI that serves the common good? Not as a feel-good mission statement, but as a genuine constraint on how you build? It means transparency about what your systems do and why. It means ensuring equitable access—that AI benefits aren't concentrated among people who can afford premium versions. It means building trust through integrity, not through marketing claims.
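One way to read "transparency as a genuine constraint" in engineering terms is to require that every automated decision carries its own explanation at the moment it is made, rather than reconstructing one after a complaint. The sketch below is an illustrative assumption, not a framework from Blanche's work; every field name is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """An audit record emitted for every automated decision.

    Illustrative fields only: the point is that the 'why' is
    captured at decision time and is available for audit.
    """
    subject_id: str      # who the decision affects
    decision: str        # what the system decided
    model_version: str   # which model produced it
    inputs_used: dict    # the features the model actually saw
    rationale: str       # human-readable reason for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    subject_id="candidate-1042",
    decision="advance_to_interview",
    model_version="screening-v3.1",
    inputs_used={"years_experience": 7, "skills_match": 0.82},
    rationale="Skills match above 0.8 threshold for the role.",
)
print(record)
```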
This also connects to practical resilience. Australia has been hurt by dependence on US tech before: content moderation decisions made in California, business models built on Australian data but profiting elsewhere, infrastructure vulnerable to geopolitical shifts. Building AI governance frameworks rooted in Australian principles isn't just ethically better; it's practically more robust.
There's no suggestion here that Australian principles are perfect or that Silicon Valley got everything wrong. But the honest conversation—the one many organisations aren't having—is that responsible AI cannot be culturally neutral. It reflects choices about what matters, who benefits, and how we treat people. Those choices should be made locally, by people who understand the culture and values they're serving.
Aubrey Blanche has been featured in Wired, the Wall Street Journal, and the Australian Financial Review precisely because she articulates something that matters to people working in AI: the possibility that we don't have to accept Silicon Valley's definitions of responsibility, innovation, or progress.
Her presentation at AI Engineer Melbourne 2026 (June 3–4) will explore what responsible AI looks like when you build it on the fair go principle instead of importing frameworks from overseas—and why that matters for everyone building technology in Australia today.
