Three conversations, one conference: AI Engineer Melbourne
The full schedule for AI Engineer Melbourne is now live. The grid view is the most useful way to scan it. There’s also semantic search, and a recommendation system that surfaces talks related to ones you’re interested in.
We’ve also shipped an agent-friendly view with MCP endpoints, llms.txt, and other interfaces designed for agents — the conference about AI engineering, eating our own dogfood! We’d love to hear how you get your agents to work with the schedule.
Below is the first piece in a short series — what’s at the conference, why we built it the way we did, and what each of the three tracks is offering. Three more follow over the next ten days, one for each track.
There are three conversations happening right now in the practitioner community around AI, and they are mostly happening separately. The first is among software engineers, about how the practice of software engineering — not just the writing of code, but the whole craft and the work that surrounds it — is transforming. The second is among AI engineers, about what production AI systems actually require to be reliable. The third is among engineering leaders, about how to lead organisations through a transition whose shape is not yet clear. If you spend much time on the platforms where engineers gather, you will recognise all three — and you will probably notice that the people in any one of them are mostly not in the other two.
AI Engineer Melbourne, on June 3rd and 4th at Federation Square, is the rare event that brings the three conversations into the same room. We’ve built the programme around three tracks corresponding directly to the three conversations, and the speakers in each are people doing serious work on the question their track is engaging with. The result is something that does not exist anywhere else in this part of the world: a place where the three conversations are forced to meet each other for two days, with the people having them in the room together.
Over the next ten days we’ll publish three companion pieces, one for each track, going into the substance of what each conversation looks like. In this piece we want to bring together some of the broader threads — to describe the shape of the conference as a whole, and to make the case that the three conversations belong together more than the current discourse acknowledges.
The first conversation: what craft becomes
The conversation in software engineering is about what happens to the craft when the agent is doing more of the work. It is not a new conversation — engineers have been having versions of it since Copilot — but in the last year it has changed register. The early phase was about whether the agent was useful. The current phase is about what changes about being an engineer when the agent is reliably useful for an increasing fraction of the work.
The answers people are converging on are uncomfortable. Some of the work is being lost. Some of it is being transformed. Some of it is being amplified — engineers who can articulate what they want clearly are getting more leverage; engineers who learned by writing code at scale are finding the skill less central. The middle of the spectrum, where most senior engineers actually live now, is the part of the conversation least well served by the public discourse, because the discourse tends to be at the extremes: either everything-is-fine or everything-is-over.
The SWE/agentic coding track is unusually willing to engage with the middle. Annie Vella’s keynote on craft in the time of agents will, I suspect, be the most-discussed talk of the conference for this reason. Around her, the track has the strongest cluster of practitioner talks I have seen at any Australian conference — including three from Stile Education describing a single organisation’s journey to long-loop agentic engineering, and a counterweight cluster led by Jason Cornwall arguing that the productivity story is not what it seems.
The second conversation: what AI engineering becomes
The conversation in AI engineering is about whether the discipline is going to become an engineering discipline, or remain a craft. The public discourse has mostly treated it as the latter — screenshot-driven threads on social platforms, demo videos with no measurements, architecture diagrams without any data behind them. This works as marketing. It does not work as a way to build production systems that need to be trusted.
The AI engineering track is, almost talk by talk, in the opposite posture. Sceptical of demos. Demanding of evidence. Willing to talk about failure modes in detail. The speakers are people who have shipped production AI systems, watched them break, and learned something from the breakage. The talks describe the verification stack, the failure modes, the architectural decisions, and the integration challenges that actually make production AI engineering work — or fail.
If you are responsible for AI systems that need to do real work for real users, the track is the most concentrated body of practitioner thinking we’ll see in 2026 in this region. Yicheng Guo on what evals caught after a production hallucination, Jack Silman and Abdul Karim on why they fired their LLM judge, Avni Bhatt on a small language model that beat their LLM in production — these are the kinds of talks the track is built on.
The third conversation: what leadership becomes
The conversation in engineering leadership is the hardest of the three, because it does not have technical answers. How do you lead people through a transition that is reshaping their identity? How do you build governance for systems whose behaviour you cannot fully predict? How do you make strategic decisions when your organisational readiness is materially behind your strategic ambition? How do you handle the burnout that the AI transition is producing in your most experienced engineers?
Most public AI-leadership content avoids these questions. It is confident where it should be uncertain, technical where it should be human, and strategic where it should be honest. The leadership track at AI Engineer Melbourne is, almost without exception, in the opposite posture. The speakers are people accountable for outcomes — CTOs, fractional CTOs, heads of engineering — and they are willing to talk about what is genuinely difficult about leading through this moment, rather than performing strategic confidence about it.
If you are responsible for AI in your organisation, this is the track. Christian Dandre on the readiness gap, Andy Kelk on engineers being afraid of becoming junior again, Aubrey Blanche on building governance on the fair-go principle rather than on Silicon Valley’s defaults — there is no other event in the region where the leadership conversation is at this level.
Why the three conversations belong together
The argument for putting the three tracks in the same conference, rather than running three smaller events, is that the three conversations are versions of the same conversation seen from different angles. The engineer feeling craft slip away, the AI engineer trying to make a production system trustworthy, and the leader trying to navigate organisational change are all responding to the same underlying transition. They are responding to it differently because they are accountable for different things. But they need each other.
The engineer benefits from understanding what the AI engineer is grappling with, because the engineer’s work is increasingly downstream of decisions the AI engineer is making. The AI engineer benefits from understanding the leadership conversation, because the constraints on what they can ship are leadership constraints as much as technical ones. The leader benefits from understanding both, because leading through this transition without that understanding produces the kind of strategic decision that looks brilliant in the deck and incoherent in execution.
We’ve built the conference, in this sense, as an argument that the three conversations should not be happening separately. The corridor conversations between sessions, the questions in panels, and the dinners afterwards are where the three communities will actually meet each other. That is harder to engineer than the talks themselves, but the talks make it possible.
The three conversations are happening whether the people having them are in the same room or not. The case for being in the room is that the conversations sharpen each other, and the people having them sharpen each other, in ways that do not happen on social platforms or in private channels. June 3rd and 4th in Melbourne is where it happens.
A small aside on the programme itself
A practical note that fits the conversation. We’ve shipped the conference programme not just as a human-readable schedule but as an agent-friendly view at data.webdirections.org — including MCP endpoints, an llms.txt, and other interfaces designed for agents rather than people. If you want to point a coding agent at the programme, build something against it, or just see what a dual-interface programme looks like in practice, the agent view is there. There’s a longer piece coming on why we did it this way and what we learned doing it, but for now the data is live and waiting.
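As a concrete illustration of what an agent can do with the llms.txt side of the programme: the llms.txt convention is a plain-markdown document with an H1 title, an optional blockquote summary, and H2 sections of links. The sketch below (the exact contents of the file at data.webdirections.org aren’t reproduced here, so the sample text and URL are placeholders) shows a minimal parser of that shape — the kind of thing a coding agent might use as a first step before following the links it finds.

```python
import re

def parse_llms_txt(text):
    """Parse an llms.txt-style document (H1 title, optional blockquote
    summary, H2 sections of markdown links) into a simple dict."""
    doc = {"title": None, "summary": None, "sections": {}}
    section = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# ") and doc["title"] is None:
            doc["title"] = line[2:].strip()
        elif line.startswith("> ") and doc["summary"] is None:
            doc["summary"] = line[2:].strip()
        elif line.startswith("## "):
            section = line[3:].strip()
            doc["sections"][section] = []
        elif section and line.startswith("- "):
            # Markdown link: - [name](url), anything after is a note
            m = re.match(r"- \[([^\]]+)\]\(([^)]+)\)", line)
            if m:
                doc["sections"][section].append(
                    {"name": m.group(1), "url": m.group(2)}
                )
    return doc

# Placeholder sample, not the real file
sample = """# AI Engineer Melbourne
> Conference programme, June 3-4, Federation Square.

## Schedule
- [Grid view](https://example.org/schedule): both days, all tracks
"""
parsed = parse_llms_txt(sample)
```

From there, an agent can fetch each linked page, or switch to the MCP endpoints for structured queries rather than scraping.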
SAVE BEFORE MAY 15
BRINGING A TEAM?
If you’re sending five or more people, get in touch and we’ll sort out a team offer — better per-ticket pricing, ticket upgrades, and more. Reply to this email or drop us a line.
We have additional savings for freelancers and people paying their own way, for not-for-profits, for government, and for folks at agencies. If that’s you, reply to this email or get in touch and we’ll sort it out.
MORE ON THE PROGRAMME
- Full schedule (grid view) — both days, all three tracks side by side.
- Speaker list — everyone speaking and what they’re talking about.
- Agent-friendly programme view — MCP endpoints, llms.txt, and other interfaces for agents.
BACKGROUND READING
- How long is your loop? — the loop-length framing some of the track pieces will build on.
- What the maturity ladders miss — companion piece on AI development maturity models.
Great reading, every weekend.
We round up the best writing about the web and send it your way each Friday.