Rethinking Software Engineering in the Age of AI: Notes from our Melbourne Unconference
Yesterday, we ran an in-person unconference with a big question at its heart: what does software engineering look like in the age of large language models?
We weren’t so much interested in prompt hacks or productivity tools. We weren’t asking how to pass LeetCode with ChatGPT, or automate our Jira boards. We were trying to reckon with something deeper: if the act of writing code is being transformed by tools that can now write it alongside us—or for us—then what does it mean to be a software engineer?
Roughly thirty people joined us for the afternoon: engineers, team leads, researchers. Folks who’ve built things and broken things. Folks who’ve mentored and hired and shipped and debugged under pressure. And folks who are deeply curious—and perhaps a little unsettled—about what AI is doing to the discipline they’ve spent their careers practicing.
We ran things under the Chatham House Rule, so nothing here is attributed. But here’s what emerged from the sessions.
Interested in exploring this idea more? Join our related LinkedIn group: Software Engineering in the Age of AI & LLMs.
Hiring Is Breaking—and No One Has the Answer
One of the topics we kept coming back to was hiring. AI is already disrupting every part of the pipeline: resumes are machine-written, cover letters are increasingly indistinguishable from pasted AI output, and even interviews are becoming something you can prep for with an LLM.
Some participants shared their unease at how many early-career applicants are submitting code they likely didn’t write or even understand. Others talked about the awkwardness of banning AI tools during interviews, knowing full well these tools are part of the modern workflow.
What emerged was a shared sense of tension: if everyone is using AI, how do you distinguish between skill, potential, and dependency?
It’s not just about catching people out—it’s about figuring out what we’re even assessing. Technical skill? Critical thinking? Prompt literacy? Cultural alignment? The conversation didn’t yield answers, but it surfaced a lot of sharp, necessary questions.
Key takeaways:
- Traditional hiring signals are eroding fast
- Interviewing practices need to evolve alongside developer workflows
- We’re not sure what we’re actually measuring anymore
Questions to reflect on:
- What do we really want to understand about a candidate?
- How can we responsibly evaluate developers in an AI-native environment?
- What new hiring practices might emerge that treat AI literacy as a positive signal?
Juniors Are Being Left Behind
Another topic covered extensively was the question of what happens to junior developers when AI becomes the default assistant.
There’s a real risk that foundational learning—how to read an error message, how to trace a bug, how to design something from scratch—is being bypassed. Not maliciously, but subtly, over time. And without those foundations, growth stalls.
Some teams are trying to adapt mentoring models. Others are rethinking interview processes altogether: asking candidates to critique AI-generated code rather than write it from scratch. But no one’s nailed this yet.
How do we teach the craft when the tools obscure the craft? That question hung over much of the day.
Key takeaways:
- Juniors may not be gaining deep foundational skills
- AI is changing what early-career learning looks like
- Teams need to intentionally redesign mentorship structures
Questions to reflect on:
- What should “learning by doing” look like when AI is doing the work?
- How can we support juniors in building lasting intuition and judgment?
- What mentorship models are suited to AI-assisted development?
We’re Still Not Sure What Makes a “Good Engineer” Anymore
This thread connected many of the conversations: what even is a software engineer now?
In an LLM-enabled workflow, raw output speed isn’t impressive. Knowing a framework inside-out might be less useful than knowing what’s worth delegating to an AI. Some participants suggested the best engineers now are the ones who ask better questions. Or who can filter good AI output from bad. Or who understand how to synthesise ideas, not just implement solutions.
But how do you interview for that? And more importantly, how do you support people to develop it?
Key takeaways:
- Prompt strategy, synthesis, and critique are emerging as core skills
- Traditional engineering archetypes are being redefined
- Teams need new ways to support and evaluate evolving competencies
Questions to reflect on:
- How has your own definition of “good engineer” changed?
- What traits are you seeing become more valuable in your team?
- How can we design career growth around these new skillsets?
Speed, Metrics, and the Illusion of Progress
A few sessions drifted (intentionally) into uncomfortable territory. As AI tools accelerate our output, they also invite bad incentives. If you can ship five PRs an hour, are you adding value—or just motion?
Several folks spoke about metric bloat: counting commits, tickets closed, words generated. There was quiet anxiety that AI is making us faster but not necessarily better, and that performance might be flattening into productivity theatre.
At the same time, there was curiosity: could we use AI to surface more meaningful signals? Like code clarity? Design evolution? Collaborative impact? The room didn’t land on answers—but the questions were sticky.
Key takeaways:
- Output speed isn’t a proxy for quality
- Common engineering metrics may become misleading in the AI era
- There’s a need for better ways to evaluate meaningful work
Questions to reflect on:
- What are you measuring today that might be misleading tomorrow?
- Can AI help us track quality, not just quantity?
- What would a healthier, more honest set of performance signals look like?
This Is a Cultural Shift, Not a Technical One
Maybe the most important thread of all: AI isn’t just a tooling change. It’s a cultural one.
Teams need space to process that. We heard stories of junior engineers embracing LLMs eagerly, seniors resisting them outright, and staff-level folks cautiously experimenting. That dynamic alone creates tension.
There was a lot of appreciation for teams building space to learn together—brown bags, internal prompt libraries, “failure file” talks. Not because it’s a productivity boost, but because it helps people make sense of the new landscape together.
Key takeaways:
- Cultural adaptation is lagging behind technical adoption
- Teams need psychological and conversational space to adjust
- Shared learning rituals help normalize and refine practice
Questions to reflect on:
- What are the cultural signals around AI use in your team?
- How do you make room for learning without pressure to perform?
- What rituals or routines help your team evolve together?
It’s Clear We’re Only Getting Started
If there was one thing this unconference made clear, it’s that the hard questions are only beginning.
- What does it mean to be a “senior engineer” when the machine can do your job faster?
- How do we preserve mentorship when junior devs never struggle in the same ways?
- How do we design teams where humans and machines collaborate—but humans still grow?
- What kinds of engineering values matter in a world of AI scaffolding?
These aren’t hypotheticals. They’re starting to hit teams, workflows, and careers. And if we want to shape what comes next, we need space—like this unconference—to sit with those questions honestly.
I’m grateful to everyone who came, who shared, and who listened. This wasn’t a conference. It was a conversation. And we need a lot more of them.