Engineering for the Agentic Web: When 50% of Your Traffic Is Robots
Over the last two years, our customers' web traffic changed: today, around 50% of visitors are unknown browsers and AI agents. The era of optimising for traditional search engine crawlers via Core Web Vitals is shifting; the new challenge is feeding focused, low-noise context to autonomous agents and Large Language Models (LLMs).
Traditional Search Engine Optimisation (SEO) relies on techniques such as keyword density, backlink tracking, and human-readable formatting. But when a significant share of your traffic suddenly comes from AI agents, how do you ensure your content is parsed correctly by machines?
Our team will share the strategic and architectural shifts organisations are making to embrace this new web reality. This isn’t about meta tags; it’s about re-architecting how a business presents itself and its content on the AI-driven internet, including:
1. Identifying Agent Traffic: How to detect agent requests and separate them from human traffic (see the server sketch after this list).
2. Realigning Context: Trade-offs between modifying traditional website structure and serving structured markdown, llms.txt files (sketched below), and specialised API endpoints.
3. Omnichannel Content: Serving dual experiences, web applications for humans versus low-noise data streams for agents.
4. Lessons Learned: Whether to block or embrace agent traffic, and how embracing “LLM Instructions” can increase your content’s reach.
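To make the first and third points concrete, here is a minimal sketch of agent detection and content negotiation in an Express-style TypeScript server. The route, the agent token list, and the payloads are illustrative assumptions, not our production setup:

```typescript
import express from "express";

const app = express();

// Illustrative (not exhaustive) User-Agent tokens of self-identifying agents.
const AGENT_TOKENS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"];

// Treat a request as agent traffic if its User-Agent contains a known token.
function isAgent(userAgent: string | undefined): boolean {
  return AGENT_TOKENS.some((token) => userAgent?.includes(token) ?? false);
}

app.get("/articles/:slug", (req, res) => {
  if (isAgent(req.get("user-agent"))) {
    // Agents get focused, low-noise markdown instead of the full page.
    res.type("text/markdown").send(`# ${req.params.slug}\n\nArticle body as markdown…`);
  } else {
    // Humans get the regular web application.
    res.type("text/html").send(`<html><body><h1>${req.params.slug}</h1></body></html>`);
  }
});

app.listen(3000);
```

User-Agent matching only catches agents that identify themselves; the unknown browsers in the other half of the traffic require additional signals, which is part of the trade-off discussion.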
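And for the second point, a minimal sketch of an llms.txt file in the proposed format (an H1 title, a short blockquote summary, then curated link lists); the site name and URLs are hypothetical:

```markdown
# Example Studio

> Documentation and articles for Example Studio, curated as focused,
> low-noise context for LLMs and agents.

## Docs

- [Getting started](https://example.com/docs/getting-started.md): Install and first steps
- [API reference](https://example.com/docs/api.md): Endpoints and authentication

## Optional

- [Blog](https://example.com/blog.md): Long-form articles and announcements
```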
Janna Malikova
Hi there! I’m Janna, a Brisbane-based security researcher and software engineer at Tomato Elephant Studio, specialising in frontend performance and security. I’m a passionate advocate for Free and Open Source Software (FOSS), actively contributing to codebases and the community by sharing my expertise at local meetups and conferences worldwide.