Who Needs a LoRA?
Can you faithfully edit hand-drawn illustrations using an image model you haven’t fine-tuned? Most practitioners assume you need a LoRA for style-faithful editing. I set out to prove otherwise by building a production system that edits character avatars in an educational platform, changing their facial expressions to match the emotion of their dialogue. The constraint: every edit had to look like the original artist drew it themselves.
No blog posts or tutorials existed for this, so academic papers on diffusion-based editing were the starting point. Following the literature’s advice got results fast, but it also surfaced failures that were as instructive as the wins. I’ll walk through a gallery of what went wrong: Disneyfication (the model defaulting to generic cartoon features), colour convergence toward a yellowish haze across sequential edits (the “piss filter”), hand-crafted brushstroke imperfections being smoothed away, and the discovery that negative prompts function as reverse psychology for diffusion models.
The three principles that emerged:

1. Prompt entropy predicts drift: the less you describe, the more faithfully the model preserves style.
2. Describe movements (“drop the jaw”), not how features look (“oval opening”), so the model applies its own intuition for that specific character.
3. Parallel feature edits from the pristine original, recombined by the model, beat sequential chains, because drift is architectural, not a prompting bug.
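The third principle can be sketched with a toy model of drift accumulation. The `edit` function below is a hypothetical stand-in for a real diffusion editing call, not any actual API; it only tracks how many model generations separate a result from the pristine original, since style drift compounds with every generation an image passes through.

```python
def edit(sources, prompt):
    """Hypothetical editing call. Real code would invoke a diffusion
    editing API; this stub just records the output's generation depth."""
    return {"prompt": prompt,
            "generation": max(s["generation"] for s in sources) + 1}

original = {"prompt": None, "generation": 0}
feature_prompts = ["drop the jaw", "raise the eyebrows", "widen the eyes"]

# Sequential chain: each edit starts from the previous output, so drift
# accumulates -- the final image is three generations from the original.
result = original
for prompt in feature_prompts:
    result = edit([result], prompt)
sequential_depth = result["generation"]

# Parallel edits: every feature is edited from the pristine original,
# then one recombination pass merges them -- two generations total,
# no matter how many features change.
branches = [edit([original], p) for p in feature_prompts]
recombined = edit([original] + branches, "recombine these feature edits")
parallel_depth = recombined["generation"]

print(sequential_depth, parallel_depth)  # prints "3 2"
```

The depth gap is why the drift is architectural: adding a fourth or fifth feature deepens the sequential chain but leaves the parallel strategy at two generations.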
This talk is for anyone building AI tools for creative workflows, anyone prompting image models for controlled edits, or anyone curious about what’s possible with off-the-shelf image generation today.
Charli Posner
Charli Posner is an AI engineer exploring the limits of modern AI models in real-world systems.
At Stile Education, she builds production AI systems — from pipelines that scan handwritten student work into the platform to image editing workflows for illustrated characters. She also develops infrastructure for evaluating and improving LLM outputs across product features and internal tools.
Her work spans LLM pipelines, vector databases, computer vision, and deep learning, including research in human pose estimation at Toshiba and the University of Bristol. She focuses on the practical challenges of deploying AI systems: handling noisy inputs, unpredictable outputs, and making models reliable in real-world applications.
Outside of work, she builds creative AI projects such as interactive pose-tracking installations and writes technical blogs documenting experiments with emerging AI tools.