Magic for the Magicians: Coding in the Age of AI Agents

June 13, 2025

By Sean Abraham

There's never been a more exciting time to be a software engineer.

AI isn't replacing us — it's amplifying us. The already high-leverage act of producing software is becoming even more powerful. And it all begins with agents.

Agents are finally viable

While we've been talking about AI agents for years, they've only very recently become practically viable, thanks to some major improvements in the models themselves. There are three big ones worth calling out:

  1. Larger context lengths
  2. Reinforcement-trained chain-of-thought "reasoning"
  3. Post-training of models specifically for tool use

Together, these unlock a powerful shift: instead of apps calling LLMs for one-off, narrow tasks, we now hand reasoning agents the steering wheel in the form of a system prompt. We hook traditional, deterministic software up to the agent via tools and let it decide what to use, when, and how to handle the outcomes. The result? Agents are pulling off surprisingly complex tasks this way.
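
To make that shift concrete, here is a minimal sketch of the loop in Python. The functions here (call_llm, run_tests) are hypothetical stand-ins rather than any particular SDK; the point is the shape of the loop: the agent requests tools, deterministic code runs them, and the results are fed back until the agent decides it's done.

    # Minimal sketch of the agent loop: the model picks tools, we run them,
    # and feed the results back until the agent declares the task done.
    # call_llm and run_tests are hypothetical stand-ins, not a real SDK.
    import json

    def run_tests(path: str) -> str:
        # Hypothetical deterministic tool exposed to the agent.
        return f"ran tests in {path}: 42 passed"

    TOOLS = {"run_tests": run_tests}

    def call_llm(messages: list[dict]) -> dict:
        # Stand-in for a real model call: a real agent would send `messages`
        # plus tool schemas to an LLM API and get back a tool call or an answer.
        if not any(m["role"] == "tool" for m in messages):
            return {"tool": "run_tests", "args": {"path": "src/"}}
        return {"final": "Tests pass; task complete."}

    def agent(task: str) -> str:
        messages = [
            {"role": "system", "content": "You are a coding agent with tools."},
            {"role": "user", "content": task},
        ]
        while True:
            reply = call_llm(messages)
            if "final" in reply:                     # the agent decides it's done
                return reply["final"]
            result = TOOLS[reply["tool"]](**reply["args"])  # deterministic software
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            messages.append({"role": "tool", "content": result})

    print(agent("Fix the failing unit tests"))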

Software is the first frontier

Why does software production seem to be the use case du jour for AI agents? Manipulating bits is easier than manipulating atoms, and the real world runs increasingly on software.

And when you're armed with a text-producing, next-token-predicting machine, it follows that the highest-leverage place to drop those tokens is in software codebases!

Additionally, the UI/UX patterns for AI agents are not polished yet. In this regard, software engineers are a tolerant audience. We're handy with terminals and capable of doing the heavy lifting necessary to ready our local environments for software that's not necessarily packaged neatly.

This is a special opportunity to not just play witness to the experiments with agent UX but also contribute ideas that could shape the future of how humans interact with computers!

Software-producing agents in particular seem to fall into two UX buckets:

1. Locally-run agents, running on our laptops, terminals, and IDEs

2. Asynchronous remote agents, running on containers in the cloud

Let's unpack both.

Locally-run agents

Local agents showed up first, but they didn't start out as full-fledged "agents." It began with copilots that assisted software engineers by autocompleting code line by line, later advancing to inline block suggestions. Only recently have these tools shifted to offer a dedicated "agent panel" chat where you can give the model broader, task-level instructions.

There are many, but of note:

  • Cursor: An AI-enhanced code editor built on VS Code that offers smart, context-aware refactoring and inline assistance
  • Windsurf: An AI-native editor whose agent mode autonomously navigates and improves codebases, handling testing and changes with minimal input
  • Zed: A high-performance collaborative editor featuring local agent support for real-time, context-sensitive coding assistance
  • Claude Code: Integrates Anthropic's reasoning models for context-aware, multi-step problem solving directly in your coding workflow

Asynchronous remote agents

The remote coding agent space was quiet for a long stretch, and then it suddenly exploded. Cognition made the first big splash with Devin, but after that, for over a year, it felt like no one else was seriously exploring the space.

In AI, there are years when little happens and weeks when years happen. This is one of those weeks. We've seen three new asynchronous remote agents publicly released in as many business days:

  • OpenAI Codex: A remote coding agent designed for end-to-end software tasks; excels at multi-step execution and tool integration, with strong general reasoning
  • Google Jules: Focused on structured workflows and test-driven development; emphasizes reliability and safety in agent-driven code generation
  • GitHub Copilot Coding Agent: Extends Copilot into autonomous PR creation; tightly integrated with the GitHub ecosystem to streamline repo-specific tasks and reviews

Having been part of OpenAI's Codex pre-release, we've come to understand a few important differences with remote agents that are worth keeping in mind as you use them. You want to act more like a manager, meaning:

  1. Onboard agents well

Just as you'd want to set up a new engineer on the team for success, you should set up your agents for success. Both local and remote agents benefit from great documentation geared specifically toward LLMs, but it matters most for remote agents, since you may leave them on their own for a while as you tend to other things. Ample documentation goes a long way toward reducing the number of iterations necessary.

We don't think the agents are differentiated enough to warrant different documentation, so we write all of it into a .rules file at the root of our repo and symlink all the known places agents look for instructions to that .rules file:

.cursorrules → .rules  
.windsurfrules → .rules  
AGENT.md → .rules  
AGENTS.md → .rules  
CLAUDE.md → .rules
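
If you want to script that setup, a rough sketch in Python (run from the repo root; the filenames are simply the ones listed above, so adjust to your repo) might look like:

    # Illustrative only: point each agent-specific instruction file at .rules.
    import os

    ALIASES = [".cursorrules", ".windsurfrules", "AGENT.md", "AGENTS.md", "CLAUDE.md"]

    for alias in ALIASES:
        if os.path.lexists(alias):
            os.remove(alias)          # clear any existing file or stale link
        os.symlink(".rules", alias)   # alias -> .rules
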
  2. Kick off several tasks in parallel

You're not constrained by your local resources, so you can afford to kick off many tasks in parallel. You don't have to keep everything — keep the promising attempts and iterate from there.
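
As a sketch of what fanning out can look like, the snippet below submits a few variants of the same task in parallel; launch_remote_task is a hypothetical placeholder for whichever remote agent API you use, not a real client.

    # Sketch only: launch_remote_task is a hypothetical stand-in for a remote
    # agent client; the idea is fanning out variants of the same task.
    from concurrent.futures import ThreadPoolExecutor

    def launch_remote_task(prompt: str) -> str:
        # Placeholder: submit the prompt to a remote agent, return a result link.
        return f"https://example.invalid/attempt/{abs(hash(prompt)) % 1000}"

    prompts = [
        "Refactor the billing module; keep behavior identical",
        "Refactor the billing module; write characterization tests first",
        "Refactor the billing module; optimize for readability",
    ]

    with ThreadPoolExecutor() as pool:
        attempts = list(pool.map(launch_remote_task, prompts))

    print(attempts)  # keep the promising ones, discard the rest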

  3. Sharpen your code review skills

You didn't watch the code being written, so it's more important than ever to be both fast _and_ thorough with the code review. Make it easy to quickly pull down the PR for local testing. It's worth the setup time.
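
For GitHub-hosted repos, one way to lower that friction is a tiny helper that fetches a PR's head ref into a local branch. This is a sketch; the helper name and PR number are ours, but the pull/<number>/head refspec is standard GitHub behavior.

    # Sketch: fetch a GitHub PR into a local branch for hands-on testing.
    import subprocess

    def checkout_pr(number: int) -> None:
        branch = f"pr-{number}"
        subprocess.run(
            ["git", "fetch", "origin", f"pull/{number}/head:{branch}"], check=True
        )
        subprocess.run(["git", "checkout", branch], check=True)

    checkout_pr(1234)  # hypothetical PR number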

Both forms of coding agents, local and remote, are rapidly improving, and I'm extremely excited to see how this plays out. While it may feel like we're approaching an existential moment for the field, I'm of the opinion that quality software engineering skills will only become more relevant. A deep, nuanced understanding of the full software stack and hard-won experience deploying and operating production software will soon become more valuable. To that end, there are some things to keep in mind.

Embracing the journey as a software engineer

I believe these are the keys to survive and thrive as a software engineer in an agentic future:

  • Don't delegate your thinking to AI; instead, use it to enhance your own thinking and productivity
  • Reading and writing matters more than ever
    • Read for comprehension: Leverage LLMs heavily to fit the content to your brain
    • Write for clarity: Writing is essential in synthesizing thought and also increasingly valuable for prompting
    • Build a rich inner world: Drop down a layer of abstraction to root your semantic tree ever deeper, and layer in branches ever more effortlessly

In short, actively use the latest tools available to make yourself smarter and more capable. Given the incredible tooling afforded to us, a seasoned software engineer who uses such tools both to produce quality software at scale and to enrich their own mind will surely thrive.

Software engineering has always been a discipline that's aptly compared to magic. It's a comparatively new phenomenon in human history that you can think hard, then move your fingers slightly over a keyboard, and suddenly real value is produced in the world. Having modern AI agents accompany us in the process of producing software is an extension of this phenomenon — magic for the magicians.

Here at WorkWhile, we're harnessing software — and now agentic AI — to solve one of the most critical real-world problems: connecting great workers to great work opportunities. By leveraging this new wave of AI-driven tools, we're accelerating how we build, iterate, and deliver value to both workers and businesses. If you're excited about the future of software and the unprecedented leverage agentic AI provides, join us in making the lives of millions of Americans better, every single day.