
Agents

(letting it work unsupervised)

An agent is an LLM in a loop. The model reads the goal, takes one step, reads the result, takes the next step. It keeps going until the goal is done or the model gives up. A regular LLM answers in one shot. An agent can work for twenty minutes on a task that needs twenty small decisions.

[Diagram: user input → LLM → response, with knowledge (your documents), tools (callable functions, including MCP), and the agent loop feeding the model's output back as its next input]

Agent loop turned on. The model's output feeds back as its next input. This is the full stack.

The process

The agent starts with a goal. “Fix this failing test.” “Book a flight for Tuesday.” “Summarise every email from last week.”

The model reads the goal and picks the next action. It might call a tool, read a file, or write some code. The tool runs. The result comes back. The model now has a new situation: the goal, plus what it has done so far, plus what came back from the last tool.

The model picks the next action based on the new situation. The loop continues. With each step, the model sees a little more of the work and a little less of the unknown. When the goal is met, the model stops. When the model gets stuck, it gives up and reports what it tried.
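The process above can be sketched in a few lines. This is a minimal, illustrative loop: the "model" is a hard-coded stub that picks actions, and the tools are fakes. Every name here (`pick_action`, `agent_loop`, the tool set) is invented for the sketch, not any real API.

```python
def read_file(path):
    # Stub tool: pretend to read a file.
    return f"contents of {path}"

def run_tests(_):
    # Stub tool: pretend to run the test suite.
    return "1 test failed: test_login"

TOOLS = {"read_file": read_file, "run_tests": run_tests}

def pick_action(goal, history):
    # Stand-in for the LLM. It sees the goal plus everything done so far
    # and returns the next tool call, or None when it decides it is done.
    if not history:
        return ("run_tests", "")
    if len(history) == 1:
        return ("read_file", "tests/test_login.py")
    return None  # goal met (or the model gave up)

def agent_loop(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        action = pick_action(goal, history)
        if action is None:
            return history
        name, arg = action
        result = TOOLS[name](arg)       # run the tool
        history.append((name, result))  # the result feeds the next step
    return history

steps = agent_loop("Fix this failing test")
print(len(steps))  # 2 tool calls before the stub model stops
```

The shape is the whole point: a real agent swaps `pick_action` for an LLM call and `TOOLS` for real functions, but the loop itself stays this small.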

You’ve encountered this when…

You used Claude Code to implement a feature. You queued prompts in Cursor Composer and watched it work through them. You asked Claude to research a topic across many sources. Those are all agents. The model was not giving one answer. It was running a loop.

A familiar example

Think about asking a new hire to “get this project shipped.” They don’t ask you every thirty seconds what to do next. They pick the next step. They try it. They see what happened. They pick the next step after that. A new hire who has to ask about every decision is not much use. A new hire who can loop on their own for an afternoon is useful.

A modern coding agent can run 30 to 100 tool calls on a single task before finishing. Each call feeds the next. The agent’s cost on one task can reach a few dollars of model time.

Variants include

Single-agent systems

One LLM running a loop by itself. Simplest setup. Claude Code, Cursor Composer, and most of the early agent products work this way.

Multi-agent systems

Several LLMs working together. One agent plans, another executes, a third reviews. More powerful on complex tasks, more expensive, harder to debug. The research world is heavily focused on this.

Background agents

Agents that run without a user watching. They check a schedule, monitor a dashboard, or respond to incoming events. Useful for tasks that need to happen while you sleep, risky because a bug runs for hours before anyone notices.
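A background agent is the same loop driven by a schedule or an event source instead of a user. This sketch polls a queue on a fixed interval; the event source and handler are stubs, and a real agent would loop indefinitely rather than for a fixed number of ticks.

```python
import time

def poll_events(queue):
    # Stub event source: pop one pending event, if any.
    return [queue.pop()] if queue else []

def handle(event):
    # Stand-in for a full agent run triggered by the event.
    return f"handled {event}"

def background_loop(queue, ticks=3, interval=0.0):
    log = []
    for _ in range(ticks):              # a real agent would run forever
        for event in poll_events(queue):
            log.append(handle(event))
        time.sleep(interval)            # wait for the next scheduled check
    return log

print(background_loop(["alert: disk 90% full"]))
```

The risk the text describes lives in this structure: nothing in the loop notices when `handle` starts doing the wrong thing, which is why background agents need logging and alerting from day one.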

The breaking point

Agents fail in weird ways. They get stuck in loops. They invent file paths that do not exist. They confidently execute the wrong plan for forty minutes. The interesting engineering problem right now is containing agent failures, not extending agent abilities. Production teams spend most of their time building guardrails that catch the agent doing something dumb.
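Guardrails are usually cheap, deterministic checks wrapped around the loop. This sketch shows two that target the failures above: a step budget (catches the agent stuck in a loop) and a path allowlist (catches invented file paths). The directory list and function names are made up for illustration.

```python
ALLOWED_DIRS = ("src/", "tests/")  # hypothetical allowlist

def check_path(path):
    # Reject any path the agent invents outside the allowed directories.
    if not path.startswith(ALLOWED_DIRS):
        raise PermissionError(f"blocked: {path}")
    return path

def guarded_run(actions, max_steps=5):
    done = []
    for i, (tool, path) in enumerate(actions):
        if i >= max_steps:
            # Step budget: stop a runaway loop instead of letting it
            # execute the wrong plan for forty minutes.
            done.append("stopped: step budget exhausted")
            break
        try:
            done.append(f"{tool} {check_path(path)}")
        except PermissionError as err:
            done.append(str(err))
    return done

out = guarded_run([("read", "src/app.py"), ("write", "/etc/passwd")])
print(out)  # ['read src/app.py', 'blocked: /etc/passwd']
```

Neither check requires another model call, which is typical: most production guardrails are ordinary validation code sitting between the agent and the world.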

Your takeaway

“AI will automate work” means agents. Not ChatGPT. Not autocomplete. A loop that picks the next step on its own. The technology is new, unreliable in places, and moving faster than anything else in the field.
