
Tools

(giving it hands)

Tools give the LLM hands. Alone, the model can only produce text. With tools, the model can call a function you wrote. Check the weather. Run a calculation. Send an email. The model decides which tool to call based on your request, the tool runs, and the result comes back for the model to use in its answer.

[Diagram: user input flows to the LLM, which can draw on knowledge (your documents) and call tools (callable functions) before producing a response.]

Tools turned on. The model can now call functions and act in the world.

The process

You define a set of tools. Each tool has a name, a short description, and a list of inputs. For example: a tool named “get_weather” with a description “returns the current weather for a given city” and one input “city.”
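That definition can be written out as data. A sketch in the OpenAI-style function-calling format (field names follow OpenAI's JSON Schema convention; the exact shape varies by provider):

```python
# A tool definition the model can read. The model never sees your
# code -- only this name, description, and parameter schema, which
# is why wording them well matters.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Returns the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Paris'",
                }
            },
            "required": ["city"],
        },
    },
}
```

Your code sends a list of definitions like this alongside the user's message; the model decides from the description alone whether the tool fits the request.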

When the user asks a question, the LLM reads the question and the list of tools. If the question matches a tool (“what’s the weather in Paris?”), the model emits a structured request: “call get_weather with city=Paris.” Your code runs the tool, gets the answer, and passes it back to the model. The model then writes the human-readable reply.
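The loop above can be sketched in plain Python. Here the model's structured request is simulated as a dict; in a real app it would come back from the provider's API, and the tool result would be sent back to the model for the final reply:

```python
import json

def get_weather(city: str) -> str:
    # In a real app this would call a weather API; stubbed here.
    return f"18°C and cloudy in {city}"

# Your actual tool implementations, keyed by name.
TOOLS = {"get_weather": get_weather}

def handle_model_output(output: dict) -> str:
    """Dispatch a structured tool call from the model,
    or pass a plain text answer through unchanged."""
    if output.get("type") == "tool_call":
        fn = TOOLS[output["name"]]
        args = json.loads(output["arguments"])  # args arrive JSON-encoded
        result = fn(**args)
        # In the real loop you'd send `result` back to the model,
        # which then writes the human-readable reply.
        return result
    return output["text"]

# Simulated model response to "what's the weather in Paris?"
model_output = {
    "type": "tool_call",
    "name": "get_weather",
    "arguments": '{"city": "Paris"}',
}
print(handle_model_output(model_output))  # → 18°C and cloudy in Paris
```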

If no tool matches, the model answers directly from its training. Tools are used only when needed.

You’ve encountered this when…

You told ChatGPT in voice mode to set a timer, and it set the timer. You asked Claude in the desktop app to check today’s weather before suggesting an outfit, and it checked. You asked an AI assistant to send an email, and it drafted and sent one. All of those are tool calls.

A familiar example

Think about hiring a research assistant. You give them a phone, a laptop, and a list of things they are allowed to do: book travel, check prices, email the team. They figure out which phone number to dial or which website to check for each task. Tools give the LLM the same kind of list. The model reads the list and picks the right one for your request.

Variants include

Function calling (OpenAI and Anthropic APIs)

The standard way to wire tools into an LLM. Your code sends the LLM a list of tool definitions. The model returns either a text answer or a structured tool call. Both OpenAI and Anthropic support this directly in their APIs.

Structured outputs

A related feature. The LLM is asked to return a specific JSON shape, like a product record or a form submission. The model’s output is checked against the shape before returning. This makes LLMs safer to plug into normal software.
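A minimal sketch of that check, using only the standard library (real systems typically use JSON Schema validation or the provider's built-in structured-output mode; the product fields here are hypothetical):

```python
import json

# The shape we expect back: field name -> required Python type.
PRODUCT_SHAPE = {"name": str, "price": float, "in_stock": bool}

def parse_product(raw: str) -> dict:
    """Parse the model's output and reject anything that
    doesn't match the expected shape."""
    record = json.loads(raw)
    for field, expected_type in PRODUCT_SHAPE.items():
        if not isinstance(record.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    return record

good = parse_product('{"name": "Lamp", "price": 24.99, "in_stock": true}')
```

Because the output is checked before your software touches it, a malformed reply fails loudly at the boundary instead of corrupting data downstream.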

The breaking point

The model picks which tool to call based on the tool’s name and description. Write a vague description and the model ignores the tool. Give two tools overlapping descriptions and the model picks between them unpredictably. Tool design is more prompt engineering than software engineering, and teams spend weeks tuning descriptions to get reliable behaviour.
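To see why the wording matters, compare a vague description with a specific one (both hypothetical):

```python
# Vague: the model can't tell when this tool applies,
# so it may never call it -- or call it for everything.
vague = {"name": "lookup", "description": "Looks things up."}

# Specific: says what it does, when to use it, and what it needs.
specific = {
    "name": "get_order_status",
    "description": (
        "Returns the shipping status of a customer order. "
        "Use when the user asks where their order is. "
        "Requires the order ID, e.g. 'ORD-1042'."
    ),
}
```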

Your takeaway

Tools are what turn an LLM from a chat partner into a worker. Every time ChatGPT or Claude does something that needs real-time data or action in the world, a tool call made it possible.
