Tool Calls
Tool calling (also known as function calling) is a powerful feature that allows Large Language Models (LLMs) to interact with external systems and data sources. Instead of being limited to generating text based on their training data, LLMs with tool calling capabilities can execute functions, query databases, call APIs, and perform actions in your software.
How LLMs Use Tools
When you enable tool calling, the interaction between your application and the LLM follows this pattern:
1. Tool Registration: You define available tools with their names, descriptions, and input schemas. These tools are registered with your chat instance.
2. User Message: A user sends a message that requires information or actions the LLM doesn't have access to directly.
3. Tool Decision: The LLM analyzes the user's request and decides which tool(s) to call. It generates structured parameters based on the tool's input schema.
4. Tool Execution: Your application receives the tool call request, executes the corresponding function, and returns the result.
5. Final Response: The LLM receives the tool results and generates a natural language response for the user, incorporating the data it received.
Example Flow
Here's a concrete example:
User: "What's the weather like in Paris?"
1. LLM thinks: "I need weather data, I should call get_weather"
2. LLM generates: { tool: "get_weather", parameters: { city: "Paris" } }
3. Your app calls: getWeather("Paris") → "22°C, sunny"
4. LLM receives: "22°C, sunny"
5. LLM responds: "The weather in Paris is currently 22°C and sunny."
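Plucky runs this request/execute/respond loop for you, but it can help to see what the loop looks like in plain code. The sketch below is illustrative only: callLLM, the message shapes, and the tool shape are hypothetical and are not part of the Plucky API.
// Hypothetical types and callLLM function, for illustration only
type ToolCall = { tool: string; parameters: Record<string, unknown> }
type LLMReply = { text?: string; toolCalls?: ToolCall[] }
type RunnableTool = {
  name: string
  run: (params: Record<string, unknown>) => Promise<string>
}

declare function callLLM(messages: unknown[], tools: RunnableTool[]): Promise<LLMReply>

async function runConversationTurn(
  messages: unknown[],
  tools: RunnableTool[],
): Promise<string> {
  let reply = await callLLM(messages, tools)

  // Keep looping while the model asks for tools instead of answering directly
  while (reply.toolCalls?.length) {
    for (const call of reply.toolCalls) {
      const tool = tools.find((t) => t.name === call.tool)
      const result = tool
        ? await tool.run(call.parameters)
        : `Unknown tool: ${call.tool}`
      // Feed each result back to the model as a tool message
      messages.push({ role: 'tool', name: call.tool, content: result })
    }
    reply = await callLLM(messages, tools)
  }

  return reply.text ?? ''
}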
Registering Tools in Plucky
In the Plucky chat SDK, tools are registered with addTools() (as shown in the examples below) or via the registerTool() and registerManyTools() methods on your chat instance. Each tool consists of:
- name: A unique identifier for the tool
- description: A clear explanation of what the tool does (helps the LLM decide when to use it)
- inputSchema: A Zod schema defining the expected input parameters
- cb: A callback function that executes when the tool is called. It can return either a string, or an object with content and an optional successText
- defaultLoadingText (optional): Text to show while the tool is executing
- defaultSuccessText (optional): Text to show when the tool completes successfully (can be overridden per-call by returning an object with successText)
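Put together, a tool definition has roughly the following shape. This is an orientation sketch based on the fields above, not the SDK's published type definitions:
import type { ZodType } from 'zod'

// Rough shape of a tool definition as described above (illustrative, not the SDK's actual types)
interface ToolDefinition<Input = unknown, Context = unknown> {
  name: string // unique identifier
  description: string // helps the LLM decide when to call the tool
  inputSchema?: ZodType<Input> // Zod schema for the expected parameters
  defaultLoadingText?: string // shown while the tool is executing
  defaultSuccessText?: string // shown on completion unless successText is returned
  cb: (
    input: Input,
    context?: Context, // extra values, e.g. from useTools({ context }) in React
  ) => Promise<string | { content: string; successText?: string }>
}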
Basic Example
Here's a simple tool that doesn't require any input parameters:
import { addTools } from '@plucky-ai/chat-sdk'
addTools({
name: 'get_current_time',
description: 'Get the current time',
defaultLoadingText: 'Checking the time...',
defaultSuccessText: 'Time retrieved.',
cb: async () => {
// Simple string return
return new Date().toLocaleTimeString()
},
})
// Or return an object with custom success text:
addTools({
name: 'get_current_time',
description: 'Get the current time',
defaultLoadingText: 'Checking the time...',
cb: async () => {
const time = new Date().toLocaleTimeString()
return {
content: time,
successText: `Got the time: ${time}`, // Custom success message
}
},
})
React Example
If you're using React, you can pass tools directly to the useTools hook. We recommend using the hook in React projects to ensure callback references are properly managed.
import { useTools } from '@plucky-ai/react'
function MyComponent({ temperature }: { temperature: number }) {
useTools({
context: {
temperature,
},
tools: [
{
name: 'get_current_weather',
description: 'Get the current weather',
defaultLoadingText: 'Checking the weather...',
defaultSuccessText: 'Weather retrieved.',
cb: async (_, { temperature }) => {
// Simple string return
return `It's currently ${temperature} degrees.`
// Or return an object with custom success text:
// return {
// content: `It's currently ${temperature} degrees.`,
// successText: `Weather checked: ${temperature}°`,
// }
},
},
],
})
return <div>Your app content</div>
}
Pass values in through the context parameter and read them from the callback's second argument, rather than closing over props directly; this lets the hook manage callback references and state correctly.
Tool with Parameters
Here's a more complex example with input validation using Zod:
import { addTools } from '@plucky-ai/chat-sdk'
import { z } from 'zod'
const WeatherInputSchema = z.object({
city: z.string().describe('The name of the city'),
units: z.enum(['celsius', 'fahrenheit']).default('celsius'),
})
addTools({
name: 'get_weather',
description: 'Get current weather information for a city',
defaultLoadingText: 'Fetching weather data...',
defaultSuccessText: 'Weather data retrieved.',
inputSchema: WeatherInputSchema,
cb: async (input) => {
const { city, units } = WeatherInputSchema.parse(input)
// Call your weather API here
const weatherData = await fetchWeatherAPI(city, units)
const content = `Temperature: ${weatherData.temp}°${units === 'celsius' ? 'C' : 'F'}, Conditions: ${weatherData.conditions}`
// Return a simple string:
// return content
// Or return an object with custom success text:
return {
content,
successText: `Got weather for ${city}`,
}
},
})
Tool Callback Return Values
Tool callbacks can return results in two ways:
1. Return a String (Simple)
The simplest approach is to return a string directly:
cb: async (input) => {
const result = await fetchData(input)
return `Data fetched: ${result}`
}
2. Return an Object with Custom Success Text
For more control over the UI feedback, return an object with content and optional successText:
cb: async (input) => {
const game = await scheduleGame(input)
return {
content: `Game scheduled between ${game.homeTeam} and ${game.awayTeam} on ${game.date} at ${game.time}.`,
successText: `Game added on ${game.date}`, // Shows in the UI
}
}
The successText field allows you to display a concise, dynamic message to the user while still providing the full details to the LLM through content. This is useful when:
- You want to show a brief confirmation message based on the actual data (e.g., "Game added on March 15")
- The full content is verbose or technical
- You want to provide better UX feedback for different outcomes
If successText is not provided, the tool will fall back to the defaultSuccessText defined in the tool configuration.
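For example, a callback can vary successText by outcome while still giving the LLM full details through content. The lookupOrder helper and orderId field below are hypothetical:
cb: async (input) => {
  const order = await lookupOrder(input.orderId) // hypothetical data lookup
  if (!order) {
    return {
      content: `No order found with id ${input.orderId}.`,
      successText: 'Order not found', // concise UI feedback for this outcome
    }
  }
  return {
    content: JSON.stringify(order, null, 2), // full details for the LLM
    successText: `Found order ${order.id}`, // brief, data-driven confirmation
  }
}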
Tool Lifecycle Management
Removing Tools
You can remove tools when they're no longer needed:
api.removeTool('get_weather')
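For example, you might drop a tool once it no longer applies. This sketch assumes api is the chat instance you created when initializing the SDK:
// Hypothetical example: stop exposing weather lookups outside the weather view
function onLeaveWeatherView() {
  api.removeTool('get_weather')
}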
Best Practices
1. Write Clear Descriptions
The LLM uses tool descriptions to decide when to call them. Be specific and clear:
// ❌ Bad
description: 'Gets data'
// ✅ Good
description: 'Get current weather information for a specific city including temperature, conditions, and humidity'
2. Use Descriptive Schema Fields
Add descriptions to your Zod schema fields to help the LLM understand what values to provide:
const schema = z.object({
city: z.string().describe('The name of the city, e.g., "Paris", "New York"'),
date: z
.string()
.describe('ISO 8601 formatted date, e.g., "2025-10-28T15:30:00Z"'),
})
3. Return Structured Information
Return well-formatted data that the LLM can easily interpret:
// ✅ Good - structured and clear
return JSON.stringify(
{
temperature: 22,
conditions: 'sunny',
humidity: 45,
},
null,
2,
)
// ✅ Also good - clear text
return 'Temperature: 22°C, Conditions: sunny, Humidity: 45%'
4. Handle Errors Gracefully
Provide helpful error messages:
cb: async (input) => {
  try {
    // ... your code
  } catch (error) {
    // Narrow the unknown error before reading .message
    const message = error instanceof Error ? error.message : String(error)
    return `Error: Unable to fetch weather data. ${message}`
  }
}
Next Steps
Now that you understand tool calling, you can:
- Integrate external APIs into your chat experience
- Allow users to perform actions through natural language
- Give your AI assistant access to real-time data
- Create dynamic, context-aware conversations
For more information on initializing the Plucky SDK and configuring your chat instance, see the Quickstart guide.