
    Tool Use

    Your chatbot has personality (system prompts) and a beautiful UI (Elements), but it lacks real-time knowledge. It doesn't know today's weather, can't check prices, and can't access current data.

    Tools let your AI call functions to fetch data, perform calculations, or interact with external APIs. They bridge the gap between the AI's static knowledge and the dynamic real world.

    Building on Elements

    We'll add tool calling to our Elements-powered chat interface. Building on the basic chatbot and system prompts lessons, we'll extend our chat with real-world data access. The professional UI will make tool invocations visible and interactive!

    The Problem: LLM Limitations

    Base LLMs operate within constraints:

    • Knowledge Cutoff: Lack real-time info (weather, news, stock prices). LLMs are trained on a static dataset, so they typically only know about data from before their knowledge cutoff date.
    • Inability to Act: Cannot directly interact with external systems (APIs, databases). LLMs produce text. They don't have capabilities beyond that.

    Asking "What's the weather in San Francisco?" fails because the model lacks live data access. The model has no idea what the current weather is in San Francisco. AI is amazing, but the model is always a snapshot of the past.

    Thankfully, this problem can be solved with "tool calling," which gives your model the ability to run code based on the conversation context. The results of those function calls are then fed back into the prompt context so the model can generate a final response.

    Calling Tools with the AI SDK (Function Calling)

    Tools let the model call functions based on the conversation context. They're like a hotline the LLM can pick up: it calls a pre-defined function and gets the results back inline.

    [Diagram: calling tools]

    Here's the Flow:

    1. User Query: Asks a question requiring external data/action.
    2. Model Identifies Need: Matches query to tool description.
    3. Model Generates Tool Call: Outputs structured request to call specific tool with inferred parameters.
    4. SDK Executes Tool: API route receives call, SDK invokes execute function.
    5. Result Returned: execute function runs (e.g., calls weather API), returns data.
    6. Model Generates Response: Tool result is automatically fed back to model for final text response.

    If you've used a coding environment like Cursor, you've seen this flow in action. That's how Cursor and similar tools interact with your codebase.

    Remember that tools grant LLMs access to real-time data and action capabilities, dramatically expanding chatbot usefulness.

    To see this in action you'll build a tool to check the weather.

    Step 1: Define getWeather Tool

    Create a new file app/api/chat/tools.ts to define our weather tool.

    1. Start with the basic structure:
    typescript
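    // A minimal sketch of the starting point in app/api/chat/tools.ts, assuming
    // AI SDK v5's `tool` helper and zod for schemas (v4 names the schema field
    // `parameters`); the course's exact starter code may differ.
    import { tool } from 'ai';
    import { z } from 'zod';

    export const getWeather = tool({
      description: '',           // added in the next step
      inputSchema: z.object({}), // defined in step 3
      execute: async () => ({}), // implemented in step 4
    });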
    2. Add the description to help the AI understand when to use this tool:
    typescript
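    // Sketch: the same tool with a description added. The wording below is
    // illustrative, not the course's exact text.
    export const getWeather = tool({
      description:
        'Get the current weather for a specific city. Use this when users ask ' +
        'about weather, temperature, or conditions in a location.',
      inputSchema: z.object({}), // defined in the next step
      execute: async () => ({}), // implemented in step 4
    });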

    The description is what the AI reads to decide if this tool matches the user's request.

    Prompt Engineering for Tools

    The description field is crucial - it's how the AI understands when to use your tool. Be specific and clear:

    • ✅ Good: "Get current weather for a specific city. Use when users ask about weather, temperature, or conditions."
    • ❌ Bad: "Weather tool"

    The AI uses semantic matching between the user's query and your description to decide which tool to call.

    3. Define the input schema - what parameters the tool needs:
    typescript
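    // Sketch: adding the input schema. A single `city` string is assumed here,
    // since the lesson says the AI extracts the city name from the user's message.
    export const getWeather = tool({
      description:
        'Get the current weather for a specific city. Use this when users ask ' +
        'about weather, temperature, or conditions in a location.',
      inputSchema: z.object({
        city: z.string().describe('The city to get the current weather for'),
      }),
      execute: async () => ({}), // implemented in the next step
    });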

    The AI will extract the city name from the user's message and pass it to your tool.

    💡 Need Help Designing Tool Schemas?

    Unsure about what parameters your tool should accept or how to structure them? Try this:

    text
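    One possible prompt (adapt the bracketed parts to your own tool):

    I'm adding a tool to an AI chatbot that [describe what the tool does].
    What input parameters should it accept, what type should each one be,
    and which should be optional? Propose a Zod schema I can pass to the
    AI SDK's inputSchema field, with .describe() hints for each parameter.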
    4. Implement the execute function with a simple weather API:
    typescript
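    // Sketch of the execute function using Open-Meteo's free, keyless APIs:
    // geocode the city, then fetch the current temperature and humidity.
    // The endpoints and query params come from Open-Meteo's public docs; the
    // course's exact implementation may differ.
    export const getWeather = tool({
      description:
        'Get the current weather for a specific city. Use this when users ask ' +
        'about weather, temperature, or conditions in a location.',
      inputSchema: z.object({
        city: z.string().describe('The city to get the current weather for'),
      }),
      execute: async ({ city }) => {
        // 1. Resolve the city name to coordinates.
        const geoRes = await fetch(
          `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`,
        );
        const geo = await geoRes.json();
        const place = geo.results?.[0];
        if (!place) {
          return { error: `Could not find a location named "${city}".` };
        }

        // 2. Fetch current conditions for those coordinates.
        const weatherRes = await fetch(
          `https://api.open-meteo.com/v1/forecast?latitude=${place.latitude}&longitude=${place.longitude}&current=temperature_2m,relative_humidity_2m`,
        );
        const weather = await weatherRes.json();

        // 3. Return structured data the model can use in its final response.
        return {
          city: place.name,
          temperature: weather.current.temperature_2m,
          humidity: weather.current.relative_humidity_2m,
          unit: '°C',
        };
      },
    });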

    What just happened?

    You built a complete tool in 4 progressive steps:

    1. Description: Tells the AI when to use this tool
    2. Input Schema: Defines what parameters the AI should extract
    3. Execute Function: The actual code that runs when called
    4. Return Value: Structured data the AI can use in its response

    The Open-Meteo API is free and requires no API key - perfect for demos!

    Step 2: Connect the Tool to Your API Route

    Now update your API route to use this tool. Modify app/api/chat/route.ts:

    typescript
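    // Sketch of app/api/chat/route.ts with the tool registered. The model and
    // system prompt are placeholders - keep whichever ones earlier lessons set up.
    import { streamText, convertToModelMessages, type UIMessage } from 'ai';
    import { openai } from '@ai-sdk/openai';
    import { getWeather } from './tools';

    export async function POST(req: Request) {
      const { messages }: { messages: UIMessage[] } = await req.json();

      const result = streamText({
        model: openai('gpt-4o'),                 // whichever model you used before
        system: 'You are a helpful assistant.',  // your system prompt from the previous lesson
        messages: convertToModelMessages(messages),
        tools: { getWeather },                   // register the tool with the model
      });

      return result.toUIMessageStreamResponse();
    }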

    Key changes:

    • Import the getWeather tool from ./tools
    • Add tools: { getWeather } to register it with the AI

    Your chatbot now has access to the weather tool! Try asking "What's the weather in Tokyo?" - but you'll notice the response shows raw JSON data. Let's fix that next.

    Step 3: Handle Tool Calls in the UI

    With tools enabled, messages now have different parts - some are text, some are tool calls. We need to handle both types.

    First, update your message rendering to check the part type. Remember our current code just shows text? Let's evolve it:

    typescript
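    // Roughly what the text-only rendering from the earlier lessons looks like;
    // your actual wrapper components, keys, and styling may differ.
    {message.parts.map((part, index) =>
      part.type === 'text' ? (
        <Response key={index}>{part.text}</Response>
      ) : null,
    )}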

    Now let's handle both text AND tool calls. We'll use a switch statement to handle different part types:

    typescript
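    // Sketch: handling text parts and getWeather tool parts. In AI SDK v5 a call
    // to a registered tool appears as a part whose type is `tool-<toolName>`.
    {message.parts.map((part, index) => {
      switch (part.type) {
        case 'text':
          return <Response key={index}>{part.text}</Response>;

        case 'tool-getWeather':
          // Raw view for now: dump the whole part so the call's input,
          // state, and output are visible.
          return <pre key={index}>{JSON.stringify(part, null, 2)}</pre>;

        default:
          return null;
      }
    })}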

    Test it now: Ask "What's the weather in San Francisco?" and you'll see:

    • Your message appears
    • Raw tool call data showing the city parameter
    • The temperature and weather data returned
    • The AI's final response using that data

    [Screenshot: the chat UI showing the raw tool call data]

    This raw view helps you understand the tool calling flow!

    Step 4: Make It Beautiful with Elements

    Now that you understand the raw data, let's replace that JSON dump with beautiful Elements components. First, add the Tool imports to your existing imports:

    typescript
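    // The AI Elements tool components, assuming they were installed to the
    // default components/ai-elements path.
    import {
      Tool,
      ToolHeader,
      ToolContent,
      ToolInput,
      ToolOutput,
    } from '@/components/ai-elements/tool';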

    Then replace your raw JSON display with the Elements components:

    typescript
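    // Sketch: swapping the raw <pre> dump for the Elements tool card inside the
    // same switch statement. Prop names follow the AI Elements tool component
    // docs; double-check them against the version you installed.
    case 'tool-getWeather':
      return (
        <Tool key={index}>
          <ToolHeader type="tool-getWeather" state={part.state} />
          <ToolContent>
            <ToolInput input={part.input} />
            <ToolOutput
              output={JSON.stringify(part.output, null, 2)}
              errorText={part.errorText}
            />
          </ToolContent>
        </Tool>
      );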

    Test it: Ask "What's the weather in San Francisco?" again. Now instead of raw JSON, you'll see:

    • A beautiful tool card with the tool name and status
    • Formatted input parameters showing the city
    • Nicely displayed output data with temperature and humidity

    [Screenshot: the chat UI showing the Elements tool card with the tool name and status]

    The Elements components automatically handle loading states, errors, and formatting - much better than raw JSON!

    Step 5: Test the Complete Implementation

    Start your dev server:

    bash
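    # Assuming npm; use `pnpm dev`, `yarn dev`, or `bun dev` if that's what the
    # course set up.
    npm run dev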

    Navigate to http://localhost:3000/chat and ask: "What's the weather in San Francisco?"

    You should now see:

    1. Your message - "What's the weather in San Francisco?"
    2. Tool execution card - Shows the weather API call with input city and output data

    Why No Natural Language Response?

    Notice you only see the tool output - no AI explanation of the weather data. By default, the AI stops after executing a tool and returns the raw results.

    To get the AI to provide a natural language response that synthesizes the tool data (like "The weather in San Francisco is 19°C and cloudy"), you need to enable multi-step conversations. We'll cover this in the next lesson!

    Key Takeaways

    You've given your chatbot superpowers with tool calling:

    • Tools extend AI capabilities - Access real-time data, perform calculations, call APIs
    • The tool helper defines what a tool can do - via its description, input schema, and execute function
    • Tool registration via tools property - Makes tools available to the model
    • Elements UI displays everything beautifully - Professional presentation of both text and tool activity

    Further Reading (Optional)

    Strengthen your tool-calling implementation with these security-focused resources:

    • LLM Function Calling Security (OpenAI Docs)
      Official guidance on hardening function calls (parameter validation, auth, rate limits).
    • OWASP Top 10 for Large Language Models
      Community-maintained list of the most critical security risks when deploying LLMs.
    • Prompt Injection Payloads Encyclopedia (PIPE)
      A living catalogue of real-world prompt-injection vectors to test against.
    • NVIDIA NeMo Guardrails Security Guidelines
      Practical design principles for safely granting LLMs access to external tools/APIs.
    • Function Calling Using LLMs — Martin Fowler
      Architectural walkthrough of building a secure, extensible tool-calling agent.
    • Step-by-Step Guide to Securing LLM Applications (Protect AI)
      Lifecycle-based checklist covering training, deployment and runtime hardening.

    Up Next: Multi-Step Conversations & Generative UI

    Your model can now call a single tool and return its results. But what if you need multiple tools in one conversation? Or want to display rich UI components instead of just text?

    The next lesson explores Multi-Step Conversations where the AI can chain multiple tool calls together, and Generative UI to render beautiful interactive components directly in the chat.
