
    Builders Guide to the AI SDK


    Basic Chatbot

    You've been using AI behind the scenes for classification, summarization, and extraction. Now let's build something everyone recognizes: a ChatGPT-style conversational interface. Over the next five lessons, you'll start with the fundamentals of streaming chat, then progressively add the features that make these interfaces powerful: professional UI components, system prompts for personality, tool calling to connect with real-world data, and multi-step reasoning with dynamic UI generation.

    We'll begin with the core architecture that powers every AI chat interface:

    • Set up an API route that uses streamText.
    • Implement the frontend with useChat.

    Project Context

    We're working in the app/(5-chatbot)/ directory. It's the same project setup as before, but now we're building both the server and client sides.

    Chatbot Architecture Overview

    Your chatbot will have two parts: a backend and a frontend. The backend connects to the LLM and exposes an API for the frontend to use. A backend is required because calling LLM APIs involves a secret API key, plus authentication, rate limiting, and other functionality that must run on the server.

    The frontend is what the user interacts with in the browser. It's the UI.

    (Diagram: chatbot architecture)

    Step 1: Create Route Handler

    First, create the API endpoint that will handle chat requests from your frontend.

    What are Next.js Route Handlers?

    Route Handlers are serverless endpoints in your Next.js app. They can live anywhere in the app/ directory (not just /api/), though we'll use the /api/ convention here. No separate backend is needed, which makes them a natural fit for AI functionality.

    1. Create the file: app/api/chat/route.ts

    2. Implement the streaming chat endpoint in that file.
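The two steps above combine into something like the following. This is a minimal sketch assuming AI SDK v5, where `streamText`, `convertToModelMessages`, and `UIMessage` come from the `ai` package, and the gateway-style model string `openai/gpt-4.1` is the one this lesson uses later:

```typescript
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

// POST /api/chat: receives the UI message history and streams the model's reply back.
export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // streamText starts the model call and returns a streaming result immediately.
  const result = streamText({
    model: 'openai/gpt-4.1',
    messages: convertToModelMessages(messages),
  });

  // Convert the stream into a response the frontend's useChat hook can consume.
  return result.toUIMessageStreamResponse();
}
```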

    Key components explained:

    • streamText - Enables real-time streaming from the AI model
    • convertToModelMessages - Converts frontend message format to AI model format
    • toUIMessageStreamResponse() - Formats the stream for the frontend to consume

    Step 2: Implement Frontend with useChat

    Now let's build the UI using the useChat hook. Open app/(5-chatbot)/chat/page.tsx and replace the placeholder content.

    1. Start with the imports and basic setup.
    2. Add the useChat hook and message display.

    Default API Endpoint

    The useChat hook automatically uses /api/chat as its endpoint. If you need a different endpoint or custom transport behavior, check out the transport documentation.

    3. Add the input form.
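Putting the three steps together, a minimal sketch of the page component, assuming AI SDK v5 (where `useChat` lives in `@ai-sdk/react` and the input field is ordinary controlled-component state):

```typescript
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  // useChat manages the conversation state and calls /api/chat by default.
  const { messages, sendMessage } = useChat();
  const [input, setInput] = useState('');

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, index) =>
            // Each message is a list of parts; render only the text parts here.
            part.type === 'text' ? <span key={index}>{part.text}</span> : null,
          )}
        </div>
      ))}
      <form
        onSubmit={(event) => {
          event.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={(event) => setInput(event.target.value)}
          placeholder="Say something..."
        />
      </form>
    </div>
  );
}
```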

    How it works:

    • useChat() manages the entire chat state and API communication
    • messages contains the conversation history
    • sendMessage() sends user input to your API
    • Messages have parts for different content types (text, tool calls, etc.)

    The combination of streamText and useChat handles most of the streaming complexity for you - no manual WebSocket management or stream parsing needed.
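To make the `parts` idea concrete, here is a small self-contained sketch of flattening a message's text parts into a display string. The types below are simplified stand-ins for the SDK's real `UIMessage` shape, used only for illustration:

```typescript
// Simplified stand-in for the AI SDK's message-part union (assumption for illustration).
type MessagePart =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolName: string };

interface ChatMessage {
  id: string;
  role: 'user' | 'assistant';
  parts: MessagePart[];
}

// Collect only the text parts of a message, skipping tool calls and other part types.
function messageText(message: ChatMessage): string {
  return message.parts
    .filter((p): p is Extract<MessagePart, { type: 'text' }> => p.type === 'text')
    .map((p) => p.text)
    .join('');
}

const reply: ChatMessage = {
  id: '1',
  role: 'assistant',
  parts: [
    { type: 'text', text: 'Hello ' },
    { type: 'tool-call', toolName: 'getWeather' },
    { type: 'text', text: 'there!' },
  ],
};

console.log(messageText(reply)); // → "Hello there!"
```

This mirrors what the page component does inline when it renders only parts with `type === 'text'`.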

    Step 3: Test Your Chatbot

    Run the development server:

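Assuming the project keeps the standard Next.js scripts in package.json:

```bash
npm run dev
```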

    Navigate to http://localhost:3000/chat

    Try it out - type a message and hit Enter. Watch the AI response appear in real time!

    (Screenshot: a simple chat UI. The user types 'Hello!' and presses Enter; the AI response 'Hello there! How can I help you today?' streams into the chat window.)

    Experience the Limitations

    Before moving on, test these scenarios to understand why we need better tooling:

    1. Ask for code: "Write a Python function to calculate fibonacci numbers"

      • Notice how code blocks appear as raw ``` text
    2. Have a long conversation: Keep chatting until messages go below the fold

      • You'll have to manually scroll to see new responses
    3. Ask for formatted content: "Explain AI with headers and lists"

      • Markdown formatting shows as plain text
    4. Refresh the page: All your conversation history disappears

    5. Try to edit a long prompt: The single-line input is limiting

    These aren't bugs; they're missing features that every chat interface needs.

    Model Choice for Streaming

    We use openai/gpt-4.1 for fast, visible streaming responses. Unlike reasoning models like openai/gpt-5-mini (which think for 10-15 seconds before streaming), gpt-4.1 starts streaming immediately for a responsive user experience. Swap out the model in the streamText call to openai/gpt-5-mini to see the difference.
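The swap is a one-line change to the model string. As a fragment of the route handler (not a standalone file; the model IDs are the ones named above):

```typescript
// Inside app/api/chat/route.ts:
const result = streamText({
  model: 'openai/gpt-5-mini', // reasoning model: thinks before streaming
  // model: 'openai/gpt-4.1', // non-reasoning model: starts streaming immediately
  messages: convertToModelMessages(messages),
});
```

Reload /chat after the change and compare how long it takes for the first token to appear.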

    What you've built so far:

    • Two components: a backend (the streamText route handler) and a frontend (the useChat hook)
    • streamText manages server-side AI calls and streaming
    • useChat handles UI state, messages, and API calls
    • toUIMessageStreamResponse() connects backend to frontend
    • The frontend displays messages by parsing the streamed parts from the backend

    Feeling the Pain Yet?

    Notice how much custom code we had to write just for basic functionality? Try having a longer conversation and watch the problems pile up:

    Immediate Issues You'll Notice:

    • No markdown rendering - If the AI sends code blocks or formatting, they show as raw text
    • No auto-scrolling - New messages appear below the viewport, so you have to scroll manually
    • Basic styling - Just "User:" and "AI:" labels, no proper message bubbles
    • Fixed input weirdness - The input floats awkwardly at the bottom

    Missing Features You'll Need:

    • Loading indicators - No visual feedback while waiting for AI
    • Error handling - If the API fails, users see nothing
    • Multi-line input - Can't compose longer messages easily
    • Message persistence - Refresh = conversation gone
    • Code syntax highlighting - Code examples are unreadable

    You could spend weeks building all this from scratch... or there might be a better way. 🤔

    Next Step: A Professional Solution

    In the next lesson, we'll discover how to transform this basic chatbot into a professional interface with a single command. Get ready to have your mind blown by AI SDK Elements!


    © 2025 Hacklab
