You now have a full‑stack TanStack Start app with:
- Server functions, server routes, and loaders for data fetching
- A persistent chat list (file‑based storage)
- A working AI chat backed by the AI SDK and OpenAI
This chapter shows how to ship it and where to take it next.
Railway is an excellent platform for hosting TanStack Start applications. It provides simple deployments with automatic builds and easy environment variable management.
- Commit your code to a Git provider (GitHub, GitLab, Bitbucket).
- Ensure `.env` is in `.gitignore` (never commit secrets).
- Go to https://railway.app and sign up with GitHub
- Click New Project → Deploy from GitHub repo
- Select your repository
- Railway will auto-detect your project settings
In Railway dashboard:
- Click on your deployment
- Go to the Variables tab
- Add your environment variable: `OPENAI_API_KEY`, set to your OpenAI API key
Locally, use .env:
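For local development, the key lives in a `.env` file at the project root (the value below is a placeholder):

```
# .env — local only; keep this file in .gitignore
OPENAI_API_KEY=sk-your-key-here
```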
Railway should auto-detect Bun. If not, set these in Settings:
- Build Command: `bun install && bun run build`
- Start Command: `bun run start`
Railway will automatically build and deploy your app. Subsequent pushes to your default branch will trigger automatic deployments.
File storage is great to start, but you'll want a real database for reliability and scale. Popular options:
- Railway Postgres (managed, included with your deployment)
- Neon or Supabase (Postgres)
- Turso (SQLite, edge-ready)
- SQLite for simple/self‑hosted flows
Two common ORMs:
- Prisma (schema‑driven, batteries included)
- Drizzle (lightweight, SQL‑first)
- Install and initialize Prisma (`bun add prisma @prisma/client`, then `bunx prisma init`)
- Set `DATABASE_URL` in `.env` (and in Railway → Variables)
Railway tip: You can add a PostgreSQL database directly in Railway by clicking New → Database → Add PostgreSQL. Railway will automatically set the DATABASE_URL environment variable for you.
- Define a basic schema in `prisma/schema.prisma`
- Create the database and generate the client types (`bunx prisma db push`, or `bunx prisma migrate dev` if you want migrations)
- Update `src/db/chat.ts` to read and write via Prisma instead of the file utilities. After mutations, you may need to call `router.invalidate()` to refresh loader data.
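A minimal `prisma/schema.prisma` for the steps above might look like this (the `Chat` model and its field names are illustrative, not prescribed by this chapter):

```prisma
// prisma/schema.prisma — a minimal starting point
datasource db {
  provider = "postgresql"          // matches the Railway Postgres option
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model Chat {
  id        String   @id @default(cuid())
  title     String
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
```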
Add a Message model with chatId, role, content, and timestamps. Store/send the history in your server route so the AI has context after refreshes.
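The paragraph above can be sketched as a small pure helper: it takes stored rows (shaped like the suggested `Message` model) and produces the `{ role, content }` history your server route would pass to the model. The names `StoredMessage` and `toModelMessages` are illustrative, not from this chapter:

```typescript
// Shape mirroring the suggested Message model: chatId, role, content, timestamps.
type StoredMessage = {
  id: string;
  chatId: string;
  role: "user" | "assistant";
  content: string;
  createdAt: Date;
};

// Convert persisted rows into the { role, content } history the AI SDK expects,
// oldest first, so the model keeps context after a page refresh.
function toModelMessages(
  rows: StoredMessage[]
): { role: "user" | "assistant"; content: string }[] {
  return [...rows]
    .sort((a, b) => a.createdAt.getTime() - b.createdAt.getTime())
    .map(({ role, content }) => ({ role, content }));
}
```

In the chat server route, you would load the chat's rows, call `toModelMessages`, append the incoming user message, and hand the result to the AI SDK.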
- Error handling: Return helpful errors from server routes; guard empty inputs with proper validation
- Rate limiting: Protect your AI endpoint (e.g., Upstash Ratelimit, or a simple limiter in the server route itself)
- Secrets: Never expose `OPENAI_API_KEY` to the client; keep it in server routes only
- Logging/monitoring: Railway provides built-in logs; integrate Sentry or OpenTelemetry for deeper insights
- Performance: Use loaders effectively; leverage TanStack Router's built-in caching and preloading
- Accessibility: Labels, focus management, keyboard navigation
- Testing:
- Unit: Vitest for utilities and DB logic (TanStack Start supports Vitest out of the box)
- E2E: Playwright or Cypress for core chat flows
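To make the rate-limiting bullet concrete, here is a minimal fixed-window limiter you could call at the top of the AI server route. It is an in-memory sketch (state is lost on redeploy and not shared across instances), so a shared store such as Upstash remains the better production fit; all names are illustrative:

```typescript
// Minimal fixed-window rate limiter, keyed by a client identifier (e.g., IP).
// In-memory only: resets on restart, single-instance only.
type Window = { count: number; resetAt: number };

function createRateLimiter(limit: number, windowMs: number) {
  const windows = new Map<string, Window>();
  // Returns true if the request is allowed, false if the caller is over the limit.
  return function check(key: string, now: number = Date.now()): boolean {
    const w = windows.get(key);
    if (!w || now >= w.resetAt) {
      windows.set(key, { count: 1, resetAt: now + windowMs });
      return true; // first request in a fresh window
    }
    if (w.count < limit) {
      w.count++;
      return true;
    }
    return false; // blocked until the window resets
  };
}
```

In the chat route you would call `check(clientIp)` and return a 429 response whenever it yields `false`.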
- Multiple models and providers: OpenAI, Azure OpenAI, Anthropic, or local models
- Chat title auto‑generation: Update titles based on the first message using AI
- Message editing / regeneration: Allow users to edit their messages or regenerate AI responses
- Delete chats: Add delete functionality with confirmation dialogs
- Streaming UI polish: Add typing indicators, markdown rendering with syntax highlighting
- Authentication: Use Clerk or Auth.js to protect user data and enable multi-user support
- File uploads: Add support for image or document uploads in chat
- Export conversations: Allow users to download chat history as markdown or PDF
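As a sketch of the "export conversations" idea above, a chat can be rendered to markdown with one pure function (types and names are illustrative):

```typescript
type ExportMessage = { role: "user" | "assistant"; content: string };

// Render a chat as a markdown document the user can download.
function chatToMarkdown(title: string, messages: ExportMessage[]): string {
  const header = `# ${title}\n`;
  const body = messages
    .map((m) => `**${m.role === "user" ? "You" : "Assistant"}:**\n\n${m.content}`)
    .join("\n\n---\n\n");
  return `${header}\n${body}\n`;
}
```

You could serve the result from a server route with a `Content-Disposition: attachment` header, or build a `Blob` client-side and trigger a download link.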
You've built a real, deployable TanStack Start application! Ship it to Railway, move storage to a database when you're ready, and iterate on UX, reliability, and performance. From here, you can scale features confidently while enjoying TanStack Start's excellent developer experience and type-safe routing.