Part of Tauri Patterns for Production

Tauri Patterns for Production: Add SQLite + Migrations to Tauri 2

Celest Kim

Video: Add SQLite + Migrations to Tauri 2 | Plugin SQL Tutorial by CelesteAI


You shipped v1 of your Tauri 2 app. It has a bookmarks table — id, url, title, created_at. A week later you want to track which bookmarks the user has actually visited. That’s a new column. Users on the old version have rows that don’t have it. Users on the new version need rows that do.

In a browser app you would call out to a backend and let some Rails or Django migration handle it. In a desktop app, the database lives on each user’s disk. You ship code, not a migrated database. The schema has to evolve on launch, on the user’s machine, idempotently, the first time the new version runs and never again.

tauri-plugin-sql is the answer. It bundles SQLite, gives you a JS-callable database handle from React, and runs a list of versioned Migration structs in order — applying only the ones that haven’t already run on the current install. Schema lives in your Rust source. The plugin reconciles it with whatever the user’s disk currently has.

This tutorial walks through the whole loop with a small but real demo: Markit, a bookmark manager with id, url, title, created_at in v1 and a visited_at column added in v2.


Why migrations matter (and why this is the hard part of desktop databases)

When your Tauri app launches on a user’s machine, the database file on their disk is in some version of the schema. You don’t know which version unless you track it. The user’s version is determined by which release of your app they installed first, and which releases they installed since.

If you just hard-code a CREATE TABLE bookmarks (...) and ship the new column inline, you have problems:

  • New users: the table is created with the new column. Works.
  • Existing users: the table already exists, so CREATE TABLE either errors (without IF NOT EXISTS) or silently no-ops (with it). Either way, the new column is never added.
  • So you write defensive code that checks for the column and adds it if missing. That holds up until the third change, then the fourth, and soon you are maintaining a hand-rolled schema-divergence detector.

Migrations cut this knot. You write a list of versioned changes:

  • v1: CREATE TABLE bookmarks (...)
  • v2: ALTER TABLE bookmarks ADD COLUMN visited_at INTEGER
  • v3: CREATE INDEX idx_bookmarks_visited ON bookmarks(visited_at)
  • v4: …

The plugin stores the current version number in the database itself. On launch, it compares your declared list to the stored version and runs only the new ones. New users get all of them in order. Existing users get only the suffix they haven’t run. Everyone converges to the same schema.
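The reconciliation logic is small enough to hold in your head as plain code. The sketch below is a model of the behavior described above, not the plugin's actual implementation:

```typescript
// Model of the plugin's migration bookkeeping (illustrative only).
interface DeclaredMigration {
  version: number;
  sql: string;
}

// Given the full declared list and the version stored in the user's database,
// return only the migrations that still need to run, in ascending order.
function pendingMigrations(
  declared: DeclaredMigration[],
  storedVersion: number
): DeclaredMigration[] {
  return declared
    .filter((m) => m.version > storedVersion)
    .sort((a, b) => a.version - b.version);
}

const declared = [
  { version: 1, sql: "CREATE TABLE bookmarks (...)" },
  { version: 2, sql: "ALTER TABLE bookmarks ADD COLUMN visited_at INTEGER" },
];

pendingMigrations(declared, 0).map((m) => m.version); // fresh install → [1, 2]
pendingMigrations(declared, 1).map((m) => m.version); // old v1 install → [2]
pendingMigrations(declared, 2).map((m) => m.version); // up to date → []
```

The append-only rule follows directly from this model: the stored version only ever moves forward, so editing an already-run migration changes nothing for existing installs.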

You never edit a shipped migration. Ever. Once v1 is in production, v1 is frozen — you only ever append.


The demo: Markit

Markit is a bookmark manager. Three pieces of UI: a URL input, a title input, an Add button. Below that, a list. Click a row to mark it visited; the row gets a green check.

The whole app is one App.tsx, ~30 lines of lib.rs (most of which is the migration declarations), and one capability entry.

demo-app/markit/
├── package.json                  ← adds @tauri-apps/plugin-sql
├── src-tauri/
│   ├── Cargo.toml                ← adds tauri-plugin-sql w/ sqlite feature
│   ├── tauri.conf.json
│   ├── capabilities/default.json ← sql:default + allow-execute + allow-select
│   └── src/lib.rs                ← declares migrations + registers plugin
└── src/App.tsx                   ← load → select → insert/update → refresh

We are skipping the pnpm create tauri-app part — assume you already have a working Tauri 2 + React + TypeScript app. Adding SQLite + migrations to an existing app is the ~30 lines below.


Step 1 — Install the plugin (both sides)

tauri-plugin-sql has the same two-side install as every Tauri plugin: a Rust crate and a JS package.

Rust side, in src-tauri/Cargo.toml:

[dependencies]
tauri = { version = "2", features = [] }
tauri-plugin-sql = { version = "2", features = ["sqlite"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"

The features = ["sqlite"] entry selects the SQLite backend. The plugin also supports mysql and postgres features — useful if you’re connecting to a server-side database, but for a self-contained desktop app sqlite is the right default.

JS side, in package.json:

"dependencies": {
  "@tauri-apps/api": "^2",
  "@tauri-apps/plugin-sql": "^2",
  "react": "^19.1.0",
  "react-dom": "^19.1.0"
}

Then pnpm install. The first Rust build will take a couple of minutes — sqlx (which the plugin uses under the hood) is a chunky dependency.


Step 2 — Capability gate the plugin

Tauri 2 gates every plugin behind explicit permissions. The SQL plugin’s permissions live in src-tauri/capabilities/default.json:

{
  "$schema": "../gen/schemas/desktop-schema.json",
  "identifier": "default",
  "description": "Capability for the main window",
  "windows": ["main"],
  "permissions": [
    "core:default",
    "sql:default",
    "sql:allow-execute",
    "sql:allow-select"
  ]
}

Three lines worth knowing:

  • sql:default unlocks Database.load() — the function that opens the connection and runs migrations. Without it, the very first JS call errors with permission not granted.
  • sql:allow-execute unlocks every write operation: INSERT, UPDATE, DELETE, ALTER. Anything that mutates the database goes through db.execute(), and execute is gated by this permission.
  • sql:allow-select unlocks reads via db.select().

The plugin also ships deny- counterparts and permissions for its other commands (sql:allow-load, sql:allow-close) if you want to tune the surface. Note that gating is per operation, not per statement: allow-execute covers every kind of write. For a tighter threat model, move sensitive SQL behind your own Rust commands. For development and small apps, the three above are sufficient.

If you forget one of these, the failure is loud: the JS call rejects with the permission name. That’s a feature; the system tells you what you missed.


Step 3 — Declare migrations in Rust

The plugin owns a Vec<Migration>. You build it in lib.rs and hand it to the plugin builder:

use tauri_plugin_sql::{Migration, MigrationKind};

#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
  let migrations = vec![
    Migration {
      version: 1,
      description: "create bookmarks table",
      sql: "CREATE TABLE bookmarks (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        url TEXT NOT NULL,
        title TEXT NOT NULL,
        created_at INTEGER NOT NULL
      );",
      kind: MigrationKind::Up,
    },
    Migration {
      version: 2,
      description: "add visited_at",
      sql: "ALTER TABLE bookmarks ADD COLUMN visited_at INTEGER;",
      kind: MigrationKind::Up,
    },
  ];

  tauri::Builder::default()
    .plugin(
      tauri_plugin_sql::Builder::default()
        .add_migrations("sqlite:markit.db", migrations)
        .build(),
    )
    .run(tauri::generate_context!())
    .expect("error while running tauri application");
}

Four fields on a Migration:

  • version — monotonically increasing integer. Start at 1. Always append, never reorder, never reuse a number.
  • description — human label. Shows up in logs and tooling. Doesn’t affect runtime behavior, but you will want it the day you debug a botched migration on a user’s machine.
  • sql — the SQL string. Multi-statement is allowed; the plugin uses sqlx underneath which executes each statement in turn. Wrap related statements in a single migration so they apply atomically.
  • kind — MigrationKind::Up for forward migrations. The plugin also supports Down for reversals, though you almost never use it in production — by the time a user has run a forward migration, “down” usually means data loss, and you should ship a new forward migration that fixes the issue instead.

The string "sqlite:markit.db" passed to add_migrations is the connection string. The sqlite: prefix tells the plugin which backend to use; markit.db is the file name. The plugin resolves the full path against the OS app data directory derived from your tauri.conf.json identifier — on macOS that’s ~/Library/Application Support/com.codegiz.markit/markit.db, on Linux ~/.local/share/com.codegiz.markit/markit.db, on Windows %APPDATA%/com.codegiz.markit/markit.db.

You can register multiple databases with multiple add_migrations calls if you want one file per concern — markit.db for bookmarks, cache.db for derived data. Each gets its own migration list and its own version pointer.


Step 4 — Wire it up in React

src/App.tsx:

import { useEffect, useState } from "react";
import Database from "@tauri-apps/plugin-sql";
import "./App.css";

interface Bookmark {
  id: number;
  url: string;
  title: string;
  visited_at: number | null;
}

export default function App() {
  const [db, setDb] = useState<Database | null>(null);
  const [bookmarks, setBookmarks] = useState<Bookmark[]>([]);
  const [url, setUrl] = useState("");
  const [title, setTitle] = useState("");

  const refresh = (d: Database) =>
    d.select<Bookmark[]>("SELECT * FROM bookmarks ORDER BY id DESC")
     .then(setBookmarks);

  useEffect(() => {
    Database.load("sqlite:markit.db").then((d) => {
      setDb(d);
      refresh(d);
    });
  }, []);

  const add = async () => {
    if (!db || !url || !title) return;
    await db.execute(
      "INSERT INTO bookmarks (url, title, created_at) VALUES ($1, $2, $3)",
      [url, title, Date.now()]
    );
    setUrl("");
    setTitle("");
    refresh(db);
  };

  const visit = async (id: number) => {
    if (!db) return;
    await db.execute(
      "UPDATE bookmarks SET visited_at = $1 WHERE id = $2",
      [Date.now(), id]
    );
    refresh(db);
  };

  // ... JSX with inputs, Add button, list of bookmarks
}

The shape is unsurprising if you’ve used any SQL library:

  1. Database.load("sqlite:markit.db") opens the connection. The connection string here is the same one you registered migrations against. The plugin runs any pending migrations before resolving the promise — so by the time setDb(d) runs, the schema is up to date.

  2. db.select<Bookmark[]>("SELECT ...") runs a read query and returns typed rows. The generic <Bookmark[]> is the TypeScript shape; the plugin doesn’t enforce it at runtime, so make sure your interface matches what the columns actually return.

  3. db.execute("INSERT ...", [params]) runs a write. The dollar-numbered placeholders ($1, $2, $3) get replaced with values from the parameter array. The plugin uses bound parameters under the hood — not string concatenation — so SQL injection isn’t a worry as long as you use placeholders.

  4. After every write, refresh(db) re-runs the SELECT and pushes the result into React state. This is the simplest possible reactivity model: write, refresh, render. For a small app it’s fine. For a larger one you’d add finer-grained updates (e.g. update just the changed row), but the cost of always-refreshing 100 rows is negligible.
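If you do outgrow the blanket refresh, the finer-grained alternative from point 4 is a pure merge of the changed row into existing state. A sketch (upsertRow is a hypothetical helper, not part of the plugin):

```typescript
interface Bookmark {
  id: number;
  url: string;
  title: string;
  visited_at: number | null;
}

// Merge one changed row into the current list without re-running the SELECT.
// New rows are prepended (matching the ORDER BY id DESC query); existing rows
// are replaced in place.
function upsertRow(rows: Bookmark[], row: Bookmark): Bookmark[] {
  const exists = rows.some((r) => r.id === row.id);
  return exists
    ? rows.map((r) => (r.id === row.id ? row : r))
    : [row, ...rows];
}
```

You would call setBookmarks(upsertRow(bookmarks, updated)) after an UPDATE instead of re-querying. As noted above, don't bother until you can actually measure the cost of the refresh.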

The setUrl("") and setTitle("") calls inside add clear the inputs after a successful insert — small UX detail that matters.
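Because select's generic is a compile-time promise only (point 2 above), larger apps sometimes add a runtime guard before trusting the rows. A minimal sketch, with asBookmark as a hypothetical narrowing helper you would write yourself:

```typescript
// Same shape as App.tsx's Bookmark interface.
interface Bookmark {
  id: number;
  url: string;
  title: string;
  visited_at: number | null;
}

// Narrow an unknown row to Bookmark, returning null on a shape mismatch
// instead of letting a bad cast propagate into React state.
function asBookmark(row: unknown): Bookmark | null {
  if (typeof row !== "object" || row === null) return null;
  const r = row as Record<string, unknown>;
  if (typeof r.id !== "number") return null;
  if (typeof r.url !== "string") return null;
  if (typeof r.title !== "string") return null;
  if (typeof r.visited_at !== "number" && r.visited_at !== null) return null;
  return {
    id: r.id,
    url: r.url,
    title: r.title,
    visited_at: r.visited_at as number | null,
  };
}
```

You'd map the result of db.select through this and drop (or log) any nulls, which catches schema/interface drift the moment it happens rather than deep inside a render.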


Step 5 — Run it

pnpm tauri dev

The first launch:

  1. The plugin opens (or creates) ~/Library/Application Support/com.codegiz.markit/markit.db.
  2. It reads the current schema version. For a fresh database, the version is 0.
  3. It sees your declared migrations (v1, v2) and runs both in order. After v2, the bookmarks table exists with the visited_at column.
  4. The version pointer gets updated to 2.
  5. Database.load resolves, your React code runs refresh(), the empty list renders.

On every subsequent launch:

  1. Open DB.
  2. Read version (now 2).
  3. Compare to declared list — nothing newer. Nothing runs.
  4. Resolve, render, you’re done.

When you ship v3 (say, an index on visited_at):

  1. Open DB.
  2. Read version (still 2).
  3. See v3 in the list. Run it. Update pointer to 3.
  4. Resolve, render.

Each user’s database converges to the latest schema exactly once, in order, idempotently.


What’s actually durable, and where

The markit.db file lives at the OS-standard app data path. On macOS that’s ~/Library/Application Support/com.codegiz.markit/markit.db. SQLite also creates two sibling files for write-ahead logging — markit.db-shm and markit.db-wal — that you should never delete manually. They’re how SQLite guarantees durability across crashes.

What survives:

  • Every row your code inserts.
  • The schema (the current version plus every migration that has been applied).
  • The visited_at timestamps when users click rows.

What doesn’t survive:

  • The React state (the url and title inputs). That’s component state, intentionally recreated on every launch.
  • Any in-memory derived data that you don’t write back.

What’s not in the DB but is still durable elsewhere:

  • The user’s window position (managed by Tauri’s window-state plugin, if you opt in).
  • App preferences (typically the store plugin for flat key-value data).

The mental model: SQLite for relational rows, the store plugin for flat preferences, and the window-state plugin for window geometry. Use the right tool per concern.


Five patterns you will use

  1. Migrations are append-only forever. Edit a shipped migration and you’ll create divergence between users who ran the old version and users who run the new one. Always add a new version, never edit an old one.

  2. One database file per concern beats one giant DB. markit.db for bookmarks, cache.db for derived data, auth.db for tokens. Multiple Database.load calls each open their own connection; nothing forces you into a single file. Smaller files migrate faster, recover faster on corruption, are easier to back up selectively, and naturally enforce module boundaries.

  3. Bind parameters, never concatenate. The $1, $2, $3 placeholders pass values as bound parameters under the hood. If a bookmark title contains a quote and you string-concat it into a query, you’ll either crash or, worse, execute attacker-controlled SQL. SQL injection is not only a web-app problem — desktop apps that import user data are vulnerable in the same way.

  4. Refresh after every write is good enough for small apps. A blanket refresh(db) after every execute keeps your code simple — there’s no separate “update row N” path, no optimistic-update logic. The cost is one round-trip to SQLite, which for hundreds of rows is sub-millisecond. Reach for finer-grained updates only when you can measure a problem.

  5. Track schema versions like you track app versions. When you debug a user’s broken install, the first question is “which migration did they last run?” The plugin stores this in the database itself: sqlx, which runs the migrations, records each applied version in a _sqlx_migrations table you can inspect with the sqlite3 CLI. Build that into your support tooling early.
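Pattern 3 is easiest to see with plain strings. The naiveInsert helper below is hypothetical and exists only to show the failure mode; the safe form is what db.execute already does with placeholders:

```typescript
// DANGEROUS (illustration only): interpolating user input into SQL text.
function naiveInsert(title: string): string {
  return `INSERT INTO bookmarks (title) VALUES ('${title}')`;
}

// A title containing a quote terminates the string literal early, and
// everything after it is parsed as SQL:
const hostile = "x'); DROP TABLE bookmarks; --";
naiveInsert(hostile);
// → INSERT INTO bookmarks (title) VALUES ('x'); DROP TABLE bookmarks; --')

// SAFE: with bound parameters the SQL text is constant; the value travels
// out-of-band and is never parsed as SQL.
const sql = "INSERT INTO bookmarks (title) VALUES ($1)";
const params = [hostile]; // what you'd hand to db.execute(sql, params)
```

The fix is structural, not sanitization: no amount of quoting discipline beats never letting user data touch the statement text at all.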


Where to take this next

The 30-line Markit app exercises every pattern you need for real Tauri+SQLite work. Real-world apps stack a few more things on top:

  • Indexes. Add a v3 migration with CREATE INDEX idx_bookmarks_url ON bookmarks(url) once you have many rows or repeated WHERE clauses on a column.
  • Foreign keys. SQLite supports them, but enforcement is off by default; you have to enable it per connection with PRAGMA foreign_keys = ON. The simplest place is right after Database.load, via db.execute("PRAGMA foreign_keys = ON").
  • JSON columns. SQLite has a json extension; storing complex blobs as JSON inside a TEXT column is often pragmatic. Use json_extract() in queries for filtering.
  • Backups. Copy the .db file while the app is closed (or use SQLite’s online backup API, e.g. VACUUM INTO, while it’s open). Test restore on a clean machine before you trust your backup process.
  • Schema versioning in your CI. Snapshot the database after each migration and diff against the previous snapshot in tests. Catches accidental shipped breakage.
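The backup bullet can start as a one-function script. This sketch uses Node's standard library and an illustrative path; inside the app itself you'd reach for Tauri's fs plugin instead, and it is only safe while the app (and therefore its SQLite connection) is closed:

```typescript
import { copyFileSync } from "node:fs";

// Copy the SQLite file to a timestamped sibling. Run this only while the app
// is closed: an open connection may still hold writes in markit.db-wal that
// haven't been checkpointed into markit.db yet.
function backupDatabase(dbPath: string): string {
  const dest = `${dbPath}.${Date.now()}.bak`;
  copyFileSync(dbPath, dest);
  return dest;
}
```

Restore is the same copy in reverse, again with the app closed; the point of testing it on a clean machine is to prove the .bak file alone is sufficient.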

For most desktop apps, though, the 30 lines you just wrote are enough to ship.


This channel is run by Claude AI. Tutorials AI-produced; reviewed and published by Codegiz. Source code at codegiz.com.

Part of Tauri Patterns for Production — full playlist linked in the channel.
