Tauri Patterns for Production: Share State Across Tauri Commands
Video: Share State Across Tauri Commands | Mutex AppState Tutorial by CelesteAI
You have a Tauri 2 app and three commands that need to read and write the same data. A start command flips a flag. A stop command reads that flag and updates a counter. A get-status command reads both. The data isn’t a file or a database row — it’s just in memory, but it has to be the same memory for every command and consistent under concurrent access.
The answer in Tauri 2 is one line: tauri::Builder::default().manage(Mutex::new(AppState::default())). The state lives inside a Mutex, which lives inside Tauri’s managed-state registry. Every command that wants the state declares State<'_, Mutex<AppState>> as a parameter, and Tauri injects the registered value. Locking is your responsibility; the rest is wired up.
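In miniature, the whole pattern is three pieces: register, inject, lock. A sketch with a hypothetical counter state, not Tickr's code:
use std::sync::Mutex;
use tauri::State;

#[derive(Default)]
struct Counter { value: u64 } // placeholder state for illustration

#[tauri::command]
fn bump(state: State<'_, Mutex<Counter>>) -> u64 {
    let mut s = state.lock().unwrap(); // locking is your responsibility
    s.value += 1;
    s.value
}

// tauri::Builder::default()
//     .manage(Mutex::new(Counter::default())) // registration: the one line
//     .invoke_handler(tauri::generate_handler![bump])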
This tutorial walks the whole loop with a small demo: Tickr, a stopwatch built around exactly the three fields that make a stopwatch interesting — a running flag, an accumulated elapsed_ms, and an Option<Instant> for when the current run started. The pattern generalizes to any app where multiple commands share mutable data.
Why a Mutex (and not three atomics)
Tickr’s state has three fields:
struct AppState {
    running: bool,
    elapsed_ms: u64,
    started_at: Option<Instant>,
}
A reasonable first reaction is “use atomics — AtomicBool for running, AtomicU64 for elapsed_ms, some unsafe cell for the Option<Instant>.” The atomics are lock-free, faster per operation, and the std library bundles them. Why pay for a Mutex?
Because the fields are coupled. They have to move together or the state doesn’t make sense:
- When the stopwatch is running, running == true and started_at == Some(_). Both.
- When stopped, running == false and started_at == None. Both.
- elapsed_ms only updates at the moment of stop — and the update is “add the current run’s duration to the accumulator.” That’s a read-then-write that depends on started_at.
With three independent atomics, you can interleave operations:
- Thread A reads running (true).
- Thread B runs a stop and flips running to false.
- Thread A reads started_at (now None, because B cleared it).
- Thread A now has inconsistent observations — it saw the stopwatch running but with no start time.
A Mutex around the struct collapses all three fields into a single critical section. Inside state.lock(), the world is yours. No other thread can observe a half-updated state because no other thread can read the inner fields without holding the same lock.
Atomics are great for one value that’s truly independent — a counter, a last_seen_timestamp. The moment you have two related values, reach for a Mutex.
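To make the contrast concrete, here is a standalone sketch (not Tickr's code) of the two shapes. The atomic version does two separate loads with a gap between them; the Mutex version reads both fields under one guard:
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Mutex;
use std::time::Instant;

struct Coupled {
    running: bool,
    started_at: Option<Instant>,
}

// Atomic version: two independent loads. Another thread can run an entire
// stop() between the two lines, so the pair can describe two different
// states of the world.
fn snapshot_atomic(running: &AtomicBool, elapsed_ms: &AtomicU64) -> (bool, u64) {
    let r = running.load(Ordering::SeqCst);
    // <- a writer can interleave here
    let e = elapsed_ms.load(Ordering::SeqCst);
    (r, e)
}

// Mutex version: one critical section, so both reads observe the same state.
fn snapshot_locked(state: &Mutex<Coupled>) -> (bool, Option<Instant>) {
    let s = state.lock().unwrap();
    (s.running, s.started_at)
}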
The demo: Tickr
A stopwatch is the smallest “interesting state” you can build:
- Start button: clock begins ticking.
- Stop button: clock pauses; accumulated time is preserved.
- Start again: clock resumes from where it stopped.
- Reset: clock zeros.
The UI is a giant 00:00.00 display and three buttons. The Rust side owns the state; the frontend polls get_elapsed 20 times a second and renders the result.
demo-app/tickr/
├── package.json
├── src-tauri/
│ ├── Cargo.toml ← only tauri itself; no extra plugins
│ ├── tauri.conf.json
│ ├── capabilities/default.json ← just core:default
│ └── src/lib.rs ← AppState + commands + manage
└── src/App.tsx ← setInterval + invoke
No plugins. No database. The state lives in RAM for the lifetime of the process — that’s the whole point of Mutex<AppState>. Quit the app and the timer resets; for persistence across launches you’d combine this with the store plugin or the sql plugin.
Step 1 — Define AppState
The struct is plain Rust. Use #[derive(Default)] so you can instantiate it with AppState::default():
use std::sync::Mutex;
use std::time::Instant;
use tauri::State;
#[derive(Default)]
struct AppState {
    running: bool,
    elapsed_ms: u64,
    started_at: Option<Instant>,
}
Three fields:
- running — is the stopwatch ticking right now?
- elapsed_ms — the total accumulated time across all previous runs.
- started_at — Some(Instant::now()) while the current run is happening; None otherwise.
Instant is monotonic — it always moves forward, doesn’t care about wall-clock changes (DST, NTP sync). For “elapsed time” measurements, you always want Instant, not SystemTime.
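A quick standalone illustration of the difference:
use std::time::{Duration, Instant, SystemTime};

fn main() {
    let t0 = Instant::now();
    std::thread::sleep(Duration::from_millis(25));
    // Monotonic: immune to the wall clock jumping (DST, NTP adjustments).
    println!("elapsed: {} ms", t0.elapsed().as_millis());

    // SystemTime answers "what time is it?", not "how long did this take?".
    // Its elapsed() can even fail if the clock moved backwards in between.
    let w0 = SystemTime::now();
    std::thread::sleep(Duration::from_millis(25));
    match w0.elapsed() {
        Ok(d) => println!("wall: {} ms", d.as_millis()),
        Err(_) => println!("wall clock went backwards"),
    }
}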
Step 2 — Write the commands
Four commands, each takes State<'_, Mutex<AppState>> as a parameter:
#[tauri::command]
fn start(state: State<'_, Mutex<AppState>>) {
    let mut s = state.lock().unwrap();
    if !s.running {
        s.running = true;
        s.started_at = Some(Instant::now());
    }
}

#[tauri::command]
fn stop(state: State<'_, Mutex<AppState>>) {
    let mut s = state.lock().unwrap();
    if s.running {
        if let Some(t) = s.started_at.take() {
            s.elapsed_ms += t.elapsed().as_millis() as u64;
        }
        s.running = false;
    }
}

#[tauri::command]
fn reset(state: State<'_, Mutex<AppState>>) {
    let mut s = state.lock().unwrap();
    s.running = false;
    s.elapsed_ms = 0;
    s.started_at = None;
}

#[tauri::command]
fn get_elapsed(state: State<'_, Mutex<AppState>>) -> u64 {
    let s = state.lock().unwrap();
    let extra = if s.running {
        s.started_at.map(|t| t.elapsed().as_millis() as u64).unwrap_or(0)
    } else {
        0
    };
    s.elapsed_ms + extra
}
A few things to notice:
State<'_, T> is dependency injection. When a command function takes state: State<'_, Mutex<AppState>>, Tauri’s command-dispatch machinery looks for a registered Mutex<AppState> in its managed-state registry and hands it over. You don’t construct State; you receive it. There’s a lifetime parameter because the underlying reference is borrowed for the duration of the command call.
.lock().unwrap(). Mutex::lock returns Result<MutexGuard<T>, PoisonError<...>>. The error case only happens if a previous thread panicked while holding the lock — which is a bug. For app-owned state, unwrap is the right call: poisoning means something went catastrophically wrong, and you want it loud. Don’t paper over it with .unwrap_or_else(|e| e.into_inner()) unless you’ve thought hard about what an inconsistent state looks like.
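If you want that decision explicit in the code rather than buried in an unwrap, the match spells out both arms. A sketch of the trade-off, not a recommendation to recover:
// Inside a command body, in place of .lock().unwrap():
let s = match state.lock() {
    Ok(guard) => guard,
    // A previous holder panicked mid-update. Recovering hands back state
    // that may violate the running/started_at invariant; only do this if
    // you repair the state afterwards.
    Err(poisoned) => poisoned.into_inner(),
};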
The stop command is where coupling matters. Three reads and three writes happen inside one critical section:
1. Read running.
2. Take started_at (set it to None, return the previous value).
3. If it was Some, read its elapsed time and add to elapsed_ms.
4. Write running = false.
Without the Mutex, between step 2 and step 4 another command could observe started_at == None and running == true, which contradicts the state-machine invariant.
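One cheap way to keep that invariant honest during development is to assert it at the end of each critical section. A sketch with a hypothetical helper, not part of the demo:
// Hypothetical debug helper: running and started_at must always agree.
fn invariant_holds(s: &AppState) -> bool {
    s.running == s.started_at.is_some()
}

// At the end of each command's critical section:
// debug_assert!(invariant_holds(&s));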
get_elapsed is the read. It locks the same Mutex, reads three fields, and computes the currently visible time. The lock is held briefly — a few field reads and a little arithmetic — so contention with the writers is minimal.
Step 3 — Register the state
.manage(Mutex::new(AppState::default())) registers the initial value on the builder:
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
    tauri::Builder::default()
        .manage(Mutex::new(AppState::default()))
        .invoke_handler(tauri::generate_handler![start, stop, reset, get_elapsed])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
.manage() puts the value into a type-keyed registry. The key is the type — so there can be one Mutex<AppState> per app. If you have unrelated states (config, in-memory cache, etc.), use different types: Mutex<Config>, Mutex<Cache>. Each gets its own registry slot.
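A sketch of that, with hypothetical Config and Cache types:
use std::collections::HashMap;
use std::sync::Mutex;
use tauri::State;

// Hypothetical types for illustration.
#[derive(Default)]
struct Config { theme: String }
#[derive(Default)]
struct Cache { entries: HashMap<String, String> }

// A command can inject both; each type resolves to its own registry slot.
#[tauri::command]
fn theme(config: State<'_, Mutex<Config>>, cache: State<'_, Mutex<Cache>>) -> String {
    let _warm = cache.lock().unwrap().entries.len();
    config.lock().unwrap().theme.clone()
}

// Registration: one .manage() per type.
// tauri::Builder::default()
//     .manage(Mutex::new(Config::default()))
//     .manage(Mutex::new(Cache::default()))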
invoke_handler(tauri::generate_handler![...]) is the macro that turns your #[tauri::command] functions into something the IPC bridge can dispatch. The macro generates the deserialization, the parameter-injection (including pulling State<T> from the registry), and the response serialization.
Order doesn’t matter — you can .manage() after .invoke_handler() or before. Tauri assembles everything at .run(). What does matter: don’t defer managing state to .setup() if commands need it immediately — an early invoke call can race with the setup hook, and a command that injects state that hasn’t been managed yet will panic. Register state on the builder directly.
Step 4 — Wire the frontend
App.tsx:
import { useEffect, useState } from "react";
import { invoke } from "@tauri-apps/api/core";
import "./App.css";

function format(ms: number) {
  const mm = Math.floor(ms / 60000);
  const ss = Math.floor((ms % 60000) / 1000);
  const cs = Math.floor((ms % 1000) / 10);
  return `${String(mm).padStart(2, "0")}:${String(ss).padStart(2, "0")}.${String(cs).padStart(2, "0")}`;
}

export default function App() {
  const [elapsed, setElapsed] = useState(0);

  useEffect(() => {
    const id = setInterval(async () => {
      const ms = await invoke<number>("get_elapsed");
      setElapsed(ms);
    }, 50);
    return () => clearInterval(id);
  }, []);

  return (
    <main className="container">
      <h1>Tickr</h1>
      <div className="clock">{format(elapsed)}</div>
      <div className="row">
        <button onClick={() => invoke("start")}>Start</button>
        <button onClick={() => invoke("stop")}>Stop</button>
        <button onClick={() => invoke("reset")}>Reset</button>
      </div>
    </main>
  );
}
The shape:
- A setInterval polls get_elapsed every 50ms. That’s 20 frames per second of UI updates, smooth enough for a clock display.
- Buttons each call invoke() on the corresponding command. No state propagation back to React from the buttons — the polling loop catches the new values on the next tick.
Why polling and not events? Tauri supports a bidirectional event system — Rust can emit events that the frontend listens for. For a high-frequency readout like a clock, polling is simpler:
- No backpressure problems (the frontend pulls when it wants).
- No event-listener cleanup gotchas (clear the interval on unmount, done).
- One line of state instead of an event handler that calls setState.
Reach for events when:
- The state changes infrequently and the frontend shouldn’t waste CPU polling.
- The frontend can’t predict when the change happens (file watcher, network status, push notification).
For a ticking clock, polling at 50ms is the right tool.
Step 5 — Run it
pnpm tauri dev
The flow:
- App launches. tauri::Builder::default() configures everything. .manage() puts a fresh Mutex<AppState> in the registry with running = false, elapsed_ms = 0, started_at = None.
- The frontend mounts. useEffect starts the polling interval. The first invoke("get_elapsed") returns 0 (because nothing has started).
- You click Start. invoke("start") resolves on the JS side, the Rust handler runs, the Mutex flips running = true and stamps started_at.
- The next polling tick reads get_elapsed, which now returns the milliseconds elapsed since started_at. The clock displays it.
- Click Stop. The handler folds the run’s duration into elapsed_ms and clears started_at. The clock stops updating because subsequent get_elapsed calls return the static elapsed_ms.
- Click Start again. New started_at. get_elapsed now returns elapsed_ms plus the time since the new started_at. The clock resumes from where it paused.
- Click Reset. All three fields zero out. The clock displays 00:00.00.
Quit the app. Everything in AppState is gone — it lived in RAM. The next launch gets a fresh AppState::default(). If you need stopwatch state to survive restarts, combine this pattern with the store plugin: load the previous elapsed_ms on startup, save it on stop.
Five patterns you will reuse
- One Mutex per coupled-data unit, registered with .manage(). If you have a config that’s read often and written rarely, that’s Mutex<Config>. If you have a cache, Mutex<Cache>. The type is the key; one Mutex per type. Resist the temptation to make a single “god struct” with everything in it — split by concern.
- Hold the lock briefly. The body of every command should look like: lock, read or mutate, release. No I/O while holding the lock. If you need to call out to the network or disk, clone the data you need out of the lock first, operate on the clone, then re-lock to write the result back. Long-held locks turn into the desktop equivalent of a server-side deadlock.
- .lock().unwrap() for app-owned state. Poisoning happens when a handler panics while holding the lock. For state your own code controls, that’s a bug you want to surface immediately. The “graceful” alternative — recovering the inner value with .into_inner() — papers over a corruption you should be fixing in the panic source.
- std::sync::Mutex is the right default. Don’t reach for parking_lot::Mutex or tokio::sync::Mutex unless you’ve measured a problem. The std Mutex is well-optimized for short critical sections, and there’s no async context to worry about for synchronous Tauri commands. Reach for tokio::sync::Mutex only when your command is async fn and the lock needs to be held across an .await — see the sketch after this list.
- Polling beats events for high-frequency readouts. setInterval calling invoke() every 50ms is the right shape for a desktop dashboard, clock, progress bar, or live readout. Events are for state changes that the frontend can’t predict. State values should be pulled.
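A sketch of that last escape hatch, assuming tokio (with its fs feature) and that the builder now manages tokio’s Mutex instead of std’s:
use tauri::State;
use tokio::sync::Mutex; // shadows std::sync::Mutex in this sketch

// Assumes .manage(Mutex::new(AppState::default())) now registers tokio's Mutex.
// Async commands that borrow State must return a Result.
#[tauri::command]
async fn export_elapsed(state: State<'_, Mutex<AppState>>) -> Result<(), String> {
    // The guard is held across the .await below. That is fine with tokio's
    // Mutex; std's guard is !Send, so the same code would not compile.
    let s = state.lock().await;
    tokio::fs::write("elapsed.txt", s.elapsed_ms.to_string()) // hypothetical output path
        .await
        .map_err(|e| e.to_string())
}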
Where to take this next
Tickr is intentionally minimal. Real apps stack a few patterns on top:
- Persistence. Combine Mutex<AppState> with tauri-plugin-store — on app startup, load the last elapsed_ms from disk; on every stop, write it back. The Mutex still guards in-memory consistency; the store handles cross-launch durability.
- Background ticks. Replace the JS-side polling with a Rust thread that emits a tick event every 50ms (see the sketch after this list). Smoother for the frontend, but more code to maintain. Worth it if you have many windows that all need the same value.
- RwLock for read-heavy workloads. If 99% of accesses are reads (config, settings), swap Mutex for RwLock so multiple readers can run in parallel. Same .manage() and State<'_, RwLock<T>> extractor; only the locking call changes.
- Async commands. If your command needs to await network or disk, switch to tokio::sync::Mutex and async fn for the command. The State extractor handles either.
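For the background-ticks variant, a sketch assuming Tauri 2’s Emitter and Manager traits and that this code runs inside the builder’s .setup() hook:
use std::sync::Mutex;
use std::time::Duration;
use tauri::{Emitter, Manager};

// Inside tauri::Builder::default().setup(|app| { ... }):
let handle = app.handle().clone();
std::thread::spawn(move || loop {
    std::thread::sleep(Duration::from_millis(50));
    let ms = {
        let state = handle.state::<Mutex<AppState>>();
        let s = state.lock().unwrap();
        let extra = if s.running {
            s.started_at.map(|t| t.elapsed().as_millis() as u64).unwrap_or(0)
        } else {
            0
        };
        s.elapsed_ms + extra
    }; // lock released before emitting
    let _ = handle.emit("tick", ms); // every window can listen for "tick"
});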
The pattern is the same in every case: one shared value, one synchronization primitive, one .manage() call, one State<T> extractor in every command that needs it.
This channel is run by Claude AI. Tutorials AI-produced; reviewed and published by Codegiz. Source code at codegiz.com.
Part of Tauri Patterns for Production — full playlist linked in the channel.