This is the full developer documentation for LiveStore.
## Notes
- Most LiveStore APIs are synchronous and don't need `await`
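For example (assuming a `store`, `tables`, and `events` as defined in the getting-started sections later in this file):

```ts
// Queries and commits are synchronous; only store creation is async
const todos = store.query(tables.todos)
store.commit(events.todoCreated({ id: crypto.randomUUID(), text: 'Buy milk' }))
```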
# Start of LiveStore documentation
# [Docs](https://dev.docs.livestore.dev//)
## State of the documentation
Please note that the documentation is still work in progress with many parts missing and often only containing notes/bullet points.
### Docs for LLMs
We support the [llms.txt](https://llmstxt.org/) convention for making documentation available to large language models and the applications that make use of them.
Currently, we have the following root-level files:
- `llms.txt`: a short overview with links
- `llms-full.txt`: the full documentation in a single file (this document)
### NPM packages
- Main package: `@livestore/livestore`
- Framework integrations:
  - React: `@livestore/react`
  - Solid: `@livestore/solid`
- Platform adapters:
  - Web: `@livestore/adapter-web`
  - Expo: `@livestore/adapter-expo`
  - Node: `@livestore/adapter-node`
- Sync providers:
  - Cloudflare: `@livestore/sync-cf`
  - Electric: `@livestore/sync-electric`
- Devtools:
  - Vite: `@livestore/devtools-vite`
  - Expo: `@livestore/devtools-expo`
- SQLite packages:
  - sqlite-wasm (wrapper around wa-sqlite): `@livestore/sqlite-wasm`
  - wa-sqlite fork: `@livestore/wa-sqlite`
- Internal packages (e.g. `@livestore/utils`, `@livestore/peer-deps`)
# [Docs](https://dev.docs.livestore.dev/contributing/docs/)
Please follow LiveStore's [guiding principles](/contributing/contributing#guiding-principles) when writing docs.
## Writing style
This project broadly tries to follow the [Prisma docs style guide](https://www.prisma.io/docs/about/style-guide/writing-style).
## Snippets
For snippet guidelines, see: `/contributor-docs/docs/snippets.md`
## Deploying the docs
- Run `direnv exec . mono docs deploy` to build and deploy the documentation to the dev domain (`https://dev.docs.livestore.dev`).
- Passing `--prod` targets the production domain (`https://docs.livestore.dev`) when you are on `main` (otherwise the command deploys using a branch alias).
- Use `--site=` if you need to override the default Netlify site name.
- Add `--purge-cdn` when you need to invalidate Netlify's CDN cache after deploying; this ensures new edge handlers or content-negotiation changes take effect immediately.
- CI automatically builds and deploys the docs: `main` updates `https://docs.livestore.dev`, `dev` updates `https://dev.docs.livestore.dev`, and feature branches publish to the dev domain behind a branch alias.
# [Contributing](https://dev.docs.livestore.dev/contributing/contributing/)
## Before contributing
First of all, thank you for your interest in contributing to LiveStore! Building LiveStore has been an incredible amount of work, so everyone interested in contributing is very much appreciated. 🧡
Please note that LiveStore is still in active development with many things yet subject to change (e.g. APIs, examples, docs, etc).
Before you start contributing, please check with the maintainers if the changes you'd like to make are likely to be accepted. Please get in touch via the `#contrib` channel on [Discord](https://discord.gg/RbMcjUAPd7).
## Areas for contribution
There are many ways to contribute to LiveStore.
### Help wanted for ...
- You can look at ["help wanted" issues](https://github.com/livestorejs/livestore/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) on GitHub for ideas.
- [SQLite WASM build](https://github.com/livestorejs/wa-sqlite) maintainer (e.g. keeping it up to date with upstream SQLite and wa-sqlite versions)
- Examples maintainer (e.g. keeping dependencies & best practices up to date)
- Solid integration maintainer (e.g. keeping it up to date with upstream Solid versions)
### In scope and encouraged
- Documentation improvements
- Improving examples
- Test cases
- Bug fixes
- Benchmarking
### Potentially in scope
- New features
- Larger architectural changes in the core library
- Adding new examples
- Adding new integrations (e.g. for technologies such as Svelte, Vue, ...)
- Monorepo setup changes
- Changes to the docs site/setup
### Out of scope (for now)
- Changes to the landing page
- Changes to the devtools
- Rewriting the core library in a different language
### Open research questions
- Safer event schema evolution
- Incremental view maintenance for complex SQLite database views
Please get in touch if you'd like to discuss any of these topics!
## Bug reports
- Please include a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) for how to reproduce the bug.
## Guiding principles {#guiding-principles}
- Keep it as simple as possible
- Reduce surface area
- Make the right thing easy
- Document the "why"
# [Monorepo](https://dev.docs.livestore.dev/contributing/monorepo/)
## Prerequisites
### Personal experience
Depending on the kind of contribution you're interested in, the following
experience is recommended:
- Deep experience with TypeScript (incl. type-level programming)
- Experience with TypeScript monorepo setups
- Experience with distributed systems
- Experience with [Effect](https://effect.website) (or willingness to learn)
### Recommended tooling: Use devenv + direnv for a consistent setup
To make development easy and consistent across systems and platforms, this
project uses [Nix](https://zero-to-nix.com/) with [devenv](https://devenv.sh)
to manage "system dependencies" such as Node.js, Bun, etc.
Use [direnv](https://direnv.net) (recommended) to automatically load the
environment, or run commands explicitly with devenv, for example:
`devenv shell pnpm install`.
### Manual setup
You'll need to have a recent version of the following tools installed:
- Node.js
- Bun
- pnpm
## Initial setup
```bash
git clone git@github.com:livestorejs/livestore.git
cd livestore
# Loads env vars, installs deps and builds the project
./bootstrap.sh
```
## General notes
- TypeScript
- LiveStore tries to follow the strictest TypeScript rules possible to ensure
type safety and avoid subtle bugs.
- LiveStore also makes heavy use of
[TypeScript project references](https://www.typescriptlang.org/docs/handbook/project-references.html).
- Package management
- This project uses [pnpm](https://pnpm.io/) to manage the workspace.
- LiveStore is primarily developed in VSCode/Cursor.
- Testing
- LiveStore uses Vitest for most tests and Playwright for browser tests.
### Notable used tools / technologies
- [TypeScript](https://www.typescriptlang.org/)
- [Effect](https://effect.website)
- [pnpm](https://pnpm.io/)
- [Bun](https://bun.sh/)
- [Vitest](https://vitest.dev/)
- [Playwright](https://playwright.dev/)
- [OpenTelemetry](https://opentelemetry.io/)
- [wa-sqlite](https://github.com/rhashimoto/wa-sqlite) (included as git subtree) - see [wa-sqlite management](../../../../contributor-docs/wa-sqlite-management.md)
- [Nix](https://zero-to-nix.com/)
- [Direnv](https://direnv.net/)
- [devenv](https://devenv.sh)
### Environment variables
The `.envrc` file contains all necessary environment variables for the project.
You can create a `.envrc.local` file to override or add variables for your local
setup. You'll need to run `direnv allow` to load the environment variables.
### VSCode tasks
- This project is primarily developed in VSCode and makes use of
[tasks](https://code.visualstudio.com/docs/editor/tasks) to run commands.
- Common tasks are:
- `dev:ts:watch` to run the TypeScript compiler in watch mode for the entire
monorepo
- `pnpm:install` to install all dependencies (e.g. when changing a
`package.json`)
## Tasks to run before committing
Please run the following tasks before committing & pushing:
- `mono ts` to build the TypeScript code
- `mono lint` to run the linting checks
- `mono test` to run the tests
## Examples
- Once you've set up the monorepo locally, you'll find all examples in the
`/examples` directory.
- All examples are self-contained and can be run independently.
- Examples use explicit version dependencies (e.g., `0.3.2-dev.0`) for LiveStore packages.
- Examples are not part of the monorepo TypeScript build system to maintain independence.
- Each example has its own TypeScript configuration that's independent of the
monorepo build system.
### Making changes to examples
1. Make your desired changes directly in `/examples/`.
2. Test your changes by running the example (e.g., `pnpm dev` in the example
directory).
3. Commit your changes.
### OpenTelemetry setup
As a local OpenTelemetry setup, we recommend the
[docker-otel-lgtm](https://github.com/grafana/docker-otel-lgtm) setup.
Add the following to your `.envrc.local` file:
```bash
export VITE_GRAFANA_ENDPOINT="http://localhost:30003"
export GRAFANA_ENDPOINT="http://localhost:30003"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export VITE_OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```
### TypeScript
- Each package has its own `tsconfig.json` file which extends the root
`tsconfig.base.json`.
- This project makes heavy use of TypeScript project references.
### Package management
- This project uses [pnpm](https://pnpm.io/) to manage the workspace.
- We're using the `workspace:*` protocol to link packages together.
- We should try to keep dependencies to a minimum and only add them if
we really need them.
- We also need to manually add peer dependencies for each package.
- We should avoid duplicate dependencies across the monorepo as much as
possible, as duplicates can cause subtle issues.
- We're also using the `resolutions` field in the root `package.json` to force
some packages to be the same across the monorepo (ideally not needed but for
some packages it's necessary currently).
- We're using [syncpack](https://github.com/JamieMason/syncpack) to help
maintain consistent dependency versions across the monorepo.
- See `syncpack.config.mjs` for the configuration.
- Common commands:
- `bunx syncpack format` to format the `package.json` files
- `bunx syncpack lint` to check all version ranges
- `bunx syncpack fix-mismatches` to adjust versions across `package.json`
files (check before with `lint`)
- `bunx syncpack update` to update packages across the monorepo to the
latest versions
### Notes on external dependencies
LiveStore tries to use as few external dependencies as possible. Given LiveStore
is built on top of Effect, which can be considered a standard library for
TypeScript, Effect should cover most use cases.
#### Notes on some packages
The following packages need to be updated with extra care:
- `react`/`react-dom` as we need to move in lockstep with Expo / React Native
(currently pinned to {REACT_VERSION})
- `effect` (currently pinned to {EFFECT_VERSION})
#### Effect
- LiveStore makes heavy use of the [Effect](https://effect.website) library and
ecosystem throughout the implementation of the various packages.
- Effect is not imposed on app developers using LiveStore, but where it makes
sense, LiveStore also exposes an Effect-based API (e.g. `createStore`).
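A sketch of that Effect-based usage, assuming `createStore` is the scoped-Effect counterpart of `createStorePromise`, and that `schema`, `adapter`, and `tables` are defined as in the getting-started sections:

```ts
import { createStore } from '@livestore/livestore'
import { Effect } from 'effect'

// Acquire a store inside an Effect program; the Scope manages its lifetime
const program = Effect.scoped(
  Effect.gen(function* () {
    const store = yield* createStore({ schema, adapter, storeId: 'demo-store' })
    return store.query(tables.todos)
  }),
)
```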
#### Updating dependencies
- Either update the versions manually in each `package.json` file or use
`bunx syncpack update`.
### Notes on monorepo structure
- The `@livestore/utils` package re-exports many common modules/functions (e.g.
from `effect`) in order to
- Reduce the number of direct dependencies for other packages
- Allow for convenient extension of modules (e.g. adding methods to
`Effect.___`, `Schema.___`, ...)
## Docs
The LiveStore docs are built with
[Astro Starlight](https://starlight.astro.build/).
## Related external repos
- [Fork of wa-sqlite](https://github.com/livestorejs/wa-sqlite) with Nix build setup included as git subtree.
- The source code of the devtools is currently not part of this monorepo but in
a separate private repo.
# [AI agent](https://dev.docs.livestore.dev/data-modeling/ai-agent/)
LiveStore is a great fit for building AI agents.
TODO: actually write this section
# [Complex UI state](https://dev.docs.livestore.dev/data-modeling/complex-ui-state/)
LiveStore is a great fit for building apps with complex UI state.
TODO: actually write this section
# [Data Modeling](https://dev.docs.livestore.dev/data-modeling//)
## Core idea
- Data modeling is probably the most important part of any app and needs to be done carefully.
- The core idea is to model the read and write model separately.
- Depending on the use case, you might also want to split up the read/write model into separate "containers" (e.g. for data-sharing/scalability/access control reasons).
- There is no transactional consistency between containers.
- Caveat: Event sourcing is not ideal for all use cases - some apps might be better off with another approach (e.g. use CRDTs for rich text editing).
## Considerations for data modeling
- How much data do you expect to have and what is the shape of the data?
- Some kinds of data need special handling (e.g. blobs or rich text)
- Access patterns (performance, ...)
- Access control
- Data integrity / consistency
- Sharing / collaboration
- Regulatory requirements (e.g. GDPR, audit logs, ...)
## TODO
- TODO: actually write this section
- questions to answer
- When to split things into separate containers?
- How do migrations work?
- Read model migrations
- Write model migrations
- How to create new write models based on existing ones
- Example: An app has multiple workspaces and you now want to introduce the concept of "projects" inside a workspace. You might want to pre-populate a "default workspace project" for each workspace.
# [Todo app with shared workspaces](https://dev.docs.livestore.dev/data-modeling/todo-workspaces/)
Let's consider a fairly common application scenario: An app (in this case a todo app) with shared workspaces. For the sake of this guide, we'll keep things simple but you should be able to nicely extend this to a more complex app.
## Requirements
- There are multiple independent todo workspaces
- Each workspace is initially created by a single user
- Users can join a workspace by knowing its id, which grants them read and write access
- For simplicity, the user identity is chosen when the app initially starts (i.e. a username) but in a real app this would be handled by a proper auth setup
## Data model
- We are splitting up our data model into two kinds of stores (with respective eventlogs and SQLite databases): The `workspace` store and the `user` store.
### `workspace` store (one per workspace)
For the `workspace` store we have the following events:
- `workspaceCreated`
- `todoAdded`
- `todoCompleted`
- `todoDeleted`
- `userJoined`
And the following state model:
- `workspace` table (with a single row for the workspace itself)
- `todo` table (with one row per todo item)
- `member` table (with one row per user who has joined the workspace)
### `user` store (one per user)
For the `user` store we have the following events:
- `workspaceCreated`
- `workspaceJoined`
And the following state model:
- `user` table (with a single row for the user itself)
Note that the `workspaceCreated` event is used both in the `workspace` and the `user` store. This is because each eventlog should be "self-sufficient" and not rely on other eventlogs to be present to fulfill its purpose.
## Schemas
**User store:**
## `data-modeling/todo-workspaces/multi-store/user.schema.ts`
```ts filename="data-modeling/todo-workspaces/multi-store/user.schema.ts"
import { Events, makeSchema, Schema, State } from '@livestore/livestore'

// Emitted when this user creates a new workspace
const workspaceCreated = Events.synced({
name: 'v1.WorkspaceCreated',
schema: Schema.Struct({ workspaceId: Schema.String }),
})
// Emitted when this user joins an existing workspace
const workspaceJoined = Events.synced({
name: 'v1.WorkspaceJoined',
schema: Schema.Struct({ workspaceId: Schema.String }),
})
const events = { workspaceCreated, workspaceJoined }
// Table to store basic user info
// Contains only one row as this store is per-user.
const userTable = State.SQLite.table({
name: 'user',
columns: {
// Assuming username is unique and used as the identifier
username: State.SQLite.text({ primaryKey: true }),
},
})
// Table to track which workspaces this user is part of
const userWorkspaceTable = State.SQLite.table({
name: 'userWorkspace',
columns: {
workspaceId: State.SQLite.text({ primaryKey: true }),
// Could add role/permissions here later
},
})
export const userTables = { user: userTable, userWorkspace: userWorkspaceTable }
const materializers = State.SQLite.materializers(events, {
// When the user creates or joins a workspace, add it to their workspace table
'v1.WorkspaceCreated': ({ workspaceId }) => userTables.userWorkspace.insert({ workspaceId }),
'v1.WorkspaceJoined': ({ workspaceId }) => userTables.userWorkspace.insert({ workspaceId }),
})
const state = State.SQLite.makeState({ tables: userTables, materializers })
export const schema = makeSchema({ events, state })
```
**Workspace store:**
## `data-modeling/todo-workspaces/multi-store/workspace.schema.ts`
```ts filename="data-modeling/todo-workspaces/multi-store/workspace.schema.ts"
import { Events, makeSchema, Schema, State } from '@livestore/livestore'

// Emitted when a new workspace is created (originates this store)
const workspaceCreated = Events.synced({
name: 'v1.WorkspaceCreated',
schema: Schema.Struct({
workspaceId: Schema.String,
name: Schema.String,
createdByUsername: Schema.String,
}),
})
// Emitted when a todo item is added to this workspace
const todoAdded = Events.synced({
name: 'v1.TodoAdded',
schema: Schema.Struct({ todoId: Schema.String, text: Schema.String }),
})
// Emitted when a todo item is marked as completed
const todoCompleted = Events.synced({
name: 'v1.TodoCompleted',
schema: Schema.Struct({ todoId: Schema.String }),
})
// Emitted when a todo item is deleted (soft delete)
const todoDeleted = Events.synced({
name: 'v1.TodoDeleted',
schema: Schema.Struct({ todoId: Schema.String, deletedAt: Schema.Date }),
})
// Emitted when a new user joins this workspace
const userJoined = Events.synced({
name: 'v1.UserJoined',
schema: Schema.Struct({ username: Schema.String }),
})
export const workspaceEvents = { workspaceCreated, todoAdded, todoCompleted, todoDeleted, userJoined }
// Table for the workspace itself (only one row as this store is per-workspace)
const workspaceTable = State.SQLite.table({
name: 'workspace',
columns: {
workspaceId: State.SQLite.text({ primaryKey: true }),
name: State.SQLite.text(),
createdByUsername: State.SQLite.text(),
},
})
// Table for the todo items in this workspace
const todoTable = State.SQLite.table({
name: 'todo',
columns: {
todoId: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text(),
completed: State.SQLite.boolean({ default: false }),
// Using soft delete by adding a deletedAt timestamp
deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
},
})
// Table for members of this workspace
const memberTable = State.SQLite.table({
name: 'member',
columns: {
username: State.SQLite.text({ primaryKey: true }),
// Could add role/permissions here later
},
})
export const workspaceTables = { workspace: workspaceTable, todo: todoTable, member: memberTable }
const materializers = State.SQLite.materializers(workspaceEvents, {
'v1.WorkspaceCreated': ({ workspaceId, name, createdByUsername }) => [
workspaceTables.workspace.insert({ workspaceId, name, createdByUsername }),
// Add the creator as the first member
workspaceTables.member.insert({ username: createdByUsername }),
],
'v1.TodoAdded': ({ todoId, text }) => workspaceTables.todo.insert({ todoId, text }),
'v1.TodoCompleted': ({ todoId }) => workspaceTables.todo.update({ completed: true }).where({ todoId }),
'v1.TodoDeleted': ({ todoId, deletedAt }) => workspaceTables.todo.update({ deletedAt }).where({ todoId }),
'v1.UserJoined': ({ username }) => workspaceTables.member.insert({ username }),
})
const state = State.SQLite.makeState({ tables: workspaceTables, materializers })
export const schema = makeSchema({ events: workspaceEvents, state })
```
## Using the Multi-Store API
Now that we've defined our schemas, let's set up the multi-store API to manage workspace and user stores dynamically.
:::caution[Experimental API]
This guide uses the [experimental multi-store API](/reference/framework-integrations/react-integration#multi-store) which is still early in its development.
If you have feedback or questions about this API, please don't hesitate to comment on the [RFC](https://github.com/livestorejs/livestore/pull/585).
:::
### Store Configuration
First, define store options for each store type using [`storeOptions()`](/reference/framework-integrations/react-integration#storeoptionsoptions):
**Workspace store:**
## `data-modeling/todo-workspaces/multi-store/workspace.store.ts`
```ts filename="data-modeling/todo-workspaces/multi-store/workspace.store.ts"
// Imports reconstructed; the worker file paths follow the usual web adapter setup
import { makePersistedAdapter } from '@livestore/adapter-web'
import sharedWorker from '@livestore/adapter-web/shared-worker?sharedworker'
import { storeOptions } from '@livestore/react'
import worker from './livestore.worker?worker'
import { schema } from './workspace.schema'

const adapter = makePersistedAdapter({
storage: { type: 'opfs' },
worker,
sharedWorker,
})
// Define workspace store configuration
// Each workspace gets its own isolated store instance
export const workspaceStoreOptions = (workspaceId: string) =>
storeOptions({
storeId: `workspace:${workspaceId}`,
schema,
adapter,
gcTime: 60_000, // Keep in memory for 60 seconds after last use
})
```
**User store:**
## `data-modeling/todo-workspaces/multi-store/user.store.ts`
```ts filename="data-modeling/todo-workspaces/multi-store/user.store.ts"
// Imports reconstructed; the worker file paths follow the usual web adapter setup
import { makePersistedAdapter } from '@livestore/adapter-web'
import sharedWorker from '@livestore/adapter-web/shared-worker?sharedworker'
import { storeOptions } from '@livestore/react'
import worker from './livestore.worker?worker'
import { schema } from './user.schema'

const adapter = makePersistedAdapter({
storage: { type: 'opfs' },
worker,
sharedWorker,
})
// Define user store configuration
// Each user has their own store to track which workspaces they're part of
export const userStoreOptions = (username: string) =>
storeOptions({
storeId: `user:${username}`,
schema,
adapter,
gcTime: Number.POSITIVE_INFINITY, // Keep user store in memory indefinitely
})
```
### App Setup
Create a [`StoreRegistry`](/reference/framework-integrations/react-integration#new-storeregistryconfig) and provide it to your React app:
## `data-modeling/todo-workspaces/multi-store/App.tsx`
```tsx filename="data-modeling/todo-workspaces/multi-store/App.tsx"
import { StoreRegistry, StoreRegistryProvider } from '@livestore/react' // provider export name is an assumption
import type { ReactNode } from 'react'
import { useState } from 'react'
import { unstable_batchedUpdates as batchUpdates } from 'react-dom'

export function App({ children }: { children: ReactNode }) {
  const [storeRegistry] = useState(
    () =>
      new StoreRegistry({
        defaultOptions: {
          batchUpdates,
        },
      }),
  )
  // The registry has to be provided to the component tree so that `useStore()`
  // can find it; the provider name and prop are assumptions based on the RFC
  return <StoreRegistryProvider storeRegistry={storeRegistry}>{children}</StoreRegistryProvider>
}
```
### Accessing Stores
Use the [`useStore()`](/reference/framework-integrations/react-integration#usestoreoptions) hook to access specific workspace instances:
## `data-modeling/todo-workspaces/multi-store/Workspace.tsx`
```tsx filename="data-modeling/todo-workspaces/multi-store/Workspace.tsx"
import { queryDb } from '@livestore/livestore'
import { useStore } from '@livestore/react'
import { Suspense } from 'react'
// ErrorBoundary import assumed, e.g. from the react-error-boundary package
import { ErrorBoundary } from 'react-error-boundary'
import { workspaceTables } from './workspace.schema'
import { workspaceStoreOptions } from './workspace.store'

// Component that accesses a specific workspace store
function WorkspaceContent({ workspaceId }: { workspaceId: string }) {
  // Load the workspace store for this specific workspace
  const workspaceStore = useStore(workspaceStoreOptions(workspaceId))
  // Query workspace data
  const [workspace] = workspaceStore.useQuery(queryDb(workspaceTables.workspace.select().limit(1)))
  const todos = workspaceStore.useQuery(queryDb(workspaceTables.todo.select()))
  if (!workspace) return null
  // Presentation markup is illustrative
  return (
    <div>
      <h2>{workspace.name}</h2>
      <ul>{todos.map((todo) => <li key={todo.todoId}>{todo.text}</li>)}</ul>
    </div>
  )
}

// Workspace component with Suspense and ErrorBoundary
export function Workspace({ workspaceId }: { workspaceId: string }) {
  return (
    <ErrorBoundary fallback={<div>Error loading workspace</div>}>
      <Suspense fallback={<div>Loading workspace...</div>}>
        <WorkspaceContent workspaceId={workspaceId} />
      </Suspense>
    </ErrorBoundary>
  )
}
```
### Loading Multiple Workspaces
To display all workspaces for a user, first load the user store to get their workspace list, then dynamically load each workspace:
## `data-modeling/todo-workspaces/multi-store/WorkspaceList.tsx`
```tsx filename="data-modeling/todo-workspaces/multi-store/WorkspaceList.tsx"
import { queryDb } from '@livestore/livestore'
import { useStore } from '@livestore/react'
import { Suspense } from 'react'
// ErrorBoundary import assumed, e.g. from the react-error-boundary package
import { ErrorBoundary } from 'react-error-boundary'
import { userTables } from './user.schema'
import { userStoreOptions } from './user.store'
import { Workspace } from './Workspace'

// Component that displays all workspaces for a user
function WorkspaceListContent({ username }: { username: string }) {
  // Load the user store to get their workspace list
  const userStore = useStore(userStoreOptions(username))
  // Query all workspaces this user belongs to
  const workspaces = userStore.useQuery(queryDb(userTables.userWorkspace.select()))
  // Presentation markup is illustrative
  return (
    <div>
      <h2>My Workspaces</h2>
      {workspaces.length === 0 ? (
        <p>No workspaces yet</p>
      ) : (
        workspaces.map((w) => <Workspace key={w.workspaceId} workspaceId={w.workspaceId} />)
      )}
    </div>
  )
}

// Full workspace list with Suspense
export function WorkspaceList({ username }: { username: string }) {
  return (
    <ErrorBoundary fallback={<div>Error loading workspaces</div>}>
      <Suspense fallback={<div>Loading workspaces...</div>}>
        <WorkspaceListContent username={username} />
      </Suspense>
    </ErrorBoundary>
  )
}
```
## Further notes
To make this app more production-ready, we might want to do the following:
- Use a proper auth setup to enforce a trusted user identity
- Introduce a proper user invite process
- Introduce access levels (e.g. read-only, read-write)
- Introduce end-to-end encryption
### Individual todo stores for complex data
If each todo item has a lot of data (e.g. think of a GitHub/Linear issue with lots of details), it might make sense to split up each todo item into its own store.
This would create **3 store types** instead of 2:
- **User stores** (one per user) - unchanged
- **Workspace stores** (one per workspace) - only basic todo metadata
- **Todo stores** (one per todo item) - rich todo data
Your app would then have **N + M + K stores** total (N workspaces + M users + K todo items).
This pattern improves performance by only loading detailed todo data when specifically viewing that item, and prevents large todos from slowing down workspace syncing.
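A sketch of the third store type's options, following the `storeOptions` pattern above (`todoSchema` is a hypothetical schema containing the rich todo tables and events):

```ts
export const todoStoreOptions = (todoId: string) =>
  storeOptions({
    storeId: `todo:${todoId}`,
    schema: todoSchema, // hypothetical rich-todo schema
    adapter,
    gcTime: 30_000, // unload rich todo data shortly after it is no longer viewed
  })
```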
# [Turn-based game](https://dev.docs.livestore.dev/data-modeling/turnbased-game/)
LiveStore is a great fit for turn-based games. In this guide we'll look at a simple turn-based game and how to model it in LiveStore.
General idea: let the server enforce the rule that each player commits only one action per turn.
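A hedged sketch of what the central event could look like (names and fields are illustrative):

```ts
import { Events, Schema } from '@livestore/livestore'

// One eventlog per game; when validating pushes, the sync backend can reject
// a second move from the same player for the same turn
const moveMade = Events.synced({
  name: 'v1.MoveMade',
  schema: Schema.Struct({
    gameId: Schema.String,
    playerId: Schema.String,
    turn: Schema.Number,
    move: Schema.String,
  }),
})
```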
TODO: write rest of guide
# [Design Decisions](https://dev.docs.livestore.dev/evaluation/design-decisions/)
## Goals
- Fast, synchronous, transactional, and reactive state management
- Global state is eventually consistent
- Persistent storage
- Syncing
- Convenient schema migrations
- Great devtools
## Major Design Decisions
- Based on [event-sourcing](/evaluation/event-sourcing) (implying a read/write model separation)
- Using SQLite for state management over JavaScript implementations
- There are many benefits to using SQLite for state management, including performance, reliability, and ease of use.
- Run in-memory SQLite in main-thread to enable synchronous queries
- Usually LiveStore is used with a second SQLite database for persistence running in a separate thread (e.g. web worker)
- Running SQLite in the main thread, however, also means each tab uses extra memory.
- The current implementation of LiveStore assumes that the data is small enough to fit in memory. However, SQLite is very efficient so this should work for many use cases and apps.
- LiveStore implements a Signals-based reactivity system based on the ideas of Adapton for incremental computation
- The goal is to keep LiveStore syncing provider agnostic so you can use the right syncing provider for your use case.
## Implementation decisions
- Build most of the library in TypeScript. We might move more parts to Rust in the future.
- Embrace and build on top of [Effect](https://effect.website) as a library of powerful primitives, particularly for IO/concurrency heavy parts of the library.
## Original motivation
- Frustration with database schema migrations -> event sourcing to separate read and write model (avoid schema migrations for read model)
- Applying the "Make the right thing easy" principle to app data management
# [How LiveStore works](https://dev.docs.livestore.dev/evaluation/how-livestore-works/)
### TLDR
LiveStore uses event sourcing to sync events across clients and materialize state into a local, reactive SQLite database.
## How LiveStore Works Client-Side
On the client, LiveStore provides a reactive SQLite database for application state, which is kept consistent through an underlying event sourcing mechanism.
#### Local Reactive SQLite
Application state is materialized into a local SQLite database, offering high-performance, offline-capable data access. This SQLite database is reactive: UI components subscribe to data changes and update automatically when the state changes. LiveStore uses in-memory SQLite for sub-millisecond queries and persistent SQLite for durable storage across application sessions.
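For example, a reactive query and subscription could look like this (a sketch assuming a `store` instance and the todo `tables` from the getting-started sections later in this file):

```ts
import { queryDb } from '@livestore/livestore'
import { tables } from './livestore/schema'

// Reactive query over the local SQLite state
const openTodos$ = queryDb(tables.todos.where({ completed: false }), { label: 'openTodos' })
// Re-runs automatically whenever a committed event changes the underlying rows
const unsubscribe = store.subscribe(openTodos$, { onUpdate: (todos) => console.log(todos) })
```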
#### Event Sourcing
Underpinning the reactive state, LiveStore implements the event sourcing pattern. All data modifications are captured as an immutable, ordered sequence of events. This eventlog serves as the canonical history, enabling reliable state reconstruction and providing inherent auditability, which aids in debugging. The reactive SQLite state is a projection of this eventlog.
#### Client-Side Event Flow
1. **Event Committing:** User interactions within the application generate events detailing the specific action (e.g., `TodoCreated`, `TaskCompleted`).
2. **Local Persistence & Materialization:** The committed event is atomically persisted to the local eventlog and immediately materialized as state into the SQLite database.
3. **UI Reactivity:** Changes to the SQLite database trigger the reactivity system, causing subscribed UI components (e.g. React components) to automatically update and reflect the new state.
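In code, steps 1 and 2 collapse into a single synchronous `commit` call, and step 3 follows from existing subscriptions (assuming `store`, `events`, and `todoId` are in scope):

```ts
// Appends the event to the local eventlog and materializes it into SQLite atomically
store.commit(events.todoCompleted({ id: todoId }))
```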
## How LiveStore Syncing Works
LiveStore extends its local event-sourcing model globally by synchronizing events across all clients, typically through a central sync backend. This ensures that the eventlog, serving as the single source of truth, is consistently replicated, leading to an eventually consistent state for all participants.
#### Push/Pull Event Synchronization
Inspired by Git, LiveStore employs a push/pull model for event synchronization. Clients must first pull the latest events from the sync backend to ensure their local eventlog is up-to-date before they can push their own newly committed local events. This model helps maintain a global total order of events. Local pending events that haven't been pushed are rebased on top of the latest upstream events before being pushed.
#### Sync Provider Integration
LiveStore supports various sync backend implementations, and it's straightforward for developers to create their own. The sync backend is responsible for storing events, enforcing the total event order, and notifying clients of new events.
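As a purely illustrative sketch (not LiveStore's actual interface), the responsibilities of a sync backend boil down to something like:

```ts
// Hypothetical types, for intuition only
type SyncEvent = { seqNum: number; name: string; args: unknown }

interface SyncBackend {
  // Pull events after a cursor, in total order
  pull(cursor: number): AsyncIterable<ReadonlyArray<SyncEvent>>
  // Push a batch of events; fails if the client is behind, in which case it
  // must pull and rebase its pending events first
  push(batch: ReadonlyArray<SyncEvent>): Promise<void>
}
```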
#### Conflict Resolution
When concurrent operations from different clients lead to conflicting events, LiveStore defaults to a "last-write-wins" strategy. However, it also provides the capability for developers to implement custom merge conflict resolution logic tailored to their application's specific needs.
#### Overall Syncing Data Flow
After a local event is committed and materialized (as per the client-side flow), LiveStore attempts to push this event to the sync backend. Simultaneously, LiveStore is pulling events from the sync backend in the background.
Two main scenarios can occur during a push attempt:
1. **Client In Sync:** If the client's local eventlog is already up-to-date with the sync backend (i.e., no new remote events have arrived since the last pull/push), the local event is pushed directly.
2. **Concurrent Incoming Events:** If new remote events have been pulled in the background, or are discovered during the push attempt, the client first processes these incoming remote events. Any local, unpushed events are then rebased on top of these new remote events before being pushed to the sync backend.
In both scenarios, once remote events are received (either through background pulling or during a push cycle), they are persisted to the local eventlog, materialized into the local SQLite database, and the UI reacts to the new state, ensuring eventual consistency.
## Platform Adapters
LiveStore includes platform adapters to integrate with various environments, such as web browsers, mobile applications (iOS/Android), desktop applications, and Node.js.
# [Event Sourcing](https://dev.docs.livestore.dev/evaluation/event-sourcing/)
- Similar to Redux but persisted and synced across devices
- Provides a more principled way to handle data instead of relying on mutable state
- Core idea: Separate read vs write model
- Read model: App database (i.e. SQLite)
- Write model: Ordered log of all mutation events
- Related topics
- Domain driven design
- Benefits
- Simple mental model
- Preserves user intent
- Scalable
- Flexible
- You can easily evolve the read model based on your query patterns as your app requirements change over time
- Flexible merge conflicts resolution
- Automatic migrations of the read model (i.e. app database)
- Write model can also be evolved (e.g. via versioned mutations and optionally mapping old mutations to new ones; see the sketch after this list)
- History of all state changes is captured (e.g. for auditing and debugging)
- Foundation for syncing
- Downsides
- Slightly more boilerplate to manually define mutations
- Need to be careful that the eventlog doesn't grow too large
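For example, a minimal sketch of write-model evolution with versioned events (using the `Events.synced` API shown elsewhere in these docs):

```ts
import { Events, Schema } from '@livestore/livestore'

// v2 adds a `priority` field; the old v1 definition stays registered so
// historical events still materialize
const todoCreatedV1 = Events.synced({
  name: 'v1.TodoCreated',
  schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
})
const todoCreatedV2 = Events.synced({
  name: 'v2.TodoCreated',
  schema: Schema.Struct({ id: Schema.String, text: Schema.String, priority: Schema.Number }),
})
```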
## LiveStore as an event-sourcing framework
While the benefits of event sourcing are compelling, building a robust system from scratch is complex and time-consuming. Developers often encounter pitfalls related to data consistency, schema migrations, and efficient state reconstruction.
LiveStore provides an off-the-shelf event sourcing solution designed for ease of use and correctness. It simplifies development by:
- Providing clear APIs for defining mutations (events).
- Automatically managing the event log persistence and ordering.
- Efficiently recomputing the state (e.g. SQLite database) from the eventlog via materializers.
- Handling complexities like automatic data migrations and offering strategies for conflict resolution during synchronization.
This allows you to leverage the power of event sourcing without needing to implement the underlying infrastructure and tackle common edge cases yourself.
## Further reading
- [The Log: What every software engineer should know about real-time data's unifying abstraction](https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying)
# [Performance](https://dev.docs.livestore.dev/evaluation/performance/)
LiveStore is designed with performance in mind. To ensure consistent speed and minimal resource consumption, we maintain a suite of performance tests that run automatically on every commit to `main` and every pull request. These tests help us detect regressions early and identify performance bottlenecks for implementing optimizations.
## Performance tests
Our current test suite focuses on two key metrics: **latency** and **memory usage**.
We measure these two metrics across various user interaction scenarios on a minimal LiveStore+React test app.
We select scenarios that stress-test LiveStore’s ability to handle the underlying tasks involved in typical user interactions.
To learn more about our testing methodology, check out the [README](https://github.com/livestorejs/livestore/blob/main/tests/perf/README.md) of our performance tests.
> **Future expansions:** We [plan](https://github.com/livestorejs/livestore/blob/main/tests/perf/README.md#future-improvements) to measure throughput and bundle size, as well as expand the selection of scenarios and dimensions for the tests.
## Latest test results
You can view the latest performance test results on our [public dashboard](https://livestore.grafana.net/public-dashboards/4a9a3b7941464bcebbc0fa2cdddc3130).
Alternatively, you can view the latest test results by inspecting the logs of the `perf-test` job in our [GitHub Actions workflow](https://github.com/livestorejs/livestore/actions/workflows/ci.yml).
## Reporting a performance issue
We’re committed to transparency and continuous improvement. If you find performance gaps or regressions in your own usage, please [file an issue](https://github.com/livestorejs/livestore/issues/new).
# [State of the project](https://dev.docs.livestore.dev/evaluation/state-of-the-project/)
LiveStore is based on years of research (see [Riffle](https://riffle.systems/essays/prelude/)) and is used as the foundation for ambitious apps such as [Overtone](https://overtone.pro). LiveStore has been in development since 2021 and is making good progress towards a stable release. LiveStore is not yet ready for all production scenarios.
## Current state
LiveStore is currently in **beta** with most APIs being fairly stable (there might still be some breaking changes in coming releases). Most work is currently focused on reliability and performance improvements.
There is currently no specific timeline for a 1.0 release but we are making good progress in that direction.
### On breaking changes
While LiveStore is in beta there can be three kinds of breaking changes:
- Breaking API changes
- Client storage format changes (whenever `liveStoreStorageFormatVersion` is updated)
- Sync backend storage format changes (e.g. when a sync backend implementation changes how it stores data)
We try our best to minimize breaking changes and to provide a migration path whenever possible.
## Roadmap
See [GitHub issues](https://github.com/livestorejs/livestore/issues) for more details. Get in touch if you have any questions or feedback.
### 2025 Q3
- Adapter bug fixes & stability improvements
- Performance improvements
- Syncing latency & throughput
- More testing
### Long-term
- Eventlog compaction [#136](https://github.com/livestorejs/livestore/issues/136)
- Support more syncing providers
- Support more framework integrations
- Support more platforms (e.g. Electron, Tauri)
# [Technology comparison](https://dev.docs.livestore.dev/evaluation/technology-comparison/)
## TLDR of what sets LiveStore apart
- Uses a combination of reactive, in-memory + synced, persisted SQLite for instant, synchronous queries
- Based on event-sourcing methodologies
- Client-centric (with great devtools)
## Other local-first/syncing technologies
To compare LiveStore with other local-first/syncing technologies, please see the [Local-First Landscape](https://www.localfirst.fm/landscape) resource.
## LiveStore vs Redux
LiveStore shares a lot of similarities with Redux in the sense that both are based on event-sourcing methodologies. Let's compare some of the core concepts:
- Redux actions are similar to LiveStore events: Both are used to describe "things that have happened"
- Redux views are similar to LiveStore's state (e.g. SQLite tables): Both are derived from the history of events/actions.
- A major difference here is that LiveStore's state materialized as a SQLite database allows for a lot more flexibility via dynamic queries and aggregations vs Redux's static views.
- Redux reducers are similar to LiveStore's materializers: Both are used to transform events/actions into a final state.
- Both Redux and LiveStore are client-centric.
- Both Redux and LiveStore provide powerful [devtools](/reference/devtools).
While LiveStore can be used for the same use cases as Redux, LiveStore goes far beyond Redux in the following ways:
- LiveStore leverages SQLite for a more powerful state model allowing for flexible queries and aggregations with much simpler materialization logic.
- LiveStore supports client-persistence out of the box.
- LiveStore comes with a built-in [sync engine](/reference/syncing) syncing events between clients.
As a downside compared to Redux, LiveStore has a slightly larger bundle size.
## Other state management libraries
- Zustand
- Redux Toolkit (RTK)
- MobX
- Jotai
- Xstate
- Recoil
- TanStack Query
# [When to use LiveStore (and when not)](https://dev.docs.livestore.dev/evaluation/when-livestore/)
Choosing a data layer for a local-first app is a big decision and should be considered carefully. On a high level, LiveStore can be a good fit if ...
- you are looking for a principled data layer that works across platforms
- you want to use SQLite for your queries
- you like [event sourcing](/evaluation/event-sourcing) to model data changes
- you are working on a new app as LiveStore doesn't yet provide a way to [re-use an existing database](/misc/faq#existing-database)
- the current [state of the project](/evaluation/state-of-the-project) aligns with your own timeline and requirements
## Evaluation exercise
A great way to evaluate whether LiveStore is a good fit for your application is to try modeling your application's event (and optionally state) schema. This exercise takes only a few minutes and gives you a good indication of the fit.
### Example: Calendar/scheduling app
Let's say you are building a calendar/scheduling app, your events might include:
- `AppointmentScheduled`
- `AppointmentRescheduled`
- `AppointmentCancelled`
- `ParticipantInvitedToAppointment`
- `ParticipantRespondedToInvite`
From this you might want to derive the following state (modeled as SQLite tables):
- `Appointment`
- `id`
- `title`
- `description`
- `participants`
- `Participant`
- `id`
- `name`
- `email`
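Sketching the first few of these in LiveStore's schema API (field choices are illustrative):

```ts
import { Events, Schema } from '@livestore/livestore'

const appointmentScheduled = Events.synced({
  name: 'v1.AppointmentScheduled',
  schema: Schema.Struct({ appointmentId: Schema.String, title: Schema.String, startsAt: Schema.Date }),
})
const appointmentCancelled = Events.synced({
  name: 'v1.AppointmentCancelled',
  schema: Schema.Struct({ appointmentId: Schema.String }),
})
```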
## Great use cases for LiveStore
- High-performance desktop/web/mobile apps
- e.g. productivity apps
- AI agents
- Apps that need ...
- solid offline support
- audit logs
## Benefits of LiveStore
- Unified data layer combining local reactive state with globally synced data
- Easy to ...
- reason about
- debug
- test
- evolve
- operate
## Reasons when not to use LiveStore
- You have an existing database which is the source of truth of your data. (Better use [Zero](https://zero.rocicorp.dev) or [ElectricSQL](https://www.electricsql.com) for this.)
- Your app data is highly connected across users (like a social network / marketplace / etc.) or modeling your data via read-write model separation/event sourcing doesn't seem feasible.
- You want to build a more traditional client-server application with your primary data source being a remote server.
- You want a full-stack batteries-included solution (e.g. auth, storage, etc.). (Technologies like [Jazz](https://jazz.tools) or [Instant](https://instantdb.com) might be a better fit.)
- You don't like to model your data via read-write model separation/event sourcing or the trade-offs it involves.
- You're a new developer and are just getting started. LiveStore is a relatively advanced technology with many design trade-offs that might make most sense after you have already experienced some of the problems LiveStore is trying to solve.
- You want to keep your app bundle size as small as possible. LiveStore adds a few hundred kB to your app bundle size (mostly due to bundling SQLite).
## Considerations
### Database constraints
- All of the client app's data should fit into an in-memory SQLite database
- Depending on the target device, databases up to 1 GB in size should be okay.
- If you have more data, you can consider segmenting your data into multiple SQLite databases (e.g. per project, workspace, document, ...); see the sketch after this list.
- You can either use the `storeId` option for this segmentation, or there could also be a way to use the [SQLite attach feature](https://www.sqlite.org/lang_attach.html) to dynamically attach/detach databases.
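For example, with the `storeId`-based approach (API as in the Node quick start later in this file; `schema`, `adapter`, and `projectId` are assumed to be in scope):

```ts
// One SQLite database + eventlog per project
const projectStore = await createStorePromise({
  schema,
  adapter,
  storeId: `project:${projectId}`,
})
```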
### Syncing
LiveStore's syncing system is designed for small/medium-level concurrency scenarios (e.g. 10s / low 100s of users collaborating on the same thing for a given eventlog).
- Collaboration on multiple different eventlogs concurrently is supported and should be used to "scale horizontally".
### Other considerations
- How data flows / what's the source of truth?
# [Cloudflare Durable Objects Examples](https://dev.docs.livestore.dev/examples/cloudflare-adapter/)
Examples using `@livestore/adapter-cloudflare` for Cloudflare Workers and Durable Objects.
## Cloudflare Adapter
- Runs LiveStore inside Cloudflare Durable Objects
- Uses Durable Object Storage API (not traditional databases)
- SQLite WASM with Cloudflare-specific VFS
- No WebSocket support - uses Durable Objects' distributed consistency
---
View all examples on GitHub →
# [Expo Adapter Examples](https://dev.docs.livestore.dev/examples/expo-adapter/)
Examples using `@livestore/adapter-expo` for React Native mobile applications.
## Expo Adapter
- Uses native Expo SQLite stored in device's SQLite directory
- Requires New Architecture (Fabric) - incompatible with old architecture
- Single-threaded operation in main thread
- WebSocket connections for sync via React Native dev server
---
View all examples on GitHub →
# [Examples](https://dev.docs.livestore.dev/examples//)
Discover how to build local-first applications with LiveStore through our comprehensive collection of example apps. Each example demonstrates different features, patterns, and platform integrations to help you get started quickly.
## Browse by Platform Adapter
LiveStore supports multiple platform adapters, each optimized for different environments. Choose the adapter that matches your target platform:
## Getting Started
1. **Choose your platform** from the adapter categories above
2. **Browse examples** that match your use case and framework preference
3. **Clone and run** the examples locally to see LiveStore in action
4. **Study the source code** to understand patterns and best practices
## Multi-Adapter Examples
Some examples demonstrate **cross-platform synchronization** by using multiple adapters:
- **CF Chat** uses both Web and Cloudflare adapters for hybrid client-server architecture
- **Sync-enabled examples** show how to connect different platforms seamlessly
## About LiveStore
LiveStore is a local-first data layer that runs everywhere - from browsers to mobile apps to edge computing. Each adapter provides platform-optimized features while maintaining a consistent API across all environments.
---
View all examples on GitHub →
# [Node Adapter Examples](https://dev.docs.livestore.dev/examples/node-adapter/)
Examples using `@livestore/adapter-node` for Node.js server-side applications.
## Node Adapter
- Uses native Node.js SQLite with file system storage
- Stores SQLite files directly on disk (default: current directory)
- Supports single-threaded or worker thread modes
- WebSocket connections for sync and devtools integration
---
View all examples on GitHub →
# [Expo](https://dev.docs.livestore.dev/getting-started/expo/)
export const CODE = {
babelConfig: babelConfigCode,
metroConfig: metroConfigCode,
}
{/* We're adjusting the package to use the dev version on the dev branch */}
export const manualInstallDepsStr = [
'@livestore/devtools-expo' + versionNpmSuffix,
'@livestore/adapter-expo' + versionNpmSuffix,
'@livestore/livestore' + versionNpmSuffix,
'@livestore/react' + versionNpmSuffix,
'@livestore/sync-cf/client' + versionNpmSuffix,
'@livestore/peer-deps' + versionNpmSuffix,
'expo-sqlite',
].join(' ')
### Prerequisites
- Recommended: Bun 1.2 or higher
- Node.js {MIN_NODE_VERSION} or higher
To use [LiveStore](/) with [Expo](https://docs.expo.dev/), ensure your project has the [New Architecture](https://docs.expo.dev/guides/new-architecture/) enabled. This is required for transactional state updates.
### Option A: Quick start
For a quick start, we recommend using our template app following the steps below.
For existing projects see [Existing project setup](#existing-project-setup).
1. **Set up project from template**
Replace `livestore-app` with your desired app name.
2. **Install dependencies**
It's strongly recommended to use `bun` or `pnpm` for the simplest and most reliable dependency setup (see [note on package management](/misc/package-management) for more details).
```bash
bun install
```
```bash
pnpm install --node-linker=hoisted
```
Make sure to use `--node-linker=hoisted` when installing dependencies in your project or add it to your `.npmrc` file.
```
# .npmrc
node-linker=hoisted
```
Hopefully Expo will also support non-hoisted setups in the future.
```bash
npm install
```
When using `yarn`, make sure you're using Yarn 4 or higher with the `node-modules` linker.
```bash
yarn set version stable
yarn config set nodeLinker node-modules
yarn install
```
Pro tip: You can use [direnv](https://direnv.net/) to manage environment variables.
3. **Run the app**
In a new terminal, start the Cloudflare Worker (for the sync backend):
### Option B: Existing project setup \{#existing-project-setup\}
1. **Install dependencies**
2. **Add Vite meta plugin to babel config file**
LiveStore Devtools uses Vite. This plugin emulates Vite's `import.meta.env` functionality.
In your `babel.config.js` file, add the plugin as follows:
3. **Update Metro config**
Add the following code to your `metro.config.js` file:
## Define Your Schema
Create a file named `schema.ts` inside the `src/livestore` folder. This file defines your LiveStore schema consisting of your app's event definitions (describing how data changes), derived state (i.e. SQLite tables), and materializers (how state is derived from events).
Here's an example schema:
## `getting-started/expo/livestore/schema.ts`
```ts filename="getting-started/expo/livestore/schema.ts"
import { Events, makeSchema, Schema, SessionIdSymbol, State } from '@livestore/livestore'

export const tables = {
todos: State.SQLite.table({
name: 'todos',
columns: {
id: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text({ default: '' }),
completed: State.SQLite.boolean({ default: false }),
deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
},
}),
uiState: State.SQLite.clientDocument({
name: 'uiState',
schema: Schema.Struct({ newTodoText: Schema.String, filter: Schema.Literal('all', 'active', 'completed') }),
default: { id: SessionIdSymbol, value: { newTodoText: '', filter: 'all' } },
}),
}
export const events = {
todoCreated: Events.synced({
name: 'v1.TodoCreated',
schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
}),
todoCompleted: Events.synced({
name: 'v1.TodoCompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoUncompleted: Events.synced({
name: 'v1.TodoUncompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoDeleted: Events.synced({
name: 'v1.TodoDeleted',
schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
}),
todoClearedCompleted: Events.synced({
name: 'v1.TodoClearedCompleted',
schema: Schema.Struct({ deletedAt: Schema.Date }),
}),
uiStateSet: tables.uiState.set,
}
const materializers = State.SQLite.materializers(events, {
'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text, completed: false }),
'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
'v1.TodoDeleted': ({ id, deletedAt }) => tables.todos.update({ deletedAt }).where({ id }),
'v1.TodoClearedCompleted': ({ deletedAt }) => tables.todos.update({ deletedAt }).where({ completed: true }),
})
const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
## Add the LiveStore Provider
To make LiveStore available throughout your app, wrap your app's root component with the `LiveStoreProvider` component from `@livestore/react`. This provider manages your app’s data store, loading, and error states.
Here's an example:
## `getting-started/expo/Root.tsx`
```tsx filename="getting-started/expo/Root.tsx"
import { makePersistedAdapter } from '@livestore/adapter-expo'
import { LiveStoreProvider } from '@livestore/react'
import { makeWsSync } from '@livestore/sync-cf/client'
import type { FC } from 'react'
import { Text, unstable_batchedUpdates as batchUpdates } from 'react-native'
import { ListTodos } from './components/ListTodos'
import { NewTodo } from './components/NewTodo'
import { events, schema, tables } from './livestore/schema'

const storeId = 'expo-todomvc'
const syncUrl = 'https://example.org/sync'
const adapter = makePersistedAdapter({
  sync: { backend: makeWsSync({ url: syncUrl }) },
})

// Provider JSX reconstructed around the render props; child elements are illustrative
export const Root: FC = () => (
  <LiveStoreProvider
    schema={schema}
    storeId={storeId}
    adapter={adapter}
    batchUpdates={batchUpdates}
    renderLoading={(status) => <Text>Loading LiveStore ({status.stage})...</Text>}
    renderError={(error) => <Text>Error: {String(error)}</Text>}
    renderShutdown={() => <Text>LiveStore shutdown</Text>}
    boot={(store) => {
      if (store.query(tables.todos.count()) === 0) {
        store.commit(events.todoCreated({ id: crypto.randomUUID(), text: 'Make coffee' }))
      }
    }}
  >
    <NewTodo />
    <ListTodos />
  </LiveStoreProvider>
)
```
### `getting-started/expo/components/ListTodos.tsx`
```tsx filename="getting-started/expo/components/ListTodos.tsx"
import { useQuery, useStore } from '@livestore/react'
import type { FC } from 'react'
import { useCallback } from 'react'
import { Button, Text, View } from 'react-native'
import { visibleTodos$ } from '../livestore/queries'
import { events, tables } from '../livestore/schema'

export const ListTodos: FC = () => {
  const { store } = useStore()
  const todos = useQuery(visibleTodos$)
  const toggleTodo = useCallback(
    ({ id, completed }: typeof tables.todos.Type) => {
      store.commit(completed ? events.todoUncompleted({ id }) : events.todoCompleted({ id }))
    },
    [store],
  )
  const clearCompleted = () => store.commit(events.todoClearedCompleted({ deletedAt: new Date() }))
  // List markup reconstructed; element choices are illustrative
  return (
    <View>
      {todos.map((todo) => (
        <Text key={todo.id} onPress={() => toggleTodo(todo)}>
          {todo.text} {todo.completed ? 'Completed' : 'Pending'}
        </Text>
      ))}
      <Button title="Clear completed" onPress={clearCompleted} />
    </View>
  )
}
```
### `getting-started/expo/components/NewTodo.tsx`
```tsx filename="getting-started/expo/components/NewTodo.tsx"
import { useQuery, useStore } from '@livestore/react'
import type { FC } from 'react'
import { Button, TextInput, View } from 'react-native'
import { uiState$ } from '../livestore/queries'
import { events } from '../livestore/schema'

export const NewTodo: FC = () => {
  const { store } = useStore()
  const { newTodoText } = useQuery(uiState$)
  const updateText = (text: string) => store.commit(events.uiStateSet({ newTodoText: text }))
  const createTodo = () =>
    store.commit(
      events.todoCreated({ id: crypto.randomUUID(), text: newTodoText }),
      events.uiStateSet({ newTodoText: '' }),
    )
  const addSampleTodos = () => {
    const todos = Array.from({ length: 5 }, (_, index) => ({
      id: crypto.randomUUID(),
      text: `Todo ${index + 1}`,
    }))
    store.commit(...todos.map((todo) => events.todoCreated(todo)))
  }
  // Input markup reconstructed; element choices are illustrative
  return (
    <View>
      <TextInput value={newTodoText} onChangeText={updateText} />
      <Button title="Add todo" onPress={createTodo} />
      <Button title="Add sample todos" onPress={addSampleTodos} />
    </View>
  )
}
```
### `getting-started/expo/livestore/queries.ts`
```ts filename="getting-started/expo/livestore/queries.ts"
import { queryDb } from '@livestore/livestore'
import { tables } from './schema'

export const uiState$ = queryDb(tables.uiState.get(), { label: 'uiState' })
export const visibleTodos$ = queryDb(
(get) => {
const { filter } = get(uiState$)
return tables.todos.where({
deletedAt: null,
completed: filter === 'all' ? undefined : filter === 'completed',
})
},
{ label: 'visibleTodos' },
)
```
### Commit events
After wrapping your app with the `LiveStoreProvider`, you can use the `useStore` hook from any component to commit events.
Here's an example:
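A minimal sketch (`events` comes from the schema defined above; the `Button` markup is illustrative):

```tsx
import { useStore } from '@livestore/react'
import { Button } from 'react-native'
import { events } from './livestore/schema'

export const AddTodoButton = () => {
  const { store } = useStore()
  const createTodo = () =>
    store.commit(events.todoCreated({ id: crypto.randomUUID(), text: 'Make coffee' }))
  return <Button title="Add todo" onPress={createTodo} />
}
```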
## Queries
To retrieve data from the database, first define a query using `queryDb` from `@livestore/livestore`. Then, execute the query with the `useQuery` hook from `@livestore/react`.
Consider abstracting queries into a separate file to keep your code organized, though you can also define them directly within components if preferred.
Here's an example:
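A minimal sketch, mirroring `queries.ts` above (the `Text` markup is illustrative):

```tsx
import { queryDb } from '@livestore/livestore'
import { useQuery } from '@livestore/react'
import { Text } from 'react-native'
import { tables } from './livestore/schema'

const incompleteTodos$ = queryDb(tables.todos.where({ completed: false }), { label: 'incompleteTodos' })

export const IncompleteTodoCount = () => {
  const todos = useQuery(incompleteTodos$)
  return <Text>{todos.length} open todos</Text>
}
```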
## Devtools
To open the devtools, run the app and from your terminal press `shift + m`, then select LiveStore Devtools and press `Enter`.

This will open the devtools in a new tab in your default browser.

Use the devtools to inspect the state of your LiveStore database, execute events, track performance, and more.
## Database location
### With Expo Go
To open the database in Finder, run the following command in your terminal:
```bash
open $(find $(xcrun simctl get_app_container booted host.exp.Exponent data) -path "*/Documents/ExponentExperienceData/*livestore-expo*" -print -quit)/SQLite
```
### With development builds
For development builds, the app's SQLite database is stored in the app's Library directory.
Example:
`/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Data/Application//Documents/SQLite/app.db`
To open the database in Finder, run the following command in your terminal:
```bash
open $(xcrun simctl get_app_container booted [APP_BUNDLE_ID] data)/Documents/SQLite
```
Replace `[APP_BUNDLE_ID]` with your app's bundle ID, e.g. `dev.livestore.livestore-expo`.
## Further notes
- LiveStore doesn't yet support Expo Web (see [#130](https://github.com/livestorejs/livestore/issues/130))
# [Web Adapter Examples](https://dev.docs.livestore.dev/examples/web-adapter/)
Examples using `@livestore/adapter-web` for browser environments.
## Web Adapter
- Uses SQLite WASM with OPFS (Origin Private File System) for browser storage
- Runs in Web Workers and SharedWorkers for multi-tab coordination
- Persists data via OPFS Access Handle Pool VFS
- Supports WebSocket connections for sync and devtools
## Frameworks
- React (`@livestore/react`)
- SolidJS (`@livestore/solid`)
- Web Components
- Vanilla JavaScript
---
View all examples on GitHub →
# [Node](https://dev.docs.livestore.dev/getting-started/node/)
## Minimal example
## `getting-started/node/minimal-example.ts`
```ts filename="getting-started/node/minimal-example.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: docs snippet keeps inline setup */
// ---cut---
import { makeAdapter } from '@livestore/adapter-node'
import { createStorePromise } from '@livestore/livestore'
import { schema, tables } from './livestore/schema.js'

const adapter = makeAdapter({
storage: { type: 'fs' },
// sync: { backend: makeWsSync({ url: 'ws://localhost:8787' }) },
})
const main = async () => {
const store = await createStorePromise({ adapter, schema, storeId: 'demo-store' })
const todos = store.query(tables.todos)
console.log(todos)
}
main().catch(() => undefined)
```
### `getting-started/node/livestore/schema.ts`
```ts filename="getting-started/node/livestore/schema.ts"
import { defineMaterializer, Events, makeSchema, Schema, State } from '@livestore/livestore'

export const tables = {
todos: State.SQLite.table({
name: 'todos',
columns: {
id: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text(),
completed: State.SQLite.boolean({ default: false }),
},
}),
} as const
export const events = {
todoCreated: Events.synced({
name: 'v1.TodoCreated',
schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
}),
} as const
const materializers = State.SQLite.materializers(events, {
[events.todoCreated.name]: defineMaterializer(events.todoCreated, ({ id, text }) =>
tables.todos.insert({ id, text, completed: false }),
),
})
const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
### Option A: Quick start
For a quick start, we recommend using our template app following the steps below.
{/* For existing projects, see [Existing project setup](#existing-project-setup). */}
1. **Set up project from template**
Replace `livestore-app` with your desired app name.
2. **Install dependencies**
It's strongly recommended to use `bun` or `pnpm` for the simplest and most reliable dependency setup (see [note on package management](/misc/package-management) for more details).
Pro tip: You can use [direnv](https://direnv.net/) to manage environment variables.
3. **Run dev environment**
# [Getting started with LiveStore + React](https://dev.docs.livestore.dev/getting-started/react-web/)
This guide uses the following packages: `@livestore/livestore`, `@livestore/wa-sqlite`, `@livestore/adapter-web`, `@livestore/react`, `@livestore/peer-deps`, `@livestore/sync-cf`, and `@livestore/devtools-vite`.
## Prerequisites
- Recommended: Bun 1.2 or higher
- Node.js {MIN_NODE_VERSION} or higher
### Option A: Quick start
For a quick start, we recommend using our template app following the steps below.
For existing projects, see [Existing project setup](#existing-project-setup).
1. **Set up project from template**
Replace `livestore-app` with your desired app name.
2. **Install dependencies**
It's strongly recommended to use `bun` or `pnpm` for the simplest and most reliable dependency setup (see [note on package management](/misc/package-management) for more details).
Pro tip: You can use [direnv](https://direnv.net/) to manage environment variables.
3. **Run dev environment**
4. **Open browser**
Open `http://localhost:60000` in your browser.
You can also open the devtools by going to `http://localhost:60000/_livestore`.
### Option B: Existing project setup \{#existing-project-setup\}
1. **Install dependencies**
2. **Update Vite config**
Add the following code to your `vite.config.js` file:
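A minimal sketch, assuming `@livestore/devtools-vite` and `@vitejs/plugin-react` are installed (the Vue guide later in this document uses the same plugin setup):

```ts
// vite.config.js
import { livestoreDevtoolsPlugin } from '@livestore/devtools-vite'
import react from '@vitejs/plugin-react'
import { defineConfig } from 'vite'

export default defineConfig({
  plugins: [react(), livestoreDevtoolsPlugin({ schemaPath: './src/livestore/schema.ts' })],
  // LiveStore workers are ES modules
  worker: { format: 'es' },
})
```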
## Define Your Schema
Create a file named `schema.ts` inside the `src/livestore` folder. This file defines your LiveStore schema consisting of your app's event definitions (describing how data changes), derived state (i.e. SQLite tables), and materializers (how state is derived from events).
Here's an example schema:
## Create the LiveStore Worker
Create a file named `livestore.worker.ts` inside the `src` folder. This file will contain the LiveStore web worker. When importing this file, make sure to add the `?worker` extension to the import path to ensure that Vite treats it as a worker file.
## `getting-started/react-web/livestore.worker.ts`
```ts filename="getting-started/react-web/livestore.worker.ts"
import { makeWorker } from '@livestore/adapter-web/worker'
import { schema } from './livestore/schema.ts'

makeWorker({ schema })
```
### `getting-started/react-web/livestore/schema.ts`
```ts filename="getting-started/react-web/livestore/schema.ts"
import { Events, makeSchema, Schema, SessionIdSymbol, State } from '@livestore/livestore'

// You can model your state as SQLite tables (https://docs.livestore.dev/reference/state/sqlite-schema)
export const tables = {
todos: State.SQLite.table({
name: 'todos',
columns: {
id: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text({ default: '' }),
completed: State.SQLite.boolean({ default: false }),
deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
},
}),
// Client documents can be used for local-only state (e.g. form inputs)
uiState: State.SQLite.clientDocument({
name: 'uiState',
schema: Schema.Struct({ newTodoText: Schema.String, filter: Schema.Literal('all', 'active', 'completed') }),
default: { id: SessionIdSymbol, value: { newTodoText: '', filter: 'all' } },
}),
}
// Events describe data changes (https://docs.livestore.dev/reference/events)
export const events = {
todoCreated: Events.synced({
name: 'v1.TodoCreated',
schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
}),
todoCompleted: Events.synced({
name: 'v1.TodoCompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoUncompleted: Events.synced({
name: 'v1.TodoUncompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoDeleted: Events.synced({
name: 'v1.TodoDeleted',
schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
}),
todoClearedCompleted: Events.synced({
name: 'v1.TodoClearedCompleted',
schema: Schema.Struct({ deletedAt: Schema.Date }),
}),
uiStateSet: tables.uiState.set,
}
// Materializers are used to map events to state (https://docs.livestore.dev/reference/state/materializers)
const materializers = State.SQLite.materializers(events, {
'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text, completed: false }),
'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
'v1.TodoDeleted': ({ id, deletedAt }) => tables.todos.update({ deletedAt }).where({ id }),
'v1.TodoClearedCompleted': ({ deletedAt }) => tables.todos.update({ deletedAt }).where({ completed: true }),
})
const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
## Add the LiveStore Provider
To make the LiveStore available throughout your app, wrap your app's root component with the `LiveStoreProvider` component from `@livestore/react`. This provider manages your app's data store, loading, and error states.
Here's an example:
## `getting-started/react-web/Root.tsx`
```tsx filename="getting-started/react-web/Root.tsx"
const adapter = makePersistedAdapter({
storage: { type: 'opfs' },
worker: LiveStoreWorker,
sharedWorker: LiveStoreSharedWorker,
})
export const App: React.FC = () => (
)
```
### Commit events
After wrapping your app with the `LiveStoreProvider`, you can use the `useStore` hook from any component to commit events.
Here's an example:
## `getting-started/react-web/Header.tsx`
```tsx filename="getting-started/react-web/Header.tsx"
const uiState$ = queryDb(tables.uiState.get(), { label: 'uiState' })
export const Header: React.FC = () => {
const { store } = useStore()
const { newTodoText } = store.useQuery(uiState$)
const updateNewTodoText = (text: string) => store.commit(events.uiStateSet({ newTodoText: text }))
const createTodo = () =>
store.commit(
events.todoCreated({ id: crypto.randomUUID(), text: newTodoText }),
events.uiStateSet({ newTodoText: '' }),
)
return (
TodoMVC
updateNewTodoText(e.target.value)}
onKeyDown={(e) => {
if (e.key === 'Enter') {
createTodo()
}
}}
/>
)
}
```
## Queries
To retrieve data from the database, first define a query using `queryDb` from `@livestore/livestore`. Then, execute the query with the `useQuery` hook from `@livestore/react`.
Consider abstracting queries into a separate file to keep your code organized, though you can also define them directly within components if preferred.
Here's an example:
## `getting-started/react-web/MainSection.tsx`
```tsx filename="getting-started/react-web/MainSection.tsx"
const uiState$ = queryDb(tables.uiState.get(), { label: 'uiState' })
const visibleTodos$ = queryDb(
(get) => {
const { filter } = get(uiState$)
return tables.todos.where({
deletedAt: null,
completed: filter === 'all' ? undefined : filter === 'completed',
})
},
{ label: 'visibleTodos' },
)
export const MainSection: React.FC = () => {
const { store } = useStore()
const toggleTodo = React.useCallback(
({ id, completed }: typeof tables.todos.Type) =>
store.commit(completed ? events.todoUncompleted({ id }) : events.todoCompleted({ id })),
[store],
)
const visibleTodos = store.useQuery(visibleTodos$)
return (
{visibleTodos.map((todo) => (
toggleTodo(todo)}
/>
store.commit(events.todoDeleted({ id: todo.id, deletedAt: new Date() }))}
/>
))}
)
}
```
# [Solid](https://dev.docs.livestore.dev/getting-started/solid/)
TODO
# [Code of Conduct](https://dev.docs.livestore.dev/misc/CODE_OF_CONDUCT/)
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[contact@livestore.dev][Contact].
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][Homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][Translations].
[Homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[Translations]: https://www.contributor-covenant.org/translations
[Contact]: mailto:contact@livestore.dev
# [Getting started with LiveStore + Vue](https://dev.docs.livestore.dev/getting-started/vue/)
This guide uses the following packages: `@livestore/livestore`, `@livestore/wa-sqlite`, `@livestore/adapter-web`, `@livestore/utils`, `@livestore/peer-deps`, `@livestore/devtools-vite`, and `vue-livestore` (from the [slashv/vue-livestore](https://github.com/slashv/vue-livestore) repository).
## Prerequisites
- Recommended: Bun 1.2 or higher
- Node.js {MIN_NODE_VERSION} or higher
## About Vue integration
Vue integration is still in beta and being incubated in a separate repository. Please direct any issues or contributions to [Vue LiveStore](https://github.com/slashv/vue-livestore).
## Option A: Quick start
For a quick start, we recommend referencing the [playground](https://github.com/slashv/vue-livestore/tree/main/playground) folder in the Vue LiveStore repository.
## Option B: Existing project setup \{#existing-project-setup\}
1. **Install dependencies**
It's strongly recommended to use `bun` or `pnpm` for the simplest and most reliable dependency setup (see [note on package management](/misc/package-management) for more details).
2. **Update Vite config**
Add the following code to your `vite.config.js` file:
```ts
import { livestoreDevtoolsPlugin } from '@livestore/devtools-vite'
import vue from '@vitejs/plugin-vue'
import { defineConfig } from 'vite'
import vueDevTools from 'vite-plugin-vue-devtools'

export default defineConfig({
plugins: [
vue(),
vueDevTools(),
livestoreDevtoolsPlugin({ schemaPath: './src/livestore/schema.ts' }),
],
worker: { format: 'es' },
})
```
### Define Your Schema
Create a file named `schema.ts` inside the `src/livestore` folder. This file defines your LiveStore schema consisting of your app's event definitions (describing how data changes), derived state (i.e. SQLite tables), and materializers (how state is derived from events).
Here's an example schema:
## `getting-started/vue/livestore/schema.ts`
```ts filename="getting-started/vue/livestore/schema.ts"
import { Events, makeSchema, Schema, SessionIdSymbol, State } from '@livestore/livestore'

export const tables = {
todos: State.SQLite.table({
name: 'todos',
columns: {
id: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text({ default: '' }),
completed: State.SQLite.boolean({ default: false }),
deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
},
}),
uiState: State.SQLite.clientDocument({
name: 'uiState',
schema: Schema.Struct({ newTodoText: Schema.String, filter: Schema.Literal('all', 'active', 'completed') }),
default: { id: SessionIdSymbol, value: { newTodoText: '', filter: 'all' } },
}),
}
export const events = {
todoCreated: Events.synced({
name: 'v1.TodoCreated',
schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
}),
todoCompleted: Events.synced({
name: 'v1.TodoCompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoUncompleted: Events.synced({
name: 'v1.TodoUncompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoDeleted: Events.synced({
name: 'v1.TodoDeleted',
schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
}),
todoClearedCompleted: Events.synced({
name: 'v1.TodoClearedCompleted',
schema: Schema.Struct({ deletedAt: Schema.Date }),
}),
uiStateSet: tables.uiState.set,
}
const materializers = State.SQLite.materializers(events, {
'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text, completed: false }),
'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
'v1.TodoDeleted': ({ id, deletedAt }) => tables.todos.update({ deletedAt }).where({ id }),
'v1.TodoClearedCompleted': ({ deletedAt }) => tables.todos.update({ deletedAt }).where({ completed: true }),
})
const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
### Create the LiveStore Worker
Create a file named `livestore.worker.ts` inside the `src/livestore` folder. This file will contain the LiveStore web worker. When importing this file, make sure to add the `?worker` extension to the import path to ensure that Vite treats it as a worker file.
## `getting-started/vue/livestore/livestore.worker.ts`
```ts filename="getting-started/vue/livestore/livestore.worker.ts"
import { makeWorker } from '@livestore/adapter-web/worker'
import { schema } from './schema.ts'

makeWorker({ schema })
```
### Add the LiveStore Provider
To make the LiveStore available throughout your app, wrap your app's root component with the `LiveStoreProvider` component from `vue-livestore`. This provider manages your app's data store, loading, and error states.
Here's an example:
```vue
<!-- Markup reconstructed as a sketch; check the vue-livestore playground for the exact provider API -->
<script setup lang="ts">
import { makePersistedAdapter } from '@livestore/adapter-web'
import LiveStoreSharedWorker from '@livestore/adapter-web/shared-worker?sharedworker'
import { LiveStoreProvider } from 'vue-livestore'
import LiveStoreWorker from './livestore/livestore.worker.ts?worker'
import { schema } from './livestore/schema.ts'

const adapter = makePersistedAdapter({
  storage: { type: 'opfs' },
  worker: LiveStoreWorker,
  sharedWorker: LiveStoreSharedWorker,
})
</script>

<template>
  <LiveStoreProvider :options="{ schema, adapter, storeId: 'demo-store' }">
    <template #loading>
      <div>Loading LiveStore...</div>
    </template>
    <slot />
  </LiveStoreProvider>
</template>
```
### Commit events
After wrapping your app with the `LiveStoreProvider`, you can use the `useStore` hook from any component to commit events.
Here's an example:
```vue
<!-- Markup reconstructed as a sketch, assuming the `useStore` hook from vue-livestore -->
<script setup lang="ts">
import { useStore } from 'vue-livestore'
import { events } from './livestore/schema.ts'

const { store } = useStore()

const createTodo = () =>
  store.commit(events.todoCreated({ id: crypto.randomUUID(), text: 'New todo' }))
</script>

<template>
  <button @click="createTodo">Create</button>
</template>
```
### Queries
To retrieve data from the database, first define a query using `queryDb` from `@livestore/livestore`. Then, execute the query with the `useQuery` hook from `vue-livestore`.
Consider abstracting queries into a separate file to keep your code organized, though you can also define them directly within components if preferred.
Here's an example:
```vue
<!-- Markup reconstructed as a sketch, assuming the `useQuery` hook from vue-livestore -->
<script setup lang="ts">
import { queryDb } from '@livestore/livestore'
import { useQuery } from 'vue-livestore'
import { tables } from './livestore/schema.ts'

const visibleTodos$ = queryDb(tables.todos.where({ deletedAt: null }), { label: 'visibleTodos' })
const todos = useQuery(visibleTodos$)
</script>

<template>
  <ul>
    <li v-for="todo in todos" :key="todo.id">{{ todo.text }}</li>
  </ul>
</template>
```
# [Frequently Asked Questions](https://dev.docs.livestore.dev/misc/FAQ/)
### Does LiveStore have optimistic updates?
Yes and no. LiveStore doesn't have the explicit concept of optimistic updates you might know from libraries like [React Query](https://tanstack.com/query/latest/docs/framework/react/guides/optimistic-updates); instead, every data update in LiveStore is automatically optimistic, without the developer having to implement any special logic.
This provides the benefits of optimistic updates without the complexity of manually implementing that logic for each individual data update (which can be very error-prone).
### Does LiveStore have database transactions?
LiveStore runs on the client-side and handles transactions differently than traditional server-side databases. While materializers automatically run in transactions, global transactional behavior (often called "online transactions") needs to be explicitly modeled in your application logic.
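In practice, committing several related events in a single `store.commit` call (as in the React guide's `createTodo` example) covers many cases, since the events are committed together:

```ts
// Sketch reusing the TodoMVC schema from the React guide
store.commit(
  events.todoCreated({ id: crypto.randomUUID(), text: 'Write docs' }),
  events.uiStateSet({ newTodoText: '' }),
)
```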
### Can I use an ORM or query builder with LiveStore?
It's possible to use most ORMs/query builders with LiveStore (as long as they are able to synchronously generate SQL statements). You should also give the built-in LiveStore query builder a try. See [the ORM page](/patterns/orm) for more information.
### Is there a company behind LiveStore? How does LiveStore make money?
LiveStore is developed by [Johannes Schickling](https://github.com/schickling) and has been incubated as the foundation of [Overtone](https://overtone.pro) (a local-first music app). The plan is to keep the development of LiveStore as sustainable as possible via sponsorships and other paths (e.g. commercial licenses, paid consulting, premium devtools, etc).
### Is there a hosted sync backend provided by LiveStore?
No, LiveStore is designed to be self-hosted or used with a 3rd-party sync backend.
### Can I use my existing database with LiveStore? {#existing-database}
Not currently. LiveStore is built around the idea of event sourcing, which separates reads and writes. This means LiveStore doesn't sync your database directly but only the events that are used to materialize it, keeping the database in sync across clients.
However, we might provide support for this in the future depending on demand.
### What's the difference between clientId, sessionId, and userId?
- **sessionId**: Identifies a single LiveStore instance within a client (e.g., a browser tab). Sessions can persist (e.g., across tab reloads in web).
- **clientId**: A randomly generated identifier managed by LiveStore that identifies a client instance. Each client has its own unique clientId and can have one or multiple sessions.
- **userId**: Not a LiveStore concept. User identity must be handled at the application level through your events and application logic.
A single user might have multiple clients (e.g., using different browsers or devices), each with its own clientId. User identification should be modeled within your application domain.
# [Community](https://dev.docs.livestore.dev/misc/community/)
## Discord
You can join the Discord server [here](https://discord.gg/RbMcjUAPd7).
## Office hours
You can join future office hour events [here](https://lu.ma/livestore).
## Conference talks & podcasts
- [Sync different: Event sourcing in local-first apps](https://www.youtube.com/watch?v=nyPl84BopKc)
- [From Prisma Founder to LiveStore: Building local-first apps with Johannes Schickling (Aaron Francis interview)](https://www.youtube.com/watch?v=aKTbGIrkrLE)
## RFX: Request for exploration \{#rfx\}
LiveStore opens the door to many new possibilities. Many more than I could explore or build myself, so I invite you to explore some of the ideas below.
### Technological ideas
- Auth
  - Authn
  - Authz
  - e2ee
- Server side
  - React server rendering
  - Centralized read models
  - Integrating with existing databases / systems
- CRDTs for text editing
  - Automerge / YJS as embedded data
- Collaboration
  - Presence features
- Blob files
- Version control
  - Manual push/pull + git-like commits of multiple events
- Event-sourcing
  - Schema evolution: migrating events (e.g. Cambria)
- Cross-app data interop
- AI
  - Local RAG
  - Agents
### Application ideas
It would be great to see a new generation of apps built with LiveStore - ideally each app being:
- [Local-first](https://www.inkandswitch.com/essay/local-first/)
- Open-source
- Self-hostable
Here are some app ideas:
- Replacement for Doodle
- Replacement for Canny (feature requests)
- Replacement for Splitwise
- Replacement for Wunderlist
- GitHub client
- A secret Santa app
- Movie / TV tracking app
- Fitness app
### LiveStore internals
- Explore and improve multi-store ergonomics
- Diff queries in SQLite
- IVM
## Notable LiveStore projects (Open-source)
- [WorkSquared](https://github.com/sociotechnica-org/work-squared): An AI-haunted workplace to coordinate, plan, and execute.
- [Cheffect](https://github.com/tim-smart/cheffect): Local-first recipe management app built with LiveStore and Effect
# [Design Partners](https://dev.docs.livestore.dev/misc/design-partners/)
LiveStore is looking for design partners with the following aims:
- For your company:
  - Architectural guidance and internal training
  - Priority support
  - Influence over the roadmap and prioritization of features/bugfixes
  - Make sure LiveStore is well-maintained as a critical part of your product
- For LiveStore:
  - Sustain the continuous development and maintenance of the project
  - Make sure LiveStore is designed around real-world use cases and constraints
Please [get in touch](https://forms.gle/NUy9irooEpXjqFAb6) if you're interested in becoming a design partner.
# [Note on Package Management](https://dev.docs.livestore.dev/misc/package-management/)
## Recommended
It's strongly recommended to use `pnpm` or `bun` when building an app with LiveStore to avoid dependency issues (e.g. wrong version resolution, duplicate dependencies, etc).
### Peer dependencies
Since LiveStore has a few peer dependencies, you should either add them to your project manually or add the `@livestore/peer-deps` package to satisfy them.
### PNPM Catalog
When using `pnpm`, we recommend specifying the following packages in your [PNPM Catalog](https://pnpm.io/catalogs):
```yaml
catalog:
  effect: ${EFFECT_VERSION} # as LiveStore depends on `effect`
  # also `react`, `react-dom`, etc. based on your project
```
# [Credits](https://dev.docs.livestore.dev/misc/credits/)
LiveStore wouldn't have been possible without the help and support of many individuals and companies.
A special thanks goes to:
- Geoffrey Litt & Nicholas Schiefer for the collaboration on the [Riffle research project](https://riffle.systems/essays/prelude/) on which LiveStore is based
- Matt Wonlaw for the collaboration on LiveStore over an extended period of time
- [Kuldar](https://kuldar.com) for the lovely LiveStore logo
- Ink & Switch for their visionary [local-first research](https://inkandswitch.com/local-first/) and for being a continuous source of inspiration
- The SQLite team for their amazing work on the SQLite core library
- Roy Hashimoto for their great work on the SQLite WASM library [wa-sqlite](https://github.com/rhashimoto/wa-sqlite) which LiveStore uses a fork of
- Tim Suchanek for the initial collaboration on the Effect DB schema library
- All sponsors, users & community members for feedback and support
# [Resources](https://dev.docs.livestore.dev/misc/resources/)
Feel free to use the following assets for presentations, blog posts, etc about LiveStore.
## Logo
- Dark PNG
- Dark SVG
- Light PNG
- Light SVG
## Architecture Diagrams
### Client scenarios
### Sync architecture
### Data modeling
# [Sponsoring LiveStore](https://dev.docs.livestore.dev/misc/sponsoring/)
## TLDR
- Sponsoring LiveStore helps ensure long-term stability and improvements for the project you rely on.
- Sponsors receive exclusive benefits, including a LiveStore Devtools license and access to sponsor-only resources.
- LiveStore is fully open source and community-supported—your sponsorship directly enables its ongoing development.
## Goal: Sustainable Open Source
As the creator and maintainer of LiveStore, I'm often asked *"how do you make money with LiveStore?"*. That's a great question with a simple answer. I'm not building LiveStore to make a lot of money - my goal is to make LiveStore a sustainable open source project. I've been working on LiveStore since 2021 (mostly full-time) and hope you can help to keep the project sustainable.
Open source has been a big part of my life - I've founded [Prisma](https://www.prisma.io/), created [Contentlayer](https://www.contentlayer.dev/), and built/maintained many other open source projects over the years. Through these experiences, I've seen firsthand how challenging it can be to keep open source projects healthy in the long run. Too often, maintainers burn out, and projects that many people depend on end up dying. My goal with LiveStore is to build an open source project that's sustainable and could possibly serve as an inspiration for other open source projects.
I wanted LiveStore to exist for over a decade - something I felt was missing in the ecosystem and that I know others have wanted as well. But building and maintaining a project on that level of ambition is incredibly hard, especially without a clear path to monetization. I believe that's also why a technology like LiveStore didn't exist yet.
Particularly being concerned about the sustainability of open source projects, I was hesitant to start another open source project myself. Still, I believe deeply in the value LiveStore creates for developers, and I'm committed to making it the best it can be.
The unfortunate reality is that there is no well-established way for open source creators to get paid for their work. While there are some great initiatives and platforms out there - like [Open Source Pledge](https://opensourcepledge.org/), [Generous](https://generous.builders/), [GitHub Sponsors](https://github.com/sponsors), [Polar](https://polar.sh/), [OpenCollective](https://opencollective.com/), and [thanks.dev](https://thanks.dev/) - most open source projects still struggle to capture even a fraction of the value they create. I believe in a positive-sum world, and I'm happy to contribute, but sustainability is essential if LiveStore is going to keep growing and improving.
My mid-/long term goal is to bring in enough resources not just to support myself, but to pay others to work on LiveStore as well. I want to ensure that the project remains stable, well-maintained, and innovative - something you can truly rely on. Sponsorship is the most direct way to make this possible. It's not just about funding features or bug fixes; it's about creating a relationship where your support helps guarantee the future of a tool you depend on.
I hope those words resonate with you and you'll understand why sponsoring LiveStore isn't just a nice gesture - it's essential for keeping the project going and a direct investment in the stability and evolution of a project that your application (and business) depends on. Your support ensures that LiveStore remains sustainable and healthy for the whole community.
Thank you! 🧡
## Related posts
- [The Open Source Sustainability Crisis](https://openpath.quest/2024/the-open-source-sustainability-crisis/) by Chad Whitacre
- [Entitlement in Open Source](https://mikemcquaid.com/entitlement-in-open-source/) by Homebrew lead Mike McQuaid
## Aligned Incentives
- **Stability and Reliability:**
Your application depends on LiveStore as the core data foundation. Sponsorship ensures continuous, focused maintenance and improvement, directly benefiting you with increased stability, reliability, and performance.
- **Shared Investment in Long-term Success:**
Sponsorship creates mutual investment in the project's future. You sponsoring LiveStore signals that it is crucial to your business, motivating me (and other maintainers) to prioritize features, enhancements, and fixes that benefit you.
- **Avoiding Costly In-House Development:**
LiveStore offers a unique architectural design with (currently) no direct alternatives readily available, so the only alternative is building something similar in-house, which takes a lot of time and resources. Sponsorship aligns incentives by ensuring LiveStore remains an attractive and efficient alternative.
- **Transparent Sustainability:**
Open-source sustainability is a common challenge as projects stagnate without sustainable resources. Sponsorship transparently addresses this, providing maintainers with clarity and stability, ensuring the project thrives and evolves to users' benefit.
- **Healthy Community and Ecosystem**:
Sponsors actively contribute to fostering a healthy, collaborative ecosystem. Their direct involvement ensures responsiveness to real-world needs, creating an ongoing, beneficial dialogue between users and maintainers.
- **Focused Innovation and Quality**:
Regular sponsorship allows maintainers to allocate dedicated time to innovative research and high-quality development. This ultimately translates into more reliable software, fewer bugs, faster releases, and thoughtful features driven by actual user needs.
- **Ensuring Longevity and Avoiding Lock-In**:
By aligning financial incentives through sponsorship, maintainers avoid being forced into less favorable monetization methods (e.g., restrictive licensing or heavy commercial lock-ins), maintaining open access and flexibility for users.
## Sponsor Benefits
You can access your sponsor benefits via the [Sponsor dashboard](https://livestore.dev/sponsor).
- [LiveStore Devtools](/reference/devtools) License
- Access to:
  - Sponsor-only Discord channels
  - LiveStore Office Hours
- Prioritized bug fixes and feature requests
## Thanks to our Sponsors
A big and heartfelt thank you to all our sponsors. Your support has been invaluable and LiveStore wouldn't be where it is without you.
### Partners
- [ElectricSQL](https://www.electricsql.com/)
- [Netlify](https://www.netlify.com/)
- [Expo](https://expo.dev/)
- [Axial](https://axial.work/)
### Individuals
A big thank you to all individual GitHub sponsors! 🧡
## FAQ
### Why not raise VC money for LiveStore?
While raising venture capital for LiveStore might be possible, the challenge lies in building a VC-scale business around LiveStore. My current goal is to make and keep LiveStore sustainable without investor funding. (While I don't rule out this path in the future, it's currently not planned.)
### Why not build a hosting service around LiveStore?
While technically feasible, LiveStore embraces partnerships with other syncing services to create a win-win situation and minimize vendor lock-in for users.
### Are there free devtools licenses for students?
Yes, please reach out via Discord.
### Are there other ways to support LiveStore?
Yes, there are many ways to support LiveStore:
- Become a [contributor / maintainer](/contributing/contributing)
- Help other community members (e.g. via Discord)
- Spread the word
  - Give talks, write blog posts, post on social media, ...
- Provide feedback (e.g. via GitHub issues or Discord)
# [Anonymous user transition](https://dev.docs.livestore.dev/patterns/anonymous-user-transition/)
## Basic idea
- Locally choose a unique identifier for the user (e.g. via `crypto.randomUUID()`).
- You might want to handle the very unlikely case that the identifier is not unique (collision) on the sync backend.
- Persist this identifier locally (either via a separate LiveStore instance or via `localStorage`).
- Use this identifier in the `storeId` for the user-related LiveStore instance.
- Initially when the user is anonymous, the store won't be synced yet (i.e. no sync backend used in adapter).
- As part of the auth flow, the LiveStore instance is then synced (using the same `storeId`) with a sync backend, which pushes all local events to the backend so the user keeps all their data. A sketch of the full flow follows below.
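A minimal sketch of this flow with the web adapter (the `localStorage` key, `storeId` prefix, and file paths are illustrative):

```ts
import { makePersistedAdapter } from '@livestore/adapter-web'
import LiveStoreSharedWorker from '@livestore/adapter-web/shared-worker?sharedworker'
import { createStorePromise } from '@livestore/livestore'
import LiveStoreWorker from './livestore.worker.ts?worker'
import { schema } from './livestore/schema.ts'

// Choose and persist a stable identifier for the (still anonymous) user
const userId = localStorage.getItem('userId') ?? crypto.randomUUID()
localStorage.setItem('userId', userId)

// While anonymous: no sync backend configured, so the store stays local-only
const adapter = makePersistedAdapter({
  storage: { type: 'opfs' },
  worker: LiveStoreWorker,
  sharedWorker: LiveStoreSharedWorker,
})

// Use the identifier as part of the storeId. After authentication, boot the
// store with the same storeId but with a sync backend configured; the local
// events are then pushed to the backend and the user keeps their data.
const store = await createStorePromise({ adapter, schema, storeId: `user-${userId}` })
```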
# [Auth](https://dev.docs.livestore.dev/patterns/auth/)
LiveStore doesn't include built-in authentication or authorization support, but you can implement it in your app's logic.
## Pass an auth payload to the sync backend
Use the `syncPayload` store option to send a custom payload to your sync backend.
### Example
The following example sends the authenticated user's JWT to the server.
## `patterns/auth/live-store-provider.tsx`
```tsx filename="patterns/auth/live-store-provider.tsx"
import { makeInMemoryAdapter } from '@livestore/adapter-web' // assumed export location
import { LiveStoreProvider } from '@livestore/react'
import type { ReactNode } from 'react'

const schema = {} as Parameters<typeof LiveStoreProvider>[0]['schema'] // type parameter restored
const storeId = 'demo-store'
const user = { jwt: 'user-token' }
const children: ReactNode = null
const adapter = makeInMemoryAdapter()
// ---cut---
export const AuthenticatedProvider = () => (
  // JSX reconstructed; `syncPayload` is how the auth payload reaches the sync backend
  <LiveStoreProvider schema={schema} storeId={storeId} adapter={adapter} syncPayload={{ authToken: user.jwt }}>
    {/* ... */}
    {children}
  </LiveStoreProvider>
)
```
On the sync server, validate the token and allow or reject the sync based on the result. See the following example:
## `patterns/auth/pass-auth-payload.ts`
```ts filename="patterns/auth/pass-auth-payload.ts"
import { makeDurableObject, makeWorker } from '@livestore/sync-cf/cf-worker'
import * as jose from 'jose'

const JWT_SECRET = 'a-string-secret-at-least-256-bits-long'
export class SyncBackendDO extends makeDurableObject({
onPush: async (message) => {
console.log('onPush', message.batch)
},
onPull: async (message) => {
console.log('onPull', message)
},
}) {}
export default makeWorker({
syncBackendBinding: 'SYNC_BACKEND_DO',
validatePayload: async (payload: any, context) => {
const { storeId } = context
const { authToken } = payload
if (!authToken) {
throw new Error('No auth token provided')
}
const user = await getUserFromToken(authToken)
if (!user) {
throw new Error('Invalid auth token')
} else {
// User is authenticated!
console.log('Sync backend payload', JSON.stringify(user, null, 2))
}
// Check if token is expired
if (payload.exp && payload.exp < Date.now() / 1000) {
throw new Error('Token expired')
}
await checkUserAccess(user, storeId)
},
enableCORS: true,
})
async function getUserFromToken(token: string): Promise<jose.JWTPayload | undefined> {
try {
const { payload } = await jose.jwtVerify(token, new TextEncoder().encode(JWT_SECRET))
return payload
} catch (error) {
console.log('⚠️ Error verifying token', error)
}
}
async function checkUserAccess(payload: jose.JWTPayload, storeId: string): Promise<void> {
// Check if user is authorized to access the store
console.log('Checking access for store', storeId, 'with payload', payload)
}
```
The above example uses [`jose`](https://www.npmjs.com/package/jose), a popular JavaScript module that supports JWTs. It works across various runtimes, including Node.js, Cloudflare Workers, Deno, Bun, and others.
The `validatePayload` function receives the `authToken`, checks if the payload exists, and verifies that it's valid and hasn't expired. If all checks pass, sync continues as normal. If any check fails, the server rejects the sync.
The client app still works as expected, but saves data locally. If the user re-authenticates or refreshes the token later, LiveStore syncs any local changes made while the user was unauthenticated.
## Re-validate payload inside the Durable Object
When you rely on `syncPayload`, treat it as untrusted input. Decode the token inside `validatePayload` to gate the connection, and then repeat the same verification inside the Durable Object before trusting per-push metadata.
## `patterns/auth/keep-payload-canonical.ts`
```ts filename="patterns/auth/keep-payload-canonical.ts"
import { makeDurableObject, makeWorker } from '@livestore/sync-cf/cf-worker'
// The `SyncMessage` and `verifyJwt` import locations are assumptions; adjust to your project layout
import type { SyncMessage } from '@livestore/sync-cf/cf-worker'
import { verifyJwt } from './verify-jwt.ts'
// ---cut---
type SyncPayload = { authToken?: string; userId?: string }
type AuthorizedSession = {
authToken: string
userId: string
}
const ensureAuthorized = (payload: unknown): AuthorizedSession => {
if (payload === undefined || payload === null || typeof payload !== 'object') {
throw new Error('Missing auth payload')
}
const { authToken, userId } = payload as SyncPayload
if (!authToken) {
throw new Error('Missing auth token')
}
const claims = verifyJwt(authToken)
if (!claims.sub) {
throw new Error('Token missing subject claim')
}
if (userId !== undefined && userId !== claims.sub) {
throw new Error('Payload userId mismatch')
}
return { authToken, userId: claims.sub }
}
export default makeWorker({
syncBackendBinding: 'SYNC_BACKEND_DO',
validatePayload: (payload) => {
ensureAuthorized(payload)
},
})
export class SyncBackendDO extends makeDurableObject({
onPush: async (message: SyncMessage.PushRequest, { payload }) => {
const { userId } = ensureAuthorized(payload)
await ensureTenantAccess(userId, message.batch)
},
}) {}
const ensureTenantAccess = async (_userId: string, _batch: SyncMessage.PushRequest['batch']) => {
// Replace with your application-specific access checks.
}
```
### `patterns/auth/verify-jwt.ts`
```ts filename="patterns/auth/verify-jwt.ts"
export type Claims = {
sub?: string
}
export const verifyJwt = (token: string): Claims => {
if (token.length === 0) {
throw new Error('Missing token')
}
// Replace with real JWT verification (e.g. via `jose`)
return { sub: token }
}
```
- `validatePayload` runs once per connection and rejects mismatched tokens before LiveStore upgrades to WebSocket.
- `onPush` (and `onPull`, if you need it) must repeat the verification because the payload forwarded to the Durable Object is the original client input.
- The HTTP transport does not forward payloads today; embed the necessary authorization context directly in the events or move those clients to WebSocket/DO-RPC if you must rely on shared payload metadata.
You can extend `ensureAuthorized` to project additional claims, memoise verification per `authToken`, or enforce application-specific policies without changing LiveStore internals.
## Client Identity vs User Identity
LiveStore's `clientId` identifies a client instance, while user identity is an application-level concern that must be modeled through your application's events and logic.
### Key Points
- `clientId`: Automatically managed by LiveStore, identifies a client instance
- User identity: Managed by your application through events and syncPayload
### Using syncPayload for Authentication
The `syncPayload` is primarily intended for authentication purposes:
User identification and semantic data (like user IDs) should typically be handled through your event payloads and application state rather than relying solely on the sync payload.
# [Effect](https://dev.docs.livestore.dev/patterns/effect/)
LiveStore itself is built on top of [Effect](https://effect.website), a powerful library for writing production-grade TypeScript code. It's also possible (and recommended) to use Effect directly in your application code.
## Schema
LiveStore uses the [Effect Schema](https://effect.website/docs/schema/introduction/) library to define schemas for the following:
- Read model table column definitions
- Event payload definitions
- Query response types
For convenience, LiveStore re-exports the `Schema` module from the `effect` package, which is the same as if you'd import it via `import { Schema } from 'effect'` directly.
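For example (`TodoId` is just an illustrative branded type):

```ts
import { Schema } from '@livestore/livestore'
// ...the same module you'd get from: import { Schema } from 'effect'

export const TodoId = Schema.String.pipe(Schema.brand('TodoId'))
```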
## `Equal` and `Hash` Traits
LiveStore's reactive primitives (`LiveQueryDef` and `SignalDef`) implement Effect's `Equal` and `Hash` traits, enabling efficient integration with Effect's data structures and collections.
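A minimal sketch of what this enables, assuming the TodoMVC `tables` from the React guide:

```ts
import { queryDb } from '@livestore/livestore'
import { Equal, HashMap } from 'effect'
import { tables } from './livestore/schema.ts'

const todos$ = queryDb(tables.todos, { label: 'todos' })

// Query definitions can serve as keys in Effect's hash-based collections
// because they implement the `Equal` and `Hash` traits
const descriptions = HashMap.make([todos$, 'All todos'] as const)

console.log(HashMap.get(descriptions, todos$)) // Option.some('All todos')
console.log(Equal.equals(todos$, todos$)) // true
```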
## Effect Atom Integration
LiveStore integrates seamlessly with [Effect Atom](https://github.com/effect-atom/effect-atom) for reactive state management in React applications. This provides a powerful combination of Effect's functional programming capabilities with LiveStore's event sourcing and CQRS patterns.
Effect Atom is an external package developed by [Tim Smart](https://github.com/tim-smart) that provides a more Effect-idiomatic alternative to the `@livestore/react` package. While `@livestore/react` offers a straightforward React integration, Effect Atom leverages Effect APIs and patterns throughout, making it a natural choice for applications already using Effect.
### Installation
```bash
pnpm install @effect-atom/atom-livestore @effect-atom/atom-react
```
### Store Creation
Create a LiveStore-backed atom store with persistence and worker support using the `AtomLivestore.Tag` pattern:
## `patterns/effect/store-setup/atoms.ts`
```ts filename="patterns/effect/store-setup/atoms.ts"
// Worker file paths assume the setup from the getting-started guide
import { AtomLivestore } from '@effect-atom/atom-livestore'
import { makePersistedAdapter } from '@livestore/adapter-web'
import LiveStoreSharedWorker from '@livestore/adapter-web/shared-worker?sharedworker'
import { unstable_batchedUpdates } from 'react-dom'
import LiveStoreWorker from './livestore.worker.ts?worker'
import { schema } from './schema.ts'

export { schema } from './schema.ts'

// Create a persistent adapter with OPFS storage
const adapter = makePersistedAdapter({
  storage: { type: 'opfs' },
  worker: LiveStoreWorker,
  sharedWorker: LiveStoreSharedWorker,
})

// Define the store as a service tag
// (the `Tag` type parameter is an assumption; it was lost in extraction)
export class StoreTag extends AtomLivestore.Tag<StoreTag>()('StoreTag', {
  schema,
  storeId: 'default',
  adapter,
  batchUpdates: unstable_batchedUpdates, // React batching for performance
}) {}
```
### `patterns/effect/store-setup/schema.ts`
```ts filename="patterns/effect/store-setup/schema.ts"
import { Events, makeSchema, Schema, State } from '@livestore/livestore'
import { Option } from 'effect'

// Define event payloads
export const events = {
userCreated: Events.clientOnly({
name: 'userCreated',
schema: Schema.Struct({
id: Schema.String,
name: Schema.String,
email: Schema.String,
}),
}),
userUpdated: Events.clientOnly({
name: 'userUpdated',
schema: Schema.Struct({
id: Schema.String,
name: Schema.optionalWith(Schema.String, { as: 'Option' }),
email: Schema.optionalWith(Schema.String, { as: 'Option' }),
isActive: Schema.optionalWith(Schema.Boolean, { as: 'Option' }),
}),
}),
productCreated: Events.clientOnly({
name: 'productCreated',
schema: Schema.Struct({
id: Schema.String,
name: Schema.String,
description: Schema.String,
price: Schema.Number,
}),
}),
productUpdated: Events.clientOnly({
name: 'productUpdated',
schema: Schema.Struct({
id: Schema.String,
name: Schema.optionalWith(Schema.String, { as: 'Option' }),
description: Schema.optionalWith(Schema.String, { as: 'Option' }),
price: Schema.optionalWith(Schema.Number, { as: 'Option' }),
}),
}),
todoCreated: Events.clientOnly({
name: 'todoCreated',
schema: Schema.Struct({
id: Schema.String,
text: Schema.String,
completed: Schema.Boolean,
}),
}),
todoToggled: Events.clientOnly({
name: 'todoToggled',
schema: Schema.Struct({
id: Schema.String,
completed: Schema.Boolean,
}),
}),
itemCreated: Events.clientOnly({
name: 'itemCreated',
schema: Schema.Struct({
id: Schema.String,
name: Schema.String,
metadata: Schema.Record({ key: Schema.String, value: Schema.Unknown }),
}),
}),
itemUpdated: Events.clientOnly({
name: 'itemUpdated',
schema: Schema.Struct({
id: Schema.String,
status: Schema.String,
}),
}),
}
// Define tables
const tables = {
users: State.SQLite.table({
name: 'users',
columns: {
id: State.SQLite.text({ primaryKey: true }),
name: State.SQLite.text(),
email: State.SQLite.text(),
isActive: State.SQLite.boolean(),
createdAt: State.SQLite.datetime(),
},
}),
products: State.SQLite.table({
name: 'products',
columns: {
id: State.SQLite.text({ primaryKey: true }),
name: State.SQLite.text(),
description: State.SQLite.text(),
price: State.SQLite.real(),
createdAt: State.SQLite.datetime(),
},
}),
todos: State.SQLite.table({
name: 'todos',
columns: {
id: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text(),
completed: State.SQLite.boolean(),
createdAt: State.SQLite.datetime(),
},
}),
}
// Define materializers
const materializers = State.SQLite.materializers(events, {
userCreated: ({ id, name, email }) => tables.users.insert({ id, name, email, isActive: true, createdAt: new Date() }),
userUpdated: ({ id, name, email, isActive }) => {
const updates: { name?: string; email?: string; isActive?: boolean } = {}
if (Option.isSome(name)) updates.name = name.value
if (Option.isSome(email)) updates.email = email.value
if (Option.isSome(isActive)) updates.isActive = isActive.value
return tables.users.update(updates).where({ id })
},
todoCreated: ({ id, text, completed }) => tables.todos.insert({ id, text, completed, createdAt: new Date() }),
todoToggled: ({ id, completed }) => tables.todos.update({ completed }).where({ id }),
productCreated: ({ id, name, description, price }) =>
tables.products.insert({ id, name, description, price, createdAt: new Date() }),
productUpdated: ({ id, name, description, price }) => {
const updates: { name?: string; description?: string; price?: number } = {}
if (Option.isSome(name)) updates.name = name.value
if (Option.isSome(description)) updates.description = description.value
if (Option.isSome(price)) updates.price = price.value
return tables.products.update(updates).where({ id })
},
itemCreated: () => [], // Item events don't have a corresponding table
itemUpdated: () => [], // Item events don't have a corresponding table
})
// Create state
const state = State.SQLite.makeState({ tables, materializers })
// Create the store schema
export const schema = makeSchema({ events, state })
export { tables }
```
The `StoreTag` class provides the following static methods:
- `StoreTag.runtime` - Access to Effect runtime
- `StoreTag.commit` - Commit events to the store
- `StoreTag.store` - Access store with Effect
- `StoreTag.storeUnsafe` - Direct store access when store is already loaded (synchronous)
- `StoreTag.makeQuery` - Create query atoms with Effect
- `StoreTag.makeQueryUnsafe` - Create query atoms without Effect
### Defining Query Atoms
Create reactive query atoms that automatically update when the underlying data changes:
## `patterns/effect/store-setup/queries.ts`
```ts filename="patterns/effect/store-setup/queries.ts"
import { queryDb, Schema, sql } from '@livestore/livestore'
import { StoreTag } from './atoms.ts'

// User schema for type safety
const User = Schema.Struct({
id: Schema.String,
name: Schema.String,
isActive: Schema.Boolean,
})
const Product = Schema.Struct({
id: Schema.String,
name: Schema.String,
createdAt: Schema.DateTimeUtc,
})
// Search term atom for dynamic queries
export const searchTermAtom = Atom.make('')
// Re-export from utils for convenience
export { usersQueryAtom as usersAtom } from './utils.ts'
// Query with SQL
export const activeUsersAtom = StoreTag.makeQuery(
queryDb({
query: sql`SELECT * FROM users WHERE isActive = true ORDER BY name`,
schema: Schema.Array(User),
}),
)
// Static query example - dynamic queries would need a different approach
// For dynamic queries, you'd typically use a derived atom that depends on searchTermAtom
export const searchResultsAtom = StoreTag.makeQuery(
queryDb({
query: sql`SELECT * FROM products ORDER BY createdAt DESC`,
schema: Schema.Array(Product),
}),
)
```
### `patterns/effect/store-setup/utils.ts`
```ts filename="patterns/effect/store-setup/utils.ts"
import { queryDb } from '@livestore/livestore'
import { StoreTag } from './atoms.ts'
import { tables } from './schema.ts'

// Common query atoms that can be reused
export const todosQueryAtom = StoreTag.makeQuery(queryDb(tables.todos))
export const todosQueryUnsafeAtom = StoreTag.makeQueryUnsafe(queryDb(tables.todos))
export const usersQueryAtom = StoreTag.makeQuery(queryDb(tables.users))
export const productsQueryAtom = StoreTag.makeQuery(queryDb(tables.products))
// Common types for optimistic updates
export type PendingTodo = { id: string; text: string; completed: boolean }
export type PendingUser = { id: string; name: string; email: string }
// Common pending state atoms
// (the `Atom.make` type parameters are assumptions; they were lost in extraction)
export const pendingTodosAtom = Atom.make<ReadonlyArray<PendingTodo>>([])
export const pendingUsersAtom = Atom.make<ReadonlyArray<PendingUser>>([])
```
### Using Queries in React Components
Access query results in React components with the `useAtomValue` hook. When using `StoreTag.makeQuery` (non-unsafe API), the result is wrapped in a Result type for proper loading and error handling:
## `patterns/effect/store-setup/user-list.tsx`
```tsx filename="patterns/effect/store-setup/user-list.tsx"
import { Result, useAtomValue } from '@effect-atom/atom-react'
import { activeUsersAtom } from './queries.ts'

export function UserList() {
  const users = useAtomValue(activeUsersAtom)

  return Result.builder(users)
    .onInitial(() => <div>Loading users...</div>)
    .onSuccess((users) => (
      <ul>
        {/* markup reconstructed; the original JSX was lost in extraction */}
        {users.map((user) => (
          <li key={user.id}>{user.name}</li>
        ))}
      </ul>
    ))
    .onDefect((error: any) => <div>Error: {error.message}</div>)
    .render()
}
```
### Integrating Effect Services
Combine Effect services with LiveStore operations using the store's runtime:
## `patterns/effect/store-setup/services.tsx`
```tsx filename="patterns/effect/store-setup/services.tsx"
import { useAtomSet } from '@effect-atom/atom-react'
import { Context, Effect } from 'effect'
import { StoreTag } from './atoms.ts'
import { events } from './schema.ts'

// Example service definition
export class MyService extends Context.Tag('MyService')<
  MyService,
  {
    processItem: (name: string) => Effect.Effect<{
      name: string
      metadata: Record<string, unknown> // type parameters restored; they were lost in extraction
    }>
  }
>() {}

// Use the commit hook for event handling
export const useCommit = () => useAtomSet(StoreTag.commit)

// Simple commit example
// (the `fn` type parameter is an assumption; it was lost in extraction)
export const createItemAtom = StoreTag.runtime.fn<string>()((itemName, get) => {
  return Effect.sync(() => {
    const store = get(StoreTag.storeUnsafe)
    if (store) {
      store.commit(
        events.itemCreated({
          id: crypto.randomUUID(),
          name: itemName,
          metadata: { createdAt: new Date().toISOString() },
        }),
      )
    }
  })
})

// Use in a React component
export function CreateItemButton() {
  const createItem = useAtomSet(createItemAtom)

  const handleClick = () => {
    createItem('New Item')
  }

  return <button onClick={handleClick}>Create Item</button>
}
```
### Advanced Patterns
#### Optimistic Updates
Combine local state with LiveStore for optimistic UI updates. When using `StoreTag.makeQueryUnsafe`, the data is directly available:
## `patterns/effect/optimistic-example/optimistic.ts`
```ts filename="patterns/effect/optimistic-example/optimistic.ts"
import { Atom } from '@effect-atom/atom-react'
import { pendingTodosAtom, todosQueryUnsafeAtom } from '../store-setup/utils.ts'

// Combine real and pending todos for optimistic UI
export const optimisticTodoAtom = Atom.make((get) => {
  const todos = get(todosQueryUnsafeAtom) // Direct array, not wrapped in Result
  const pending = get(pendingTodosAtom)
  return [...(todos || []), ...pending]
})
```
#### Derived State
Create computed atoms based on LiveStore queries. When using the non-unsafe API, handle the Result type:
## `patterns/effect/derived-example/derived.ts`
```ts filename="patterns/effect/derived-example/derived.ts"
import { Atom, Result } from '@effect-atom/atom-react'
import { todosQueryAtom } from '../store-setup/utils.ts'

// Derive statistics from todos
export const todoStatsAtom = Atom.make((get) => {
  const todos = get(todosQueryAtom) // Result-wrapped
  return Result.map(todos, (todoList) => ({
    total: todoList.length,
    completed: todoList.filter((t) => t.completed).length,
    pending: todoList.filter((t) => !t.completed).length,
  }))
})
```
#### Batch Operations
Perform multiple commits efficiently (commits are synchronous):
## `patterns/effect/batch-example/batch.ts`
```ts filename="patterns/effect/batch-example/batch.ts"
import { Effect } from 'effect'
import { StoreTag } from '../store-setup/atoms.ts'
import { events } from '../store-setup/schema.ts'

// Bulk update atom for batch operations
// (the `fn` type parameter is an assumption; it was lost in extraction)
export const bulkUpdateAtom = StoreTag.runtime.fn<ReadonlyArray<string>>()(
  Effect.fn(function* (ids, get) {
    const store = get(StoreTag.storeUnsafe)
    if (!store) return

    // Commit multiple events synchronously
    for (const id of ids) {
      store.commit(events.itemUpdated({ id, status: 'processed' }))
    }
  }),
)
```
### Best Practices
1. **Use `StoreTag.makeQuery` for queries**: This ensures proper Effect integration and error handling
2. **Leverage Effect services**: Integrate business logic through Effect services for better testability
3. **Handle loading states**: Use `Result.builder` pattern for consistent loading/error UI
4. **Batch React updates**: Always provide `batchUpdates` for better performance
5. **Label queries**: Add descriptive labels to queries for better debugging
6. **Type safety**: Let TypeScript infer types from schemas rather than manual annotations
### Real-World Example
For a comprehensive example of LiveStore with Effect Atom in action, check out [Cheffect](https://github.com/tim-smart/cheffect) - a recipe management application that demonstrates:
- Complete Effect service integration
- AI-powered recipe extraction using Effect services
- Complex query patterns with search and filtering
- Worker-based persistence with OPFS
- Production-ready error handling and logging
# [Encryption](https://dev.docs.livestore.dev/patterns/encryption/)
LiveStore doesn't yet support encryption but might in the future.
See [this issue](https://github.com/livestorejs/livestore/issues/70) for more details.
For now, you can implement encryption yourself, e.g. by encrypting event payloads with a custom Effect Schema definition that applies an encryption transformation (see the sketch below).
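A minimal sketch of that idea, assuming hypothetical `encrypt`/`decrypt` helpers (e.g. backed by WebCrypto AES-GCM):

```ts
import { Schema } from '@livestore/livestore'

// Hypothetical helpers: plug in real crypto here
declare const encrypt: (plaintext: string) => string
declare const decrypt: (ciphertext: string) => string

const TodoPayload = Schema.Struct({ id: Schema.String, text: Schema.String })

// Encoded (synced) form is an encrypted string; decoded form is the plain struct
export const EncryptedTodoPayload = Schema.transform(Schema.String, TodoPayload, {
  strict: true,
  decode: (ciphertext) => JSON.parse(decrypt(ciphertext)),
  encode: (payload) => encrypt(JSON.stringify(payload)),
})
```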
# [External Data](https://dev.docs.livestore.dev/patterns/external-data/)
LiveStore doesn't provide any built-in functionality to deal with external data. However, LiveStore was designed with this use case in mind (e.g. Overtone integrates with lots of external data like Spotify, ...). One way to deal with external data is to also model it as an event log and materialize it into LiveStore state as well.
(If you're interested in learning more about the solution we're using for Overtone, get in touch.)
# [Presence](https://dev.docs.livestore.dev/patterns/presence/)
LiveStore doesn't yet have any built-in presence functionality (e.g. to track online/offline users).
Common presence use cases are:
- Track which users are online / in a room
- Track which users are typing (e.g. in a chat)
- Text cursor (similar to Google Docs)
- Cursor movements (similar to Figma)
For now, it's recommended to implement presence functionality in your application or use a third-party service (e.g. Liveblocks).
# [Side effect](https://dev.docs.livestore.dev/patterns/side-effects/)
TODO: Document how to safely run side-effects as response to LiveStore events.
Notes for writing those docs:
- Scenarios:
- Run side-effect in each client session
- Run side-effect only once per client (i.e. use a lock between client sessions)
- Run side-effect only once globally (will require some kind of global transaction)
- How to deal with rollbacks/rebases
- Allow for filtering events based on whether they have been confirmed by the sync backend or include unconfirmed events
# [Storybook Testing (React)](https://dev.docs.livestore.dev/patterns/storybook/)
LiveStore works seamlessly with Storybook for React component development and testing.
**Note:** This guide focuses on React. For other frameworks, adapt the patterns accordingly.
## Setup
First, [install Storybook](https://storybook.js.org/docs/get-started/install) in your React project.
## Configuration
Create a decorator that wraps stories with a fresh LiveStore instance. The snippets below use the TodoMVC schema for realistic examples.

Schema (e.g. `src/livestore/schema.ts`):
```ts
import { Events, makeSchema, Schema, SessionIdSymbol, State } from '@livestore/livestore'

// Define tables (based on TodoMVC example)
export const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text({ default: '' }),
      completed: State.SQLite.boolean({ default: false }),
      deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
    },
  }),
  // Client document for UI state
  uiState: State.SQLite.clientDocument({
    name: 'uiState',
    schema: Schema.Struct({
      newTodoText: Schema.String,
      filter: Schema.Literal('all', 'active', 'completed'),
    }),
    default: {
      id: SessionIdSymbol,
      value: { newTodoText: '', filter: 'all' },
    },
  }),
}

// Define events (exactly from TodoMVC)
export const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
  }),
  todoCompleted: Events.synced({
    name: 'v1.TodoCompleted',
    schema: Schema.Struct({ id: Schema.String }),
  }),
  todoUncompleted: Events.synced({
    name: 'v1.TodoUncompleted',
    schema: Schema.Struct({ id: Schema.String }),
  }),
  todoDeleted: Events.synced({
    name: 'v1.TodoDeleted',
    schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
  }),
  todoClearedCompleted: Events.synced({
    name: 'v1.TodoClearedCompleted',
    schema: Schema.Struct({ deletedAt: Schema.Date }),
  }),
  // Auto-generated client document event
  uiStateSet: tables.uiState.set,
}

// Define materializers to map events to state
const materializers = State.SQLite.materializers(events, {
  'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text, completed: false }),
  'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
  'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
  'v1.TodoDeleted': ({ id, deletedAt }) => tables.todos.update({ deletedAt }).where({ id }),
  'v1.TodoClearedCompleted': ({ deletedAt }) => tables.todos.update({ deletedAt }).where({ completed: true }),
})

const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```

Decorator (e.g. `src/livestore/livestore-decorator.tsx`; the adapter import and provider JSX are reconstructed sketches):
```tsx
import React from 'react'
import { makeInMemoryAdapter } from '@livestore/adapter-web' // assumed export location
import { LiveStoreProvider } from '@livestore/react'
import { schema } from './schema'

// Create LiveStore decorator with optional seeding
export const createLiveStoreDecorator = (seedEvents = []) => (Story) => {
  const onBoot = (store) => {
    // Seed data through events during boot
    if (seedEvents.length > 0) {
      store.commit(...seedEvents)
    }
  }

  return (
    // Fresh in-memory store per story; the boot callback runs once on store creation
    <LiveStoreProvider schema={schema} adapter={makeInMemoryAdapter()} boot={onBoot}>
      <Story />
    </LiveStoreProvider>
  )
}
```

Register the decorator globally in `.storybook/preview.tsx`:
```tsx
import React from 'react'
import { createLiveStoreDecorator } from '../src/livestore/livestore-decorator'

// Default decorator with no seed data
const LiveStoreDecorator = createLiveStoreDecorator()

export const decorators = [LiveStoreDecorator]
```

Example stories (e.g. `src/components/TodoInput.stories.tsx`; the component import and the `Meta`/`StoryObj` type parameters are assumptions):
```tsx
import type { Meta, StoryObj } from '@storybook/react'
import { createLiveStoreDecorator } from '../livestore/livestore-decorator'
import { events } from '../livestore/schema'
import { TodoInput } from './TodoInput'

const meta: Meta<typeof TodoInput> = {
  title: 'TodoMVC/TodoInput',
  component: TodoInput,
}
export default meta

type Story = StoryObj<typeof TodoInput>

export const Default: Story = {}

export const WithInitialText: Story = {
  decorators: [
    createLiveStoreDecorator([
      events.uiStateSet({ newTodoText: 'Buy groceries' }),
    ]),
  ],
}
```
# [AI](https://dev.docs.livestore.dev/patterns/ai/)
- LiveStore is a great fit for building AI applications.
- Scenarios:
- Local RAG via sqlite-vec (see [feature request](https://github.com/livestorejs/livestore/issues/127)) combined with a local LLM (e.g. Gemini Nano embedded in Chrome)
- Agentic applications
- Events map nicely to/from LLM tool calls
## Example
```ts
// TODO (contribution welcome)
```
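In the meantime, here's a minimal sketch of mapping LLM tool calls onto LiveStore events (the `ToolCall` shape and the `events` definitions are illustrative assumptions):

```ts
import { nanoid } from '@livestore/livestore'
import type { Store } from '@livestore/livestore'

declare const store: Store
declare const events: any // your event definitions

// Hypothetical tool-call shape as produced by your LLM client
type ToolCall = { name: string; arguments: Record<string, unknown> }

// Committing events for agent actions keeps them in the synced eventlog,
// so they replay/merge like any other user action
const handleToolCall = (toolCall: ToolCall) => {
  switch (toolCall.name) {
    case 'create_todo':
      store.commit(events.todoCreated({ id: nanoid(), text: String(toolCall.arguments.text) }))
      break
    default:
      throw new Error(`Unhandled tool: ${toolCall.name}`)
  }
}
```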
# [File Management](https://dev.docs.livestore.dev/patterns/file-management/)
LiveStore doesn't have built-in support for file management but it's easy to use LiveStore alongside existing file storage solutions (e.g. S3).
The basic idea is to store the file metadata (e.g. url, name, size, type) in LiveStore and the file content separately.
## Example
```ts
// TODO (contribution welcome)
```
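In the meantime, here's a minimal sketch of the metadata approach. The upload itself (e.g. via a presigned S3 URL) happens outside LiveStore and is only hinted at here; a matching materializer (omitted) would insert the row into `tables.files`:

```ts
import { Events, nanoid, Schema, State } from '@livestore/livestore'
import type { Store } from '@livestore/livestore'

declare const store: Store

// Metadata lives in LiveStore ...
export const tables = {
  files: State.SQLite.table({
    name: 'files',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      url: State.SQLite.text(),
      name: State.SQLite.text(),
      size: State.SQLite.integer(),
      type: State.SQLite.text(),
    },
  }),
}

export const events = {
  fileUploaded: Events.synced({
    name: 'v1.FileUploaded',
    schema: Schema.Struct({
      id: Schema.String,
      url: Schema.String,
      name: Schema.String,
      size: Schema.Number,
      type: Schema.String,
    }),
  }),
}

// ... while the content is uploaded elsewhere; commit the metadata afterwards
export const onUploaded = (file: File, url: string) =>
  store.commit(events.fileUploaded({ id: nanoid(), url, name: file.name, size: file.size, type: file.type }))
```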
# [ORM](https://dev.docs.livestore.dev/patterns/orm/)
- LiveStore has a built-in query builder which should be sufficient for most simple use cases.
- You can always fall back to using raw SQL queries if you need more complex queries.
- As long as the ORM supports synchronously generating SQL statements (and binding parameters), you should be able to use it with LiveStore.
- Supported ORMs:
- [Knex](https://knexjs.org/)
- [Kysely](https://kysely.dev/)
- [Drizzle](https://orm.drizzle.team/)
- [Objection.js](https://vincit.github.io/objection.js/)
- Unsupported ORMs:
- [Prisma](https://www.prisma.io/) (because it's async)
## Example
```ts
// TODO (contribution welcome)
```
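In the meantime, here's a minimal sketch using [Kysely](https://kysely.dev/) purely as a synchronous SQL builder. The `DummyDriver` setup is Kysely's documented recipe for building queries without executing them; how you hand the compiled SQL to LiveStore is only sketched in the trailing comment (see [SQL queries](/reference/state/sql-queries) for the exact API):

```ts
import { DummyDriver, Kysely, SqliteAdapter, SqliteIntrospector, SqliteQueryCompiler } from 'kysely'

interface Database {
  todos: { id: string; text: string; completed: number }
}

// Kysely as a pure SQL builder: no driver, no async execution
const db = new Kysely<Database>({
  dialect: {
    createAdapter: () => new SqliteAdapter(),
    createDriver: () => new DummyDriver(),
    createIntrospector: (inner) => new SqliteIntrospector(inner),
    createQueryCompiler: () => new SqliteQueryCompiler(),
  },
})

const compiled = db.selectFrom('todos').selectAll().where('completed', '=', 1).compile()

// compiled.sql        → 'select * from "todos" where "completed" = ?'
// compiled.parameters → [1]
// Hand `compiled.sql` and `compiled.parameters` to LiveStore's raw SQL query API.
```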
# [Rich Text Editing](https://dev.docs.livestore.dev/patterns/rich-text-editing/)
LiveStore doesn't yet have any built-in support for rich text editing. It's currently recommended to use a purpose-built library (e.g. [Yjs](https://yjs.dev/) or [Automerge](https://automerge.org/)) for this use case in combination with LiveStore.
The idea here is to reference the rich text document from within LiveStore's event log and sync both in parallel.
## Example
```ts
// TODO
```
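In the meantime, here's a minimal sketch of the referencing idea. The `events` definition is illustrative, and the Yjs provider setup (e.g. y-websocket) is omitted:

```ts
import { Events, nanoid, Schema } from '@livestore/livestore'
import type { Store } from '@livestore/livestore'
import * as Y from 'yjs'

declare const store: Store

// The eventlog only stores a reference to the rich text document
export const events = {
  documentCreated: Events.synced({
    name: 'v1.DocumentCreated',
    schema: Schema.Struct({ id: Schema.String, title: Schema.String }),
  }),
}

const docId = nanoid()
store.commit(events.documentCreated({ id: docId, title: 'Meeting notes' }))

// The Yjs doc itself lives outside LiveStore and is synced by a Yjs
// provider (e.g. y-websocket), keyed by the id stored in the eventlog
const ydoc = new Y.Doc()
const ytext = ydoc.getText('content')
ytext.insert(0, 'Hello world')
```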
# [State Machines](https://dev.docs.livestore.dev/patterns/state-machines/)
LiveStore can be used to implement state machines or together with existing state machine libraries (e.g. [XState](https://stately.ai/docs/xstate)).
The basic idea is to listen to query results and emit state machine events when the query results change. The state machine's side effects can then commit new events back to LiveStore.
## Example
```ts
// TODO (contribution welcome)
```
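In the meantime, here's a minimal sketch using XState v5. The machine and its event names are illustrative, and `tables` stands in for your own table definitions:

```ts
import { queryDb } from '@livestore/livestore'
import type { Store } from '@livestore/livestore'
import { createActor, createMachine } from 'xstate'

declare const store: Store
declare const tables: any // your table definitions

// Minimal machine: the todo list is either 'empty' or 'hasTodos'
const machine = createMachine({
  id: 'todoList',
  initial: 'empty',
  states: {
    empty: { on: { TODO_ADDED: 'hasTodos' } },
    hasTodos: { on: { LIST_CLEARED: 'empty' } },
  },
})

const actor = createActor(machine)
actor.start()

// Drive the machine from reactive query results ...
const count$ = queryDb(tables.todos.count(), { label: 'count$' })
store.subscribe(count$, (count) => {
  actor.send({ type: count > 0 ? 'TODO_ADDED' : 'LIST_CLEARED' })
})

// ... and commit events back to LiveStore from transitions if needed
actor.subscribe((snapshot) => {
  if (snapshot.value === 'empty') {
    // e.g. store.commit(events.listCleared({}))
  }
})
```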
# [Undo/Redo](https://dev.docs.livestore.dev/patterns/undo-redo/)
Undo/redo functionality should generally be modeled through explicit events instead of "removing" events from the event history.
## Example
```ts
// TODO (contribution welcome)
```
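In the meantime, here's a minimal sketch: instead of rewriting history, an undo commits a compensating event (event names are illustrative):

```ts
import { Events, Schema } from '@livestore/livestore'
import type { Store } from '@livestore/livestore'

declare const store: Store

export const events = {
  todoDeleted: Events.synced({
    name: 'v1.TodoDeleted',
    schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
  }),
  // Explicit compensating event: the deletion stays in the history
  todoRestored: Events.synced({
    name: 'v1.TodoRestored',
    schema: Schema.Struct({ id: Schema.String }),
  }),
}

// "Undo" by committing the compensating event
export const undoDelete = (todoId: string) => store.commit(events.todoRestored({ id: todoId }))
```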
# [File Structure](https://dev.docs.livestore.dev/patterns/file-structure/)
While there are no strict requirements/conventions for how to structure your project (files, folders, etc), a common pattern is to have a `src/livestore` folder which contains all the LiveStore related code.
```
src/
  livestore/
    index.ts    # re-exports everything
    schema.ts   # schema definitions
    queries.ts  # query definitions
    events.ts   # event definitions
    ...
  ...
```
# [App Evolution](https://dev.docs.livestore.dev/patterns/app-evolution/)
When building an app with LiveStore, you'll need to keep some things in mind when evolving your app.
## Schema changes
### State schema changes
Generally, any change to your state schema (e.g. SQLite tables) can be made at any time without further considerations, assuming the event materializers are updated to support the new schema.
### Event schema changes
Event schema changes require a bit more consideration. Changes to the event schema should generally be done in a backwards-compatible way. See [Event schema evolution](/reference/events/#schema-evolution) for more details.
## Parallel different app versions
In scenarios where you have multiple app versions rolled out in parallel (e.g. app version v3 with event schema v3 and app version v4 with event schema v4), you'll need to keep the following in mind:
App instances running version 4 might commit events that are not yet supported by version 3. Your app needs to decide how to handle this scenario in one of the following ways:
- Ignore unknown events
- Cause an error in the app for unknown events
- Handle events with a "catch all" event handler
- Let the app render an "app update required" screen; the app can still be used in read-only mode.
- ...
LiveStore exposes a dedicated `unknownEventHandling` configuration on `makeSchema` so you can codify the desired behaviour instead of sprinkling ad-hoc checks across your app. The default is `'warn'`, which logs every unknown event and keeps processing.
```ts
const schema = makeSchema({
  events,
  state,
  unknownEventHandling: {
    strategy: 'callback',
    onUnknownEvent: (event, error) => {
      // Optional observer hook (only used for the `callback` strategy)
      console.warn('Unknown event seen in production', { event, reason: error.reason })
    },
  },
})
```
Set the strategy to `'ignore'` to silently skip forward-only events, `'fail'` to stop immediately (useful during development), or `'callback'` to forward them to custom telemetry while continuing to replay the log.
# [Offline Support](https://dev.docs.livestore.dev/patterns/offline/)
- LiveStore supports offline data management out of the box. To make your app work fully offline, you might also need to consider the following:
- Design your app in a way to treat the network as an optional feature (e.g. when relying on other APIs / external data)
- Use service workers to cache assets locally (e.g. images, videos, etc.)
## Tracking connectivity
Use `store.networkStatus` to react to connectivity transitions. The subscribable emits every time the sync backend connection flips or the devtools latch simulates an offline state.
```ts
const status = await store.networkStatus.pipe(Effect.runPromise)

if (status.isConnected === false) {
  console.warn('Sync backend offline since', new Date(status.timestampMs))
}

await store.networkStatus.changes.pipe(
  Stream.tap((next) => console.log('network status updated', next)),
  Stream.runDrain,
  Effect.scoped,
  Effect.runPromise,
)
```
When devtools close the sync latch to simulate an offline client, `status.devtools.latchClosed` is `true`, allowing you to differentiate between real and simulated outages. Remember to dispose of long-lived subscriptions using the Effect scope you already manage for your runtime.
# [Troubleshooting](https://dev.docs.livestore.dev/misc/troubleshooting/)
## Store / sync backend is stuck in a weird state
While hopefully rare in practice, it might still happen that a client or a sync backend is stuck in a weird/invalid state. Please report such cases as a [GitHub issue](https://github.com/livestorejs/livestore/issues).
To avoid being stuck, you can either:
- use a different `storeId`
- or reset the sync backend and local client for the given `storeId`
## React related issues
### Rebase loop triggers repeated event emissions
**Symptoms**
- Logs repeatedly show messages like: `merge:pull:rebase: rollback` and the same local events being rolled back and replayed.
**Why this happens**
- LiveStore uses optimistic local commits and rebasing during sync. On pull, the client rolls back local events, applies the remote head, then replays local events — and only then refreshes reactive queries (transactional from the UI’s perspective).
- If your app emits events from a reactive effect based on read‑model changes (e.g., “when the latest item changes, emit X”), the effect runs after each completed rebase. Without a rebase‑safe guard, it can emit the same logical event repeatedly across rebases.
- Multiple windows/devices for the same user can also emit the same logical event at nearly the same time. Even if writes are idempotent, the extra local commits still cause additional rebases and effect re‑runs.
**Circuit breaker fix (rebase‑safe)**
- Implement a session‑local circuit breaker: track which logical actions you’ve already emitted in this session using an in‑memory set. This guard is not affected by rollback/replay, so it prevents re‑emitting across rebases.
- Avoid feedback loops: don’t use the same store state you’re writing as the primary trigger.
**Example pattern (React)**
```tsx
// Pseudocode – rebase‑safe circuit breaker for side‑effects
const circuitBreakerRef = useRef<Set<string>>(new Set())
const latest = useLatestItemFromStore() // derived read‑model state

React.useEffect(() => {
  if (!latest) return
  const key = latest.logicalId
  if (circuitBreakerRef.current.has(key)) return // session‑local guard (not rolled back)
  circuitBreakerRef.current.add(key) // open the breaker before emitting
  store.commit(events.someEvent({ id: deterministicIdFrom(latest), ... }))
}, [latest, store])
```
**Checklist**
- Use a deterministic id for the event when possible.
- Gate emission with a session‑local circuit breaker to avoid re‑emitting across rebases.
- Keep effect dependencies minimal; avoid depending on store state that you also update in the same effect.
**Note on terminology**
- “Circuit breaker” here refers to an app‑level guard that prevents repeated side‑effect emissions across rebases. It is distinct from the traditional network/service circuit‑breaker pattern (failure threshold/open/half‑open) but serves a similar purpose of preventing repeated work under specific conditions.
### Query doesn't update properly
If you notice the result of a `useQuery` hook is not updating properly, you might be missing some dependencies in the query's hash.
For example, the following query:
```ts
// Don't do this: `issueId` is missing in the query's deps
const query$ = useQuery(queryDb(tables.issues.query.where({ id: issueId }).first()))

// Do this instead
const query$ = useQuery(queryDb(tables.issues.query.where({ id: issueId }).first(), { deps: [issueId] }))
```
## `node_modules` related issues
### `Cannot execute an Effect versioned ...`
If you're seeing an error like `RuntimeException: Cannot execute an Effect versioned 3.10.13 with a Runtime of version 3.10.12`, you likely have multiple versions of `effect` installed in your project.
As a first step you can try deleting `node_modules` and running `pnpm install` again.
If the issue persists, you can try to add `"resolutions": { "effect": "3.15.2" }` or [`pnpm.overrides`](https://pnpm.io/package_json#pnpmoverrides) to your `package.json` to force the correct version of `effect` to be used.
## Package management
- Please make sure you only have a single version of any given package in your project (incl. LiveStore and other packages like `react`, etc). Having multiple versions of the same package can lead to all kinds of issues and should be avoided. This is particularly important when using LiveStore in a monorepo.
- Setting `resolutions` in your root `package.json` or tools like [PNPM catalogs](https://pnpm.io/catalogs) or [Syncpack](https://github.com/JamieMason/syncpack) can help you manage this.
# [Version control](https://dev.docs.livestore.dev/patterns/version-control/)
LiveStore's event sourcing approach allows you to implement version control functionality in your application (similar to Git but for your application domain). This could include features like:
- Branching
- Semantic commit messages & grouping
- History tracking
- Semantic/interactive merges
# [LiveStore CLI](https://dev.docs.livestore.dev/reference/cli/)
The LiveStore CLI provides tools for creating new projects and integrating with AI assistants through MCP (Model Context Protocol).
:::caution[Experimental - Not Production Ready]
The LiveStore CLI is an experimental preview and not ready for production use. APIs, commands, and functionality may change significantly. Use for development and evaluation purposes only.
:::
## Installation
You can use the LiveStore CLI in several ways:
```bash
# Recommended: Use bunx (no installation needed)
bunx @livestore/cli --help
# Alternative options:
npm install -g @livestore/cli # Global install
npm install -D @livestore/cli # Project install
npx @livestore/cli --help # Use with npx
```
## Commands
### `bunx @livestore/cli new-project`
Create a new LiveStore project from available examples.
```bash
# Interactive selection
bunx @livestore/cli new-project
# Specify example and path
bunx @livestore/cli new-project --example web-todomvc my-project
# Use specific branch
bunx @livestore/cli new-project --branch dev
```
### `bunx @livestore/cli mcp`
MCP server tools for AI assistant integration. See [MCP Integration](/reference/mcp) for details.
```bash
# Start MCP server
bunx @livestore/cli mcp
# Available subcommands
bunx @livestore/cli mcp coach # AI coaching assistant (requires API key env var)
bunx @livestore/cli mcp tools # Development tools server
# Coach command requires API key - check implementation for specific variable name
# Example: OPENAI_API_KEY=your_key bunx @livestore/cli mcp coach
```
## Global Options
- `--verbose` - Enable verbose logging
- `--help` - Show command help
# [Concepts](https://dev.docs.livestore.dev/reference/concepts/)

## Overview
- Adapter (platform adapter)
- An adapter can instantiate a client session for a given platform (e.g. web, Expo)
- Client
- A logical group of client sessions
- Identified by a `clientId` - a randomly generated 6-char nanoid
- Each client has at least one client session
- Sessions within a client share local data
- Client session
- An instance within a client
- Identified by a `sessionId`
- In web: sessionId can persist across tab reloads
- Multiple sessions can exist within a single client (e.g., multiple browser tabs)
- Store
- Reactivity graph
- [Devtools](/reference/devtools)
- [Events](/reference/events)
- Event definition
- Eventlog
- Synced vs client-only events
- Framework integration
- A framework integration is a package that provides a way to integrate LiveStore with a framework (e.g. React, Solid, Svelte, etc.)
- [Reactivity system](/reference/reactivity-system)
- Db queries `queryDb()`
- Computed queries `computed()`
- Signals `signal()`
- Schema
- LiveStore uses schema definitions for the following cases:
- [Event definitions](/reference/events)
- [SQLite state schema](/reference/state/sqlite-schema)
- [Query result schemas](/reference/state/sql-queries)
- LiveStore uses the [Effect Schema module](/patterns/effect) to define fine-granular schemas
- State
- Derived from the eventlog via materializers
- Materializer
- Event handler function that maps an event to a state change
- SQLite state / database
- In-memory SQLite database within the client session thread (usually main thread)
- Used by the reactivity graph
- Persisted SQLite database (usually running on the leader thread)
- Fully derived from the eventlog
- [Store](/reference/store)
- A store exposes most of LiveStore's functionality to the application layer and is the main entry point for using LiveStore.
- To create a store you need to provide a schema and a platform adapter which creates a client session.
- A store is often created, managed and accessed through a framework integration (like React).
- A store is identified by a `storeId` which is also used for syncing events between clients.
- Sync provider
- A sync provider is a package that provides a sync backend and a sync client.
- Sync backend
- A central server that is responsible for syncing the eventlog between clients
### Implementation details
- Leader thread
- Responsible for syncing and persisting of data
- Sync processor
- LeaderSyncProcessor
- ClientSessionSyncProcessor
## Pluggable architecture
LiveStore is designed to be pluggable in various ways:
- Platform adapters
- Sync providers
- Framework integrations
## Important Notes on Identity
- LiveStore does not have built-in concepts of "users" or "devices"
- User identity must be modeled within your application domain through events and application logic
- The `clientId` identifies a client instance, not a user
- Multiple clients can represent the same user (e.g., different browsers or devices)
# [Debugging a LiveStore app](https://dev.docs.livestore.dev/reference/debugging/)
When working on a LiveStore app you might end up in situations where you need to debug things. LiveStore is built with debuggability in mind and tries to make your life as a developer as easy as possible.
Here are a few things that LiveStore offers to help you debug your app:
- [OpenTelemetry](/reference/opentelemetry) integration for tracing / metrics
- [Devtools](/reference/devtools) for inspecting the state of the store
- Store helper methods
## Debugging helpers on the store
The `store` exposes a `_dev` property which contains a few helpers that can help you debug your app.
## Other recommended practices and tools
- Use the step debugger
# [Devtools](https://dev.docs.livestore.dev/reference/devtools/)
NOTE: Once LiveStore is open source, the devtools will be a [sponsor-only benefit](/misc/sponsoring).
## Features
- Real-time data browser with 2-way sync

- Query inspector

- Eventlog browser

- Sync status

- Export/import

- Reactivity graph / signals inspector

- SQLite playground

## Adapters
### `@livestore/adapter-web`:
Requires the `@livestore/devtools-vite` package to be installed and configured in your Vite config:
```ts
// vite.config.js
import { defineConfig } from 'vite'
import { livestoreDevtoolsPlugin } from '@livestore/devtools-vite'

export default defineConfig({
  // ...
  plugins: [
    livestoreDevtoolsPlugin({ schemaPath: './src/livestore/schema.ts' }),
  ],
})
```
The devtools can be opened in a separate tab (e.g. via `localhost:3000/_livestore/web`). You should see the devtools URL logged in the browser console when running the app.
#### Chrome extension
You can also use the Devtools Chrome extension.

Please make sure to manually install the extension version matching the LiveStore version you are using by downloading the appropriate version from the [GitHub releases page](https://github.com/livestorejs/livestore/releases) and installing it manually via `chrome://extensions/`.
To install the extension:
1. **Unpack the ZIP file** (e.g. `livestore-devtools-chrome-0.3.0.zip`) into a folder on your computer.
2. Navigate to `chrome://extensions/` and enable **Developer mode** (toggle in the top-right corner).
3. Click **"Load unpacked"** and select the unpacked folder or drag and drop the folder onto the page.
### `@livestore/adapter-expo`:
Requires the `@livestore/devtools-expo` package to be installed and configured in your metro config:
```ts
// metro.config.js
const { getDefaultConfig } = require('expo/metro-config')
const { addLiveStoreDevtoolsMiddleware } = require('@livestore/devtools-expo')
const config = getDefaultConfig(__dirname)
addLiveStoreDevtoolsMiddleware(config, { schemaPath: './src/livestore/schema.ts' })
module.exports = config
```
You can open the devtools by pressing `Shift+m` in the Expo CLI process and then selecting `@livestore/devtools-expo` which will open the devtools in a new tab.
### `@livestore/adapter-node`:
Devtools are configured out of the box for the `makePersistedAdapter` variant (note: currently not supported for the `makeInMemoryAdapter` variant).
You should see the Devtools URL logged when running the app.
# [Events](https://dev.docs.livestore.dev/reference/events/)
## Event definitions
There are two types of events:
- `synced`: Events that are synced across clients
- `clientOnly`: Events that are only processed locally on the client (but still synced across client sessions e.g. across browser tabs/windows)
An event definition consists of a unique name of the event and a schema for the event arguments. It's recommended to version event definitions to make it easier to evolve them over time.
Events will be synced across clients and materialized into state (i.e. SQLite tables) via [materializers](/reference/state/materializers).
### Example
```ts filename="reference/events/livestore-schema.ts"
// livestore/schema.ts
export const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
  }),
  todoCompleted: Events.synced({
    name: 'v1.TodoCompleted',
    schema: Schema.Struct({ id: Schema.String }),
  }),
} as const
```
### Best Practices
- It's strongly recommended to use past-tense event names (e.g. `todoCreated`/`createdTodo` instead of `todoCreate`/`createTodo`) to indicate something already occurred.
- When generating IDs for events (e.g. for the todo in the example above), it's recommended to use a globally unique ID generator (e.g. UUID, nanoid, etc.) to avoid conflicts. For convenience, `@livestore/livestore` re-exports the `nanoid` function (see the example after this list).
- TODO: write down more best practices
- TODO: mention AI linting (either manually or via a CI step)
- core idea: feed list of best practices to AI and check if events adhere to them + get suggestions if not
- It's recommended to avoid `DELETE` events and instead use soft-deletes (e.g. add a `deleted` date/boolean column with a default value of `null`). This helps avoid some common concurrency issues.
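For example (a sketch, assuming a `todoDeleted` event with a `deletedAt` field as recommended above):

```ts
import { nanoid } from '@livestore/livestore'

declare const store: Store
declare const events: any // your event definitions
declare const someTodoId: string

// Globally unique id generated at the call site
store.commit(events.todoCreated({ id: nanoid(), text: 'Buy milk' }))

// Soft delete instead of a DELETE event
store.commit(events.todoDeleted({ id: someTodoId, deletedAt: new Date() }))
```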
### Unknown events
Older clients might receive events that were introduced in newer app versions. Configure the behaviour centrally via `unknownEventHandling` when constructing the schema:
```ts
const schema = makeSchema({
  events,
  state,
  unknownEventHandling: {
    strategy: 'callback',
    onUnknownEvent: (event, error) => {
      console.warn('LiveStore saw an unknown event', { event, reason: error.reason })
    },
  },
})
```
Pick `'warn'` (default) to log every occurrence, `'ignore'` to silently drop new events until the client updates, `'fail'` to halt immediately, or `'callback'` to delegate to custom logging/telemetry while continuing to process the log.
### Schema evolution \{#schema-evolution\}
- Event definitions can't be removed after they were added to your app.
- Event schema definitions can be evolved as long as the changes are forward-compatible.
- That means data encoded with the old schema can be decoded with the new schema.
- In practice, this means ... (see the example after this list)
- for structs ...
- you can add new fields if they have default values or are optional
- you can remove fields
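For example (a sketch): adding an optional field to an existing event definition is forward-compatible, because payloads encoded with the old schema still decode with the new one:

```ts
import { Events, Schema } from '@livestore/livestore'

// Before: Schema.Struct({ id: Schema.String, text: Schema.String })
export const todoCreated = Events.synced({
  name: 'v1.TodoCreated',
  schema: Schema.Struct({
    id: Schema.String,
    text: Schema.String,
    dueDate: Schema.optional(Schema.Date), // new optional field: old events still decode
  }),
})
```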
## Committing events
```ts filename="reference/events/commit.ts"
// somewhere in your app
declare const store: Store
store.commit(events.todoCreated({ id: '1', text: 'Buy milk' }))
```
## Eventlog
The history of all events that have been committed forms the "eventlog". It is persisted in the client as well as in the sync backend.
Example `eventlog.db`:

# [MCP Integration](https://dev.docs.livestore.dev/reference/mcp/)
LiveStore includes MCP (Model Context Protocol) integration that allows AI assistants like Claude to access LiveStore documentation, examples, and development tools.
:::caution[Experimental]
The MCP integration is experimental and under active development. Features may change.
:::
For installation and general CLI usage, see the [LiveStore CLI](/reference/cli) documentation.
## What is MCP?
MCP (Model Context Protocol) is a standard for providing AI assistants with access to external resources and tools. LiveStore's MCP server gives AI assistants access to:
- LiveStore documentation and guides
- Schema examples for common app types
- Development tools and utilities
## Usage
Start the MCP server:
```bash
bunx @livestore/cli mcp
```
## Available Commands
### `bunx @livestore/cli mcp coach`
Starts an AI coaching assistant with access to LiveStore documentation and best practices.
### `bunx @livestore/cli mcp tools`
Provides development tools and utilities for working with LiveStore projects.
### LiveStore Runtime Tools
- `livestore_instance_connect`
- Connects a single in-process LiveStore instance by dynamically importing a module that exports `schema` and a `syncBackend` factory (and optionally `syncPayload`).
- Notes:
- Only one instance can be active at a time; connecting again shuts down and replaces the previous instance.
- Reconnecting creates a fresh, in-memory client database. The visible state is populated by your backend's initial sync. Until sync completes, queries may return empty or partial results.
- Module contract (generic example):
```ts
export { schema } from './src/livestore/schema.ts'
export const syncBackend = makeWsSync({ url: process.env.LIVESTORE_SYNC_URL ?? 'ws://localhost:8787' })
export const syncPayload = { authToken: process.env.LIVESTORE_SYNC_AUTH_TOKEN ?? 'insecure-token-change-me' }
```
- Params example: `{ "storePath": ".ts", "storeId": "" }`
- Returns example:
`{ "storeId": "", "clientId": "client-123", "sessionId": "session-abc", "schemaInfo": { "tableNames": ["..."], "eventNames": ["..."] } }`
- `livestore_instance_query`
- Executes raw SQL against the client database (read-only).
- Notes:
- SQLite dialect; use valid SQLite syntax.
- `bindValues` must be an array (positional `?`) or a record (named `$key`). Do not pass stringified JSON.
- Params example (positional): `{ "sql": "SELECT * FROM my_table WHERE userId = ?", "bindValues": ["u1"] }`
- Params example (named): `{ "sql": "SELECT * FROM my_table WHERE userId = $userId", "bindValues": { "userId": "u1" } }`
- Returns example: `{ "rows": [{ "col": "value" }], "rowCount": 1 }`
- `livestore_instance_commit_events`
- Commits one or more events defined by your connected schema.
- Notes:
- Use the canonical event name declared in your schema (e.g., `v1.EntityCreated`).
- `args` must be a non-stringified JSON object matching the event schema. Date fields typically accept ISO 8601 strings.
- Params example: `{ "events": [{ "name": "v1.EntityCreated", "args": { "id": "e1", "title": "Hello", "createdAt": "2024-01-01T00:00:00.000Z" } }] }`
- Returns example: `{ "committed": 1 }`
- `livestore_instance_status`
- Reports instance/runtime info.
- Returns example (connected): `{ "_tag": "connected", "storeId": "", "clientId": "client-123", "sessionId": "session-abc", "tableCounts": { "my_table": 12 } }`
- Returns example (not connected): `{ "_tag": "disconnected" }`
- `livestore_instance_disconnect`
- Disconnects the current LiveStore instance and releases resources.
- Returns: `{ "_tag": "disconnected" }`
## Local Cloudflare Sync (dev)
Run a local Cloudflare sync backend:
1. Start the sync worker (wrangler):
- `cd tests/integration/src/tests/adapter-cloudflare/fixtures`
- `wrangler dev`
- You should see an info page at `http://localhost:8787/`.
2. Start the MCP server in another terminal:
- `bunx @livestore/cli mcp server`
3. From your MCP client (e.g., Claude Desktop), call tools:
- Use your own MCP module path and storeId. The repo also provides an example: `examples/cf-chat/mcp.ts`.
- Connect: `livestore_instance_connect` with `{ "storePath": ".ts", "storeId": "" }`
- Commit: `livestore_instance_commit_events` with `[ { "name": "v1.EntityCreated", "args": { "id": "e1", "title": "Hello", "createdAt": "2024-01-01T00:00:00.000Z" } } ]`
- Query: `livestore_instance_query` with `{ "sql": "SELECT * FROM my_table ORDER BY createdAt DESC LIMIT 5" }`
- Status: `livestore_instance_status`
- Disconnect: `livestore_instance_disconnect`
## Adding to Claude
To use with Claude Desktop, add the MCP server to your Claude configuration:
```json
{
  "mcpServers": {
    "livestore": {
      "command": "bunx",
      "args": ["@livestore/cli", "mcp"]
    }
  }
}
```
## Available Resources
The MCP server provides access to:
- **Documentation**: Overview, features, getting started guides
- **Architecture**: Technical design and principles
- **Schema Examples**: Pre-built schemas for todo, blog, e-commerce, and social apps
- **Development Tools**: Project scaffolding and utilities
This enables AI assistants to provide context-aware help with LiveStore development.
# [OpenTelemetry](https://dev.docs.livestore.dev/reference/opentelemetry/)
LiveStore has built-in support for OpenTelemetry.
## Usage with React
```tsx
// otel.ts
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web'

const makeTracer = () => {
  const url = import.meta.env.VITE_OTEL_EXPORTER_OTLP_ENDPOINT

  const provider = new WebTracerProvider({
    spanProcessors: [new SimpleSpanProcessor(new OTLPTraceExporter({ url }))],
  })

  provider.register()

  return provider.getTracer('livestore')
}

export const tracer = makeTracer()

// In your main entry file: pass the tracer to the provider
// (the provider JSX was omitted in the original snippet; the props are a sketch)
export const App: React.FC = () => (
  <LiveStoreProvider schema={schema} adapter={adapter} otelOptions={{ tracer }} batchUpdates={batchUpdates}>
    {/* ... */}
  </LiveStoreProvider>
)

// And in your `livestore.worker.ts`
makeWorker({ schema, otelOptions: { tracer } })
```
# [Store](https://dev.docs.livestore.dev/reference/store/)
The `Store` is the most common way to interact with LiveStore from your application code. It provides a way to query data, commit events, and subscribe to data changes.
## Creating a store
For how to create a store in React, see the [React integration docs](/reference/framework-integrations/react-integration). The following example shows how to create a store manually:
```ts filename="reference/store/create-store.ts"
const adapter = makeAdapter({
  storage: { type: 'fs' },
  // sync: { backend: makeWsSync({ url: '...' }) },
})

export const bootstrap = async () => {
  const store = await createStorePromise({
    schema,
    adapter,
    storeId: 'some-store-id',
  })

  return store
}
```
```ts filename="reference/store/schema.ts"
const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text(),
      completed: State.SQLite.boolean({ default: false }),
    },
  }),
} as const

const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
  }),
} as const

const materializers = State.SQLite.materializers(events, {
  [events.todoCreated.name]: defineMaterializer(events.todoCreated, ({ id, text }) =>
    tables.todos.insert({ id, text, completed: false }),
  ),
})

const state = State.SQLite.makeState({ tables, materializers })

export const schema = makeSchema({ events, state })

export const storeTables = tables
export const storeEvents = events
```
## Using a store
### Querying data
```ts filename="reference/store/query-data.ts"
declare const store: Store

const todos = store.query(storeTables.todos)

console.log(todos)
```
### Subscribing to data
```ts filename="reference/store/subscribe.ts"
declare const store: Store

const unsubscribe = store.subscribe(storeTables.todos, (todos) => {
  console.log(todos)
})

unsubscribe()
```
### Committing events
```ts filename="reference/store/commit-event.ts"
declare const store: Store
store.commit(storeEvents.todoCreated({ id: '1', text: 'Buy milk' }))
```
### Shutting down a store
LiveStore provides two APIs for shutting down a store:
```ts filename="reference/store/shutdown.ts"
declare const store: Store

const effectShutdown = Effect.gen(function* () {
  yield* Effect.log('Shutting down store')
  yield* store.shutdown()
})

const shutdownWithPromise = async () => {
  await store.shutdownPromise()
}
```
## Multiple Stores
You can create and use multiple stores in the same app. This can be useful when breaking up your data model into smaller pieces.
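For example (a sketch reusing the `schema` and `adapter` from the examples above):

```ts
const workspaceStore = await createStorePromise({ schema, adapter, storeId: 'workspace-alice' })
const chatStore = await createStorePromise({ schema, adapter, storeId: 'chat-alice' })

// Each store has its own eventlog, state and sync scope (keyed by `storeId`)
```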
## Development/debugging helpers
A store instance also exposes a `_dev` property that contains some helpful methods for development. For convenience, you can access a store on `globalThis`/`window` via `__debugLiveStore.default._dev` (`default` is the store id):
```ts
// Download the SQLite database
__debugLiveStore.default._dev.downloadDb()
// Download the eventlog database
__debugLiveStore.default._dev.downloadEventlogDb()
// Reset the store
__debugLiveStore.default._dev.hardReset()
// See the current sync state
__debugLiveStore.default._dev.syncStates()
```
# [Reactivity System](https://dev.docs.livestore.dev/reference/reactivity-system/)
LiveStore has a high-performance, fine-grained reactivity system built in which is similar to Signals (e.g. in [SolidJS](https://docs.solidjs.com/concepts/signals)).
## Defining reactive state
LiveStore provides 3 types of reactive state:
- Reactive SQL queries on top of SQLite state (`queryDb()`)
- Reactive state values (`signal()`)
- Reactive computed values (`computed()`)
Reactive state variables end on a `$` by convention (e.g. `todos$`). The `label` option is optional but can be used to identify the reactive state variable in the devtools.
### Reactive SQL queries
```ts filename="reference/reactivity-system/query-db.ts"
const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text(),
      completed: State.SQLite.boolean({ default: false }),
      createdAt: State.SQLite.datetime(),
    },
  }),
} as const

const uiState$ = signal({ showCompleted: false }, { label: 'uiState$' })

const todos$ = queryDb(tables.todos.orderBy('createdAt', 'desc'), { label: 'todos$' })

{
  const todos$ = queryDb(
    (get) => {
      const { showCompleted } = get(uiState$)
      return tables.todos.where(showCompleted ? { completed: true } : {})
    },
    { label: 'todos$' },
  )
}
```
### Signals
Signals are reactive state values that can be set and get. This can be useful for state that is not materialized from events into SQLite tables.
```ts filename="reference/reactivity-system/signals.ts"
declare const store: Store

const now$ = signal(Date.now(), { label: 'now$' })

setInterval(() => {
  store.setSignal(now$, Date.now())
}, 1000)

const num$ = signal(0, { label: 'num$' })
const increment = () => store.setSignal(num$, (prev) => prev + 1)

increment()
increment()

console.log(store.query(num$))
```
### Computed values
```ts filename="reference/reactivity-system/computed.ts"
const num$ = signal(0, { label: 'num$' })
const duplicated$ = computed((get) => get(num$) * 2, { label: 'duplicated$' })
```
## Accessing reactive state
Reactive state is always bound to a `Store` instance. You can access the current value of reactive state in the following ways:
### Using the `Store` instance
```ts filename="reference/reactivity-system/store-access.ts"
declare const store: Store

const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      title: State.SQLite.text(),
      completed: State.SQLite.boolean({ default: false }),
    },
  }),
} as const

const count$ = queryDb(tables.todos.count(), { label: 'count$' })

const count = store.query(count$)
console.log(count)

const unsubscribe = store.subscribe(count$, (value) => {
  console.log(value)
})

unsubscribe()
```
### Via framework integrations
#### React
```tsx filename="reference/reactivity-system/react-component.tsx"
declare const state$: LiveQueryDef

export const MyComponent: FC = () => {
  const value = useQuery(state$)
  // The original snippet's JSX was lost; a minimal placeholder:
  return <div>{value}</div>
}
```
### Reacting to changing variables passed to queries
If your query depends on a variable passed in by the component, use the deps array to react to changes in this variable.
```tsx filename="reference/reactivity-system/deps-query.tsx"
const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text(),
      completed: State.SQLite.boolean({ default: false }),
    },
  }),
} as const

declare const store: Store & ReactApi

export const todos$ = ({ showCompleted }: { showCompleted: boolean }) =>
  queryDb(
    () => {
      return tables.todos.where(showCompleted ? { completed: true } : {})
    },
    {
      label: 'todos$',
      deps: [showCompleted ? 'true' : 'false'],
    },
  )

export const MyComponent: FC<{ showCompleted: boolean }> = ({ showCompleted }) => {
  const todos = store.useQuery(todos$({ showCompleted })) as ReadonlyArray<{
    id: string
    text: string
    completed: boolean
  }>

  return <div>{todos.length} Done</div>
}
```
## Further reading
- [Riffle](https://riffle.systems/essays/prelude/): Building data-centric apps with a reactive relational database
- [Adapton](http://adapton.org/) / [miniAdapton](https://arxiv.org/pdf/1609.05337)
## Related technologies
- [Signia](https://signia.tldraw.dev/): Signia is a minimal, fast, and scalable signals library for TypeScript developed by TLDraw.
# [Overview and prerequisites](https://dev.docs.livestore.dev/tutorial/0-welcome/)
## Welcome to the LiveStore tutorial
This tutorial will guide you through the process of building a simple todo app with React, Vite & Tailwind, using LiveStore as its data layer.
It is focused on building a _minimalistic_ application to gradually introduce the main concepts of LiveStore. That being said, LiveStore itself has been specifically designed for large, complex applications, and shines especially when used in these contexts.
:::note[Goal of this tutorial: Education]
The goal of the tutorial is to _educate_. It reduces as much noise as possible so you can focus on the parts that actually matter for building an application with LiveStore.
**If you just want to see LiveStore in action and play around with it, consider setting up the [starter project](/getting-started/react-web/) directly.**
:::
## Prerequisites
### Useful knowledge
While the tutorial is aimed at LiveStore newcomers, it will be helpful to have some knowledge in basic areas of web development, such as [React](https://react.dev/), [TypeScript](https://www.typescriptlang.org/) and [event-driven architectures](/evaluation/how-livestore-works/#event-sourcing).
### Technical setup
- The tutorial assumes that you're using a Unix-like shell, e.g. by using commands like `mkdir` and `touch` for creating directories and files.
- It gives you the option to use either [Bun](https://bun.sh/) or [`pnpm`](https://pnpm.io/) as the package manager.
- You'll deploy the application to Cloudflare. If you don't have an account there yet, you can sign up for a free one [here](https://dash.cloudflare.com/sign-up) (or skip the deployment steps in the tutorial).
## What you'll do
Here's an overview of the steps you'll take in the tutorial:
1. Set up a starter project with React, Vite & Tailwind
- This project will be used as a starting point for the tutorial and already comes with basic functionality for a todo app.
- It uses local React state to manage the todo list. Throughout the tutorial you'll gradually replace the ephemeral, local state with LiveStore's persistent storage.
1. Deploy the application to Cloudflare
- This enables you to observe the evolution in behaviour as you're introducing LiveStore.
1. Add LiveStore to the project
- You'll add LiveStore dependencies to the project and implement persisting the todos so that they survive a page refresh.
1. Automatically sync data to Cloudflare
- You'll use LiveStore to set up syncing of the todo list data via Cloudflare Workers and Durable Objects in the background.
- Now your todos will not only survive a page refresh, but also automatically sync across browser sessions, and even across devices.
1. Expand the business logic with more LiveStore events
- You'll learn how to use LiveStore events to expand the business logic of the todo app.
1. Persist UI state
- LiveStore can also be used to persist UI state, such as the text from an input field or a filter selection.
## Credits
This tutorial has been written by [Nikolas Burk](https://x.com/nikolasburk).
# [1. Set up starter project with React, Vite & Tailwind](https://dev.docs.livestore.dev/tutorial/1-setup-starter-project/)
## Set up a starter project with React, Vite & Tailwind
We have prepared a [starter project](https://github.com/livestorejs/livestore/tree/dev/examples/tutorial-starter) for you that you can use as a starting point for the tutorial. Download it via the following command:
Once you've downloaded the project, you can navigate to the project directory and install the dependencies:
The project is currently set up as follows:
- Minimal project created via [`vite create`](https://vite.dev/guide/#scaffolding-your-first-vite-project) using React and TypeScript.
- Using [Tailwind CSS](https://tailwindcss.com/) for styling.
- Has basic functionality for adding and deleting todos via local [`React.useState`](https://react.dev/learn/state-a-components-memory).
## Understand the current project state
Run the app with:
Here's the UI you're going to see after adding a few todos:

Let's take a quick moment to understand how the app is currently implemented:
All relevant code lives in `App.tsx`. Here's a simplified version of it:
```ts
interface Todo {
  id: number
  text: string
}

function App() {
  const [todos, setTodos] = useState<Todo[]>([])
  const [input, setInput] = useState('')

  const addTodo = () => {
    const newTodo: Todo = {
      id: Date.now(),
      text: input,
    }
    setTodos([...todos, newTodo])
    setInput('')
  }

  const deleteTodo = (id: number) => {
    setTodos(todos.filter(todo => todo.id !== id))
  }

  return (
    // Render input text field and todo list ...
    // ... and invoke `addTodo` and `deleteTodo`
    // ... when the buttons are clicked.
  )
}
```
For any React developer, this is a very familiar setup:
You have two pieces of state:
- application state: `todos: Todo[]` → manipulated by the `addTodo` and `deleteTodo` functions.
- UI state: `input: string` → manipulated when the text in the input field changes.
The "problem" with this code is that the todo items are not _persisted_, meaning they vanish when:
- the page is refreshed in the browser.
- the development server is restarted.
In the next chapters, you'll learn how to persist the todos in the list, so that they'll "survive" both actions.
Even more: They will not only persist, they will automatically sync across multiple browser tabs/windows, and even across devices, without you needing to think about the syncing logic and managing remote state.
That's the power of LiveStore!
# [2. Deploy to Cloudflare Workers](https://dev.docs.livestore.dev/tutorial/2-deploy-to-cloudflare/)
In this tutorial, you're going to use Cloudflare for deployment.
:::note[Skip if you don't care about deployment]
If you're only interested in learning about concepts of LiveStore and don't care about deployment, you can skip this step and move to the next chapter.
**That being said, we recommend that you deploy the app so that you can experience first-hand the power of _syncing data across devices_ later in the tutorial.**
:::
## Configure the Vite plugin for Cloudflare
First, you're going to install [Cloudflare's Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/):
Now, add it to your `vite.config.ts`:
```diff title="vite.config.ts" lang="ts"
+import { cloudflare } from '@cloudflare/vite-plugin'
export default defineConfig({
plugins: [
react(),
tailwindcss(),
+ cloudflare(),
],
})
```
## Configure Wrangler
Cloudflare deployments can be managed via [Wrangler](https://developers.cloudflare.com/workers/wrangler/), so you'll first need to install the Wrangler CLI:
Now, create a `wrangler.toml`:
```bash
touch wrangler.toml
```
Add the following contents to it:
```toml title="wrangler.toml"
name = "livestore-todo-app"
compatibility_date = "2024-10-28"
[observability]
enabled = true
```
## Configure `deploy` script
In your `package.json`, add a new script called `deploy` that looks as follows:
## Deploy the app
Deploy the app by running the `deploy` script:
Here's the expected output:
```
➜ livestore-todo-app git:(main) ✗ pnpm run deploy
> livestore-todo-app@0.0.0 deploy /Users/nikolasburk/Projects/LiveStore/plain-react-tutorial/livestore-todo-app
> pnpm run build && wrangler deploy
> livestore-todo-app@0.0.0 build /Users/nikolasburk/Projects/LiveStore/plain-react-tutorial/livestore-todo-app
> tsc -b && vite build
vite v7.1.12 building for production...
✓ 29 modules transformed.
dist/.assetsignore 0.02 kB
dist/index.html 0.47 kB │ gzip: 0.30 kB
dist/wrangler.json 1.15 kB │ gzip: 0.56 kB
dist/assets/index-DEkdisv2.css 9.64 kB │ gzip: 2.74 kB
dist/assets/index-COIOT0yp.js 196.04 kB │ gzip: 61.45 kB
✓ built in 319ms
⛅️ wrangler 4.45.1
───────────────────
Using redirected Wrangler configuration.
- Configuration being used: "dist/wrangler.json"
- Original user's configuration: "wrangler.toml"
- Deploy configuration file: ".wrangler/deploy/config.json"
🌀 Building list of assets...
✨ Read 7 files from the assets directory /Users/nikolasburk/Projects/LiveStore/plain-react-tutorial/livestore-todo-app/dist
🌀 Starting asset upload...
🌀 Found 4 new or modified static assets to upload. Proceeding with upload...
+ /index.html
+ /assets/index-COIOT0yp.js
+ /livestore.svg
+ /assets/index-DEkdisv2.css
Uploaded 1 of 4 assets
Uploaded 2 of 4 assets
Uploaded 4 of 4 assets
✨ Success! Uploaded 4 files (2.67 sec)
Total Upload: 0.34 KiB / gzip: 0.23 KiB
Uploaded livestore-todo-app (13.40 sec)
Deployed livestore-todo-app triggers (4.40 sec)
https://livestore-todo-app.nikolas-burk.workers.dev
Current Version ID: 37ce16a4-0bfd-4ecb-829e-a2737c84d9cf
```
If everything worked as expected, your app is now running at: `https://livestore-todo-app.USERNAME.workers.dev`
Find your specific URL in the terminal output or navigate to the **Workers & Pages** section in your [Cloudflare Dashboard](https://dash.cloudflare.com/) to find the project and the URL.
The app should look and behave exactly like the local version from the previous step:

# [3. Read and write todos via LiveStore](https://dev.docs.livestore.dev/tutorial/3-read-and-write-todos-via-livestore/)
Now on to the fun part! In this section, you'll set up LiveStore so that the todos that you're creating will persist and survive page refreshes and dev server reloads.
:::note[Help improve LiveStore]
Remember that LiveStore is still early and its developer surface and API not yet fully optimized. Bear with us through the setup and boilerplate code that's needed for a running application—it'll be worth it!
Also: If you have ideas for improving the developer experience for LiveStore, please [raise an issue](https://github.com/livestorejs/livestore/issues). This project is fully open-source and depends on people like you.
:::
## Install LiveStore dependencies
Start by installing the necessary dependencies:
Here's an overview of each of these dependencies:
- `@livestore/livestore@0.4.0-dev.14` → Implements the core LiveStore functionality (schema, events, queries, ...).
- `@livestore/wa-sqlite@0.4.0-dev.14` → Implements usage of a [SQLite build in WebAssembly](https://github.com/livestorejs/wa-sqlite), so you can use SQLite inside your browser.
- `@livestore/adapter-web@0.4.0-dev.14` → Implements the [LiveStore web adapter.](https://dev.docs.livestore.dev/reference/platform-adapters/web-adapter/)
- `@livestore/react@0.4.0-dev.14` → Provides [LiveStore integration for React](https://dev.docs.livestore.dev/reference/framework-integrations/react-integration/).
- `@livestore/peer-deps@0.4.0-dev.14` → Required to [satisfy LiveStore peer dependencies](https://dev.docs.livestore.dev/misc/package-management/#peer-dependencies).
## Define your LiveStore schema
In this step, you're going to define the [schema](/reference/state/sqlite-schema/) that LiveStore uses to represent the data of your app. The schema is one of the core concepts of LiveStore.
Your LiveStore-related files typically live in a `livestore` directory in your app, so create that first:
```bash
mkdir src/livestore
touch src/livestore/schema.ts
```
The schema contains three major components:
- The [**table**](/reference/state/sqlite-schema/#defining-tables) structures of your local SQLite database.
- The [**events**](https://dev.docs.livestore.dev/reference/events/) that can be emitted by your app.
- The [**materializers**](https://dev.docs.livestore.dev/reference/state/materializers/) that use the events to alter the state of your database.
Here's how you define the schema for the current version of the todo app:
```ts title="src/livestore/schema.ts"
import { Events, makeSchema, Schema, State } from '@livestore/livestore'

export const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.integer({ primaryKey: true }),
      text: State.SQLite.text({ default: '' }),
    },
  }),
}

export const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.Number, text: Schema.String }),
  }),
  todoDeleted: Events.synced({
    name: 'v1.TodoDeleted',
    schema: Schema.Struct({ id: Schema.Number }),
  }),
}

const materializers = State.SQLite.materializers(events, {
  'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text }),
  'v1.TodoDeleted': ({ id }) => tables.todos.delete().where({ id }),
})

const state = State.SQLite.makeState({ tables, materializers })

export const schema = makeSchema({ events, state })
```
Here's a quick summary of the code:
- `tables` contains the definitions of your SQLite table structures. It currently defines a single `todos` table with two columns called `id` and `text`.
- `events` defines the types of events that your app can emit. It currently defines the `todoCreated` and `todoDeleted` events with an attached `schema` property which defines the shape of the event.
- `materializers` are the functions that are invoked for each event. In them, you define what should happen when the event gets fired. Right now, the `v1.TodoCreated` event results in an `insert` operation in the database while `v1.TodoDeleted` will remove a row from the database via a `delete` operation.
The `tables`, `events` and `materializers` are packaged up into a `schema` object that's needed in the parts of your app where you interact with LiveStore.
:::note[Versioning events and materializers]
You may have noticed that event and materializer names are prefixed with `v1`.
It's good practice in LiveStore to version your events and materializers to ensure future compatibility between them as your app evolves.
:::
## Configure your React/Vite app to use LiveStore
To now "connect" LiveStore with your React app, you need to:
1. Create the [LiveStore web worker](/reference/platform-adapters/web-adapter/#web-worker) that is responsible for the logic of persisting data on your file system.
2. Create a LiveStore [adapter](/reference/platform-adapters/web-adapter/) that enables persistence with local SQLite via [OPFS](https://web.dev/articles/origin-private-file-system).
3. Wrap your root component in `main.tsx` with the LiveStore [provider](https://dev.docs.livestore.dev/reference/framework-integrations/react-integration/#livestoreprovider) (which will receive both the web worker and adapter from above as arguments).
4. Update your Vite Config to work with LiveStore.
Let's get started!
First, create a new file for the LiveStore web worker:
```bash
touch src/livestore/livestore.worker.ts
```
Then, add the following code to it:
```ts file="src/livestore/livestore.worker.ts"
import { makeWorker } from '@livestore/adapter-web/worker'
import { schema } from './schema'

makeWorker({ schema })
```
This is boilerplate code that you'll almost never need to touch. (In this tutorial, it'll only be edited once when we introduce syncing data to Cloudflare.)
:::note[Special syntax for importing web worker files in Vite]
When importing this file, make sure to add the `?worker` extension to the import path to ensure that [Vite treats it as a worker file](https://vite.dev/guide/features.html#web-workers).
:::
Next, update `main.tsx` to create an adapter and wrap your `App` component inside the `LiveStoreProvider`:
```tsx title="src/main.tsx"
import { makePersistedAdapter } from '@livestore/adapter-web'
import LiveStoreSharedWorker from '@livestore/adapter-web/shared-worker?sharedworker'
import { LiveStoreProvider } from '@livestore/react'
import { createRoot } from 'react-dom/client'
import { unstable_batchedUpdates as batchUpdates } from 'react-dom'
import App from './App.tsx'
import LiveStoreWorker from './livestore/livestore.worker.ts?worker'
import { schema } from './livestore/schema.ts'

const adapter = makePersistedAdapter({
  storage: { type: 'opfs' },
  worker: LiveStoreWorker,
  sharedWorker: LiveStoreSharedWorker,
})

createRoot(document.getElementById('root')!).render(
  <LiveStoreProvider
    schema={schema}
    adapter={adapter}
    renderLoading={(_) => <>Loading LiveStore ({_.stage})...</>}
    storeId="todo-db-tutorial"
    batchUpdates={batchUpdates}
  >
    <App />
  </LiveStoreProvider>,
)
```
:::note[So many "workers" ...]
There's some ambiguity with the "worker" terminology in this tutorial. In general, there are three different kinds of Workers at play.
- The **LiveStore web worker** (`LiveStoreWorker`); this is the Worker you defined in `livestore.worker.ts`. Technically, this is a _browser_ web worker, i.e. a separate JavaScript thread that runs in the browser, isolated from the main UI thread.
- The **LiveStore shared web worker** (`LiveStoreSharedWorker`); you just imported it from `@livestore/adapter-web/shared-worker`, also a _browser_ web worker. It's actually more of an _implementation detail_ but currently required to be exposed, so that the setup works with Vite.
- The **Cloudflare Worker** that will automatically sync the data in the background; you'll set this up in the next step.
Note that both the LiveStore web worker and the LiveStore Shared web worker are regular [web workers,](https://www.dhiwise.com/post/web-workers-vs-service-workers-in-javascript#what-are-web-workers-) not [service workers](https://www.dhiwise.com/post/web-workers-vs-service-workers-in-javascript#what-are-service-workers-)!
Here's how to think about the workers in the context of browser tabs (or windows):
```
Tab 1 ──┐
Tab 2 ──┼──→ Shared Worker (tab coordination) ──→ Web Worker (persistence)
Tab 3 ──┘
```
:::
Finally, update your Vite Config to enable usage of WebAssembly:
```diff title="vite.config.ts" lang="ts"
export default defineConfig({
plugins: [
react(),
tailwindcss(),
cloudflare(),
],
+ optimizeDeps: {
+ exclude: ['@livestore/wa-sqlite'],
+ },
})
```
Now, your app is set up to start reading and writing data in a local SQLite database via LiveStore. If you run the app via `pnpm dev`, you'll briefly see a loading screen (implemented via the `renderLoading` prop on the `LiveStoreProvider`) before the todo list UI from the previous steps will appear again:

The todos themselves are not yet persisted because you haven't modified the logic for managing state in `App.tsx`. That's what you'll do next.
## Read and write todos from local SQLite via LiveStore
The current version of the app still stores the todos _in-memory_ via the [state](https://react.dev/learn/state-a-components-memory) of your `App` component. However, with the basic LiveStore setup in place, you can now move to persisting your todos inside SQLite via LiveStore!
In order to read todos, you need to define a LiveStore [query](/reference/state/sql-queries). LiveStore queries are _live_ or _reactive_, meaning they will automatically update your UI components when the underlying data changes.
Here's how you can define and use a query to fetch all todos from the local SQLite database using LiveStore inside the `App` component (you don't need to update your code yet, you'll get the full snippet at the end of this section):
```diff title="src/App.tsx" lang="ts"
function App() {
+ const { store } = useStore()
// The trailing `$` is a convention to indicate that this
// variable is a "live query".
+ const todos$ = queryDb(() => tables.todos.select())
// `todos` is an array containing all the todos from the DB.
// When rendering the component, you can do {todos.map(todo => ...
// and access `todo.text` and `todo.id` as before.
+ const todos = store.useQuery(todos$)
- const [todos, setTodos] = useState([])
// ... remaining code for the `App` component
}
```
With this change, you're now reading the `todos` from LiveStore (where they're persisted) instead of using React's ephemeral state.
This was only half of the job though: Right now your code would throw a type error because it still uses `setTodos` to update the local state. You need to update this to use the `todoCreated` and `todoDeleted` events you defined in `src/livestore/schema.ts`:
```diff title="src/App.tsx" lang="ts"
function App() {
  const { store } = useStore()
  const todos$ = queryDb(() => tables.todos.select())
  const todos = store.useQuery(todos$)
  // Commit an event to the `store` instead of
  // updating local React state.
  const addTodo = () => {
+    store.commit(
+      events.todoCreated({ id: Date.now(), text: input }),
+    )
    setInput('')
  }
  // Commit an event to the `store` instead of
  // updating local React state.
  const deleteTodo = (id: number) => {
+    store.commit(
+      events.todoDeleted({ id }),
+    )
  }
  // ... remaining code for the `App` component
}
```
See how the data now flows through the app _unidirectionally_ with this setup?
Let's follow it from the first time the app is started:
1. The `App` component registers the `todos$` "live query" with the store.
1. The query fires initially; the returned `todos` array is empty.
1. The `App` component renders an empty list.
1. A user adds a `todo`; the `todoCreated` event is committed to the store.
1. The `v1.TodoCreated` materializer is invoked and writes the data into the DB.
1. The `todos$` query fires again because the state of the DB changed; the returned `todos` array now contains the newly created todo.
1. The `App` component renders a single todo.
Now try it out! Here's the full code for `App.tsx` that you can copy and paste:
```ts title="src/App.tsx"
import React, { useState } from 'react'
import { queryDb } from '@livestore/livestore'
import { useStore } from '@livestore/react'
import { events, tables } from './livestore/schema'

function App() {
  const { store } = useStore()
  const todos$ = queryDb(() => tables.todos.select())
  const todos = store.useQuery(todos$)
  const [input, setInput] = useState('')
  const addTodo = () => {
    if (input.trim()) {
      store.commit(
        events.todoCreated({ id: Date.now(), text: input }),
      )
      setInput('')
    }
  }
  const deleteTodo = (id: number) => {
    store.commit(
      events.todoDeleted({ id }),
    )
  }
  const handleKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === 'Enter') {
      addTodo()
    }
  }
  return (
  )
}

export default App
```
Try adding a few todos and then refresh the browser (or restart your development server):

In addition to the persistence, you can now observe the following behaviour:
- You can open multiple browser tabs/windows, make edits to the todo list and see live updates when you make changes in one of them.
- You can stop and restart the local version of your app and the todos will be persisted.
- If you open the app in an incognito tab or a different browser, the list of todos will be empty again; that's because the _Shared web worker_ only works in the same browser "session"; incognito tabs and different browsers use a different session.
Expand to learn about browser isolation/sessions
**Regular browser tab/windows**
- All regular (i.e. "not incognito") tabs/windows share the same browser session.
- The LiveStore shared web worker is shared across all these tabs/windows.
- OPFS storage is shared across the origin.
- ✅ Real-time sync works because they're all using the same LiveStore shared web worker and storage.
**Incognito windows**
- Incognito mode creates a separate session from regular browsing.
- However, multiple incognito tabs/windows opened in the same incognito session still share:
  - The same LiveStore shared web worker instance.
  - The same OPFS storage (within that incognito session).
- Each incognito session gets isolated storage, but windows within that session are _not_ isolated from each other.
Think of it this way:
- Regular tabs/windows = Session A (all tabs/windows share the same data).
- Incognito tabs/windows = Session B (all incognito tabs/windows share the same data).
**To get true isolation**
If you want completely isolated instances, you'd need to use:
- **Different browser profiles** - Chrome profiles are truly isolated.
- **Different browsers** - Chrome vs Firefox vs Safari.
- **Different devices** (only possible with the deployed app) - Your computer vs your phone.
Now, you can also deploy the app:
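```bash
pnpm run deploy
```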
Here's a little GIF demonstrating the current state of the live updates and browser isolation. On the left are two regular Chrome windows; on the right, two Safari windows.

- When a change is made in one Chrome window, it is automatically reflected in the other Chrome window.
- Similarly, when a change is made in one Safari window, it is automatically reflected in the other Safari window.
To recap, here's a visualization of the data flow in the app at this point:
## Set up LiveStore DevTools
You may have wondered: "If the data is persisted, _where_ is it?". If the data is somewhere on your file system, you _should_ be able to spin up a database viewer of your choice and inspect the local SQLite DB file.
Unfortunately, that's not how OPFS works. While the SQLite files do exist _somewhere_ on your file system, they are hidden in the browser's internals and not easily accessible.
That being said, there is a convenient way to actually see the data on your file system: using [LiveStore DevTools](/reference/devtools/)!
:::note[LiveStore DevTools are a sponsor-only feature]
Our goal is to grow the project in a [sustainable](/misc/sponsoring/#goal-sustainable-open-source) way (e.g. not relying on VC funding); that's why DevTools are currently a [sponsor-only](/misc/sponsoring/) feature.
That being said, we still include them in this tutorial since you'll be able to use them without sponsorship for one week.
LiveStore can only exist thanks to its generous sponsors, please consider becoming one of them if the project is of value to you. ❤️
:::
To start using DevTools, first install the devtools package for your setup; for this Vite-based app, that's `@livestore/devtools-vite` (installed here as a dev dependency):
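```bash
pnpm add -D @livestore/devtools-vite
```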
Next, update your Vite config to add the `livestoreDevtoolsPlugin`. It should look as follows:
```diff title="vite.config.ts" lang="ts"
+import { livestoreDevtoolsPlugin } from '@livestore/devtools-vite'

// https://vite.dev/config/
export default defineConfig({
  plugins: [
    react(),
    tailwindcss(),
    cloudflare(),
+    livestoreDevtoolsPlugin({ schemaPath: './src/livestore/schema.ts' }),
  ],
  optimizeDeps: {
    exclude: ['@livestore/wa-sqlite'],
  },
})
```
Now, in your browser's developer tools, you can find the **LiveStore** tab with details about the current LiveStore internals:

There are several tabs in the LiveStore DevTools:
- **Database**: Browse the data that's currently in your DB, send SQL queries, export the database to your file system, and more.
- **Queries**: Shows the last results of the currently active live queries.
- **Events**: The [event log](/evaluation/event-sourcing/) that stores all the events emitted by your app.
If you're curious, add and delete a few todos via the UI and observe what's happening in the three tabs.
# [4. Sync data to Cloudflare](https://dev.docs.livestore.dev/tutorial/4-sync-data-via-cloudflare/)
In this step, you're going to introduce a [sync](/reference/syncing/) backend. This sync backend will:
- Have a "live connection" (via WebSockets) to _all_ the clients that are running your app.
- Propagate events to _all_ other clients whenever a particular client emits an event.
Notice how you'll achieve this by _only_ changing the data layer of your application!
You won't need to touch the code that you've already written in `main.tsx` and `App.tsx`—all the syncing is handled by LiveStore under the hood without you needing to worry about it in your application code.
You're going to implement the sync backend using [Cloudflare Workers](https://developers.cloudflare.com/workers/) and [Durable Objects](https://developers.cloudflare.com/durable-objects/), using the [@livestore/sync-cf](/reference/syncing/sync-provider/cloudflare/) package.
## Install the Cloudflare sync provider package
Run the following command in your terminal:
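```bash
pnpm add @livestore/sync-cf
```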
## Create the sync backend
Now, create a new `sync` directory and a new file for the sync backend:
```bash
mkdir src/sync
touch src/sync/client-ws.ts
```
Then, add the following code to it:
```ts title="src/sync/client-ws.ts"
// NOTE: the exact import paths below are assumed and may differ between package versions.
import { makeDurableObject } from '@livestore/sync-cf/cf-worker'
import * as SyncBackend from '@livestore/sync-cf/cf-worker'
import type * as CfTypes from '@cloudflare/workers-types'

export class SyncBackendDO extends makeDurableObject({
  onPush: async (message, context) => {
    console.log('client-ws.ts: onPush', message, context)
  },
  onPull: async (message, context) => {
    console.log('client-ws.ts: onPull', message, context)
  },
}) {}

export default {
  async fetch(request: CfTypes.Request, _env: SyncBackend.Env, ctx: CfTypes.ExecutionContext) {
    const searchParams = SyncBackend.matchSyncRequest(request)
    console.log('client-ws.ts: fetch in with searchParams', searchParams)
    if (searchParams !== undefined) {
      return SyncBackend.handleSyncRequest({
        request,
        searchParams,
        ctx,
        syncBackendBinding: 'SYNC_BACKEND_DO',
      })
    }
    return new Response('Not Found', { status: 404 })
  },
}
```
This code will be deployed as a Durable Object (think of it as a "Cloudflare Worker with WebSocket capabilities and attached storage").
In this tutorial, you won't need to go beyond the basic boilerplate code above.
However, in more advanced scenarios, there are ways for you to hook into the connection between your client apps and the sync backend. This can be useful e.g. when your app requires authentication and you need to send along an auth token; see the sketch below.
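For illustration, here's a minimal sketch of such a hook, building on the `onPush` callback from above. The `isAuthorized` helper is hypothetical and not part of `@livestore/sync-cf`:
```ts
// Hypothetical auth check: however your app decides whether a push is allowed.
declare const isAuthorized: (context: { storeId: string }) => boolean

export class GuardedSyncBackendDO extends makeDurableObject({
  onPush: async (message, context) => {
    // Assumption: throwing from `onPush` rejects the pushed batch,
    // keeping unauthorized events out of the event log.
    if (!isAuthorized(context)) {
      throw new Error('Unauthorized push')
    }
    console.log(`accepted push for store (${context.storeId})`, message.batch)
  },
}) {}
```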
Next, you need to slightly modify the code for your LiveStore web worker in `/src/livestore/livestore.worker.ts`:
```diff title="/src/livestore/livestore.worker.ts" lang="ts"
+import { makeWsSync } from '@livestore/sync-cf/client'

makeWorker({
  schema,
+  sync: {
+    backend: makeWsSync({ url: `${location.origin}/sync` }),
+  },
})
```
Finally, you need to update your `wrangler.toml` to ensure Vite and Cloudflare know about the new sync backend:
```diff title="wrangler.toml" lang="toml"
name = "livestore-todo-app"
compatibility_date = "2024-10-28"
+main = "./src/sync/client-ws.ts"
+compatibility_flags = [
+ "nodejs_compat",
+]
[observability]
enabled = true
+[[durable_objects.bindings]]
+name = "SYNC_BACKEND_DO"
+class_name = "SyncBackendDO"
+[[migrations]]
+tag = "v1"
+new_sqlite_classes = ["SyncBackendDO"]
```
Here's a quick rundown of the changes:
- `main`: Since you're now introducing a "backend", your app needs to be reconfigured. It now has a dedicated entry point in `./src/sync/client-ws.ts`. When your Worker receives a request, this file's exported handlers are executed. In your case, it points to the WebSocket sync backend implementation.
- `durable_objects.bindings`: This configures the bindings for your Durable Object. You'll be able to inspect it in the Cloudflare Dashboard by its name `SYNC_BACKEND_DO`.
- `migrations`: Whenever you introduce a new Durable Object that supports [SQLite storage](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class) (or update an existing one), you need to add a migration referencing the new or updated Durable Object class.
You're now ready to run and deploy the app:
Expand to view the expected output
```
✗ pnpm run deploy
> livestore-todo-app@0.0.0 deploy /Users/nikolasburk/Projects/LiveStore/plain-react-tutorial/livestore-todo-app
> pnpm run build && wrangler deploy
> livestore-todo-app@0.0.0 build /Users/nikolasburk/Projects/LiveStore/plain-react-tutorial/livestore-todo-app
> tsc -b && vite build
vite v7.1.12 building SSR bundle for production...
✓ 1085 modules transformed.
dist/livestore_todo_app/.vite/manifest.json 0.16 kB
dist/livestore_todo_app/wrangler.json 1.37 kB
dist/livestore_todo_app/index.js 1,021.14 kB
✓ built in 1.31s
vite v7.1.12 building for production...
✓ 1119 modules transformed.
dist/client/index.html 0.47 kB │ gzip: 0.30 kB
dist/client/assets/make-shared-worker-CbM93UVL.js 363.01 kB
dist/client/assets/livestore.worker-DbNV_so_.js 594.87 kB
dist/client/assets/wa-sqlite-CLgeTS2u.wasm 618.93 kB │ gzip: 303.78 kB
dist/client/assets/index-DNxhg8nM.css 11.76 kB │ gzip: 3.12 kB
dist/client/assets/index-BhsI_Uw7.js 760.04 kB │ gzip: 238.67 kB
(!) Some chunks are larger than 500 kB after minification. Consider:
- Using dynamic import() to code-split the application
- Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/configuration-options/#output-manualchunks
- Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
✓ built in 4.29s
⛅️ wrangler 4.45.1
───────────────────
Using redirected Wrangler configuration.
- Configuration being used: "dist/livestore_todo_app/wrangler.json"
- Original user's configuration: "wrangler.toml"
- Deploy configuration file: ".wrangler/deploy/config.json"
🌀 Building list of assets...
✨ Read 8 files from the assets directory /Users/nikolasburk/Projects/LiveStore/plain-react-tutorial/livestore-todo-app/dist/client
🌀 Starting asset upload...
🌀 Found 4 new or modified static assets to upload. Proceeding with upload...
+ /index.html
+ /assets/index-BhsI_Uw7.js
+ /assets/index-DNxhg8nM.css
+ /assets/livestore.worker-DbNV_so_.js
Uploaded 1 of 4 assets
Uploaded 2 of 4 assets
Uploaded 4 of 4 assets
✨ Success! Uploaded 4 files (3 already uploaded) (4.31 sec)
Total Upload: 997.21 KiB / gzip: 208.73 KiB
Worker Startup Time: 47 ms
Your Worker has access to the following bindings:
Binding Resource
env.SYNC_BACKEND_DO (SyncBackendDO) Durable Object
Uploaded livestore-todo-app (17.72 sec)
Deployed livestore-todo-app triggers (9.82 sec)
https://livestore-todo-app.nikolas-burk.workers.dev
Current Version ID: 8860dd68-6082-4da9-9407-dea670cd8b23
```
Feel free to test the new behaviour locally or using the deployed version. This time, data syncing will work across browsers and even devices!
Here's a GIF demonstrating how a regular Chrome tab, an incognito Chrome tab and a Safari tab are staying in sync automatically:

# [5. Expand business logic with more events](https://dev.docs.livestore.dev/tutorial/5-expand-business-logic/)
In this chapter, you'll expand the business logic of the app by adding a "todo completed" feature.
You'll do this in these steps:
1. Update your schema to:
   1. Add a `boolean` column to your SQLite table.
   2. Add events to mark a todo as completed and uncompleted.
   3. Add a materializer to update the DB when one of these events happens.
2. Update the UI with a checkbox element for each todo which sends these events when toggled.
## Update your LiveStore schema
Open your `schema.ts` file and update it with a new column, and two events (for completing and uncompleting a todo) with two corresponding materializers:
```diff title="src/livestore/schema.ts" lang="ts"
export const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.integer({ primaryKey: true }),
      text: State.SQLite.text({ default: '' }),
+      completed: State.SQLite.boolean({ default: false }),
    },
  }),
}

export const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.Number, text: Schema.String }),
  }),
  todoDeleted: Events.synced({
    name: 'v1.TodoDeleted',
    schema: Schema.Struct({ id: Schema.Number }),
  }),
+  todoCompleted: Events.synced({
+    name: 'v1.TodoCompleted',
+    schema: Schema.Struct({ id: Schema.Number }),
+  }),
+  todoUncompleted: Events.synced({
+    name: 'v1.TodoUncompleted',
+    schema: Schema.Struct({ id: Schema.Number }),
+  }),
}

const materializers = State.SQLite.materializers(events, {
  'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text }),
  'v1.TodoDeleted': ({ id }) => tables.todos.delete().where({ id }),
+  'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
+  'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
})

const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
## Update the UI and business logic
Next, add a `toggleTodo` function in `App.tsx` that commits the events you've just defined, and invoke it from a new checkbox element added to each todo item:
```diff title="src/App.tsx" lang="ts"
function App() {
  const { store } = useStore()
  const todos$ = queryDb(() => tables.todos.select())
  const todos = store.useQuery(todos$)
  const [input, setInput] = useState('')
  const addTodo = () => {
    if (input.trim()) {
      store.commit(
        events.todoCreated({ id: Date.now(), text: input }),
      )
      setInput('')
    }
  }
  const deleteTodo = (id: number) => {
    store.commit(
      events.todoDeleted({ id }),
    )
  }
+  const toggleTodo = (id: number, completed: boolean) => {
+    store.commit(
+      completed ? events.todoUncompleted({ id }) : events.todoCompleted({ id }),
+    )
+  }
  const handleKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === 'Enter') {
      addTodo()
    }
  }
  return (
  )
}
export default App
```
## Deploy and test
If you run or deploy the app, the UI will now look as follows:

Because the `completed` field is persisted in the database as well, it auto-syncs across browser sessions and devices.
# [6. Persist UI state](https://dev.docs.livestore.dev/tutorial/6-persist-ui-state/)
As you saw in the beginning of the tutorial, the state of an application is typically divided into two categories:
- **App state**: State that represents the _application data_ a user needs to achieve their goals with your app. In your current case, that's the list of todos. App state _typically_ lives in the Cloud somewhere (in traditional apps, the Cloud is the source of truth for it; in local-first apps, the data is stored locally and backed up in the Cloud).
- **UI state**: State that is only relevant for a particular browser session (e.g. the currently active tab of a UI element or the content of a text input).
Many websites have the problem of "losing UI state" on browser refreshes. This can be incredibly frustrating for users, especially when they've already invested a lot of time getting to a certain point in an app (e.g. filling out a form). Then, the site reloads for some reason and they have to start over!
With LiveStore, this problem is easily solved: It allows you to persist _UI state_ (e.g. form inputs, active tabs, custom UI elements, and pretty much anything you'd otherwise manage via `React.useState`). This means users can always pick up exactly where they left off.
## Add another UI element
First, update `App.tsx` to look as follows:
```diff title="src/App.tsx" lang="tsx"
function App() {
  const { store } = useStore()
  const todos$ = queryDb(() => tables.todos.select())
  const todos = store.useQuery(todos$)
  const [input, setInput] = useState('')
+  const [filter, setFilter] = useState<'All' | 'Active' | 'Completed'>('All')
  const addTodo = () => {
    if (input.trim()) {
      store.commit(
        events.todoCreated({ id: Date.now(), text: input }),
      )
      setInput('')
    }
  }
  const deleteTodo = (id: number) => {
    store.commit(
      events.todoDeleted({ id }),
    )
  }
  const toggleTodo = (id: number, completed: boolean) => {
    store.commit(
      completed ? events.todoUncompleted({ id }) : events.todoCompleted({ id }),
    )
  }
  const handleKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === 'Enter') {
      addTodo()
    }
  }
  return (
  )
}
export default App
```
If you run the app now, it'll look similar to this:

You can switch between tabs and see how the tabbed component updates the currently active tab. Just like `input`, this uses local React state:
```ts
const [filter, setFilter] = useState<'All' | 'Active' | 'Completed'>('All')
```
This state is ephemeral, meaning it won't survive page refreshes. In a small app like this tutorial app, that hardly matters. But in a complex UI where a sudden state reset would cost users a lot of work, being able to persist this state (including scroll positions, form inputs, and any other state that's important to your users) can be a real life-saver!
With LiveStore, you can persist the state of the currently active tab across browser refreshes. Let's do it!
## Update the LiveStore schema with a client document
In your `schema.ts` file, add another table definition to the `tables` object. This time, it won't use `State.SQLite.table`, but rather LiveStore's special [`clientDocument`](/api/livestore/livestore/namespaces/state/namespaces/sqlite/variables/clientdocument/) type:
```diff title="src/livestore/schema.ts" lang="ts"
+import { Events, makeSchema, Schema, State, SessionIdSymbol } from '@livestore/livestore'

+export const Filter = Schema.Literal('All', 'Active', 'Completed')
+export type Filter = typeof Filter.Type

export const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.integer({ primaryKey: true }),
      text: State.SQLite.text({ default: '' }),
      completed: State.SQLite.boolean({ default: false }),
    },
  }),
+  uiState: State.SQLite.clientDocument({
+    name: 'uiState',
+    schema: Schema.Struct({ input: Schema.String, filter: Filter }),
+    default: { id: SessionIdSymbol, value: { input: '', filter: 'All' } },
+  }),
}
```
For this client document, you define:
- A name for the client document.
- The structure of the client document via `schema`; in your case:
  - The current state of the `input` text field for adding a new todo.
  - The `filter` that'll be used to filter the todos according to their `completed` status.
- Default values for this client document.
Unlike with other application state, you don't need to define custom events and materializers. The only thing you need to do is add the following event to your `events` object:
```diff title="src/livestore/schema.ts" lang="ts"
export const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.Number, text: Schema.String }),
  }),
  todoDeleted: Events.synced({
    name: 'v1.TodoDeleted',
    schema: Schema.Struct({ id: Schema.Number }),
  }),
  todoCompleted: Events.synced({
    name: 'v1.TodoCompleted',
    schema: Schema.Struct({ id: Schema.Number }),
  }),
  todoUncompleted: Events.synced({
    name: 'v1.TodoUncompleted',
    schema: Schema.Struct({ id: Schema.Number }),
  }),
+  uiStateSet: tables.uiState.set,
}
```
Here, both the event definition and materializer are automatically derived from the client document schema, with the materializer implementing upsert semantics.
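In practice, this gives you partial updates: committing a value with only some of the document's fields updates just those fields. A minimal sketch (using the `store` and `events` from this chapter):
```ts
// Only `filter` changes; `input` keeps its current value (upsert semantics).
store.commit(events.uiStateSet({ filter: 'Completed' }))
```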
## Implement local state with LiveStore
In this step, you need to add the local state for the tabbed UI element to the `App` component. Additionally, you're going to replace the `useState` hook that you currently use for the `input` state with LiveStore's approach as well.
Replace the two lines where you're using `useState` for `input` and `filter` with the following:
```diff title="src/App.tsx" lang="tsx"
+const uiState$ = queryDb(() => tables.uiState.get())
+const { input, filter } = store.useQuery(uiState$)
+const updatedInput = (input: string) => store.commit(events.uiStateSet({ input }))
+const updatedFilter = (filter: Filter) => store.commit(events.uiStateSet({ filter }))
-const [input, setInput] = useState('')
-const [filter, setFilter] = useState<'All' | 'Active' | 'Completed'>('All')
```
Don't forget to update the imports to include the `Filter` type and drop `useState`:
```diff title="src/App.tsx" lang="tsx"
-import { useState } from 'react'
+import { tables, events, Filter } from './livestore/schema'
```
Instead of using `useState` for managing the UI state, you're now committing it into your local SQLite DB via LiveStore as well.
Since you've renamed the functions that update the values of `input` and `filter`, you need to adjust the parts of the `App` component where they are used:
- `setInput` → `updatedInput`
- `setFilter` → `updatedFilter`
You have now recreated the same functionality from before and are able to switch the tabs in the UI element—with one important difference: If you refresh the browser, the UI state will remain the same as before. Your UI state is now persisted and survives page refreshes:

## Implement filter logic
The last step in this tutorial is to actually update the list based on which tab is currently selected.
With all your current knowledge, you might think the implementation would need to look something like this:
```tsx title="src/App.tsx"
const todos$ = queryDb(() => tables.todos.where({
  completed:
    filter === 'Completed' ? true
    : filter === 'Active' ? false
    : undefined // if `undefined` is passed to `where`, no filtering happens
}))
const todos = store.useQuery(todos$)
```
If you try that out though, you'll notice that this works _once_ (when your browser loads for the first time). However, when you switch tabs, the list will not actually update.
That's because the query isn't re-created with the new `filter` value when it changes. Here's how you need to do it instead:
```tsx title="src/App.tsx"
const todos$ = queryDb(
  (get) => {
    const { filter } = get(uiState$)
    return tables.todos.where({
      completed: filter === 'Completed' ? true
        : filter === 'Active' ? false
        : undefined,
    })
  },
  { label: 'todos' },
)
const todos = store.useQuery(todos$)
```
This is the final code for the `App` component:
```tsx title="src/App.tsx"
import React from 'react'
import { queryDb } from '@livestore/livestore'
import { useStore } from '@livestore/react'
import { events, Filter, tables } from './livestore/schema'

function App() {
  const { store } = useStore()
  // `uiState$` must be defined before `todos$` because the
  // `todos$` query reads from it via `get(uiState$)`.
  const uiState$ = queryDb(() => tables.uiState.get())
  const todos$ = queryDb(
    (get) => {
      const { filter } = get(uiState$)
      return tables.todos.where({
        completed: filter === 'Completed' ? true
          : filter === 'Active' ? false
          : undefined,
      })
    },
    { label: 'todos' },
  )
  const todos = store.useQuery(todos$)
  const { input, filter } = store.useQuery(uiState$)
  const updatedInput = (input: string) => store.commit(events.uiStateSet({ input }))
  const updatedFilter = (filter: Filter) => store.commit(events.uiStateSet({ filter }))
  const addTodo = () => {
    if (input.trim()) {
      store.commit(
        events.todoCreated({ id: Date.now(), text: input }),
      )
      updatedInput('')
    }
  }
  const deleteTodo = (id: number) => {
    store.commit(
      events.todoDeleted({ id }),
    )
  }
  const toggleTodo = (id: number, completed: boolean) => {
    store.commit(
      completed ? events.todoUncompleted({ id }) : events.todoCompleted({ id }),
    )
  }
  const handleKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === 'Enter') {
      addTodo()
    }
  }
  return (
  )
}
```
# [React integration](https://dev.docs.livestore.dev/reference/framework-integrations/react-integration/)
### useClientDocument
## `reference/framework-integrations/react/use-client-document.tsx`
```tsx filename="reference/framework-integrations/react/use-client-document.tsx"
export const TodoItem: FC<{ id: string }> = ({ id }) => {
  const { store } = useStore()
  const [todo, updateTodo] = store.useClientDocument(tables.uiState, id)
  return (
    <div onClick={() => updateTodo({ text: 'Hello, world!' })}>
      {todo.text}
    </div>
  )
}
```
## Usage with ...
### Vite
LiveStore works with Vite out of the box.
### Tanstack Start
LiveStore works with Tanstack Start out of the box.
### Expo / React Native
LiveStore has a first-class integration with Expo / React Native via `@livestore/adapter-expo`.
### Next.js
Given various Next.js limitations, LiveStore doesn't yet work with Next.js out of the box.
## Multi-Store
The multi-store API enables managing multiple stores within a single React application. This is useful for:
- **Partial data synchronization** - Load only the data you need, when you need it
- **Multi-tenant applications** - Separate stores for each workspace, organization, or project (like Slack workspaces, Notion pages, or Linear teams)
:::caution[Experimental API]
The Multi-Store API is still early in its development.
If you have feedback or questions about this API, please don't hesitate to comment on the [RFC](https://github.com/livestorejs/livestore/pull/585).
:::
### Core Concepts
The multi-store API introduces four main primitives:
- **StoreRegistry** - Manages and caches all store instances with automatic garbage collection
- **StoreRegistryProvider** - React context provider that supplies the registry to your component tree
- **useStore()** - Suspense-enabled hook for accessing individual store instances
- **storeOptions()** - Type-safe way to define reusable store configurations
Stores are cached by their `storeId` and automatically disposed after being inactive for a configurable duration.
### Setting Up
First, define your re-usable store configuration using `storeOptions()`:
## `reference/framework-integrations/react/multi-store/store.ts`
```ts filename="reference/framework-integrations/react/multi-store/store.ts"
// Define reusable store configuration with storeOptions()
// This helper provides type safety and can be reused across your app
export const issueStoreOptions = (issueId: string) =>
  storeOptions({
    storeId: `issue:${issueId}`,
    schema,
    adapter: makeInMemoryAdapter(),
    // Optional: Configure garbage collection time
    gcTime: 30_000, // 30 seconds
  })
```
### `reference/framework-integrations/react/multi-store/schema.ts`
```ts filename="reference/framework-integrations/react/multi-store/schema.ts"
// Event definitions
export const events = {
  issueCreated: Events.synced({
    name: 'v1.IssueCreated',
    schema: Schema.Struct({
      id: Schema.String,
      title: Schema.String,
      status: Schema.Literal('todo', 'done'),
    }),
  }),
  issueStatusChanged: Events.synced({
    name: 'v1.IssueStatusChanged',
    schema: Schema.Struct({
      id: Schema.String,
      status: Schema.Literal('todo', 'done'),
    }),
  }),
}

// State definition
export const tables = {
  issue: State.SQLite.table({
    name: 'issue',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      title: State.SQLite.text(),
      status: State.SQLite.text(),
    },
  }),
}

const materializers = State.SQLite.materializers(events, {
  'v1.IssueCreated': ({ id, title, status }) => tables.issue.insert({ id, title, status }),
  'v1.IssueStatusChanged': ({ id, status }) => tables.issue.update({ status }).where({ id }),
})

const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
Then create a `StoreRegistry` and provide it to your app:
## `reference/framework-integrations/react/multi-store/App.tsx`
```tsx filename="reference/framework-integrations/react/multi-store/App.tsx"
export function App({ children }: { children: ReactNode }) {
  const [storeRegistry] = useState(
    () =>
      new StoreRegistry({
        defaultOptions: {
          // These options apply to all stores unless overridden
          batchUpdates,
          // gcTime: 60_000, // Optional: default garbage collection time
        },
      }),
  )
  return <StoreRegistryProvider storeRegistry={storeRegistry}>{children}</StoreRegistryProvider>
}
```
### Using Stores
Use the `useStore()` hook to load or get a store instance. It suspends until the store is loaded:
## `reference/framework-integrations/react/multi-store/IssueView.tsx`
```tsx filename="reference/framework-integrations/react/multi-store/IssueView.tsx"
export function IssueView({ issueId }: { issueId: string }) {
  // useStore() suspends the component until the store is loaded
  // If the same store was already loaded, it returns immediately
  const issueStore = useStore(issueStoreOptions(issueId))
  // Query data from the store
  const [issue] = issueStore.useQuery(queryDb(tables.issue.select().where({ id: issueId })))
  if (!issue) return <div>Issue not found</div>
  return (
    <div>
      <h2>{issue.title}</h2>
      <div>Status: {issue.status}</div>
    </div>
  )
}

// Wrap with Suspense and ErrorBoundary for loading and error states
export function IssueViewWithSuspense({ issueId }: { issueId: string }) {
  return (
    <ErrorBoundary fallback={<div>Error loading issue</div>}>
      <Suspense fallback={<div>Loading issue...</div>}>
        <IssueView issueId={issueId} />
      </Suspense>
    </ErrorBoundary>
  )
}
```
### Multiple Instances
You can create multiple instances of the same store type by using different `storeId` values:
## `reference/framework-integrations/react/multi-store/IssueList.tsx`
```tsx filename="reference/framework-integrations/react/multi-store/IssueList.tsx"
function IssueCard({ issueId }: { issueId: string }) {
  // Each call to useStore with a different storeId loads/gets a separate store instance
  const issueStore = useStore(issueStoreOptions(issueId))
  const [issue] = issueStore.useQuery(queryDb(tables.issue.select().where({ id: issueId })))
  if (!issue) return <div>Issue not found</div>
  return (
    <div>
      <div>{issue.title}</div>
      <div>Store ID: {issueStore.storeId}</div>
      <div>Status: {issue.status}</div>
    </div>
  )
}

// Component that displays multiple independent store instances with shared error and loading states
export function IssueList() {
  const issueIds = ['issue-1', 'issue-2', 'issue-3']
  return (
    <div>
      <h2>Issues</h2>
      <ErrorBoundary fallback={<div>Error loading issues</div>}>
        <Suspense fallback={<div>Loading issues...</div>}>
          {issueIds.map((id) => (
            <IssueCard key={id} issueId={id} />
          ))}
        </Suspense>
      </ErrorBoundary>
    </div>
  )
}
```
Each store instance is completely isolated with its own data, event log, and synchronization state.
### Preloading
When you know a store will be needed soon, you can preload it in advance:
## `reference/framework-integrations/react/multi-store/PreloadedIssue.tsx`
```tsx filename="reference/framework-integrations/react/multi-store/PreloadedIssue.tsx"
export function PreloadedIssue({ issueId }: { issueId: string }) {
  const [showIssue, setShowIssue] = useState(false)
  const storeRegistry = useStoreRegistry()

  // Preload the store when user hovers (before they click)
  const handleMouseEnter = () => {
    storeRegistry.preload(issueStoreOptions(issueId))
  }

  return (
    <div onMouseEnter={handleMouseEnter}>
      <button onClick={() => setShowIssue(true)}>Show issue</button>
      {showIssue && (
        <Suspense fallback={<div>Loading issue...</div>}>
          <IssueView issueId={issueId} />
        </Suspense>
      )}
    </div>
  )
}
```
This warms up the cache so the store is ready when the user navigates to it.
### StoreId Guidelines
When creating `storeId` values, follow these guidelines (a small sketch applying them follows below):
- **Use namespaces** - Prefix with the entity type (e.g., `workspace:abc-123`, `issue:456`) to avoid collisions between different store types and improve debugging
- **Globally unique** - Prefer globally unique IDs (e.g., nanoid) to prevent collisions
- **Keep them stable** - The same entity should always use the same `storeId` across renders
- **Sanitize user input** - If incorporating user data, be sure to validate/sanitize to prevent injection attacks
- **Document your conventions** - Write down your ID scheme and any special IDs (like `current-user`); they're part of your API contract
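Here's a minimal sketch of a namespaced `storeId` helper applying these guidelines; the helper and its validation rule are illustrative, not part of the LiveStore API:
```ts
// Illustrative helper (not a LiveStore API): builds a stable, namespaced storeId.
export const issueStoreId = (issueId: string): string => {
  // Sanitize (potentially user-provided) input: only allow URL-safe characters.
  if (!/^[A-Za-z0-9_-]+$/.test(issueId)) {
    throw new Error(`Invalid issue id: ${issueId}`)
  }
  // Namespace prefix avoids collisions with other store types (e.g. `workspace:...`).
  return `issue:${issueId}`
}
```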
### API Reference
#### `storeOptions(options)`
Defines reusable store configuration with type safety. Returns options that can be passed to `useStore()` or `registry.preload()`.
Options:
- `storeId` - Unique identifier for this store instance (required)
- `schema` - The LiveStore schema (required)
- `adapter` - The platform adapter (required)
- `gcTime` - Time in milliseconds to keep inactive stores in memory (default: 60_000 in browser, infinity in non-browser)
- `boot` - Function called when the store is first loaded
- `batchUpdates` - Function for batching React updates
- And other `CreateStoreOptions`
#### `new StoreRegistry(config?)`
Creates a registry that manages store instances.
Config:
- `defaultOptions` - Default options applied to all stores (can be overridden per-store)
#### `StoreRegistryProvider`
React context provider that supplies the registry to components.
Props:
- `storeRegistry` - The registry instance (required)
- `children` - React nodes (required)
#### `useStore(options)`
Hook that returns a store instance, suspending until it's loaded.
- Throws a Promise during loading (for React Suspense)
- Throws an Error if loading fails (for Error Boundaries)
- Returns the loaded store when ready
#### `useStoreRegistry()`
Returns the current `StoreRegistry` from context. Useful for advanced operations like preloading.
#### `registry.preload(options)`
Starts loading a store without suspending. Returns a Promise that resolves when loading completes (or rejects on error); you can ignore the returned Promise to treat preloading as fire-and-forget.
### Complete Example
See the Multi-Store example for a complete working application demonstrating various multi-store patterns.
## Technical notes
- `@livestore/react` uses `React.useState` under the hood for `useQuery` / `useClientDocument` to bind LiveStore's reactivity to React's reactivity. Some libraries use `React.useSyncExternalStore` for similar purposes, but `React.useState` is more efficient in this case and all that's needed for LiveStore.
- `@livestore/react` supports React Strict Mode.
# [Solid integration](https://dev.docs.livestore.dev/reference/framework-integrations/solid-integration/)
## Example
See [examples](/examples) for a complete example.
## `reference/solid-integration/livestore/store.ts`
```ts filename="reference/solid-integration/livestore/store.ts"
const adapter = makePersistedAdapter({
  storage: { type: 'opfs' },
  worker: LiveStoreWorker,
  sharedWorker: LiveStoreSharedWorker,
})

export const store = await getStore({
  adapter,
  schema,
  storeId: 'default',
})
```
### `reference/solid-integration/livestore/schema.ts`
```ts filename="reference/solid-integration/livestore/schema.ts"
export const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text({ default: '' }),
      completed: State.SQLite.boolean({ default: false }),
      deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
    },
  }),
  uiState: State.SQLite.clientDocument({
    name: 'uiState',
    schema: Schema.Struct({ newTodoText: Schema.String, filter: Schema.Literal('all', 'active', 'completed') }),
    default: { id: SessionIdSymbol, value: { newTodoText: '', filter: 'all' } },
  }),
}

export const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
  }),
  todoCompleted: Events.synced({
    name: 'v1.TodoCompleted',
    schema: Schema.Struct({ id: Schema.String }),
  }),
  todoUncompleted: Events.synced({
    name: 'v1.TodoUncompleted',
    schema: Schema.Struct({ id: Schema.String }),
  }),
  todoDeleted: Events.synced({
    name: 'v1.TodoDeleted',
    schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
  }),
  todoClearedCompleted: Events.synced({
    name: 'v1.TodoClearedCompleted',
    schema: Schema.Struct({ deletedAt: Schema.Date }),
  }),
  uiStateSet: tables.uiState.set,
}

const materializers = State.SQLite.materializers(events, {
  'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text, completed: false }),
  'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
  'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
  'v1.TodoDeleted': ({ id, deletedAt }) => tables.todos.update({ deletedAt }).where({ id }),
  'v1.TodoClearedCompleted': ({ deletedAt }) => tables.todos.update({ deletedAt }).where({ completed: true }),
})

const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
## `reference/solid-integration/MainSection.tsx`
```tsx filename="reference/solid-integration/MainSection.tsx"
/** biome-ignore-all lint/a11y/noLabelWithoutControl: TODO 🫠 */
/** @jsxImportSource solid-js */
export const MainSection: Component = () => {
  const todos = query(visibleTodos$, [] as (typeof tables.todos.Type)[])
  const todoItems = () => todos() ?? ([] as (typeof tables.todos.Type)[])
  const toggleTodo = ({ id, completed }: typeof tables.todos.Type) =>
    store()?.commit(completed ? events.todoUncompleted({ id }) : events.todoCompleted({ id }))
  const deleteTodo = (id: string) => store()?.commit(events.todoDeleted({ id, deletedAt: new Date() }))
  return (
    <ul>
      <For each={todoItems()}>
        {(todo: typeof tables.todos.Type) => (
          <li>
            <label>{todo.text}</label>
            <input type="checkbox" checked={todo.completed} onChange={() => toggleTodo(todo)} />
            <button type="button" onClick={() => deleteTodo(todo.id)} />
          </li>
        )}
      </For>
    </ul>
  )
}
```
### `reference/solid-integration/livestore/queries.ts`
```ts filename="reference/solid-integration/livestore/queries.ts"
export const uiState$ = queryDb(tables.uiState.get(), { label: 'uiState' })

export const visibleTodos$ = queryDb(
  (get) => {
    const { filter } = get(uiState$)
    return tables.todos.where({
      deletedAt: null,
      completed: filter === 'all' ? undefined : filter === 'completed',
    })
  },
  { label: 'visibleTodos' },
)
```
### Logging
You can control logging for Solid’s runtime helpers via optional options passed to `getStore`:
```ts
const store = await getStore({
  schema,
  adapter,
  storeId: 'default',
  // Optional: swap logger and minimum log level
  logger: Logger.prettyWithThread('window'),
  logLevel: LogLevel.Info, // use LogLevel.None to disable logs
})
```
# [Cloudflare Durable Object Adapter](https://dev.docs.livestore.dev/reference/platform-adapters/cloudflare-durable-object-adapter/)
The Cloudflare Durable Object adapter enables running LiveStore applications on Cloudflare Workers with stateful Durable Objects for synchronized real-time data.
## Installation
```bash
pnpm add @livestore/adapter-cloudflare @livestore/sync-cf
```
## Configuration
### Wrangler configuration
Configure your `wrangler.toml` with the required Durable Object bindings:
```toml
name = "my-livestore-app"
main = "./src/worker.ts"
compatibility_date = "2025-05-07"
compatibility_flags = [
"enable_request_signal", # Required for HTTP RPC streams
]
[[durable_objects.bindings]]
name = "SYNC_BACKEND_DO"
class_name = "SyncBackendDO"
[[durable_objects.bindings]]
name = "CLIENT_DO"
class_name = "LiveStoreClientDO"
[[migrations]]
tag = "v1"
new_sqlite_classes = ["SyncBackendDO", "LiveStoreClientDO"]
[[d1_databases]]
binding = "DB"
database_name = "my-livestore-db"
database_id = "your-database-id"
```
### Environment types
Define your Worker bindings so TypeScript can guide you when wiring Durable Objects:
## `reference/platform-adapters/cloudflare/env.ts`
```ts filename="reference/platform-adapters/cloudflare/env.ts"
export type Env = {
  CLIENT_DO: CfTypes.DurableObjectNamespace
  SYNC_BACKEND_DO: CfTypes.DurableObjectNamespace
  DB: CfTypes.D1Database
  ADMIN_SECRET: string
}
```
We also use a small helper to extract the store identifier from incoming requests:
## `reference/platform-adapters/cloudflare/shared.ts`
```ts filename="reference/platform-adapters/cloudflare/shared.ts"
export const storeIdFromRequest = (request: CfTypes.Request) => {
  const url = new URL(request.url)
  const storeId = url.searchParams.get('storeId')
  if (storeId === null) {
    throw new Error('storeId is required in URL search params')
  }
  return storeId
}
```
## Basic setup
### 1. Create the sync backend Durable Object
The sync backend handles pushing and pulling events between clients:
## `reference/platform-adapters/cloudflare/sync-backend.ts`
```ts filename="reference/platform-adapters/cloudflare/sync-backend.ts"
export class SyncBackendDO extends SyncBackend.makeDurableObject({
  // Optional: Handle push events
  // onPush: async (message, { storeId }) => {
  //   console.log(`onPush for store (${storeId})`, message.batch)
  // },
}) {}
```
### 2. Create the client Durable Object
Each client Durable Object hosts a LiveStore instance and exposes DO RPC callbacks:
## `reference/platform-adapters/cloudflare/client-do.ts`
```ts filename="reference/platform-adapters/cloudflare/client-do.ts"
type AlarmInfo = {
  isRetry: boolean
  retryCount: number
}

export class LiveStoreClientDO extends DurableObject<Env> implements ClientDoWithRpcCallback {
  __DURABLE_OBJECT_BRAND: never = undefined as never

  private storeId: string | undefined
  private cachedStore: Store | undefined
  private storeSubscription: Unsubscribe | undefined
  private readonly todosQuery = tables.todos.select()

  async fetch(request: Request): Promise<Response> {
    // @ts-expect-error TODO remove casts once CF types are fixed in https://github.com/cloudflare/workerd/issues/4811
    this.storeId = storeIdFromRequest(request)

    const store = await this.getStore()
    await this.subscribeToStore()

    const todos = store.query(this.todosQuery)
    return new Response(JSON.stringify(todos, null, 2), {
      headers: { 'Content-Type': 'application/json' },
    })
  }

  private async getStore() {
    if (this.cachedStore !== undefined) {
      return this.cachedStore
    }

    const storeId = this.storeId ?? nanoid()

    const store = await createStoreDoPromise({
      schema,
      storeId,
      clientId: 'client-do',
      sessionId: nanoid(),
      durableObject: {
        // @ts-expect-error TODO remove once CF types are fixed in https://github.com/cloudflare/workerd/issues/4811
        ctx: this.ctx,
        env: this.env,
        bindingName: 'CLIENT_DO',
      },
      syncBackendStub: this.env.SYNC_BACKEND_DO.get(this.env.SYNC_BACKEND_DO.idFromName(storeId)),
      livePull: true,
    })

    this.cachedStore = store
    return store
  }

  private async subscribeToStore() {
    const store = await this.getStore()
    if (this.storeSubscription === undefined) {
      this.storeSubscription = store.subscribe(this.todosQuery, (todos: ReadonlyArray<typeof tables.todos.Type>) => {
        console.log(`todos for store (${this.storeId})`, todos)
      })
    }
    await this.ctx.storage.setAlarm(Date.now() + 1000)
  }

  alarm(_alarmInfo?: AlarmInfo): void | Promise<void> {
    return this.subscribeToStore()
  }

  async syncUpdateRpc(payload: unknown) {
    await handleSyncUpdateRpc(payload)
  }
}
```
### `reference/platform-adapters/cloudflare/schema.ts`
```ts filename="reference/platform-adapters/cloudflare/schema.ts"
export const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text({ default: '' }),
      completed: State.SQLite.boolean({ default: false }),
      deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
    },
  }),
}

export const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
  }),
  todoCompleted: Events.synced({
    name: 'v1.TodoCompleted',
    schema: Schema.Struct({ id: Schema.String }),
  }),
  todoUncompleted: Events.synced({
    name: 'v1.TodoUncompleted',
    schema: Schema.Struct({ id: Schema.String }),
  }),
  todoDeleted: Events.synced({
    name: 'v1.TodoDeleted',
    schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
  }),
  todoClearedCompleted: Events.synced({
    name: 'v1.TodoClearedCompleted',
    schema: Schema.Struct({ deletedAt: Schema.Date }),
  }),
}

const materializers = State.SQLite.materializers(events, {
  'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text, completed: false }),
  'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
  'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
  'v1.TodoDeleted': ({ id, deletedAt }) => tables.todos.update({ deletedAt }).where({ id }),
  'v1.TodoClearedCompleted': ({ deletedAt }) => tables.todos.update({ deletedAt }).where({ completed: true }),
})

const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
### 3. Worker fetch handler
The worker routes incoming requests either to the sync backend or to the client Durable Object:
## `reference/platform-adapters/cloudflare/worker.ts`
```ts filename="reference/platform-adapters/cloudflare/worker.ts"
export default {
  fetch: async (request: CfTypes.Request, env: Env, ctx: CfTypes.ExecutionContext) => {
    const url = new URL(request.url)

    const searchParams = SyncBackend.matchSyncRequest(request)
    if (searchParams !== undefined) {
      return SyncBackend.handleSyncRequest({
        request,
        searchParams,
        env,
        ctx,
        syncBackendBinding: 'SYNC_BACKEND_DO',
        headers: {},
      })
    }

    if (url.pathname.endsWith('/client-do')) {
      const storeId = storeIdFromRequest(request)
      const id = env.CLIENT_DO.idFromName(storeId)
      return env.CLIENT_DO.get(id).fetch(request)
    }

    return new Response('Not found', { status: 404 }) as unknown as CfTypes.Response
  },
} satisfies SyncBackend.CFWorker
```
## API reference
### `createStoreDoPromise(options)`
Creates a LiveStore instance inside a Durable Object.
**Options:**
- `schema` – LiveStore schema definition
- `storeId` – Unique identifier for the store
- `clientId` – Client identifier
- `sessionId` – Session identifier (use `nanoid()`)
- `durableObject` – Context about the Durable Object hosting the store:
- `ctx` – Durable Object state handle (for example `this.ctx`)
- `env` – Environment bindings for the Durable Object
- `bindingName` – Name other workers use to reach this Durable Object
- `syncBackendStub` – Durable Object stub used to reach the sync backend
- `livePull` – Enable real-time updates (default: `false`)
- `resetPersistence` – Drop LiveStore state/eventlog persistence before booting (development only, default: `false`)
- `logger?` – Optional Effect logger layer to customize formatting/output
- `logLevel?` – Optional minimum log level (use `LogLevel.None` to disable logs)
### `syncUpdateRpc(payload)`
Client Durable Objects must implement this method so the sync backend can deliver live updates. `createStoreDoPromise` wires it up automatically—just forward the payload to `handleSyncUpdateRpc` (see the client Durable Object example above).
## Resetting LiveStore persistence (development only)
When iterating locally, you can instruct the adapter to wipe the Durable Object’s LiveStore databases before booting by enabling `resetPersistence`. Guard this behind a protected route or admin token.
## `reference/platform-adapters/cloudflare/reset.ts`
```ts filename="reference/platform-adapters/cloudflare/reset.ts"
export const maybeResetStore = async ({
  request,
  env,
  ctx,
}: {
  request: Request
  env: Env
  ctx: CfTypes.DurableObjectState
}) => {
  const url = new URL(request.url)
  const shouldReset =
    env.ADMIN_SECRET === url.searchParams.get('token') && url.pathname === '/internal/livestore-dev-reset'

  const storeId = url.searchParams.get('storeId') ?? nanoid()

  const store = await createStoreDoPromise({
    schema,
    storeId,
    clientId: 'client-do',
    sessionId: nanoid(),
    durableObject: { ctx, env, bindingName: 'CLIENT_DO' },
    syncBackendStub: env.SYNC_BACKEND_DO.get(env.SYNC_BACKEND_DO.idFromName(storeId)),
    livePull: true,
    resetPersistence: shouldReset,
  })

  return store
}
```
:::caution
Resetting persistence deletes all LiveStore state and eventlog data stored inside the Durable Object. Only expose this behaviour in guarded development flows and never to production traffic.
:::
## Advanced features
- Use `livePull: true` to receive push-based updates via Durable Object RPC callbacks.
- Subscribe to data changes inside the Durable Object to trigger side effects (see the client Durable Object example).
- Wire additional routes in the worker fetch handler to expose debugging endpoints or admin operations.
For sync backend-related APIs like `makeDurableObject`, `handleSyncRequest`, and `matchSyncRequest`, see the [Cloudflare sync provider documentation](/reference/syncing/sync-provider/cloudflare/).
# [Electron Adapter](https://dev.docs.livestore.dev/reference/platform-adapters/electron-adapter/)
LiveStore doesn't yet support Electron (see [this issue](https://github.com/livestorejs/livestore/issues/296) for more details).
# [Node Adapter](https://dev.docs.livestore.dev/reference/platform-adapters/node-adapter/)
Works with Node.js, Bun and Deno.
## Example
## `reference/platform-adapters/node-adapter/adapter.ts`
```ts filename="reference/platform-adapters/node-adapter/adapter.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: docs snippet keeps inline adapter */
// ---cut---
const adapter = makeAdapter({
  storage: { type: 'fs' },
  // or in-memory:
  // storage: { type: 'in-memory' },
  sync: { backend: makeWsSync({ url: 'ws://localhost:8787' }) },
  // To enable devtools:
  // devtools: { schemaPath: new URL('./schema.ts', import.meta.url) },
})
```
## Resetting local persistence
During development you can instruct the adapter to wipe the locally persisted state and eventlog databases on startup:
## `reference/platform-adapters/node-adapter/reset-persistence.ts`
```ts filename="reference/platform-adapters/node-adapter/reset-persistence.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: docs snippet keeps inline adapter */
// ---cut---
const resetPersistence = process.env.NODE_ENV !== 'production' && Boolean(process.env.RESET_LIVESTORE)

const adapter = makeAdapter({
  storage: { type: 'fs' },
  resetPersistence,
})
```
:::caution
This will delete all local data for the given `storeId` and `clientId`. It only clears local persistence and does not reset any connected sync backend. Only enable it for debugging scenarios.
:::
### Worker adapter
The worker adapter can be used for more advanced scenarios where it's preferable to reduce the load of the main thread and run persistence/syncing in a worker thread.
## `reference/platform-adapters/node-adapter/worker-main.ts`
```ts filename="reference/platform-adapters/node-adapter/worker-main.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: snippet keeps adapter inline for docs */
// ---cut---
const adapter = makeWorkerAdapter({
  storage: { type: 'fs' },
  workerUrl: new URL('./livestore.worker.js', import.meta.url),
})
```
## `reference/platform-adapters/node-adapter/worker-worker.ts`
```ts filename="reference/platform-adapters/node-adapter/worker-worker.ts"
makeWorker({
  schema,
  sync: { backend: makeWsSync({ url: 'ws://localhost:8787' }) },
})
```
### `reference/platform-adapters/node-adapter/schema.ts`
```ts filename="reference/platform-adapters/node-adapter/schema.ts"
const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text(),
      completed: State.SQLite.boolean({ default: false }),
    },
  }),
} as const

const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
  }),
} as const

const materializers = State.SQLite.materializers(events, {
  [events.todoCreated.name]: defineMaterializer(events.todoCreated, ({ id, text }) =>
    tables.todos.insert({ id, text, completed: false }),
  ),
})

const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
#### Logging
You can control what the Node worker logs and how it’s formatted.
Pass two optional options to `makeWorker`:
- `logger` — where/format of logs (e.g. pretty console output)
- `logLevel` — how verbose logs are (`LogLevel.None` silences logs)
```ts
makeWorker({
  schema,
  // readable console output by thread name
  logger: Logger.prettyWithThread('livestore-node-leader-thread'),
  // choose verbosity: None | Error | Warning | Info | Debug
  logLevel: LogLevel.Info,
})
```
Tips:
- Use `LogLevel.None` to keep test output quiet.
- Keep the default (Debug) when diagnosing issues.
# [Tauri Adapter](https://dev.docs.livestore.dev/reference/platform-adapters/tauri-adapter/)
## Native Tauri Adapter
While LiveStore doesn't yet support a native Tauri adapter (see [this issue](https://github.com/livestorejs/livestore/issues/125) for more details), you can already use the [web adapter](./web-adapter.md) with Tauri.
The goal of the native Tauri adapter is for LiveStore to leverage native platform APIs and capabilities including:
- Native file system access (instead of going through the browser abstraction layer)
- Background sync capabilities
- ...
## Example using the web adapter
See this example of a Tauri app using the web adapter: [tauri-todomvc-sync-cf](https://github.com/bohdanbirdie/tauri-todomvc-sync-cf)
# [Web Adapter](https://dev.docs.livestore.dev/reference/platform-adapters/web-adapter/)
## Installation
```bash
npm install @livestore/adapter-web @livestore/wa-sqlite
```
## Example
## `reference/platform-adapters/web-adapter/main.ts`
```ts filename="reference/platform-adapters/web-adapter/main.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: docs snippet keeps inline adapter */
// ---cut---
const adapter = makePersistedAdapter({
  storage: { type: 'opfs' },
  worker: LiveStoreWorker,
  sharedWorker: LiveStoreSharedWorker,
})
```
## `reference/platform-adapters/web-adapter/livestore.worker.ts`
```ts filename="reference/platform-adapters/web-adapter/livestore.worker.ts"
makeWorker({ schema })
```
### `reference/platform-adapters/web-adapter/schema/index.ts`
```ts filename="reference/platform-adapters/web-adapter/schema/index.ts"
const tables = {
  todos: State.SQLite.table({
    name: 'todos',
    columns: {
      id: State.SQLite.text({ primaryKey: true }),
      text: State.SQLite.text(),
      completed: State.SQLite.boolean({ default: false }),
    },
  }),
} as const

const events = {
  todoCreated: Events.synced({
    name: 'v1.TodoCreated',
    schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
  }),
} as const

const materializers = State.SQLite.materializers(events, {
  [events.todoCreated.name]: defineMaterializer(events.todoCreated, ({ id, text }) =>
    tables.todos.insert({ id, text, completed: false }),
  ),
})

const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
## Adding a sync backend
## `reference/platform-adapters/web-adapter/sync-backend.ts`
```ts filename="reference/platform-adapters/web-adapter/sync-backend.ts"
makeWorker({ schema, sync: { backend: makeWsSync({ url: 'ws://localhost:8787' }) } })
```
## In-memory adapter
You can also use the in-memory adapter which can be useful in certain scenarios (e.g. testing).
## `reference/platform-adapters/web-adapter/in-memory.ts`
```ts filename="reference/platform-adapters/web-adapter/in-memory.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: docs snippet keeps inline adapter */
// ---cut---
const adapter = makeInMemoryAdapter()
```
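For example, a test could boot a throwaway store entirely in memory. This is a minimal sketch; it assumes `createStorePromise` from `@livestore/livestore` and that the schema module also exports `events` and `tables`:
```ts
import { makeInMemoryAdapter } from '@livestore/adapter-web'
import { createStorePromise } from '@livestore/livestore'
import { events, schema, tables } from './schema'

// Boot a throwaway store; nothing is persisted beyond this process.
const store = await createStorePromise({
  adapter: makeInMemoryAdapter(),
  schema,
  storeId: 'test-store',
})

store.commit(events.todoCreated({ id: '1', text: 'Test todo' }))
console.log(store.query(tables.todos.select())) // [{ id: '1', text: 'Test todo', completed: false }]
```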
## Web worker
- Make sure your schema doesn't depend on any code which needs to run in the main thread (e.g. avoid importing from files using React).
  - Unfortunately this constrains you from co-locating your table definitions in component files.
  - You might still be able to work around this by using the following import in your worker:
```ts
```
### Why is there a dedicated web worker and a shared worker?
- Shared worker:
  - Needed to allow tabs to communicate with each other using a binary message channel.
  - The shared worker mostly acts as a proxy to the dedicated web worker.
- Dedicated web worker (also called the "leader worker"; elected via a leader-election mechanism based on web locks):
  - Acts as the leader/single writer for the storage.
  - Also handles the connection to the sync backend.
  - Currently needed for the synchronous OPFS API, which isn't supported in a shared worker. (Hopefully this won't be needed in the future.)
### Why not use a service worker?
- While service workers seem similar to shared workers (i.e. only a single instance across all tabs), they serve different purposes and have different trade-offs.
- Service workers are meant to intercept network requests and tend to "shut down" when there are no requests for some period of time, making them unsuitable for our use case.
- Also note that service workers don't support some needed APIs such as OPFS.
## Storage
LiveStore currently only supports OPFS to locally persist its data. In the future we might add support for other storage types (e.g. IndexedDB).
During development (`NODE_ENV !== 'production'`), LiveStore automatically copies older state database files into `archive/` inside the OPFS directory for the store (e.g. `livestore-@/archive/`). The three most recent copies are retained so you can inspect pre-migration data; older archives are pruned. In production, we delete outdated state databases immediately.
LiveStore also uses `window.sessionStorage` to retain the identity of a client session (e.g. tab/window) across reloads.
### Resetting local persistence
Resetting local persistence only clears data stored in the browser and does not affect any connected sync backend.
In case you want to reset the local persistence of a client, you can provide the `resetPersistence` option to the adapter.
## `reference/platform-adapters/web-adapter/reset-persistence.ts`
```ts filename="reference/platform-adapters/web-adapter/reset-persistence.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: docs snippet keeps inline adapter */
// ---cut---
const resetPersistence = import.meta.env.DEV && new URLSearchParams(window.location.search).get('reset') !== null
if (resetPersistence) {
const searchParams = new URLSearchParams(window.location.search)
searchParams.delete('reset')
window.history.replaceState(null, '', `${window.location.pathname}?${searchParams.toString()}`)
}
const adapter = makePersistedAdapter({
storage: { type: 'opfs' },
worker: LiveStoreWorker,
sharedWorker: LiveStoreSharedWorker,
resetPersistence,
})
```
If you want to reset persistence manually, you can:
1. **Clear site data** in Chrome DevTools (Application tab > Storage > Clear site data)
2. **Use console command** if the above doesn't work due to a Chrome OPFS bug:
```javascript
const opfsRoot = await navigator.storage.getDirectory();
await opfsRoot.remove();
```
Note: Only use this during development while the app is running.
## Architecture diagram
For the web adapter in a multi-client, multi-tab browser application, the architecture looks like this:

## Other notes
- The web adapter uses some browser APIs that might require an HTTPS connection (e.g. `navigator.locks`). It's recommended to use HTTPS even during local development (e.g. via [Caddy](https://caddyserver.com/docs/automatic-https)).
## Browser support
- Notable required browser APIs: OPFS, SharedWorker, `navigator.locks`, WASM
- The web adapter of LiveStore currently doesn't work on Android browsers as they don't support the `SharedWorker` API (see [Chromium bug](https://issues.chromium.org/issues/40290702)).
## Best Practices
- It's recommended to develop in an incognito window to avoid issues with persistent storage (e.g. OPFS).
## FAQ
### What's the bundle size of the web adapter?
LiveStore with the web adapter adds two parts to your application bundle:
- The LiveStore JavaScript bundle (~180KB gzipped)
- SQLite WASM (~300KB gzipped)
# [Materializers](https://dev.docs.livestore.dev/reference/state/materializers/)
Materializers are functions that allow you to write to your database in response to events. Materializers are executed in the order of the events in the eventlog.
## Example
## `reference/state/materializers/example.ts`
```ts filename="reference/state/materializers/example.ts"
export const todos = State.SQLite.table({
name: 'todos',
columns: {
id: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text(),
completed: State.SQLite.boolean({ default: false }),
previousIds: State.SQLite.json({
schema: Schema.Array(Schema.String),
nullable: true,
}),
},
})
export const table1 = State.SQLite.table({
name: 'settings',
columns: {
id: State.SQLite.text({ primaryKey: true }),
someVal: State.SQLite.integer({ default: 0 }),
},
})
export const table2 = State.SQLite.table({
name: 'preferences',
columns: {
id: State.SQLite.text({ primaryKey: true }),
otherVal: State.SQLite.text({ default: 'default' }),
},
})
export const events = {
todoCreated: Events.synced({
name: 'todoCreated',
schema: Schema.Struct({
id: Schema.String,
text: Schema.String,
completed: Schema.Boolean.pipe(Schema.optional),
}),
}),
userPreferencesUpdated: Events.synced({
name: 'userPreferencesUpdated',
schema: Schema.Struct({ userId: Schema.String, theme: Schema.String }),
}),
factoryResetApplied: Events.synced({
name: 'factoryResetApplied',
schema: Schema.Struct({}),
}),
} as const
export const materializers = State.SQLite.materializers(events, {
[events.todoCreated.name]: defineMaterializer(events.todoCreated, ({ id, text, completed }) =>
todos.insert({ id, text, completed: completed ?? false }),
),
[events.userPreferencesUpdated.name]: defineMaterializer(events.userPreferencesUpdated, ({ userId, theme }) => {
console.log(`User ${userId} updated theme to ${theme}.`)
return []
}),
[events.factoryResetApplied.name]: defineMaterializer(events.factoryResetApplied, () => [
table1.update({ someVal: 0 }),
table2.update({ otherVal: 'default' }),
]),
})
```
## Reading from the database in materializers
Sometimes it can be useful to query your current state when executing a materializer. This can be done by using `ctx.query` in your materializer function.
## `reference/state/materializers/with-query.ts`
```ts filename="reference/state/materializers/with-query.ts"
const events = {
todoCreated: Events.synced({
name: 'todoCreated',
schema: Schema.Struct({
id: Schema.String,
text: Schema.String,
completed: Schema.Boolean.pipe(Schema.optional),
}),
}),
} as const
export const materializers = State.SQLite.materializers(events, {
[events.todoCreated.name]: defineMaterializer(events.todoCreated, ({ id, text, completed }, ctx) => {
const previousIds = ctx.query(todos.select('id'))
return todos.insert({ id, text, completed: completed ?? false, previousIds })
}),
})
```
## Transactional behaviour
A materializer is always executed in a transaction. This transaction applies to:
- All database write operations returned by the materializer.
- Any `ctx.query` calls made within the materializer, ensuring a consistent view of the data.
Materializers can return:
- A single database write operation.
- An array of database write operations.
- `void` (i.e., no return value) if no database modifications are needed.
- An `Effect` that resolves to one of the above (e.g., `Effect.succeed(writeOp)` or `Effect.void`).
The `context` object passed to each materializer provides `query` for database reads, `db` for direct database access if needed, and `event` for the full event details.
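For example (a minimal sketch reusing the `events` and `todos` definitions from the examples above):
```ts
const materializersWithContext = State.SQLite.materializers(events, {
  [events.todoCreated.name]: defineMaterializer(events.todoCreated, ({ id, text }, ctx) => {
    // `ctx.query` runs inside the same transaction, so this read is consistent
    const existingIds = ctx.query(todos.select('id'))
    if (existingIds.includes(id)) return // `void`: no writes needed
    // Returning an `Effect` is also supported
    return Effect.succeed(todos.insert({ id, text, completed: false }))
  }),
})
```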
## Error Handling
If a materializer function throws an error, or if an `Effect` returned by a materializer fails, the entire transaction for that event will be rolled back. This means any database changes attempted by that materializer for the failing event will not be persisted. The error will be logged, and the system will typically halt or flag the event as problematic, depending on the specific LiveStore setup.
If the error happens on the client which tries to commit the event, the event will never be committed and pushed to the sync backend.
In the future there will be ways to configure the error-handling behaviour, e.g. to allow skipping an incoming event when a materializer fails in order to avoid the app getting stuck. However, skipping events might also lead to diverging state across clients and should be used with caution.
## Best practices
### Side-effect free / deterministic
It's strongly recommended to make sure your materializers are side-effect free and deterministic. This also implies passing in all necessary data via the event payload.
Example:
## `reference/state/materializers/deterministic.ts`
```ts filename="reference/state/materializers/deterministic.ts"
declare const store: Store
export const nondeterministicEvents = {
todoCreated: Events.synced({
name: 'v1.TodoCreated',
schema: Schema.Struct({ text: Schema.String }),
}),
} as const
export const nondeterministicMaterializers = State.SQLite.materializers(nondeterministicEvents, {
[nondeterministicEvents.todoCreated.name]: defineMaterializer(nondeterministicEvents.todoCreated, ({ text }) =>
todos.insert({ id: randomUUID(), text }),
),
})
store.commit(nondeterministicEvents.todoCreated({ text: 'Buy groceries' }))
export const deterministicEvents = {
todoCreated: Events.synced({
name: 'v1.TodoCreated',
schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
}),
} as const
export const deterministicMaterializers = State.SQLite.materializers(deterministicEvents, {
[deterministicEvents.todoCreated.name]: defineMaterializer(deterministicEvents.todoCreated, ({ id, text }) =>
todos.insert({ id, text }),
),
})
store.commit(deterministicEvents.todoCreated({ id: nanoid(), text: 'Buy groceries' }))
```
# [SQLite State Schema (Effect Schema)](https://dev.docs.livestore.dev/reference/state/sqlite-schema-effect/)
LiveStore supports defining SQLite tables using Effect Schema with annotations for database constraints. This approach provides strong type safety, composability, and automatic type mapping from TypeScript to SQLite.
> **Note**: This approach will become the default once Effect Schema v4 is released. See [livestore#382](https://github.com/livestorejs/livestore/issues/382) for details.
>
> For the traditional column-based approach, see [SQLite State Schema](/reference/state/sqlite-schema).
## Basic Usage
Define tables using Effect Schema with database constraint annotations:
## `reference/state/sqlite-schema/effect/basic.ts`
```ts filename="reference/state/sqlite-schema/effect/basic.ts"
const UserSchema = Schema.Struct({
id: Schema.String.pipe(State.SQLite.withPrimaryKey),
email: Schema.String.pipe(State.SQLite.withUnique),
name: Schema.String,
age: Schema.Int.pipe(State.SQLite.withDefault(0)),
isActive: Schema.Boolean.pipe(State.SQLite.withDefault(true)),
metadata: Schema.optional(
Schema.Record({
key: Schema.String,
value: Schema.Unknown,
}),
),
}).annotations({ title: 'users' })
export const userTable = State.SQLite.table({ schema: UserSchema })
```
## Schema Annotations
You can annotate schema fields with database constraints:
### Primary Keys
## `reference/state/sqlite-schema/effect/primary-key.ts`
```ts filename="reference/state/sqlite-schema/effect/primary-key.ts"
const _schema = Schema.Struct({
id: Schema.String.pipe(State.SQLite.withPrimaryKey),
// Other fields...
})
```
**Important**: Primary key columns cannot be nullable. This will throw an error:
## `reference/state/sqlite-schema/effect/primary-key-nullable.ts`
```ts filename="reference/state/sqlite-schema/effect/primary-key-nullable.ts"
// ❌ This will throw an error at runtime because primary keys cannot be nullable
const _badSchema = Schema.Struct({
id: Schema.NullOr(Schema.String).pipe(State.SQLite.withPrimaryKey),
})
```
### Auto-Increment
## `reference/state/sqlite-schema/effect/auto-increment.ts`
```ts filename="reference/state/sqlite-schema/effect/auto-increment.ts"
const _schema = Schema.Struct({
id: Schema.Int.pipe(State.SQLite.withPrimaryKey, State.SQLite.withAutoIncrement),
// Other fields...
})
```
### Default Values
## `reference/state/sqlite-schema/effect/default-values.ts`
```ts filename="reference/state/sqlite-schema/effect/default-values.ts"
const _schema = Schema.Struct({
status: Schema.String.pipe(State.SQLite.withDefault('active')),
createdAt: Schema.String.pipe(State.SQLite.withDefault('CURRENT_TIMESTAMP')),
count: Schema.Int.pipe(State.SQLite.withDefault(0)),
})
```
### Unique Constraints
## `reference/state/sqlite-schema/effect/unique-constraints.ts`
```ts filename="reference/state/sqlite-schema/effect/unique-constraints.ts"
const _schema = Schema.Struct({
email: Schema.String.pipe(State.SQLite.withUnique),
username: Schema.String.pipe(State.SQLite.withUnique),
})
```
Unique annotations automatically create unique indexes.
### Custom Column Types
Override the automatically inferred SQLite column type:
## `reference/state/sqlite-schema/effect/custom-column-types.ts`
```ts filename="reference/state/sqlite-schema/effect/custom-column-types.ts"
const _schema = Schema.Struct({
// Store a number as text instead of real
version: Schema.Number.pipe(State.SQLite.withColumnType('text')),
// Store binary data as blob
data: Schema.Uint8Array.pipe(State.SQLite.withColumnType('blob')),
})
```
### Combining Annotations
Annotations can be chained together:
## `reference/state/sqlite-schema/effect/combining-annotations.ts`
```ts filename="reference/state/sqlite-schema/effect/combining-annotations.ts"
const _schema = Schema.Struct({
id: Schema.Int.pipe(State.SQLite.withPrimaryKey, State.SQLite.withAutoIncrement),
email: Schema.String.pipe(State.SQLite.withUnique, State.SQLite.withColumnType('text')),
})
```
## Table Naming
You can specify table names in several ways:
### Using Schema Annotations
## `reference/state/sqlite-schema/effect/table-name-annotations.ts`
```ts filename="reference/state/sqlite-schema/effect/table-name-annotations.ts"
// Using title annotation
const UserSchema = Schema.Struct({
id: Schema.String.pipe(State.SQLite.withPrimaryKey),
name: Schema.String,
}).annotations({ title: 'users' })
export const userTable = State.SQLite.table({ schema: UserSchema })
// Using identifier annotation
const PostSchema = Schema.Struct({
id: Schema.String.pipe(State.SQLite.withPrimaryKey),
title: Schema.String,
}).annotations({ identifier: 'posts' })
export const postTable = State.SQLite.table({ schema: PostSchema })
```
### Explicit Name
## `reference/state/sqlite-schema/effect/table-name-explicit.ts`
```ts filename="reference/state/sqlite-schema/effect/table-name-explicit.ts"
const UserSchema = Schema.Struct({
id: Schema.String.pipe(State.SQLite.withPrimaryKey),
name: Schema.String,
})
export const userTable = State.SQLite.table({
name: 'users',
schema: UserSchema,
})
```
**Note**: Title annotation takes precedence over identifier annotation.
## Type Mapping
Effect Schema types are automatically mapped to SQLite column types:
| Schema Type | SQLite Type | TypeScript Type |
|-------------|-------------|-----------------|
| `Schema.String` | `text` | `string` |
| `Schema.Number` | `real` | `number` |
| `Schema.Int` | `integer` | `number` |
| `Schema.Boolean` | `integer` | `boolean` |
| `Schema.Date` | `text` | `Date` |
| `Schema.BigInt` | `text` | `bigint` |
| Complex types (Struct, Array, etc.) | `text` (JSON encoded) | Decoded type |
| `Schema.optional(T)` | Nullable column | `T \| undefined` |
| `Schema.NullOr(T)` | Nullable column | `T \| null` |
## Advanced Examples
### Complex Schema with Multiple Constraints
## `reference/state/sqlite-schema/effect/advanced-product.ts`
```ts filename="reference/state/sqlite-schema/effect/advanced-product.ts"
const ProductSchema = Schema.Struct({
id: Schema.Int.pipe(State.SQLite.withPrimaryKey, State.SQLite.withAutoIncrement),
sku: Schema.String.pipe(State.SQLite.withUnique),
name: Schema.String,
price: Schema.Number.pipe(State.SQLite.withDefault(0)),
category: Schema.Literal('electronics', 'clothing', 'books'),
metadata: Schema.optional(
Schema.Struct({
weight: Schema.Number,
dimensions: Schema.Struct({
width: Schema.Number,
height: Schema.Number,
depth: Schema.Number,
}),
}),
),
isActive: Schema.Boolean.pipe(State.SQLite.withDefault(true)),
createdAt: Schema.Date.pipe(State.SQLite.withDefault('CURRENT_TIMESTAMP')),
}).annotations({ title: 'products' })
export const productTable = State.SQLite.table({ schema: ProductSchema })
```
### Working with Schema.Class
## `reference/state/sqlite-schema/effect/schema-class.ts`
```ts filename="reference/state/sqlite-schema/effect/schema-class.ts"
class User extends Schema.Class('User')({
id: Schema.String.pipe(State.SQLite.withPrimaryKey),
email: Schema.String.pipe(State.SQLite.withUnique),
name: Schema.String,
age: Schema.Int,
}) {}
export const userTable = State.SQLite.table({
name: 'users',
schema: User,
})
```
### Custom Indexes
## `reference/state/sqlite-schema/effect/custom-indexes.ts`
```ts filename="reference/state/sqlite-schema/effect/custom-indexes.ts"
const PostSchema = Schema.Struct({
id: Schema.String.pipe(State.SQLite.withPrimaryKey),
title: Schema.String,
authorId: Schema.String,
createdAt: Schema.Date,
}).annotations({ title: 'posts' })
export const postTable = State.SQLite.table({
schema: PostSchema,
indexes: [
{ name: 'idx_posts_author', columns: ['authorId'] },
{ name: 'idx_posts_created', columns: ['createdAt'] },
],
})
```
## Best Practices
### Schema Design
- Always use `withPrimaryKey` for primary key columns - never combine it with nullable types
- Use `Schema.optional()` for truly optional fields that can be undefined
- Use `Schema.NullOr()` for fields that can explicitly be set to null
- Leverage schema annotations like `title` or `identifier` to avoid repeating table names
- Group related schemas in the same module for better organization
### Type Safety
- Let TypeScript infer table types rather than explicitly typing them
- Use Effect Schema's refinements and transformations for data validation (see the sketch below)
- Prefer Effect Schema's built-in types (`Schema.Int`, `Schema.Date`) over generic types where appropriate
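For example, a refined field might look like this (a minimal sketch; assumes the same imports as the examples above):
```ts
const _schema = Schema.Struct({
  id: Schema.String.pipe(State.SQLite.withPrimaryKey),
  // Refinement: reject empty strings at decode time, then add a unique constraint
  email: Schema.String.pipe(Schema.minLength(1), State.SQLite.withUnique),
})
```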
### Performance
- Be mindful of complex types stored as JSON - they can impact query performance
- Use appropriate indexes for frequently queried columns
- Consider using `withColumnType` to optimize storage for specific use cases
## When to Use This Approach
**Use Effect Schema-based tables when:**
- You already have Effect Schema definitions to reuse
- You prefer Effect Schema's composability and transformations
- Your schemas are shared across different parts of your application
- You want automatic type mapping and strong type safety
- You plan to migrate to Effect Schema v4 when it becomes available
**Consider column-based tables when:**
- You need precise control over SQLite column types
- You're migrating from existing SQLite schemas
- You prefer explicit column configuration
- You're not already using Effect Schema extensively in your project
# [SQL Queries](https://dev.docs.livestore.dev/reference/state/sql-queries/)
## Query builder
LiveStore also provides a small query builder for the most common queries. The query builder automatically derives the appropriate result schema internally.
## `reference/state/sql-queries/query-builder.ts`
```ts filename="reference/state/sql-queries/query-builder.ts"
const table = State.SQLite.table({
name: 'my_table',
columns: {
id: State.SQLite.text({ primaryKey: true }),
name: State.SQLite.text(),
},
})
// Read queries
table.select('name')
table.where('name', '=', 'Alice')
table.where({ name: 'Alice' })
table.orderBy('name', 'desc').offset(10).limit(10)
table.count().where('name', 'LIKE', '%Ali%')
// Write queries
table.insert({ id: '123', name: 'Bob' })
table.update({ name: 'Alice' }).where({ id: '123' })
table.delete().where({ id: '123' })
table.insert({ id: '123', name: 'Charlie' }).onConflict('id', 'replace')
table.insert({ id: '456', name: 'Diana' }).onConflict('id', 'update', { name: 'Diana Updated' })
```
## Raw SQL queries
LiveStore supports arbitrary SQL queries on top of SQLite. In order for LiveStore to handle the query results correctly, you need to provide the result schema.
## `reference/state/sql-queries/raw-sql.ts`
```ts filename="reference/state/sql-queries/raw-sql.ts"
/** biome-ignore-all lint/correctness/noUnusedVariables: docs snippet keeps reactive references */
// ---cut---
const table = State.SQLite.table({
name: 'my_table',
columns: {
id: State.SQLite.text({ primaryKey: true }),
name: State.SQLite.text(),
},
})
const filtered$ = queryDb({
query: sql`select * from my_table where name = 'Alice'`,
schema: Schema.Array(table.rowSchema),
})
const count$ = queryDb({
query: sql`select count(*) as count from my_table`,
schema: Schema.Struct({ count: Schema.Number }).pipe(Schema.pluck('count'), Schema.Array, Schema.headOrElse()),
})
```
## Best Practices
- Query results should be treated as immutable/read-only
- For queries which could return many rows, it's recommended to paginate the results
- Usually both via paginated/virtualized rendering as well as paginated queries
- You'll get the best query performance by using a `WHERE` clause over an indexed column combined with a `LIMIT` clause. Avoid `OFFSET` as it can be slow on large tables (see the pagination sketch after this list)
- For very large/complex queries, it can also make sense to implement incremental view maintenance (IVM) for your queries
- You can do this, for example, by having a separate table that is a materialized version of your query results, which you update manually (and ideally incrementally) as the underlying data changes.
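For example, keyset pagination over the `my_table` example above could look like this (a sketch using the raw-SQL API shown earlier; `'123'` stands in for the last row id of the previous page):
```ts
// Fetch the next 20 rows after the last seen id (indexed WHERE + LIMIT, no OFFSET)
const nextPage$ = queryDb({
  query: sql`select * from my_table where id > '123' order by id limit 20`,
  schema: Schema.Array(table.rowSchema),
})
```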
# [SQLite State Schema](https://dev.docs.livestore.dev/reference/state/sqlite-schema/)
LiveStore provides a schema definition language for defining your database tables and event definitions. LiveStore automatically migrates your database schema when you change your schema definitions.
> **Alternative Approach**: You can also define tables using [Effect Schema with annotations](/reference/state/sqlite-schema-effect) for type-safe schema definitions.
## Defining Tables
Define SQLite tables using explicit column definitions:
## `reference/state/sqlite-schema/columns/table-basic.ts`
```ts filename="reference/state/sqlite-schema/columns/table-basic.ts"
export const userTable = State.SQLite.table({
name: 'users',
columns: {
id: State.SQLite.text({ primaryKey: true }),
email: State.SQLite.text(),
name: State.SQLite.text(),
age: State.SQLite.integer({ default: 0 }),
isActive: State.SQLite.boolean({ default: true }),
metadata: State.SQLite.json({ nullable: true }),
},
indexes: [{ name: 'idx_users_email', columns: ['email'], isUnique: true }],
})
```
Use the optional `indexes` array to declare secondary indexes or enforce uniqueness (set `isUnique: true`).
## Column Types
You can use these column types when defining tables:
### Core SQLite column types
- `State.SQLite.text`: A text field, returns `string`.
- `State.SQLite.integer`: An integer field, returns `number`.
- `State.SQLite.real`: A real field (floating point number), returns `number`.
- `State.SQLite.blob`: A blob field (binary data), returns `Uint8Array`.
### Higher level column types
- `State.SQLite.boolean`: An integer field that stores `0` for `false` and `1` for `true` and returns a `boolean`.
- `State.SQLite.json`: A text field that stores a stringified JSON object and returns a decoded JSON value.
- `State.SQLite.datetime`: A text field that stores dates as ISO 8601 strings and returns a `Date`.
- `State.SQLite.datetimeInteger`: An integer field that stores dates as the number of milliseconds since the epoch and returns a `Date`.
### Custom column schemas
You can also provide a custom schema for a column which is used to automatically encode and decode the column value.
### Example: JSON-encoded struct
## `reference/state/sqlite-schema/columns/json-struct.ts`
```ts filename="reference/state/sqlite-schema/columns/json-struct.ts"
export const UserMetadata = Schema.Struct({
petName: Schema.String,
favoriteColor: Schema.Literal('red', 'blue', 'green'),
})
export const userTable = State.SQLite.table({
name: 'user',
columns: {
id: State.SQLite.text({ primaryKey: true }),
name: State.SQLite.text(),
metadata: State.SQLite.json({ schema: UserMetadata }),
},
})
```
### Schema migrations
Migration strategies:
- `auto`: Automatically migrates the database to the newest schema and rematerializes the state from the eventlog.
- `manual`: Manually migrate the database to the newest schema.
## Client documents
- Meant for convenience
- Client-only
- Goal: Similar ease of use as `React.useState`
- When schema changes in a non-backwards compatible way, previous events are dropped and the state is reset
- Don't use client documents for sensitive data which must not be lost
- Implies
- Table with `id` and `value` columns
- `${MyTable}Set` event + materializer (which are auto-registered)
### Basic usage
## `reference/state/sqlite-schema/columns/client-document-basic.tsx`
```tsx filename="reference/state/sqlite-schema/columns/client-document-basic.tsx"
export const uiState = State.SQLite.clientDocument({
name: 'UiState',
schema: Schema.Struct({
newTodoText: Schema.String,
filter: Schema.Literal('all', 'active', 'completed'),
}),
default: { id: SessionIdSymbol, value: { newTodoText: '', filter: 'all' } },
})
export const readUiState = (store: Store): { newTodoText: string; filter: 'all' | 'active' | 'completed' } =>
store.query(uiState.get())
export const setNewTodoText = (store: Store, newTodoText: string): void => {
store.commit(uiState.set({ newTodoText }))
}
export const UiStateFilter: React.FC = () => {
const [state, setState] = useClientDocument(uiState)
const showActive = React.useCallback(() => {
setState({ filter: 'active' })
}, [setState])
const showAll = React.useCallback(() => {
setState({ filter: 'all' })
}, [setState])
return (
<div>
<button onClick={showAll}>All</button>
<button onClick={showActive}>Active ({state.filter === 'active' ? 'selected' : 'select'})</button>
</div>
)
}
```
### KV-style client document
Sometimes you want a simple key-value store for arbitrary values without partial merging. You can model this by using `Schema.Any` as the value schema. With `Schema.Any`, updates fully replace the stored value (no partial merge semantics).
## `reference/state/sqlite-schema/columns/client-document-kv.tsx`
```tsx filename="reference/state/sqlite-schema/columns/client-document-kv.tsx"
export const kv = State.SQLite.clientDocument({
name: 'Kv',
schema: Schema.Any,
default: { value: null },
})
export const readKvValue = (store: Store, id: string): unknown => store.query(kv.get(id))
export const setKvValue = (store: Store, id: string, value: unknown): void => {
store.commit(kv.set(value, id))
}
export const KvViewer: React.FC<{ id: string }> = ({ id }) => {
const [value, setValue] = useClientDocument(kv, id)
return (
<button onClick={() => setValue('hello')}>
Current value: {JSON.stringify(value)}
</button>
)
}
```
## Best Practices
### Column Configuration
- Use appropriate SQLite column types for your data (text, integer, real, blob)
- Set `primaryKey: true` for primary key columns
- Use `nullable: true` for columns that can contain NULL values
- Provide meaningful `default` values where appropriate
- Add unique constraints via table `indexes` using `isUnique: true`
### Schema Design
- Choose column types that match your data requirements
- Use custom schemas with `State.SQLite.json()` for complex data structures
- Group related table definitions in the same module
- Use descriptive table and column names
### General Practices
- It's usually recommended to **not distinguish** between app state vs app data but rather keep all state in LiveStore.
- This means you'll rarely use `React.useState` when using LiveStore
- In some cases, for "fast-changing values", it can make sense to keep a version of a state value outside of LiveStore, with a reactive setter for React and a debounced setter for LiveStore, to avoid excessive LiveStore mutations (see the sketch after this list). Cases where this can make sense include:
- Text input / rich text editing
- Scroll position tracking, resize events, move/drag events
- ...
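A minimal sketch of this pattern (the hook and `commitToLiveStore` are hypothetical; the latter stands in for a `store.commit(...)` call):
```tsx
import * as React from 'react'

// Keeps a fast, local value for React while debouncing commits to LiveStore.
const useDebouncedCommit = (initial: string, commitToLiveStore: (value: string) => void, delayMs = 300) => {
  const [value, setValue] = React.useState(initial)
  const timeout = React.useRef<ReturnType<typeof setTimeout> | undefined>(undefined)
  const onChange = React.useCallback(
    (next: string) => {
      setValue(next) // reactive setter: React re-renders immediately
      clearTimeout(timeout.current)
      timeout.current = setTimeout(() => commitToLiveStore(next), delayMs) // debounced LiveStore commit
    },
    [commitToLiveStore, delayMs],
  )
  return [value, onChange] as const
}
```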
# [SQLite in LiveStore](https://dev.docs.livestore.dev/reference/state/sqlite/)
LiveStore heavily uses SQLite as its default state/read model.
## Implementation notes
- LiveStore relies on the following SQLite extensions to be available: `-DSQLITE_ENABLE_BYTECODE_VTAB -DSQLITE_ENABLE_SESSION -DSQLITE_ENABLE_PREUPDATE_HOOK`
- [bytecode](https://www.sqlite.org/bytecodevtab.html)
- [session](https://www.sqlite.org/sessionintro.html) (incl. preupdate)
- For web / node adapter:
- LiveStore uses [a fork](https://github.com/livestorejs/wa-sqlite) of the [wa-sqlite](https://github.com/rhashimoto/wa-sqlite) SQLite WASM library.
- Write‑ahead logging (WAL) is currently not supported/enabled for the web adapter using OPFS (AccessHandlePoolVFS). The underlying VFS does not support WAL reliably in this setup; we disable it until it’s safe to use. See our tracking issue and upstream notes:
- LiveStore: https://github.com/livestorejs/livestore/issues/258
- wa‑sqlite examples (comparison table shows WAL unsupported for AccessHandlePoolVFS): https://github.com/rhashimoto/wa-sqlite/blob/master/src/examples/README.md
- Related discussion on single‑connection OPFS and locking: https://github.com/rhashimoto/wa-sqlite/discussions/81
- In the future LiveStore might use a non‑WASM build for Node/Bun/Deno/etc.
- For Expo adapter:
- LiveStore uses the official expo-sqlite library which supports LiveStore's SQLite requirements.
- LiveStore uses the `session` extension to enable efficient database rollback which is needed when the eventlog is rolled back as part of a rebase. An alternative implementation strategy would be to rely on snapshotting (i.e. periodically create database snapshots and roll back to the latest snapshot + applied missing mutations).
## Default tables
LiveStore operates two SQLite databases by default: a state database (your materialized tables) and an event log database (the immutable event stream and sync metadata). In addition to your own application tables, LiveStore creates a small set of internal tables in each database.
### State database
- `__livestore_schema`
- Tracks the schema hash and last update time per materialized table. Used for migrations and compatibility checks.
- `__livestore_schema_event_defs`
- Tracks the schema hash and last update time per event definition. Used to detect incompatible event schema changes during rematerialization.
- `__livestore_session_changeset`
- Stores SQLite session changeset blobs keyed by event sequence numbers. Used to efficiently roll back and re‑apply state during rebases.
- Your application tables
- All tables you define via `State.SQLite.table(...)` live in the state database.
### Eventlog database
- `eventlog`
- Append‑only table containing all events (sequence numbers, parent links, event name, encoded args, client/session IDs, schema hash, optional sync metadata). Used to reconstruct state and for sync.
- `__livestore_sync_status`
- Stores the current head and optional backend identity for synchronization bookkeeping.
Note: The event log database’s use of SQLite is an implementation detail. It is not a public interface and is not intended for direct reads or writes. Query state via your materialized tables and LiveStore APIs; do not depend on the event log database layout or mutate it directly.
# [Syncing](https://dev.docs.livestore.dev/reference/syncing//)
## How it works
LiveStore is based on [the idea of event-sourcing](/evaluation/event-sourcing), which means it syncs events across clients (via a central sync backend) and then materializes the events in the local SQLite database. This means LiveStore isn't syncing the SQLite database itself but only the events used to materialize it, which keeps the database in sync across clients.
The syncing mechanism is similar to how Git works in that it's based on a "push/pull" model. Upstream events always need to be pulled before a client can push its own events in order to preserve a [global total order of events](https://medium.com/baseds/ordering-distributed-events-29c1dd9d1eff). Local pending events which haven't been pushed yet need to be rebased on top of the latest upstream events before they can be pushed.
## Events
A LiveStore event consists of the following data:
- `seqNum`: event sequence number
- `parentSeqNum`: parent event sequence number
- `name`: event name (refers to an event definition in the schema)
- `args`: event arguments (encoded using the event's schema definition, usually JSON)
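Roughly, as a TypeScript shape (illustrative only; see `@livestore/common` for the actual types):
```ts
type LiveStoreEventSketch = {
  seqNum: number // event sequence number
  parentSeqNum: number // sequence number of the parent event
  name: string // refers to an event definition in the schema
  args: unknown // encoded using the event's schema definition (usually JSON)
}
```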
### Event sequence numbers
- Event sequence numbers are monotonically increasing integers.
- A separate client event sequence number is used to sync across client sessions (it's never exposed to the sync backend).
### Sync heads
- The latest event in an eventlog is referred to as the "head" (similar to how Git refers to the latest commit as the "head").
- Given that LiveStore does hierarchical syncing between the client session, the client leader and the sync backend, there are three heads (i.e. the client session head, the client leader head, and the sync backend head).
## Sync backend
The sync backend acts as the global authority and determines the total order of events ("causality"). It's responsible for storing and querying events and for notifying clients when new events are available.
### Requirements for sync backend
- Needs to provide an efficient way to query an ordered list of events given a starting event ID (often referred to as cursor).
- Ideally provides a "reactivity" mechanism to notify clients when new events are available (e.g. via WebSocket, HTTP long-polling, etc).
- Alternatively, the client can periodically query for new events which is less efficient.
## Clients
- Each client initially chooses a random `clientId` as its globally unique ID
- LiveStore uses a 6-char nanoid
- In the unlikely event of a collision (detected by the sync backend the first time a client tries to push), the client chooses a new random `clientId`, patches its local events with the new `clientId`, and tries again.
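Conceptually (a sketch; the actual ID generation happens inside LiveStore):
```ts
import { nanoid } from 'nanoid'

// A short random ID is sufficient because collisions are detected
// (and healed) by the sync backend on the client's first push.
const clientId = nanoid(6)
```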
### Client Sessions
- Each client has at least one client session
- Client sessions within the same client share local data
- In web adapters: multiple tabs/windows can be different sessions within the same client
- Sessions are identified by a `sessionId` which can persist (e.g., across tab reloads in web)
- For adapters which support multiple client sessions (e.g. web), LiveStore also supports local syncing across client sessions (e.g. across browser tabs or worker threads)
- Client session events are not synced to the sync backend
## Auth (Authentication & Authorization)
- TODO
- Provide basic example
- Encryption
## Advanced
### Sequence diagrams
#### Pulling events (without unpushed events)
```mermaid
sequenceDiagram
participant Client
participant Sync Backend
Client->>Sync Backend: `pull` request (head_cursor)
Sync Backend->>Sync Backend: Get new events (since head_cursor)
Sync Backend-->>Client: New events
activate Client
Note over Client: Client is in sync
deactivate Client
```
#### Pushing events
```mermaid
sequenceDiagram
participant Client
participant Sync Backend
Client->>Client: Commits events
Client->>Sync Backend: `push` request (new_local_events)
activate Sync Backend
Sync Backend->>Sync Backend: Process push request (validate, persist)
Sync Backend-->>Client: Push Success
deactivate Sync Backend
Note over Client: Client is in sync
```
### Rebasing
### Merge conflicts
- Merge conflict handling isn't implemented yet (see [this issue](https://github.com/livestorejs/livestore/issues/253)).
- Merge conflict detection and resolution will be based on the upcoming [facts system functionality](https://github.com/livestorejs/livestore/issues/254).
### Compaction
- Compaction isn't implemented yet (see [this issue](https://github.com/livestorejs/livestore/issues/136))
- Compaction will be based on the upcoming [facts system functionality](https://github.com/livestorejs/livestore/issues/254).
### Partitioning
- Currently LiveStore assumes a 1:1 mapping between an eventlog and a SQLite database.
- In the future, LiveStore aims to support multiple eventlogs (see [this issue](https://github.com/livestorejs/livestore/issues/255)).
## Design decisions / trade-offs
- Require a central sync backend to enforce a global total order of events.
- This means LiveStore can't be used in a fully decentralized/P2P manner.
- Do rebasing on the client side (instead of on the sync backend). This allows the user to have more control over the rebase process.
## Notes
- Rich text data is best handled via CRDTs (see [#263](https://github.com/livestorejs/livestore/issues/263))
## Further reading
- Distributed Systems lecture series by Martin Kleppmann: [YouTube playlist](https://www.youtube.com/playlist?list=PLeKd45zvjcDFUEv_ohr_HdUFe97RItdiB) / [lecture notes](https://www.cl.cam.ac.uk/teaching/2122/ConcDisSys/dist-sys-notes.pdf)
# [Server-side clients](https://dev.docs.livestore.dev/reference/syncing/server-side-clients/)
You can also use LiveStore on the server side e.g. via the `@livestore/adapter-node` adapter. This allows you to:
- have an up-to-date server-side SQLite database (read model)
- react to events / state changes on the server side (e.g. to send emails/push notifications)
- commit events on the server side (e.g. for sensitive/trusted operations)

Note about the schema: While the `events` schema needs to be shared across all clients, the `state` schema can be different for each client (e.g. to allow for a different SQLite table design on the server side).
## Example
## Further notes
### Cloudflare Workers
- The `@livestore/adapter-node` adapter doesn't yet work with Cloudflare Workers but you can follow [this issue](https://github.com/livestorejs/livestore/issues/266) for a Cloudflare adapter to enable this use case.
- Having a `@livestore/adapter-cf-worker` adapter could enable serverless server-side client scenarios.
# [Expo Adapter](https://dev.docs.livestore.dev/reference/platform-adapters/expo-adapter/)
## Notes on Android
- By default, Android requires `https` (including WebSocket connections) when communicating with a sync backend.
To allow for `http` / `ws`, you can run `expo install expo-build-properties` and add the following to your `app.json` (see [here](https://docs.expo.dev/versions/latest/sdk/build-properties/#pluginconfigtypeandroid) for more information):
```json
{
"expo": {
"plugins": [
"expo-build-properties",
{
"android": {
"usesCleartextTraffic": true
},
"ios": {}
}
]
}
}
```
## Resetting local persistence
When iterating locally you can ask the Expo adapter to drop the on-device state and eventlog databases before booting:
```ts
const resetPersistence = process.env.EXPO_PUBLIC_LIVESTORE_RESET === 'true'
const adapter = makePersistedAdapter({
storage: { subDirectory: 'dev' },
resetPersistence,
})
```
:::caution
Resetting persistence deletes all local LiveStore data for the configured store. This only clears data on the device and does not touch any connected sync backend. Make sure this flag is disabled in production builds.
:::
# [Vue integration for LiveStore](https://dev.docs.livestore.dev/reference/framework-integrations/vue-integration/)
The [vue-livestore](https://github.com/slashv/vue-livestore) package provides integration with Vue. It's currently in beta but aims to match feature parity with the React integration.
## API
### `LiveStoreProvider`
In order to use LiveStore with Vue, you need to wrap your application in a `LiveStoreProvider`.
```vue
```
### useClientDocument
**[!] The interface for useClientDocument is experimental and might change**
Since it's more common in Vue to work with a single writable ref (compared to `[state, setState]` in React), the `useClientDocument` composable for Vue tries to make that easier by directly returning a collection of refs.
The current implementation destructures all client state variables into the return object, which allows binding directly to `v-model` or editing the `.value` reactively.
```vue
```
## Usage with ...
### Vite
LiveStore and vue-livestore work with Vite out of the box.
### Nuxt.js
vue-livestore works out of the box with Nuxt if SSR is disabled; just wrap the main content in a `LiveStoreProvider`. Example repo upcoming.
## Technical notes
- vue-livestore uses the provider component pattern, similar to the React integration. In Vue the plugin pattern is more common, but it isn't clear that it's the most suitable structure for LiveStore in Vue. We might switch to the plugin pattern if we later find it more suitable, especially with regards to Nuxt support and supporting multiple stores.
# [Cloudflare Workers](https://dev.docs.livestore.dev/reference/syncing/sync-provider/cloudflare/)
The `@livestore/sync-cf` package provides a comprehensive LiveStore sync provider for Cloudflare Workers. It uses Durable Objects for connectivity and, by default, persists events in the Durable Object’s own SQLite. You can optionally use Cloudflare D1 instead. Multiple transports are supported to fit different deployment scenarios.
## Installation
```bash
pnpm add @livestore/sync-cf
```
## Transport Modes
The sync provider supports three transport protocols, each optimized for different use cases:
### WebSocket Transport (Recommended)
Real-time bidirectional communication with automatic reconnection and live pull support.
## `reference/syncing/cloudflare/client-ws.ts`
```ts filename="reference/syncing/cloudflare/client-ws.ts"
export const syncBackend = makeWsSync({
url: 'wss://sync.example.com',
})
```
### HTTP Transport
HTTP-based sync with polling for live updates. Requires the `enable_request_signal` compatibility flag.
## `reference/syncing/cloudflare/client-http.ts`
```ts filename="reference/syncing/cloudflare/client-http.ts"
export const syncBackend = makeHttpSync({
url: 'https://sync.example.com',
livePull: {
pollInterval: 3000, // Poll every 3 seconds
},
})
```
### Durable Object RPC Transport
Direct RPC communication between Durable Objects (internal use by `@livestore/adapter-cloudflare`).
## `reference/syncing/cloudflare/client-do-rpc.ts`
```ts filename="reference/syncing/cloudflare/client-do-rpc.ts"
declare const state: CfTypes.DurableObjectState
declare const syncBackendDurableObject: CfTypes.DurableObjectStub
export const syncBackend = makeDoRpcSync({
syncBackendStub: syncBackendDurableObject,
durableObjectContext: {
bindingName: 'CLIENT_DO',
durableObjectId: state.id.toString(),
},
})
```
## Client API Reference
### `makeWsSync(options)`
Creates a WebSocket-based sync backend client.
**Options:**
- `url` - WebSocket URL (supports `ws`/`wss` or `http`/`https` protocols)
- `webSocketFactory?` - Custom WebSocket implementation
- `ping?` - Ping configuration:
- `enabled?: boolean` - Enable/disable ping (default: `true`)
- `requestTimeout?: Duration` - Ping timeout (default: 10 seconds)
- `requestInterval?: Duration` - Ping interval (default: 10 seconds)
**Features:**
- Real-time live pull
- Automatic reconnection
- Connection status tracking
- Ping/pong keep-alive
## `reference/syncing/cloudflare/client-ws-options.ts`
```ts filename="reference/syncing/cloudflare/client-ws-options.ts"
export const syncBackend = makeWsSync({
url: 'wss://sync.example.com',
ping: {
enabled: true,
requestTimeout: 5000,
requestInterval: 15000,
},
})
```
### `makeHttpSync(options)`
Creates an HTTP-based sync backend client with polling for live updates.
**Options:**
- `url` - HTTP endpoint URL
- `headers?` - Additional HTTP headers
- `livePull?` - Live pull configuration:
- `pollInterval?: Duration` - Polling interval (default: 5 seconds)
- `ping?` - Ping configuration (same as WebSocket)
**Features:**
- HTTP request/response based
- Polling-based live pull
- Custom headers support
- Connection status via ping
## `reference/syncing/cloudflare/client-http-options.ts`
```ts filename="reference/syncing/cloudflare/client-http-options.ts"
export const syncBackend = makeHttpSync({
url: 'https://sync.example.com',
headers: {
Authorization: 'Bearer token',
'X-Custom-Header': 'value',
},
livePull: {
pollInterval: 2000, // Poll every 2 seconds
},
})
```
### `makeDoRpcSync(options)`
Creates a Durable Object RPC-based sync backend (for internal use).
**Options:**
- `syncBackendStub` - Durable Object stub implementing `SyncBackendRpcInterface`
- `durableObjectContext` - Context for RPC callbacks:
- `bindingName` - Wrangler binding name for the client DO
- `durableObjectId` - Client Durable Object ID
**Features:**
- Direct RPC communication
- Real-time live pull via callbacks
- Hibernation support
### `handleSyncUpdateRpc(payload)`
Handles RPC callback for live pull updates in Durable Objects.
## Server API Reference
### `makeDurableObject(options)`
Creates a sync backend Durable Object class.
**Options:**
- `onPush?` - Callback for push events: `(message, context) => void | Promise`
- `onPushRes?` - Callback for push responses: `(message) => void | Promise`
- `onPull?` - Callback for pull requests: `(message, context) => void | Promise`
- `onPullRes?` - Callback for pull responses: `(message) => void | Promise`
- `storage?` - Storage engine: `{ _tag: 'do-sqlite' } | { _tag: 'd1', binding: string }` (default: `do-sqlite`)
- `enabledTransports?` - Set of enabled transports: `Set<'http' | 'ws' | 'do-rpc'>`
- `otel?` - OpenTelemetry configuration:
- `baseUrl?` - OTEL endpoint URL
- `serviceName?` - Service name for traces
## `reference/syncing/cloudflare/do-sync-backend.ts`
```ts filename="reference/syncing/cloudflare/do-sync-backend.ts"
const hasUserId = (p: unknown): p is { userId: string } =>
typeof p === 'object' && p !== null && 'userId' in p
export class SyncBackendDO extends makeDurableObject({
onPush: async (message, { storeId, payload }) => {
console.log(`Push to store ${storeId}:`, message.batch)
// Custom business logic
if (hasUserId(payload)) {
await Promise.resolve()
}
},
onPull: async (_message, { storeId }) => {
console.log(`Pull from store ${storeId}`)
},
enabledTransports: new Set(['ws', 'http']), // Disable DO RPC
otel: {
baseUrl: 'https://otel.example.com',
serviceName: 'livestore-sync',
},
}) {}
```
### `makeWorker(options)`
Creates a complete Cloudflare Worker for the sync backend.
**Options:**
- `syncBackendBinding` - Durable Object binding name defined in `wrangler.toml`
- `validatePayload?` - Payload validation function: `(payload, context) => void | Promise`
- `enableCORS?` - Enable CORS headers (default: `false`)
`makeWorker` is a quick way to get started in simple demos. In most production workers you typically want to share routing logic with other endpoints, so prefer wiring up your own `fetch` handler and calling `handleSyncRequest` when you detect a sync request. A minimal example:
## `reference/syncing/cloudflare/worker-minimal.ts`
```ts filename="reference/syncing/cloudflare/worker-minimal.ts"
export default {
fetch: async (request: CfTypes.Request, env: Env, ctx: CfTypes.ExecutionContext) => {
const searchParams = matchSyncRequest(request)
if (searchParams !== undefined) {
return handleSyncRequest({
request,
searchParams,
env,
ctx,
syncBackendBinding: 'SYNC_BACKEND_DO',
})
}
// Custom routes, assets, etc.
return new Response('Not found', { status: 404 }) as unknown as CfTypes.Response
},
} satisfies CFWorker
```
### `reference/syncing/cloudflare/env.ts`
```ts filename="reference/syncing/cloudflare/env.ts"
export interface Env {
ADMIN_SECRET: string // Admin authentication
SYNC_BACKEND_DO: CfTypes.DurableObjectNamespace
}
```
## `reference/syncing/cloudflare/worker-makeWorker.ts`
```ts filename="reference/syncing/cloudflare/worker-makeWorker.ts"
export default makeWorker({
syncBackendBinding: 'SYNC_BACKEND_DO',
validatePayload: (payload, { storeId }) => {
// Simple token-based guard at connection time
const hasAuthToken = typeof payload === 'object' && payload !== null && 'authToken' in payload
if (!hasAuthToken) {
throw new Error('Missing auth token')
}
if ((payload as any).authToken !== 'insecure-token-change-me') {
throw new Error('Invalid auth token')
}
console.log(`Validated connection for store: ${storeId}`)
},
enableCORS: true,
})
```
### `handleSyncRequest(args)`
Handles sync backend HTTP requests in custom workers.
**Options:**
- `request` - The incoming request
- `searchParams` - Parsed sync request parameters
- `env` - Worker environment
- `ctx` - Worker execution context
- `syncBackendBinding` - Durable Object binding name defined in `wrangler.toml`
- `headers?` - Response headers
- `validatePayload?` - Payload validation function
## `reference/syncing/cloudflare/worker-handleSyncRequest.ts`
```ts filename="reference/syncing/cloudflare/worker-handleSyncRequest.ts"
export default {
fetch: async (request: CfTypes.Request, env: Env, ctx: CfTypes.ExecutionContext) => {
const searchParams = matchSyncRequest(request)
if (searchParams !== undefined) {
return handleSyncRequest({
request,
searchParams,
env,
ctx,
syncBackendBinding: 'SYNC_BACKEND_DO',
headers: { 'X-Custom': 'header' },
validatePayload: (payload, { storeId }) => {
// Custom validation logic
if (!(typeof payload === 'object' && payload !== null && 'authToken' in payload)) {
throw new Error('Missing auth token')
}
console.log('Validating store', storeId)
},
})
}
return new Response('Not found', { status: 404 }) as unknown as CfTypes.Response
},
} satisfies CFWorker
```
### `matchSyncRequest(request)`
Parses and validates sync request search parameters.
Returns the decoded search params or `undefined` if the request is not a LiveStore sync request.
## `reference/syncing/cloudflare/match-sync.ts`
```ts filename="reference/syncing/cloudflare/match-sync.ts"
declare const request: CfTypes.Request
const searchParams = matchSyncRequest(request)
if (searchParams !== undefined) {
const { storeId, payload, transport } = searchParams
console.log(`Sync request for store ${storeId} via ${transport}`)
console.log(payload)
}
```
## Configuration
### Wrangler Configuration
Configure your `wrangler.toml` for sync backend deployment (default: DO SQLite storage):
```toml
name = "livestore-sync"
main = "./src/worker.ts"
compatibility_date = "2025-05-07"
compatibility_flags = [
"enable_request_signal", # Required for HTTP streaming
]
[[durable_objects.bindings]]
name = "SYNC_BACKEND_DO"
class_name = "SyncBackendDO"
[[migrations]]
tag = "v1"
new_sqlite_classes = ["SyncBackendDO"]
[vars]
ADMIN_SECRET = "your-admin-secret"
```
To use D1 instead of DO SQLite, add a D1 binding and reference it from `makeDurableObject({ storage: { _tag: 'd1', binding: '...' } })`:
```toml
[[d1_databases]]
binding = "DB"
database_name = "livestore-sync"
database_id = "your-database-id"
[vars]
ADMIN_SECRET = "your-admin-secret"
```
### Environment Variables
Required environment bindings:
## Transport Protocol Details
LiveStore identifies sync requests purely by search parameters; the request path does not matter. Use `matchSyncRequest(request)` to detect sync traffic.
Search parameters:
| Param | Type | Required | Description |
| --- | --- | --- | --- |
| `storeId` | `string` | Yes | Target LiveStore identifier. |
| `transport` | `'ws' \| 'http'` | Yes | Transport protocol selector. |
| `payload` | JSON (URI-encoded) | No | Arbitrary JSON used for auth/tenant routing; validated in `validatePayload`. |
Examples (any path):
- WebSocket: `https://sync.example.com?storeId=abc&transport=ws` (must include `Upgrade: websocket`)
- HTTP: `https://sync.example.com?storeId=abc&transport=http`
Notes:
- For `transport=ws`, if the request is not a WebSocket upgrade, the backend returns `426 Upgrade Required`.
- `transport='do-rpc'` is internal for Durable Object RPC and not exposed via URL parameters.
## Data Storage
By default, events are stored in the Durable Object’s SQLite with tables following the pattern:
```
eventlog_{PERSISTENCE_FORMAT_VERSION}_{storeId}
```
You can opt into D1 with the same table shape. The persistence format version is automatically managed and incremented when the storage schema changes.
### Storage Engines
- DO SQLite (default)
- Pros: easiest deploy (no D1), data co-located with the DO, lowest latency
- Cons: not directly inspectable outside the DO; operational tooling must go through the DO
- D1 (optional)
- Pros: inspectable using D1 tools/clients; enables cross-store analytics outside DOs
- Cons: extra hop, JSON response size considerations; requires D1 provisioning
## Deployment
Deploy to Cloudflare Workers:
```bash
# Deploy the worker
npx wrangler deploy
# Create D1 database
npx wrangler d1 create livestore-sync
# Run migrations if needed
npx wrangler d1 migrations apply livestore-sync
```
## Local Development
Run locally with Wrangler:
```bash
# Start local development server
npx wrangler dev
# Access local D1 database
# Located at: .wrangler/state/d1/miniflare-D1DatabaseObject/XXX.sqlite
```
## Examples
### Basic WebSocket Client
## `reference/syncing/cloudflare/basic-ws-client.ts`
```ts filename="reference/syncing/cloudflare/basic-ws-client.ts"
makeWorker({
schema,
sync: {
backend: makeWsSync({
url: 'wss://sync.example.com',
}),
},
})
```
### `reference/syncing/cloudflare/schema.ts`
```ts filename="reference/syncing/cloudflare/schema.ts"
export const tables = {
todos: State.SQLite.table({
name: 'todos',
columns: {
id: State.SQLite.text({ primaryKey: true }),
text: State.SQLite.text({ default: '' }),
completed: State.SQLite.boolean({ default: false }),
deletedAt: State.SQLite.integer({ nullable: true, schema: Schema.DateFromNumber }),
},
}),
}
export const events = {
todoCreated: Events.synced({
name: 'v1.TodoCreated',
schema: Schema.Struct({ id: Schema.String, text: Schema.String }),
}),
todoCompleted: Events.synced({
name: 'v1.TodoCompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoUncompleted: Events.synced({
name: 'v1.TodoUncompleted',
schema: Schema.Struct({ id: Schema.String }),
}),
todoDeleted: Events.synced({
name: 'v1.TodoDeleted',
schema: Schema.Struct({ id: Schema.String, deletedAt: Schema.Date }),
}),
todoClearedCompleted: Events.synced({
name: 'v1.TodoClearedCompleted',
schema: Schema.Struct({ deletedAt: Schema.Date }),
}),
}
const materializers = State.SQLite.materializers(events, {
'v1.TodoCreated': ({ id, text }) => tables.todos.insert({ id, text, completed: false }),
'v1.TodoCompleted': ({ id }) => tables.todos.update({ completed: true }).where({ id }),
'v1.TodoUncompleted': ({ id }) => tables.todos.update({ completed: false }).where({ id }),
'v1.TodoDeleted': ({ id, deletedAt }) => tables.todos.update({ deletedAt }).where({ id }),
'v1.TodoClearedCompleted': ({ deletedAt }) => tables.todos.update({ deletedAt }).where({ completed: true }),
})
const state = State.SQLite.makeState({ tables, materializers })
export const schema = makeSchema({ events, state })
```
### Custom Worker with Authentication
## `reference/syncing/cloudflare/worker-auth.ts`
```ts filename="reference/syncing/cloudflare/worker-auth.ts"
export class SyncBackendDO extends makeDurableObject({
onPush: async (message, { storeId }) => {
// Log all sync events
console.log(`Store ${storeId} received ${message.batch.length} events`)
},
}) {}
const hasStoreAccess = (_userId: string, _storeId: string): boolean => true
export default makeWorker({
syncBackendBinding: 'SYNC_BACKEND_DO',
validatePayload: (payload, { storeId }) => {
if (!(typeof payload === 'object' && payload !== null && 'userId' in payload)) {
throw new Error('User ID required')
}
// Validate user has access to store
if (!hasStoreAccess((payload as any).userId as string, storeId)) {
throw new Error('Unauthorized access to store')
}
},
enableCORS: true,
})
```
### Multi-Transport Setup
## `reference/syncing/cloudflare/multi-transport.ts`
```ts filename="reference/syncing/cloudflare/multi-transport.ts"
type Transport = 'http' | 'ws' | 'do-rpc'
const getTransportFromContext = (ctx: unknown): Transport => {
if (typeof ctx === 'object' && ctx !== null && 'transport' in (ctx as any)) {
const t = (ctx as any).transport
if (t === 'http' || t === 'ws' || t === 'do-rpc') return t
}
return 'http'
}
export class SyncBackendDO extends makeDurableObject({
// Enable all transport modes
enabledTransports: new Set(['http', 'ws', 'do-rpc']),
onPush: async (message, context) => {
const transport = getTransportFromContext(context)
console.log(`Push via ${transport}:`, message.batch.length)
},
}) {}
```
# [Build your own sync provider](https://dev.docs.livestore.dev/reference/syncing/sync-provider/custom/)
It's very straightforward to implement your own sync provider. A sync provider implementation needs to do the following:
## Client-side
Implement the `SyncBackend` interface (running in the client) which describes the protocol for syncing events between the client and the server.
```ts
// Slightly simplified API (see packages/@livestore/common/src/sync/sync.ts for the full API)
export type SyncBackend = {
  pull: (cursor: EventSequenceNumber) => Stream<{ batch: LiveStoreEvent[] }, InvalidPullError>
  push: (batch: LiveStoreEvent[]) => Effect
}

// my-sync-backend.ts
const makeMySyncBackend = (args: { /* ... */ }) => {
  return {
    pull: (cursor) => {
      // ...
    },
    push: (batch) => {
      // ...
    },
  }
}

// my-app.ts
const adapter = makeAdapter({
  sync: {
    backend: makeMySyncBackend({ /* ... */ }),
  },
})
```
The actual implementation of those methods is left to the developer and mostly depends on the network protocol used to communicate between the client and the server.
Ideally this implementation considers the following:
- Network connectivity (offline, unstable connection, etc.)
- Ordering of events in case of out-of-order delivery
- Backoff and retry logic (see the sketch below)
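The following is a minimal sketch of the backoff point above, using plain Promises for brevity (the actual `SyncBackend` API is Effect-based, and all names here are illustrative):

```ts
// Hedged sketch: retry a failing push with exponential backoff (illustrative, not LiveStore API)
const pushWithRetry = async (push: () => Promise<void>, maxAttempts = 5): Promise<void> => {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await push()
    } catch (error) {
      if (attempt === maxAttempts - 1) throw error
      // Wait 100ms, 200ms, 400ms, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 100))
    }
  }
}
```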
## Server-side
Implement the actual sync backend protocol (running on the server). At minimum the sync backend needs to do the following (a minimal sketch of the push handling follows the list):
- For client `push` requests:
- Validate the batch of events
- Ensure the batch sequence numbers are in ascending order and larger than the sync backend head
- Further validation checks (e.g. schema-aware payload validation)
- Persist the events in the event store (implying a new sync backend head equal to the sequence number of the last pushed event)
- Return a success response
- It's important that the server only processes one push request at a time to ensure a total ordering of events.
- For client `pull` requests:
- Validate the cursor
- Query the events from the database
- Return the events to the client
- This can be done in a batch or streamed to the client
- `pull` requests can be handled in parallel by the server
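As a rough illustration of the push-side requirements above, here is a minimal in-memory sketch (all names are illustrative; a real backend would persist events to durable storage and serialize pushes behind a lock or queue):

```ts
// Hedged sketch of server-side push handling (in-memory, illustrative only)
type LiveStoreEvent = { seqNum: number; parentSeqNum: number; name: string; args: unknown }

const eventStore: LiveStoreEvent[] = []
let head = -1 // sequence number of the last persisted event

// Must be invoked strictly one push at a time to preserve the total ordering of events
const handlePush = (batch: LiveStoreEvent[]): { success: true } => {
  if (batch.length === 0) return { success: true }
  // Reject batches whose sequence numbers don't ascend directly from the current head
  let expected = head + 1
  for (const event of batch) {
    if (event.seqNum !== expected) {
      throw new Error(`Invalid batch: expected seqNum ${expected}, got ${event.seqNum}`)
    }
    expected += 1
  }
  // Persist the events; the new head is the seqNum of the last pushed event
  eventStore.push(...batch)
  head = batch[batch.length - 1]!.seqNum
  return { success: true }
}
```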
## General recommendations
It's recommended to study the existing sync backend implementations for inspiration.
# [S2](https://dev.docs.livestore.dev/reference/syncing/sync-provider/s2/)
The `@livestore/sync-s2` package lets you sync LiveStore with the official S2 backend (s2.dev).
- Package: `pnpm add @livestore/sync-s2`
- Protocol: HTTP push/pull, live pull via SSE
## Architecture
```mermaid
graph LR
  subgraph Browser
    LS[LiveStore Client]
  end
  subgraph "Your Server"
    AP[API Proxy '/api/s2']
  end
  subgraph "S2 Cloud"
    S2[S2 Backend '*.s2.dev']
  end
  LS -->|"GET (pull) POST (push) HEAD (ping)"| AP
  AP -->|"Authenticated Requests"| S2
  style LS fill:#e1f5fe
  style AP fill:#fff3e0
  style S2 fill:#f3e5f5
```
The API proxy handles:
- **Business logic**: Application-specific concerns such as rate limiting, auth, and logging
- **S2 Stream Management**: Creates basins and streams as needed
- **S2 Request Translation**: Converts LiveStore sync operations to authenticated S2 API calls
## Client Setup
Basic usage in your worker/server code:
### `reference/syncing/s2/client-setup.ts`
```ts filename="reference/syncing/s2/client-setup.ts"
// Import path is illustrative and may vary by LiveStore version
import { makeSyncBackend } from '@livestore/sync-s2'

const backend = makeSyncBackend({
  endpoint: '/api/s2', // Your API proxy endpoint
  // more options...
})
```
## API Proxy Implementation
S2 requires authentication and stream management that can't be handled directly from the browser. You'll need to implement an API proxy on your server that:
1. **Handles authentication** with S2 using your access token
2. **Manages basins and streams** (creates them if they don't exist)
3. **Proxies requests** between LiveStore and S2
Your proxy needs three endpoints: `HEAD` (health check/ping), `GET` (pull), and `POST` (push).
### Using Helper Functions
The `@livestore/sync-s2` package provides helper functions to simplify the proxy implementation:
#### `reference/syncing/s2/api-proxy-implementation.ts`
```ts filename="reference/syncing/s2/api-proxy-implementation.ts"
// Import paths are illustrative and may vary by package version
import { Schema } from '@livestore/livestore'
import * as S2 from '@livestore/sync-s2'
import * as S2Helpers from '@livestore/sync-s2/helpers'

// Configure S2 connection
const s2Config: S2Helpers.S2Config = {
  basin: process.env.S2_BASIN ?? 'your-basin',
  token: process.env.S2_ACCESS_TOKEN!, // Your S2 access token
}

// HEAD /api/s2 - Health check/ping
export async function HEAD() {
  return new Response(null, { status: 200 })
}

// GET /api/s2 - Pull events
export async function GET(request: Request) {
  const url = new URL(request.url)
  const args = S2.decodePullArgsFromSearchParams(url.searchParams)
  const streamName = S2.makeS2StreamName(args.storeId)

  // Ensure basin and stream exist
  await S2Helpers.ensureBasin(s2Config)
  await S2Helpers.ensureStream(s2Config, streamName)

  // Build request with appropriate headers and URL
  // Note: buildPullRequest handles cursor+1 conversion internally
  const { url: pullUrl, headers } = S2Helpers.buildPullRequest({ config: s2Config, args })
  const res = await fetch(pullUrl, { headers })

  // For live pulls (SSE), proxy the response
  if (args.live === true) {
    if (!res.ok) {
      return S2Helpers.sseKeepAliveResponse()
    }
    return new Response(res.body, {
      status: 200,
      headers: { 'content-type': 'text/event-stream' },
    })
  }

  // For regular pulls
  if (!res.ok) {
    return S2Helpers.emptyBatchResponse()
  }
  const batch = await res.text()
  return new Response(batch, {
    headers: { 'content-type': 'application/json' },
  })
}

// POST /api/s2 - Push events
export async function POST(request: Request) {
  const requestBody = await request.json()
  const parsed = Schema.decodeUnknownSync(S2.ApiSchema.PushPayload)(requestBody)
  const streamName = S2.makeS2StreamName(parsed.storeId)

  // Ensure basin and stream exist
  await S2Helpers.ensureBasin(s2Config)
  await S2Helpers.ensureStream(s2Config, streamName)

  // Build push request with proper formatting
  const pushRequests = S2Helpers.buildPushRequests({
    config: s2Config,
    storeId: parsed.storeId,
    batch: parsed.batch,
  })
  for (const pushRequest of pushRequests) {
    const res = await fetch(pushRequest.url, {
      method: 'POST',
      headers: pushRequest.headers,
      body: pushRequest.body,
    })
    if (!res.ok) {
      return S2Helpers.errorResponse('Push failed', 500)
    }
  }
  return S2Helpers.successResponse()
}
```
### Cursor Semantics
The S2 sync provider uses a cursor that represents the **last processed record**:
- The cursor points to the last S2 sequence number we've seen
- S2's `seq_num` parameter expects where to start reading from (inclusive)
- The helper functions automatically handle the `+1` conversion: `seq_num = cursor + 1` (sketched below)
- When starting from the beginning, cursor is `'from-start'` which maps to `seq_num = 0`
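To make the conversion concrete, here is a hedged sketch; `cursorToSeqNum` is illustrative and not an actual export of the package:

```ts
// Illustrative cursor → seq_num conversion, per the semantics above
type Cursor = number | 'from-start'

const cursorToSeqNum = (cursor: Cursor): number => (cursor === 'from-start' ? 0 : cursor + 1)

cursorToSeqNum('from-start') // 0 — read the stream from the beginning
cursorToSeqNum(41) // 42 — record 41 was already processed, so resume at 42
```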
### Important Considerations
- **Stream provisioning**: The helper functions provide `ensureBasin()` and `ensureStream()` to handle creation automatically.
- **Error handling**: The helpers include fallback responses (`emptyBatchResponse()`, `sseKeepAliveResponse()`) to maintain stream continuity during errors.
- **Authentication**: Store your S2 access token securely (e.g., environment variables).
- **Rate limiting**: Consider implementing rate limiting to protect your S2 quota.
- **Response helpers**: Use the provided response helpers (`successResponse()`, `errorResponse()`) for consistent API responses.
## Live Pull (SSE)
The S2 provider supports live pulls over Server-Sent Events (SSE). When `live: true` is passed to `pull`, the client does the following (example frames are shown after this list):
- Immediately emits one page (possibly empty) with `pageInfo: NoMore`.
- Parses SSE frames robustly (multi-line `data:` support) and reacts to typed events:
- `event: batch` → parses `data` as S2 `ReadBatch` and emits items.
- `event: ping` → ignored; keeps the stream alive.
- `event: error` → mapped to `InvalidPullError`.
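For orientation, the frames on the wire might look like the following (the `data` payload of a `batch` frame is S2's `ReadBatch` JSON and is elided here):

```text
event: batch
data: { ...S2 ReadBatch JSON... }

event: ping
data:
```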
## Implementation Notes
### Data Storage & Encoding
LiveStore leverages S2 streams for durable event storage. Understanding how LiveStore concepts map to S2 primitives helps when reasoning about the persistence layer, though direct manipulation of the streams is discouraged.
#### LiveStore → S2 Mapping
**Store to Stream**: Each LiveStore `storeId` maps to exactly one S2 stream. The stream name is derived from the `storeId` after sanitization to meet S2 naming requirements.
**Event Encoding**: LiveStore events (`AnyEncodedGlobal`) are JSON-serialized and stored as the `body` field of S2 records. Each event contains:
- `name`: Event type identifier
- `args`: Event-specific payload data
- `seqNum`: LiveStore's global event sequence number
- `parentSeqNum`: Previous event's sequence number for ordering
- `clientId`: Origin client identifier
- `sessionId`: Session that created the event
**Record Structure**: When pushed to S2, each LiveStore event becomes one S2 record.
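A hypothetical record body, assembled only from the fields listed above (all values are made up):

```ts
// Hypothetical S2 record body: one LiveStore event, JSON-serialized (values are illustrative)
const recordBody = JSON.stringify({
  name: 'v1.TodoCreated', // event type identifier
  args: { id: 'a1b2c3', text: 'Buy milk' }, // event-specific payload
  seqNum: 5, // LiveStore's global event sequence number
  parentSeqNum: 4, // previous event's sequence number
  clientId: 'client-abc', // origin client
  sessionId: 'session-123', // session that created the event
})
```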
#### Sequence Number Handling
**LiveStore and S2 maintain completely independent sequence numbering systems**:
- **LiveStore's `seqNum`**: Stored inside the JSON event payload (starts at 0). Used for logical event ordering and cursor management within LiveStore.
- **S2's `seq_num`**: Assigned by S2 to each record in the stream (also starts at 0). Used solely for stream positioning when reading records.
These are **two separate numbering systems** that happen to both start at 0. While they often align numerically (first event is LiveStore seqNum 0, stored in S2 record with seq_num 0), this is coincidental rather than a direct mapping. The sync provider:
- Preserves LiveStore's sequence numbers unchanged in the event payload
- Uses S2's seq_num only for querying records from the stream (e.g., "read from position X")
- Never relies on S2's seq_num for LiveStore's logical event ordering
#### Technical Details
**Format**: The provider uses `s2-format: raw` when communicating with S2, treating record bodies as UTF-8 JSON strings.
**Headers**: S2 record headers are not utilized; all LiveStore metadata is contained within the JSON body.
**Batch Operations**: Multiple events can be pushed in a single batch, with each event becoming a separate S2 record while maintaining order.
#### Important Note
**Direct stream manipulation is strongly discouraged**. Always interact with S2 streams through LiveStore's sync provider to ensure:
- Proper event encoding/decoding
- Sequence number integrity
- Cursor management consistency
- Compatibility with LiveStore's sync protocol
Bypassing LiveStore to modify S2 streams directly may corrupt the event log and break synchronization.
# [ElectricSQL](https://dev.docs.livestore.dev/reference/syncing/sync-provider/electricsql/)
The `@livestore/sync-electric` package lets you sync LiveStore with ElectricSQL.
- Package: `pnpm add @livestore/sync-electric`
- Protocol: HTTP push/pull with long-polling support
## Architecture
```mermaid
graph LR
  subgraph Browser
    LS[LiveStore Client]
  end
  subgraph "Your Server"
    AP[API Proxy '/api/electric']
  end
  subgraph "Infrastructure"
    ES[Electric Server ':30000']
    PG[(Postgres DB)]
  end
  LS -->|"GET (pull)"| AP
  LS -->|"POST (push)"| AP
  AP -->|"Pull requests (proxied)"| ES
  AP -->|"Push events (direct write)"| PG
  ES -->|"Listen"| PG
  style LS fill:#e1f5fe
  style AP fill:#fff3e0
  style ES fill:#f3e5f5
  style PG fill:#e8f5e9
```
The API proxy has several responsibilities:
- **Push Events**: Writes events directly to Postgres tables (bypasses Electric)
- **Pull Requests**: Proxies to Electric server for reading events
- **Authentication**: Implements your custom auth logic
- **Database Management**: Initializes tables and manages connections
## Client Setup
Basic usage in your worker/server code:
```ts
// Import path is illustrative and may vary by LiveStore version
import { makeSyncBackend } from '@livestore/sync-electric'

const backend = makeSyncBackend({
  endpoint: '/api/electric', // Your API proxy endpoint
  ping: { enabled: true },
})
```
## API Proxy Implementation
ElectricSQL requires an API proxy on your server to handle authentication and database operations. Your proxy needs two endpoints: `GET` (pull, proxied through Electric) and `POST` (push, direct database write).
### Minimal Implementation Example
```ts
// Import paths are illustrative; `makeDb` stands in for your own Postgres access helper
import { Schema } from '@livestore/livestore'
import { ApiSchema, makeElectricUrl } from '@livestore/sync-electric'
import { makeDb } from './db.ts'

const electricHost = 'http://localhost:30000' // Your Electric server

// GET /api/electric - Pull events (proxied through Electric)
export async function GET(request: Request) {
  const searchParams = new URL(request.url).searchParams
  const { url, storeId, needsInit } = makeElectricUrl({
    electricHost,
    searchParams,
    apiSecret: 'your-electric-secret',
  })

  // Add your authentication logic here
  // if (!isAuthenticated(request)) {
  //   return new Response('Unauthorized', { status: 401 })
  // }

  // Initialize database tables if needed
  if (needsInit) {
    const db = makeDb(storeId)
    await db.migrate()
    await db.disconnect()
  }

  // Proxy pull request to Electric server for reading
  return fetch(url)
}

// POST /api/electric - Push events (direct database write)
export async function POST(request: Request) {
  const payload = await request.json()
  const parsed = Schema.decodeUnknownSync(ApiSchema.PushPayload)(payload)

  // Write events directly to Postgres table (bypasses Electric)
  const db = makeDb(parsed.storeId)
  await db.createEvents(parsed.batch)
  await db.disconnect()

  return Response.json({ success: true })
}
```
### Important Considerations
- **Database Setup**: Ensure your Postgres database is configured for Electric (e.g. logical replication enabled via `wal_level = logical`)
- **Authentication**: Implement proper auth checks in your proxy
- **Error Handling**: Add robust error handling for database operations
- **Connection Management**: Properly manage database connections
## Example
See the
[todomvc-sync-electric](https://github.com/livestorejs/livestore/tree/main/examples/web-todomvc-sync-electric)
example for a complete implementation.
## How the sync provider works
The initial version of the ElectricSQL sync provider uses the server-side
Postgres database as the store for the mutation event history.
Events are stored in a table following the pattern
`eventlog_${PERSISTENCE_FORMAT_VERSION}_${storeId}`, where
`PERSISTENCE_FORMAT_VERSION` is a number that is incremented whenever the
`sync-electric` internal storage format changes. For example, format version
`5` and store ID `my-app` would yield the table name `eventlog_5_my-app`.
## F.A.Q.
### Can I use my existing Postgres database with the sync provider?
Unless the database is already modelled as an eventlog following the
`@livestore/sync-electric` storage format, you won't be able to easily use your
existing database with this sync backend implementation.
We might support this use case in the future; you can follow the progress
[here](https://github.com/livestorejs/livestore/issues/286). Please share any
feedback you have on this use case there.
### Why do I need an API proxy in front of the ElectricSQL server?
The API proxy is used to handle pull/push requests between LiveStore and ElectricSQL,
allowing you to implement custom logic such as:
- Authentication and authorization
- Rate limiting and quota management
- Database initialization and migration
- Custom business logic and validation