Devoured - April 30, 2026
Running SQLite in the browser with sql.js and WASM — a practical guide with Google Drive sync (5 minute read)


A tutorial demonstrates running SQLite entirely in the browser via WebAssembly, persisting the database as a portable binary file on Google Drive instead of using IndexedDB or proprietary sync services.

What: The architecture uses sql.js (SQLite compiled to WASM) to run a full SQL database client-side, serializing it to a single binary file that syncs to the user's Google Drive with the minimal drive.file OAuth scope. Local state saves to localStorage after each mutation, Drive sync debounces by 10 seconds, and conflicts default to Drive as the source of truth.
Why it matters: Unlike IndexedDB or cloud-synced stores that lock data inside browser storage or proprietary formats, a SQLite file is truly portable—users can open it in any SQLite tool, email it, or move it between devices, giving them actual ownership of their data rather than dependency on a service provider.
Takeaway: The article includes working code for sql.js initialization with Vite, query wrappers, schema migrations with version tracking, localStorage persistence, and complete Google Drive upload/download/conflict logic that you can adapt for personal data apps.
Deep dive
  • The sql.js library compiles SQLite to a ~1.5MB WASM binary that must be copied to your public folder and lazily loaded on first database access to avoid blocking initial page render
  • Databases serialize to a Uint8Array representing the entire SQLite file, which becomes the atomic unit for all persistence operations—localStorage saves it as a JSON array, Drive stores it as application/octet-stream
  • The migration pattern uses a schema_version table with a single integer and a dictionary of migration functions keyed by version number, running pending migrations in order before any other database operations
  • localStorage persistence converts the Uint8Array to a regular array for JSON serialization (SQLite files with thousands of rows typically stay under 5MB, within localStorage limits for personal data tools)
  • Requesting navigator.storage.persist() on first load is critical—without durable storage, browsers can evict localStorage under storage pressure, making Drive sync the only backup
  • Google Drive integration uses the drive.file OAuth scope which only grants access to files this specific app created, not the user's entire Drive, making it appropriate for privacy-sensitive applications
  • The sync decision logic on login compares Drive's modifiedTime against local last_synced_at timestamp, downloading from Drive if it's newer or uploading local state if it's the first sync
  • Drive uploads debounce by 10 seconds after mutations to avoid hammering the API during active editing sessions, batching multiple rapid changes into a single upload
  • Conflict handling deliberately prefers Drive as source of truth rather than attempting complex merge logic, under the assumption that the most recently synced device has the canonical state
  • The PKCE OAuth flow for obtaining the access_token is mentioned but deferred to a follow-up article in the series
  • A Service Worker can cache the WASM binary after first load, making subsequent initializations instant despite the 1.5MB size
  • Query execution requires explicit statement preparation, binding, stepping through results, and freeing—the article provides wrappers (runQuery, execSQL, getOne) to abstract this boilerplate
  • The Origin Private File System is suggested as an alternative to localStorage for use cases where database size could exceed 5MB, though localStorage suffices for most personal data applications
  • This architecture is demonstrated in production at OvertimeIQ but presented as a general pattern applicable to personal finance tools, health tracking, or any app where user data portability matters
Decoder
  • WASM (WebAssembly): Binary instruction format that runs compiled code in browsers at near-native speed
  • sql.js: SQLite database engine compiled to WebAssembly, allowing full SQL databases to run client-side in browsers
  • IndexedDB: Browser-native NoSQL storage API that stores data in browser-internal formats not easily portable outside the browser
  • Uint8Array: JavaScript typed array representing binary data as 8-bit unsigned integers, used here to serialize the SQLite file
  • PKCE: Proof Key for Code Exchange, a secure OAuth flow for public clients like browser apps that can't store secrets
  • drive.file scope: Minimal Google Drive OAuth permission that only accesses files the requesting app created, not the entire Drive
  • Origin Private File System: Browser API for storing large files in a sandboxed filesystem partition with better performance than localStorage
Original article

Most tutorials on client-side data storage reach for IndexedDB, localStorage, or a third-party sync service. This one goes somewhere different: a real SQLite database, running as WebAssembly in the browser, with the database file living on the user's own Google Drive.

This is the setup behind OvertimeIQ — but everything in this article stands alone as a practical reference. You don't need to care about overtime tracking for any of this to be useful.

By the end, you'll know how to:

  • Initialize sql.js and run real SQL in the browser
  • Persist the database across page reloads via localStorage
  • Upload and download the database file from Google Drive
  • Handle sync conflicts correctly
  • Protect against data corruption on interrupted uploads

Why SQLite in the browser?

Before we write any code, it's worth asking why you'd reach for SQLite instead of IndexedDB or a cloud-synced store.

The answer is portability. A SQLite database is a single binary file. You can open it on any device, in any SQLite-compatible tool, without installing anything. You can attach it to an email, drop it in Dropbox, or — as we'll do here — store it on Google Drive. The user owns a file, not a schema locked inside a browser's internal storage.

For apps where user data portability matters — personal finance tools, health tracking, anything sensitive — this is a meaningful architectural choice, not just a curiosity.

The trade-off: sql.js ships a ~1.5MB WASM binary. We'll deal with that below.


Setting up sql.js

Install the package:

npm install sql.js

The WASM binary needs to be accessible at a URL your code can load. Copy it into your public folder at build time:

// vite.config.js
import { defineConfig } from 'vite'
import { viteStaticCopy } from 'vite-plugin-static-copy'

export default defineConfig({
  plugins: [
    viteStaticCopy({
      targets: [
        {
          src: 'node_modules/sql.js/dist/sql-wasm.wasm',
          dest: ''
        }
      ]
    })
  ]
})

Now initialize sql.js. This is async — the WASM binary has to load before you can do anything:

// lib/db.js
import initSqlJs from 'sql.js'

let db = null

export async function initDB(existingBuffer = null) {
  const SQL = await initSqlJs({
    locateFile: file => `/${file}` // points to /sql-wasm.wasm in public/
  })

  if (existingBuffer) {
    // Restore from a saved buffer (localStorage or Drive download)
    db = new SQL.Database(new Uint8Array(existingBuffer))
  } else {
    // Fresh database
    db = new SQL.Database()
  }

  return db
}

Lazy loading matters here. Don't initialize the database on app load. Initialize it on first access. With a Service Worker caching the WASM binary after the first load, subsequent loads are instant — but you still don't want to block your UI render on a 1.5MB download for users on their first visit.
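
That Service Worker caching can be sketched in a few lines. This is my own minimal sketch, not code from the article — the cache name and the .wasm matching rule are assumptions:

```javascript
// sw.js — minimal sketch of caching the sql.js WASM binary.
// Cache name and URL test are my own assumptions.
const WASM_CACHE = 'sqljs-wasm-v1'

// Pure helper so the matching rule is easy to test in isolation
function isWasmRequest(url) {
  return new URL(url, 'https://example.invalid').pathname.endsWith('.wasm')
}

// The guard lets this file be loaded outside a worker context without errors
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('fetch', (event) => {
    if (!isWasmRequest(event.request.url)) return
    event.respondWith(
      caches.open(WASM_CACHE).then(async (cache) => {
        const cached = await cache.match(event.request)
        if (cached) return cached // cache hit: no network, instant init
        const response = await fetch(event.request)
        cache.put(event.request, response.clone())
        return response
      })
    )
  })
}
```

A cache-first strategy is safe here because the WASM binary is versioned with your build — bump the cache name when you upgrade sql.js.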


Running SQL

Raw sql.js requires explicit statement preparation, binding, stepping through results, and freeing. A few small wrappers hide that boilerplate:

// For SELECT — returns an array of result objects
export function runQuery(sql, params = []) {
  const stmt = db.prepare(sql)
  stmt.bind(params)
  const rows = []
  while (stmt.step()) {
    rows.push(stmt.getAsObject())
  }
  stmt.free()
  return rows
}

// For INSERT / UPDATE / DELETE — no return value
export function execSQL(sql, params = []) {
  const stmt = db.prepare(sql)
  stmt.run(params)
  stmt.free()
}

// Convenience wrapper for single-row queries
export function getOne(sql, params = []) {
  const rows = runQuery(sql, params)
  return rows.length > 0 ? rows[0] : null
}

Usage is exactly what you'd expect from a SQL library:

execSQL(
  'INSERT INTO logs (job_id, date, start_time, end_time, duration_hours, location) VALUES (?, ?, ?, ?, ?, ?)',
  [1, '2025-04-14', '20:00', '23:30', 3.5, 'office']
)

const logs = runQuery(
  'SELECT * FROM logs WHERE date >= ? ORDER BY date DESC',
  ['2025-01-01']
)

Schema migrations

You need a migration runner. The pattern I use: a schema_version table with a single integer, and a dictionary of migration functions keyed by version number.

const MIGRATIONS = {
  1: (db) => {
    db.run(`
      CREATE TABLE IF NOT EXISTS jobs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        hourly_rate REAL NOT NULL,
        weekend_multiplier REAL DEFAULT 1.5,
        holiday_multiplier REAL DEFAULT 2.0,
        work_start TEXT NOT NULL,
        work_end TEXT NOT NULL,
        color TEXT DEFAULT '#3B8BD4',
        is_default INTEGER DEFAULT 0,
        created_at TEXT NOT NULL
      )
    `)
    db.run(`
      CREATE TABLE IF NOT EXISTS logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        job_id INTEGER REFERENCES jobs(id),
        date TEXT NOT NULL,
        start_time TEXT NOT NULL,
        end_time TEXT NOT NULL,
        crosses_midnight INTEGER DEFAULT 0,
        duration_hours REAL NOT NULL,
        location TEXT NOT NULL,
        notes TEXT,
        created_at TEXT NOT NULL,
        updated_at TEXT NOT NULL
      )
    `)
    db.run('CREATE INDEX IF NOT EXISTS idx_logs_date ON logs(date)')
    // The Drive sync code later in this article reads and writes
    // settings.drive_file_id and settings.last_synced_at, so the
    // settings table needs to exist from version 1
    db.run(`
      CREATE TABLE IF NOT EXISTS settings (
        id INTEGER PRIMARY KEY CHECK (id = 1),
        drive_file_id TEXT,
        last_synced_at TEXT
      )
    `)
    db.run('INSERT OR IGNORE INTO settings (id) VALUES (1)')
  }
}

export async function runMigrations(db) {
  db.run('CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)')

  const row = getOne('SELECT version FROM schema_version')
  const currentVersion = row ? row.version : 0

  const pendingVersions = Object.keys(MIGRATIONS)
    .map(Number)
    .filter(v => v > currentVersion)
    .sort((a, b) => a - b)

  for (const version of pendingVersions) {
    MIGRATIONS[version](db)
    // Re-check for the row each time: after the first INSERT the table
    // is no longer empty, so later migrations must UPDATE instead
    if (getOne('SELECT version FROM schema_version')) {
      execSQL('UPDATE schema_version SET version = ?', [version])
    } else {
      execSQL('INSERT INTO schema_version (version) VALUES (?)', [version])
    }
  }
}
}

Run migrations immediately after initializing the database, before anything else touches it.
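
When the schema evolves, you add entries to the dictionary and the runner picks up whatever is pending. A small sketch of the ordering rule, pulled out as a pure function — the helper name and the version-2 tags column are my own inventions, not the article's schema:

```javascript
// The version-ordering rule from runMigrations, isolated so it can be
// sanity-checked without a database: run every migration above the
// current version, lowest first
function pendingVersions(migrations, currentVersion) {
  return Object.keys(migrations)
    .map(Number)
    .filter(v => v > currentVersion)
    .sort((a, b) => a - b)
}

// Hypothetical two-version history — the tags column is my invention
const exampleMigrations = {
  1: (db) => db.run('-- initial schema, as above'),
  2: (db) => db.run('ALTER TABLE logs ADD COLUMN tags TEXT')
}

// A database already at version 1 only needs migration 2
const pending = pendingVersions(exampleMigrations, 1)
```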


Serializing to Uint8Array

This is the key operation that makes everything else work. sql.js can export the entire database state as a Uint8Array — a binary blob that is identical to what SQLite would write to disk.

export function serializeDB() {
  return db.export() // Returns Uint8Array
}

That Uint8Array is your database file. Everything that follows — localStorage persistence, Drive upload, Drive download — is just moving that blob around.


localStorage persistence

After every write operation, serialize and save:

const DB_STORAGE_KEY = 'otiq_db'

export function saveDB() {
  const buffer = serializeDB()
  // Convert Uint8Array to a regular array for JSON serialization
  localStorage.setItem(DB_STORAGE_KEY, JSON.stringify(Array.from(buffer)))
  // Trigger the Drive upload debounce (see below)
  scheduleDriveUpload()
}

export function loadFromLocalStorage() {
  const stored = localStorage.getItem(DB_STORAGE_KEY)
  if (!stored) return null
  return new Uint8Array(JSON.parse(stored))
}

On app load, check localStorage first. If there's a saved buffer, restore from it. Then compare with Drive to decide whether to download a newer version.

Storage size note: A SQLite file with thousands of rows will likely stay well under 5MB — comfortably within localStorage limits. If your use case could grow very large, consider using the Origin Private File System instead, but for personal data tools localStorage is generally fine. Always call navigator.storage.persist() on first load to request durable storage — without it, browsers can evict localStorage under storage pressure.

async function requestDurableStorage() {
  if (navigator.storage && navigator.storage.persist) {
    const granted = await navigator.storage.persist()
    if (!granted) {
      // Show a warning banner — Drive sync is the backup
      showStorageWarning()
    }
  }
}
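
One caveat with the JSON-array encoding used in saveDB: each byte becomes several characters of text, so the stored string is roughly three to four times the size of the database file. If that ever pushes against the localStorage quota, base64 is more compact (about 1.33x). A sketch — the helper names are mine; btoa and atob are standard browser globals:

```javascript
// More compact alternative to JSON.stringify(Array.from(buffer)).
// Helper names are my own sketch, not from the article.
function bufferToBase64(buffer) {
  let binary = ''
  const CHUNK = 0x8000 // stay under the argument-count limit of fromCharCode
  for (let i = 0; i < buffer.length; i += CHUNK) {
    binary += String.fromCharCode(...buffer.subarray(i, i + CHUNK))
  }
  return btoa(binary)
}

function base64ToBuffer(b64) {
  const binary = atob(b64)
  const buffer = new Uint8Array(binary.length)
  for (let i = 0; i < binary.length; i++) buffer[i] = binary.charCodeAt(i)
  return buffer
}

// Drop-in usage in saveDB / loadFromLocalStorage:
//   localStorage.setItem(DB_STORAGE_KEY, bufferToBase64(serializeDB()))
//   const buffer = base64ToBuffer(localStorage.getItem(DB_STORAGE_KEY))
```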

Google Drive as cloud sync

The Drive setup requires Google OAuth with the drive.file scope — the minimal scope that grants access only to files this specific app created. It cannot read other Drive files. This is the right choice for privacy-sensitive apps.

I'll cover the full PKCE OAuth flow in the next article in this series. For now, assume you have a valid access_token.

Finding the database file

On login, search for an existing database file:

async function findDBFile(accessToken) {
  // The query string must be URL-encoded; trashed=false skips files
  // the user has deleted but Drive hasn't purged yet
  const query = encodeURIComponent("name='overtimeiq.db' and trashed=false")
  const response = await fetch(
    `https://www.googleapis.com/drive/v3/files?q=${query}&fields=files(id,modifiedTime)`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  )
  const data = await response.json()
  return data.files?.[0] ?? null // { id, modifiedTime } or null
}

Creating the file (first time)

async function createDBFile(accessToken, dbBuffer) {
  const metadata = {
    name: 'overtimeiq.db',
    mimeType: 'application/octet-stream'
  }

  const formData = new FormData()
  formData.append('metadata', new Blob([JSON.stringify(metadata)], { type: 'application/json' }))
  formData.append('file', new Blob([dbBuffer], { type: 'application/octet-stream' }))

  const response = await fetch(
    'https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&fields=id',
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${accessToken}` },
      body: formData
    }
  )
  const data = await response.json()
  return data.id // Store this file ID in settings.drive_file_id
}

Downloading the file

async function downloadDBFile(accessToken, fileId) {
  const response = await fetch(
    `https://www.googleapis.com/drive/v3/files/${fileId}?alt=media`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  )
  const buffer = await response.arrayBuffer()
  return new Uint8Array(buffer)
}

The sync decision logic

On every app load after login, you need to decide: use the local database, or download from Drive?

async function syncOnLogin(accessToken) {
  const driveFile = await findDBFile(accessToken)

  if (!driveFile) {
    // First time — upload local DB and store the file ID
    const buffer = loadFromLocalStorage() ?? serializeDB()
    const fileId = await createDBFile(accessToken, buffer)
    execSQL('UPDATE settings SET drive_file_id = ? WHERE id = 1', [fileId])
    execSQL('UPDATE settings SET last_synced_at = ? WHERE id = 1', [new Date().toISOString()])
    return
  }

  const driveModifiedTime = new Date(driveFile.modifiedTime).getTime()
  const localSyncedAt = getOne('SELECT last_synced_at FROM settings WHERE id = 1')?.last_synced_at
  const localTime = localSyncedAt ? new Date(localSyncedAt).getTime() : 0

  const diff = Math.abs(driveModifiedTime - localTime)

  if (diff < 30_000) {
    // Within 30 seconds — same-device multi-tab edge case, no action
    return
  }

  if (driveModifiedTime > localTime) {
    // Drive is newer — download and replace
    const buffer = await downloadDBFile(accessToken, driveFile.id)
    await reinitializeFromBuffer(buffer) // Re-init sql.js with the new buffer
    showToast('Synced from Drive')
  } else {
    // Local is newer — upload
    await uploadDBToDrive(accessToken, driveFile.id)
  }

  execSQL('UPDATE settings SET last_synced_at = ? WHERE id = 1', [new Date().toISOString()])
}

Conflict resolution policy: When in doubt, prefer the Drive copy — Drive is the source of truth. If the modifiedTime comparison is inconclusive (e.g., clock skew between devices), take the Drive copy and show a toast: "Synced from Drive — local changes from this session may have been overwritten."
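
The decision rule can be isolated as a pure function, which makes the three outcomes easy to reason about and test. A sketch — the function name is mine; the 30-second tolerance is the article's:

```javascript
// Mirrors the syncOnLogin logic above. Returns 'none' (same-device
// multi-tab edge case), 'download' (Drive newer), or 'upload' (local
// newer). The function name is my own sketch.
function decideSync(driveModifiedMs, localSyncedMs, skewToleranceMs = 30_000) {
  if (Math.abs(driveModifiedMs - localSyncedMs) < skewToleranceMs) {
    return 'none' // within tolerance: likely the same device, no action
  }
  return driveModifiedMs > localSyncedMs ? 'download' : 'upload'
}
```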


The upload safety pattern

Never let an upload hold the live overtimeiq.db name until it has fully succeeded. The code below uploads the new content under a temporary name first, then renames it back once the upload completes. A browser crash, network interruption, or error mid-upload should never leave a file named overtimeiq.db holding corrupt data.

async function uploadDBToDrive(accessToken, fileId) {
  const buffer = serializeDB()

  // Step 1: Upload to temp file
  const tempMetadata = { name: 'overtimeiq_tmp.db' }
  const formData = new FormData()
  formData.append('metadata', new Blob([JSON.stringify(tempMetadata)], { type: 'application/json' }))
  formData.append('file', new Blob([buffer], { type: 'application/octet-stream' }))

  const uploadResponse = await fetch(
    `https://www.googleapis.com/upload/drive/v3/files/${fileId}?uploadType=multipart`,
    {
      method: 'PATCH',
      headers: { Authorization: `Bearer ${accessToken}` },
      body: formData
    }
  )

  if (!uploadResponse.ok) throw new Error('Upload failed')

  // Step 2: Rename temp file to live file atomically
  await fetch(
    `https://www.googleapis.com/drive/v3/files/${fileId}`,
    {
      method: 'PATCH',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ name: 'overtimeiq.db' })
    }
  )
}

If Step 1 succeeds but Step 2 fails, the new content is safely on Drive but still carries the temp name. Since the file ID never changes, the next sync's timestamp comparison will catch the discrepancy and the next upload will run both steps again, restoring the live name.


Debouncing the Drive upload

You don't want to upload to Drive on every keypress or every individual log entry mutation. Debounce it:

let driveUploadTimeout = null

export function scheduleDriveUpload() {
  if (driveUploadTimeout) clearTimeout(driveUploadTimeout)
  driveUploadTimeout = setTimeout(async () => {
    const accessToken = getAccessToken() // From your auth store
    const fileId = getOne('SELECT drive_file_id FROM settings WHERE id = 1')?.drive_file_id
    if (accessToken && fileId) {
      await uploadDBToDrive(accessToken, fileId)
      execSQL('UPDATE settings SET last_synced_at = ? WHERE id = 1', [new Date().toISOString()])
    }
  }, 10_000) // 10 seconds after the last write
}

The localStorage write happens synchronously on every mutation — data is safe locally the instant you write it. The Drive upload is fire-and-forget with a 10-second debounce so a bulk import (100 rows at once) only triggers one upload.


Putting it all together

The initialization sequence on app load:

async function initializeApp() {
  // 1. Try to restore from localStorage
  const storedBuffer = loadFromLocalStorage()

  // 2. Initialize sql.js with the stored buffer (or fresh)
  const db = await initDB(storedBuffer)

  // 3. Run schema migrations
  await runMigrations(db)

  // 4. Seed defaults if this is the first launch
  if (!getOne('SELECT id FROM jobs WHERE id = 1')) {
    seedDefaultJob()
    seedHolidays()
  }

  // 5. Request durable storage
  await requestDurableStorage()

  // 6. If authenticated, sync with Drive
  const accessToken = getStoredAccessToken()
  if (accessToken) {
    await syncOnLogin(accessToken)
  }
}

What this doesn't cover

This article focused on the storage and sync layer. Two things worth a separate deep dive:

The Google OAuth PKCE flow — how to get the access_token and refresh_token without a client secret, and how to silently refresh the token mid-session. That's the next article in this series.

The midnight rate calculation — how to correctly split a shift that crosses midnight across two different rate multipliers, including the December 31 → January 1 edge case. I'll cover that in a later article on the earnings engine.


The case for this architecture

The pattern here — SQLite on the user's cloud storage, managed entirely client-side — works well for a specific category of app: personal data tools where the data is sensitive, the user count is small, and data portability is a first-class feature.

It's not the right choice for collaborative tools, apps with large binary assets, or anything requiring server-side processing of the data. But for personal productivity software, financial tracking, health logging, or any domain where "your data should be yours" is a meaningful promise — this architecture delivers it genuinely, not as marketing copy.

The database is a file. The user can open it in DB Browser for SQLite today. They'll be able to open it in twenty years. That kind of portability is hard to promise with any other approach.