Migrating Gitea from FreeNAS Jail (SQLite) to TrueNAS 25.04 Docker App (PostgreSQL)

This week I migrated from FreeNAS 12 all the way to TrueNAS 25.04, one update at a time. Went great. The main challenge was migrating my Gitea repositories to the new system, which was brutal. So I'm leaving here the procedure that ended up working after a LOT of back and forth between me and Claude Opus 4.6 in extended thinking, because I'm definitely not a database specialist. Hope this helps!

## The Scenario

- **Source:** Gitea ~1.12 running in a FreeNAS 12 iocage jail with SQLite backend

- **Target:** Gitea 1.25.4 running as a TrueNAS 25.04 Docker app with PostgreSQL backend

- **Challenges:** 13+ minor version gap, database backend change (SQLite → PostgreSQL), different file layout, TrueNAS Docker app lifecycle quirks

## What You Need

- SSH access to your TrueNAS box

- Access to the old FreeNAS jail filesystem (mounted or on disk)

- The new Gitea app installed via TrueNAS 25.04 app catalog (but stopped)

- Patience

## Placeholder Paths Used in This Guide

Throughout this guide, replace these placeholders with your actual paths:

- **`<old_jail_root>`** — The root filesystem of your old FreeNAS jail (e.g., `/mnt/tank0/iocage/jails/gitea/root`)

- **`<old_jail_gitea_data>`** — The Gitea data directory inside your old FreeNAS jail (e.g., `/mnt/tank0/iocage/jails/gitea/root/var/db/gitea`)

- **`<new_gitea_app>`** — The TrueNAS app directory for your new Gitea install (e.g., `/mnt/tank0/apps/gitea`)

## Key Paths

| What | Old Jail (FreeNAS) | New Docker App (TrueNAS 25.04) |
|------|-------------------|-------------------------------|
| Config | `<old_jail_root>/usr/local/etc/gitea/conf/app.ini` | `<new_gitea_app>/config/app.ini` |
| Repositories | `<old_jail_gitea_data>/gitea-repositories/` | `<new_gitea_app>/data/git/repositories/` |
| LFS | `<old_jail_gitea_data>/gitea-lfs/` | `<new_gitea_app>/data/git/lfs/` |
| Avatars | `<old_jail_gitea_data>/data/avatars/` | `<new_gitea_app>/data/data/avatars/` |
| SQLite DB | `<old_jail_gitea_data>/gitea.db` | `<new_gitea_app>/data/data/gitea.db` |
| Postgres data | N/A | `<new_gitea_app>/database/` |

## Important Notes Before You Start

- The TrueNAS Docker app runs as **UID:GID 568:568**, not root and not 1000.

- TrueNAS injects database config via environment variables (`GITEA__database__*`), which **override** `app.ini`. You cannot simply change `app.ini` to switch to SQLite — the env vars win (see the check just after this list). This is why we use a temporary container for the SQLite migration step.

- When TrueNAS stops an app, it tears down all containers. You cannot `docker start` a container from a stopped app — you must use `midclt call app.start <appname>`.

- Container names may change between restarts. Always verify with `docker ps`.

- `pgloader` does **not** work well for this migration. It maps SQLite types incorrectly (booleans become bigints with string values, text becomes text instead of varchar) and chokes on schema changes between Gitea versions. Don't waste your time.
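
If you want to see exactly which settings TrueNAS is injecting (and therefore which `app.ini` keys get overridden), you can dump the running Gitea container's environment. This is just a sanity check; it assumes the app's Gitea container name contains `gitea-gitea`, the same filter used in the commands later in this guide:

```bash
# Show the env vars TrueNAS injects (Gitea's GITEA__section__KEY convention);
# anything listed here takes precedence over the matching key in app.ini
docker exec $(docker ps --filter "name=gitea-gitea" -q) env | grep '^GITEA__'
```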

## Step 1: Check Your Old Gitea Version

If the binary is gone, check the migration version in the SQLite database:

```bash
sqlite3 <old_jail_gitea_data>/gitea.db "SELECT * FROM version;"
```

Migration version 118 = Gitea ~1.12.x. This matters because the schema must be migrated through all intermediate versions.
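
If the old jail (or a snapshot of it) is still bootable, you can also ask the binary directly. Note that the jail's `gitea` executable is a FreeBSD binary, so this has to be run on the old FreeNAS box, not on the new Linux-based TrueNAS, and it assumes the binary is still on the jail's PATH:

```bash
# Run this from a shell inside the old jail on the FreeNAS box
gitea --version
```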

## Step 2: Copy Files to the New Location

```bash
OLD_JAIL="<old_jail_gitea_data>"
NEW_DATA="<new_gitea_app>/data"

# Back up the new install's data directory
cp -a "$NEW_DATA" "${NEW_DATA}.bak"

# Keep a pristine copy of the old SQLite DB
cp "$OLD_JAIL/gitea.db" <new_gitea_app>/gitea.db.original

# Copy repositories
rsync -av "$OLD_JAIL/gitea-repositories/" "$NEW_DATA/git/repositories/"

# Copy LFS objects
rsync -av "$OLD_JAIL/gitea-lfs/" "$NEW_DATA/git/lfs/"

# Copy avatars (note: old format uses numeric IDs, new uses hashes — these won't display, but copy anyway)
rsync -av "$OLD_JAIL/data/avatars/" "$NEW_DATA/data/avatars/"

# Place SQLite DB where Gitea can find it
cp "$OLD_JAIL/gitea.db" "$NEW_DATA/data/gitea.db"

# Fix ownership for the container's UID
chown -R 568:568 "$NEW_DATA"
```
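
As an optional sanity check (a sketch that assumes the standard `owner/repo.git` layout), compare the number of bare repositories on both sides before moving on:

```bash
# Count bare repos in the old and new locations; the two numbers should match
find "$OLD_JAIL/gitea-repositories" -maxdepth 2 -type d -name '*.git' | wc -l
find "$NEW_DATA/git/repositories" -maxdepth 2 -type d -name '*.git' | wc -l
```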

## Step 3: Migrate the SQLite Schema (Old → Current)

The SQLite database has the old schema (e.g., migration 118 for Gitea 1.12). We need to run all migrations up to 1.25.4's schema. We do this with a temporary container that reads a custom `app.ini` pointing to SQLite, bypassing TrueNAS's env var overrides.

Create a temporary config:

```bash
mkdir -p /tmp/gitea-migration-config
cp <new_gitea_app>/config/app.ini /tmp/gitea-migration-config/app.ini
```

Edit `/tmp/gitea-migration-config/app.ini` — change the `[database]` section to:

```ini
[database]
DB_TYPE = sqlite3
PATH = /var/lib/gitea/data/gitea.db
```

Fix permissions so the container can read it:

```bash
chown -R 568:568 /tmp/gitea-migration-config
```

Run the migration:

```bash
docker run --rm -it \
  --user 568:568 \
  -v <new_gitea_app>/data:/var/lib/gitea \
  -v /tmp/gitea-migration-config:/etc/gitea \
  gitea/gitea:1.25.4-rootless \
  gitea migrate
```

This runs all schema migrations from your old version up to 1.25.4 against the SQLite database. Watch the output for errors. If it completes cleanly, you now have a fully migrated SQLite database.
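
To confirm the migration actually bumped the schema, re-read the `version` table the same way as in Step 1, this time against the copy inside the app's data directory; the number should now be much higher than your starting value (118 in my case):

```bash
# The migration number should now correspond to the 1.25.x schema (well above 118)
sqlite3 <new_gitea_app>/data/data/gitea.db "SELECT * FROM version;"
```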

## Step 4: Let Gitea Create a Clean PostgreSQL Schema

Start the app normally so Gitea boots with an empty PostgreSQL database and creates the correct schema:

```bash
midclt call app.start gitea
```

Wait for everything to come up (15–20 seconds), then verify Gitea is healthy:

```bash
docker logs --tail 30 $(docker ps --filter "name=gitea-gitea" --format '{{.Names}}' | head -1)
```

You should see `Listen: http://0.0.0.0:30008` and healthy pings. Gitea has now created all tables with correct PostgreSQL types.
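
Optionally, confirm the tables really exist in PostgreSQL before moving on. This is a sketch that assumes the database container's name contains `postgres` and that both the database and the role are named `gitea` (the defaults in my TrueNAS Gitea app); adjust if yours differ:

```bash
# List a few of the tables Gitea just created in the empty PostgreSQL database
docker exec $(docker ps --filter "name=postgres" -q | head -1) \
  psql -U gitea -d gitea -c '\dt' | head -20
```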

## Step 5: Transfer Data from SQLite to PostgreSQL

Stop the Gitea container but keep PostgreSQL running:

```bash
docker stop $(docker ps --filter "name=gitea-gitea" -q)
```

Verify PostgreSQL is still up:

```bash
docker ps | grep postgres
```

Now run the migration script. This Python script reads from your migrated SQLite DB, matches only columns that exist in both schemas, handles boolean casting (SQLite 0/1 → PostgreSQL true/false), skips removed tables and columns, and inserts data into the Gitea-created PostgreSQL schema.

Save the script below as `/tmp/migrate_gitea.py`:

```python
#!/usr/bin/env python3
"""
Migrate Gitea data from SQLite (already migrated to current schema)
into a fresh Postgres database (schema created by Gitea on first boot).

Handles:
- Column mismatches (only copies columns that exist in both)
- Boolean casting (SQLite 0/1 → Postgres true/false)
- Type coercion (text→varchar, bigint→int, etc.)
- Tables that exist in one but not the other (skipped)
"""
import sqlite3
import psycopg2
import sys

# Config — adjust these to match your setup
SQLITE_PATH = "/data/gitea.db"
PG_HOST = "postgres"
PG_PORT = 5432
PG_DB = "gitea"
PG_USER = "gitea"
PG_PASS = "YOUR_POSTGRES_PASSWORD"


def get_sqlite_tables(sqlite_conn):
    cur = sqlite_conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%' ORDER BY name"
    )
    return [r[0] for r in cur.fetchall()]


def get_sqlite_columns(sqlite_conn, table):
    cur = sqlite_conn.execute(f'PRAGMA table_info("{table}")')
    return [r[1] for r in cur.fetchall()]


def get_pg_tables(pg_conn):
    cur = pg_conn.cursor()
    cur.execute("""
        SELECT table_name FROM information_schema.tables
        WHERE table_schema = 'public' AND table_type = 'BASE TABLE'
        ORDER BY table_name
    """)
    return [r[0] for r in cur.fetchall()]


def get_pg_columns(pg_conn, table):
    cur = pg_conn.cursor()
    cur.execute("""
        SELECT column_name FROM information_schema.columns
        WHERE table_schema = 'public' AND table_name = %s
        ORDER BY ordinal_position
    """, (table,))
    return [r[0] for r in cur.fetchall()]


def get_pg_column_types(pg_conn, table):
    cur = pg_conn.cursor()
    cur.execute("""
        SELECT column_name, data_type FROM information_schema.columns
        WHERE table_schema = 'public' AND table_name = %s
    """, (table,))
    return {r[0]: r[1] for r in cur.fetchall()}


def cast_value(value, pg_type):
    if value is None:
        return None
    if pg_type == 'boolean':
        if isinstance(value, str):
            return value.lower() in ('true', '1', 'yes')
        return bool(value)
    if pg_type in ('integer', 'smallint'):
        try:
            return int(value)
        except (ValueError, TypeError):
            return 0
    if pg_type == 'bigint':
        try:
            return int(value)
        except (ValueError, TypeError):
            return 0
    return value


def migrate():
    print("Connecting to SQLite...")
    sqlite_conn = sqlite3.connect(SQLITE_PATH)
    sqlite_conn.row_factory = sqlite3.Row

    print("Connecting to Postgres...")
    pg_conn = psycopg2.connect(
        host=PG_HOST, port=PG_PORT, dbname=PG_DB,
        user=PG_USER, password=PG_PASS
    )
    pg_conn.autocommit = False

    sqlite_tables = set(get_sqlite_tables(sqlite_conn))
    pg_tables = set(get_pg_tables(pg_conn))
    common_tables = sorted(sqlite_tables & pg_tables)
    skipped_sqlite = sorted(sqlite_tables - pg_tables)
    skipped_pg = sorted(pg_tables - sqlite_tables)

    if skipped_sqlite:
        print(f"\nTables in SQLite but NOT in Postgres (skipping): {skipped_sqlite}")
    if skipped_pg:
        print(f"\nTables in Postgres but NOT in SQLite (empty, OK): {skipped_pg}")

    print(f"\nMigrating {len(common_tables)} common tables...\n")

    total_rows = 0
    errors = []

    for table in common_tables:
        sqlite_cols = get_sqlite_columns(sqlite_conn, table)
        pg_cols = get_pg_columns(pg_conn, table)
        pg_types = get_pg_column_types(pg_conn, table)

        common_cols = [c for c in sqlite_cols if c in pg_cols]
        if not common_cols:
            print(f" {table}: no common columns, skipping")
            continue

        skipped_cols = set(sqlite_cols) - set(pg_cols)
        if skipped_cols:
            print(f" {table}: skipping columns not in Postgres: {skipped_cols}")

        col_list = ', '.join(f'"{c}"' for c in common_cols)
        rows = sqlite_conn.execute(f'SELECT {col_list} FROM "{table}"').fetchall()
        if not rows:
            print(f" {table}: 0 rows (empty)")
            continue

        cur = pg_conn.cursor()
        try:
            cur.execute(f'TRUNCATE TABLE "{table}" CASCADE')
        except Exception as e:
            print(f" {table}: TRUNCATE failed: {e}")
            pg_conn.rollback()
            errors.append((table, str(e)))
            continue

        placeholders = ', '.join(['%s'] * len(common_cols))
        insert_sql = f'INSERT INTO "{table}" ({col_list}) VALUES ({placeholders})'

        row_count = 0
        batch = []
        for row in rows:
            values = []
            for i, col in enumerate(common_cols):
                pg_type = pg_types.get(col, 'text')
                values.append(cast_value(row[i], pg_type))
            batch.append(tuple(values))
            row_count += 1
            if len(batch) >= 1000:
                try:
                    cur.executemany(insert_sql, batch)
                except Exception as e:
                    print(f" {table}: INSERT batch failed: {e}")
                    pg_conn.rollback()
                    errors.append((table, str(e)))
                    row_count = 0
                    break
                batch = []

        if batch:
            try:
                cur.executemany(insert_sql, batch)
            except Exception as e:
                print(f" {table}: INSERT final batch failed: {e}")
                pg_conn.rollback()
                errors.append((table, str(e)))
                row_count = 0

        if row_count > 0:
            try:
                pg_conn.commit()
                print(f" {table}: {row_count} rows migrated")
                total_rows += row_count
            except Exception as e:
                print(f" {table}: COMMIT failed: {e}")
                pg_conn.rollback()
                errors.append((table, str(e)))

    # Reset sequences
    print("\nResetting sequences...")
    cur = pg_conn.cursor()
    cur.execute("""
        SELECT sequence_name FROM information_schema.sequences
        WHERE sequence_schema = 'public'
    """)
    sequences = [r[0] for r in cur.fetchall()]
    for seq in sequences:
        parts = seq.rsplit('_', 1)
        if len(parts) >= 2 and parts[-1] == 'seq':
            table_col = parts[0]
            for table in common_tables:
                if table_col == f"{table}_id":
                    try:
                        cur.execute(f"""
                            SELECT setval('"{seq}"',
                                COALESCE((SELECT MAX(id) FROM "{table}"), 0) + 1,
                                false)
                        """)
                        pg_conn.commit()
                    except Exception:
                        pg_conn.rollback()
                    break

    print(f"\n{'='*60}")
    print(f"Migration complete: {total_rows} total rows across {len(common_tables)} tables")
    if errors:
        print(f"\nERRORS ({len(errors)}):")
        for table, err in errors:
            print(f" {table}: {err}")
    else:
        print("No errors!")

    sqlite_conn.close()
    pg_conn.close()


if __name__ == '__main__':
    migrate()
```

Run it in a temporary Python container on the same Docker network as PostgreSQL:

```bash
docker run --rm -it \
  --network $(docker network ls --filter "name=gitea" --format '{{.Name}}' | head -1) \
  -v <new_gitea_app>/data/data:/data \
  -v /tmp:/tmp \
  python:3.11-slim \
  bash -c "pip install psycopg2-binary && python /tmp/migrate_gitea.py"
```

Expected output: all tables migrated, zero errors.
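
Before restarting the app, a quick spot check is worthwhile: compare row counts for an important table (here `repository`) between the SQLite source and PostgreSQL. Same assumptions as above about the container name and credentials:

```bash
# Repository row counts should match between the SQLite source and PostgreSQL
sqlite3 <new_gitea_app>/data/data/gitea.db "SELECT COUNT(*) FROM repository;"
docker exec $(docker ps --filter "name=postgres" -q | head -1) \
  psql -U gitea -d gitea -c "SELECT COUNT(*) FROM repository;"
```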

## Step 6: Restart and Verify

```bash
midclt call app.stop gitea
sleep 10
midclt call app.start gitea
```

Wait 20–30 seconds, then check the logs for a clean startup:

```bash
docker logs --tail 30 $(docker ps --filter "name=gitea-gitea" --format '{{.Names}}' | head -1)
```

Log in at `http://<your-nas-ip>:30008` with your old credentials. Verify your repositories, users, and data are intact.
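
It's also worth letting Gitea check itself once it's up. `gitea doctor check` runs its default consistency checks; this assumes the same container-name filter as the other commands in this guide:

```bash
# Run Gitea's built-in consistency checks against the new PostgreSQL backend
docker exec $(docker ps --filter "name=gitea-gitea" -q) gitea doctor check
```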

## Step 7: Post-Migration Cleanup

Generate a new `SECRET_KEY` (the default is empty, which is insecure):

```bash
docker exec $(docker ps --filter "name=gitea-gitea" -q) gitea generate secret SECRET_KEY
```

Add the output to `<new_gitea_app>/config/app.ini` under `[security]`.

Remove temporary files:

```bash
rm -rf /tmp/pgloader-gitea.load
rm -f /tmp/migrate_gitea.py
rm -rf /tmp/gitea-migration-config
rm -f <new_gitea_app>/data/data/gitea.db
rm -f <new_gitea_app>/gitea.db.original
rm -rf <new_gitea_app>/data.bak
rm -f <new_gitea_app>/config/app.ini.bak
docker rmi dimitri/pgloader:latest
docker rmi python:3.11-slim
```

## What Didn't Work (So You Don't Have To Try)

### pgloader (SQLite → PostgreSQL directly)

pgloader maps SQLite types incorrectly for Gitea's schema. Booleans become `bigint` columns with string values `"true"`/`"false"`, `text` stays as `text` instead of `varchar`, and it can't handle columns/tables that were renamed or removed between Gitea versions. When Gitea tries to query these mangled types, you get errors like `invalid input syntax for type bigint: "true"` in an infinite retry loop.

### `gitea doctor convert`

Despite what some guides suggest, `gitea doctor convert` in Gitea 1.25.4 only converts MySQL charset encoding (utf8 → utf8mb4) or MSSQL varchar → nvarchar. It does **not** convert between database backends (SQLite → PostgreSQL).

### Direct schema + data migration with pgloader using `data only` mode

Even in `data only` mode with Gitea's correct schema, pgloader fails because the SQLite dump contains columns that no longer exist in the new schema (e.g., `hook_task.repo_id`) and tables that were removed (e.g., `u2f_registration`, `oauth2_session`). pgloader has no mechanism to skip missing columns.

## What Worked

The winning strategy was a three-phase approach:

1. **`gitea migrate` on SQLite** — Run the new Gitea binary against the old SQLite database to apply all schema migrations in place. This updates the SQLite schema from 1.12 to 1.25.4 without changing the database backend.

2. **Let Gitea create the PostgreSQL schema** — Boot Gitea normally with an empty PostgreSQL database. It creates all tables with correct types, constraints, and indexes.

3. **Custom Python script for data transfer** — A script that reads from SQLite, inspects both schemas, copies only matching columns, properly casts types (especially booleans), skips removed tables/columns, and inserts into the Gitea-created PostgreSQL tables. This handles all the edge cases that pgloader cannot.

## Notes

- Avatar files won't carry over properly. Old Gitea used numeric user IDs as filenames; new Gitea uses content hashes. Users can re-upload avatars through the web UI.

- Mirror repos will try to fetch on their original schedules. If URLs or SSH keys changed, they'll fail but won't break anything. Update them through the Gitea admin UI.

- Keep the old jail filesystem around until you've thoroughly verified everything works.
