# Upgrading Atelier
This guide covers upgrading an existing Atelier installation to a newer release.
## Release notes — what’s in this version
Before upgrading, read the release notes for the version you’re moving to. They live alongside the binaries on Azure Blob Storage and are reachable without any GitHub login:
```sh
https://tryatelier.blob.core.windows.net/tryatelier/latest/release-notes.md        # most recent release
https://tryatelier.blob.core.windows.net/tryatelier/v0.9.7-beta/release-notes.md   # pinned to a specific tag
```

Most browsers display markdown as plain text; `curl <url> | less` works in a terminal. Every tag from v0.9.7-beta onwards has a corresponding release-notes.md — release workflows that don’t include one fail before any binaries ship.
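For example, from a terminal:

```sh
curl -sL https://tryatelier.blob.core.windows.net/tryatelier/latest/release-notes.md | less
```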
## Upgrade paths
Two paths are supported, each suited to a different network situation:
- **GHCR mode** — `kubectl set image` against `ghcr.io/atelier-project/atelier-core:vX.Y.Z` and `atelier-ui:vX.Y.Z`. Requires the cluster to reach `ghcr.io`. Recommended for all v0.9.8-beta+ clusters.
- **Tarball mode** — import `atelier-core-image.tar.gz` / `atelier-ui-image.tar.gz` / `mcp-fetch-image.tar.gz` into k3s containerd, then rollout-restart. Only works for upgrading clusters originally installed before v0.9.8-beta (post-Phase-2A manifests reference versioned `ghcr.io/...` tags that a tarball import doesn’t satisfy). Versioned offline upgrade is tracked in the offline-install follow-up epic.
The same `upgrade.sh` script handles both — it picks the mode automatically based on whether the first argument looks like a version tag (`vX.Y.Z` or `vX.Y.Z-suffix`).
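A minimal sketch of that dispatch, assuming the tag shapes above (the shipped script may implement it differently):

```sh
# Hypothetical sketch of upgrade.sh's mode selection — not the shipped script.
case "${1:-}" in
  v[0-9]*.[0-9]*.[0-9]*) mode=ghcr ;;     # vX.Y.Z or vX.Y.Z-suffix → GHCR mode
  *)                     mode=tarball ;;  # anything else, or no argument → tarball mode
esac
echo "mode: $mode"
```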
## Prerequisites
- An existing Atelier install (any released version) running on a node you can SSH to.
- Shell access to that node with permission to run `sudo k3s ...` and `kubectl` against the local cluster (`KUBECONFIG` defaults to `/etc/rancher/k3s/k3s.yaml`).
- For GHCR mode: outbound HTTPS to `ghcr.io` from the cluster node. No GitHub account, no PAT, no pull secret needed — the platform packages are public.
- For tarball mode: enough disk space for the downloaded tarballs (~200 MB total) and a way to get them onto the node (download directly, `scp`, USB stick, etc.).
**About the download URLs.** Release artifacts and release notes are mirrored to `https://tryatelier.blob.core.windows.net/tryatelier/` so testers can `curl` them anonymously. `latest/` always points at the most recent release; pin a specific version by swapping `latest` for the tag (e.g. `v0.9.7-beta`).
## GHCR mode — recommended
```sh
# Run on the cluster node, in any directory.
curl -Lo upgrade.sh https://tryatelier.blob.core.windows.net/tryatelier/latest/upgrade.sh
chmod +x upgrade.sh
./upgrade.sh v0.9.5-beta
```

What this does, in order:
1. Detects `v0.9.5-beta` looks like a version tag → enters GHCR mode.
2. Runs `kubectl set image deployment/atelier-core atelier-core=ghcr.io/atelier-project/atelier-core:v0.9.5-beta -n atelier`.
3. Repeats for `atelier-ui`.
4. Waits for both Deployments to roll out (180-second timeout per Deployment).
5. Reports completion.
The user-facing UI may briefly show a “reconnecting” indicator while the new pods come up — typically a few seconds. App data is unaffected; running app pods are not restarted.
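To watch the pods cycle during the rollout (optional, plain kubectl):

```sh
kubectl get pods -n atelier -w
```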
### Targeting individual components
```sh
./upgrade.sh v0.9.5-beta core   # core only
./upgrade.sh v0.9.5-beta ui     # ui only
```

### What about mcp-fetch?
From v0.9.8-beta onwards, atelier-core’s catalog bootstrap deploys mcp-fetch from `ghcr.io/atelier-project/atelier-mcp-fetch:<core-version>` directly — the version pin tracks the running core, so a v0.9.8-beta core deploys v0.9.8-beta mcp-fetch. Fresh installs and re-creations of the deployment use this path automatically.
`upgrade.sh` still does not rewrite an existing mcp-fetch deployment. Clusters that came up before v0.9.8-beta retain their old `registry.atelier.local/mcp-fetch:latest` deployment after upgrading core/ui — that keeps working but doesn’t move to GHCR. To migrate an existing cluster’s mcp-fetch, delete and re-deploy the MCP server via Settings → MCP Servers; the catalog will recreate it with the GHCR ref.
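To check which ref your cluster currently runs (the Deployment name `mcp-fetch` is an assumption here; confirm with `kubectl get deploy -n atelier` if it differs):

```sh
kubectl get deploy mcp-fetch -n atelier \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```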
## Tarball mode — offline / air-gapped
**Compatibility note.** Tarball-mode upgrade only works on clusters originally installed before v0.9.8-beta. v0.9.8-beta and later install platform deployments with `image: ghcr.io/atelier-project/...:vX.Y.Z` — a tarball import lands as `registry.atelier.local/...:latest`, so a rollout-restart won’t pick it up. Versioned offline upgrade and offline fresh-install are tracked in the offline-install follow-up epic.
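Unsure which side of the cutoff your cluster is on? The image refs tell you: GHCR refs mean v0.9.8-beta or later, `registry.atelier.local` refs mean a pre-v0.9.8-beta install.

```sh
kubectl get deploy -n atelier \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'
```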
```sh
# Run on the cluster node, in any directory.
curl -Lo upgrade.sh https://tryatelier.blob.core.windows.net/tryatelier/latest/upgrade.sh
curl -Lo atelier-core-image.tar.gz https://tryatelier.blob.core.windows.net/tryatelier/latest/atelier-core-image.tar.gz
curl -Lo atelier-ui-image.tar.gz https://tryatelier.blob.core.windows.net/tryatelier/latest/atelier-ui-image.tar.gz
curl -Lo mcp-fetch-image.tar.gz https://tryatelier.blob.core.windows.net/tryatelier/latest/mcp-fetch-image.tar.gz
chmod +x upgrade.sh
./upgrade.sh
```

(For genuinely air-gapped clusters, swap the `curl` commands for whatever transport you use to get files in — `scp`, USB, internal artifact store, etc. The script only cares that the four files end up in the same directory.)
What this does, in order:
1. No version argument → enters tarball mode.
2. For each tarball found alongside the script: `gunzip` → `sudo k3s ctr -n k8s.io images import` to load it into containerd (a minimal sketch follows this list).
3. For mcp-fetch: also pushes the image to the in-cluster registry (`localhost:32000`) so atelier-core can pull it on demand for the MCP catalog.
4. `kubectl rollout restart` on the affected Deployments.
5. Waits for rollouts.
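A minimal sketch of step 2, assuming the filenames listed above (hypothetical; the shipped script may differ):

```sh
# Hypothetical sketch of the per-tarball import — not the shipped upgrade.sh.
dir="$(cd "$(dirname "$0")" && pwd)"    # tarballs are resolved relative to the script
for f in "$dir"/*-image.tar.gz; do
  gunzip -kf "$f"                       # keep the .gz, emit the .tar alongside
  sudo k3s ctr -n k8s.io images import "${f%.gz}"
done
```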
Manifests use `imagePullPolicy: IfNotPresent`, so the rollout-restart picks up the freshly-imported image without needing to change the image tag.
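You can confirm the policy on your cluster before restarting:

```sh
kubectl get deploy atelier-core -n atelier \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}{"\n"}'
```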
## Rolling back
For both modes, the cleanest rollback is `kubectl rollout undo`:

```sh
kubectl rollout undo deployment/atelier-core -n atelier
kubectl rollout undo deployment/atelier-ui -n atelier
kubectl rollout status deployment/atelier-core -n atelier
kubectl rollout status deployment/atelier-ui -n atelier
```

This reverts each Deployment to the previous ReplicaSet — same effect as re-running the upgrade with the previous version tag.
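If you want to see what “previous” resolves to first, the revision history is available with:

```sh
kubectl rollout history deployment/atelier-core -n atelier
```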
To roll back to an arbitrary earlier version:
```sh
./upgrade.sh v0.9.4-beta   # GHCR mode
# or
./upgrade.sh               # tarball mode (with the older tarballs alongside)
```

## Verifying the upgrade
```sh
# Pods running the new images:
kubectl get pods -n atelier -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# Recent atelier-core logs:
kubectl logs -n atelier deployment/atelier-core --tail=50

# Confirm the portal is reachable:
curl -sS -o /dev/null -w "%{http_code}\n" http://atelier.home.arpa/
```

## Troubleshooting
### ErrImagePull on the new pod (GHCR mode)
Means containerd can’t reach `ghcr.io` or the requested tag doesn’t exist.

- Check the tag exists. GHCR’s registry API requires a bearer token even for public packages, but an anonymous one can be fetched inline:

  ```sh
  TOKEN=$(curl -sS "https://ghcr.io/token?scope=repository:atelier-project/atelier-core:pull" \
    | sed -E 's/.*"token" *: *"([^"]*)".*/\1/')
  curl -sS -H "Authorization: Bearer $TOKEN" \
    https://ghcr.io/v2/atelier-project/atelier-core/tags/list | head
  ```

  If you don’t see your version, the release workflow probably hasn’t finished publishing yet — wait a few minutes after a tag push.

- Check connectivity from the node:

  ```sh
  sudo k3s crictl pull ghcr.io/atelier-project/atelier-core:v0.9.5-beta
  ```

- Confirm packages are public: `https://github.com/atelier-project/atelier/pkgs/container/atelier-core` should be accessible without authentication. If you see a 404 the package is still private — flip its visibility in GitHub → Settings → Packages.
### `kubectl set image` returns “no such deployment”

Your cluster’s namespace or Deployment names differ from the defaults. Check with `kubectl get deploy -A | grep atelier`. The script assumes namespace `atelier` and Deployment names `atelier-core` / `atelier-ui` — overrides aren’t currently exposed; edit the script directly if your install is unusual.
### Tarball mode reports “no images found”

`upgrade.sh` resolves tarballs relative to its own directory, not your CWD. Make sure all four `*-image.tar.gz` files are alongside the script itself.
### Rollout times out

The default per-Deployment timeout is 180 seconds. If your image pull is slow (large image, slow link), watch live with `kubectl rollout status deployment/atelier-core -n atelier --watch` and re-run the upgrade after the pull completes — rollouts are idempotent at the same image tag.
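While you wait, recent namespace events show pull progress and any errors:

```sh
kubectl get events -n atelier --sort-by=.lastTimestamp | tail -n 20
```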
### “Unknown target ‘foo’”

You passed a component selector that isn’t `core`, `ui`, or `mcp-fetch`. The version tag (if any) must come first, before any selectors:

```sh
./upgrade.sh v0.9.5-beta core   # ✓
./upgrade.sh core v0.9.5-beta   # ✗ — interpreted as tarball-mode, target 'core', then unknown target 'v0.9.5-beta'
```

## What gets upgraded vs preserved
| Resource | Behaviour during upgrade |
|---|---|
| `atelier-core` pod | Replaced with the new image (RollingUpdate). |
| `atelier-ui` pod | Replaced with the new image. |
| SQLite DB (`/data/atelier.db`) | Preserved on its PVC. |
| App workloads (`atelier-apps` namespace) | Untouched. Running apps stay up across the upgrade. |
| Secrets (`atelier-secrets`) | Untouched. |
| Gitea + registry pods | Untouched. |
| Workflow runs in flight | Continue to completion; no interruption. |
If a release ever requires a destructive migration (DB schema change, breaking config rename, etc.), the release notes will call it out explicitly. The default assumption is that upgrades are safe to run during the working day.