Atelier User Guide
A step-by-step walkthrough of every feature in Atelier.
Table of Contents
- Getting Started
- Creating Your First App
- Interacting with Your App
- Updating Your App
- Other Ways to Deploy
- Build History & Rollback
- Security
- Managing Secrets
- App Storage
- Webhooks & Auto-Deploy
- Pause, Resume & Archive
- Scheduled Apps (CronJobs)
- Workflows
- Build Errors & Auto-Fix
- Crash Alerts
- Nova — AI Operations Agent
- Settings
- MCP Servers
- External MCP Integration (Claude Desktop, Cursor, etc.)
- Export to Docker Compose
- Backup & Restore
- Registry Management
- User Management (Admin)
- Activity & System Logs
Getting Started
Accessing Atelier
Open your browser and navigate to your Atelier instance (e.g. http://atelier.home.arpa). You’ll see the login screen.
Registering Your Account
If this is a fresh installation with no users, you’ll see a registration form instead of the login form.
- Enter a username and password.
- Optionally enter an email address.
- Click Register.
The first user registered is automatically promoted to Admin, giving you full control over the platform.
Logging In
If an account already exists:
- Enter your username and password.
- Click Login.
You’ll be taken to the main workspace.
The Main Interface
The interface has two main areas:
- Icon sidebar (left, ~48px wide) — compact navigation rail. From top to bottom: the Atelier logo, primary nav (Apps, Workflows, Nova), utility nav (Approvals with pending-count badge, Activity, Archived, Settings), and your user avatar (click for Change Password / Sign Out).
- Content area (right, full width) — shows whichever section you selected. The default landing page is Apps.
The Apps page (the default view) is a full-width grid of your apps with a search box, a status-filter dropdown, a cluster CPU/memory summary, and a + Create button at the top-right that opens the creation menu (New App, Import Image, Clone Repo, Upload Local, From Gitea).
Each app card shows:
- Display name (editable inline via the three-dot menu)
- Status badge with a colour-coded indicator:
- Green — running
- Blue — building
- Yellow — updating
- Red — error
- Grey — paused
- Build number, kind (deployment / cronjob), last-updated timestamp
- A three-dot menu for per-app actions (rename, pause/resume, view in Gitea, archive)
Settings menu — clicking the gear icon in the sidebar opens a submenu with Settings, Users (admin only), Registry, Token Usage, and a Documentation ↗ link to the in-app docs.
Creating Your First App
Atelier uses a chat-based interface to create applications. You describe what you want, and the AI generates the code, containers, and deployment configuration.
Step 1: Start a New Conversation
On the Apps page, click the + Create button in the top-right and choose New App from the menu. A creation pane opens with a chat input at the bottom.
Step 2: Describe Your App
Type a description of what you want to build. Be as specific as you like — the more detail you give, the better the result. For example:
“Build a todo list app with a React frontend and a Node.js API backend. Use SQLite for storage. The frontend should have a clean, minimal design.”
Press Enter or click the send button.
Step 3: Review the Plan
The AI will first generate a build plan outlining what it intends to create. Review the plan to make sure it matches what you want.
- If the plan looks good, click Build to start.
- If you want changes, type your feedback and the plan will be revised.
Step 4: Attach Context Documents (Optional)
Before building, you can attach reference documents (API specs, design docs, etc.) by clicking the attachment icon next to the chat input. These give the AI additional context.
- Supported formats: text files, markdown, JSON, YAML
- Maximum: 5MB total across all documents
Step 5: Watch the Build Pipeline
Once building starts, the Progress Panel appears showing real-time status:
The panel shows:
- Thinking — what the AI is currently working on
- Pipeline steps — each completed step gets a green checkmark
- Generated files — expandable list of files being written (click to preview code)
- Build logs — raw build output (collapsed by default)
Step 6: Answer Builder Questions
Sometimes the AI needs clarification during the build. A question panel will appear with suggested answers and a text input for custom responses.
Click one of the suggested options, or type your own answer and click Send.
Tip: Questions have a timeout. If you don’t answer in time, the build continues with a default choice.
Step 7: Build Complete
When the build succeeds, you’ll see a green success banner at the bottom of the progress panel. Your app is now live.
The app appears as a card on the Apps page with a green status indicator. Click the card to view its details.
Interacting with Your App
The App Detail View
Click any app card on the Apps page to open its detail view. At the top you’ll see:
- App name and status badge
- URL — click to open your running app in a new tab
- Action buttons — Pause, Re-deploy, and a menu with more options
Below the header are several tabs:
| Tab | What it shows |
|---|---|
| Chat | Your conversation history with the AI builder |
| Logs | Live container output |
| Deploy | Build history with rollback |
| Commits | Git commit history with rebuild option |
| Resources | CPU and memory usage |
| Security | Vulnerability scans and code review |
| Secrets | Environment variables |
| Activity | Event log for this app |
Viewing Live Logs
- Click the Logs tab.
- Logs stream in real-time from your running containers.
- If your app has multiple services (e.g. frontend + backend), use the service filter to pick which one to view.
Logs auto-scroll as new entries arrive. Timestamps are shown in your local timezone.
Checking Resource Metrics
- Click the Resources tab.
- You’ll see CPU usage (in millicores) and memory usage (in MiB) for each pod.
Metrics refresh every 15 seconds while the tab is open.
Viewing Running Containers
The app detail header shows a pod count indicator. This tells you how many container replicas are currently running.
Updating Your App
Sending Update Instructions
- Open your app from the Apps page.
- In the Chat tab, type what you want to change. For example:
  “Add a dark mode toggle to the settings page.”
- Press Enter. The AI will review your existing code and generate updates.
The build pipeline runs again, showing progress in the Progress Panel. When complete, your updated app is deployed automatically.
Re-deploying Without Changes
Sometimes you want to rebuild and redeploy your app without making any code changes (for example, after updating secrets or storage settings).
- Click the Re-deploy button in the app detail header.
- Confirm when prompted.
- The pipeline rebuilds from the existing code and redeploys.
Other Ways to Deploy
Besides the AI chat flow, Atelier supports several other ways to deploy applications.
Import a Docker Image
Deploy a pre-built Docker image directly.
- On the Apps page, click + Create and choose Import Image.
- Enter the image reference (e.g. `nginx:latest` or `ghcr.io/user/app:v1`).
- Set the port your app listens on.
- Optionally add environment variables.
- Give it a name and click Deploy.
Note: The image must be accessible from within the cluster. Public images from Docker Hub, GHCR, etc. work automatically.
Review-and-revise step (default)
When you upload files, clone a repo, or pick a Gitea repo in AI Build mode, Atelier first runs a planner pass over the source — examining the file tree and key config files (Dockerfile, package.json, pyproject.toml, etc.) — and presents an analysis you can review before the build runs:
- What I’ll build — the services it intends to create
- Detected services — language, framework, port for each
- Build steps — concrete commands the Dockerfile will run
- Manifests I’ll generate — the K8s resources (Deployments, Services, Ingress)
- Concerns and assumptions — anything it had to guess that you should confirm
You can refine the plan via chat the same way as New App (“actually use Postgres”, “the backend listens on 3000 not 8080”) and the planner revises. When the plan looks right, click Build to start the actual pipeline. The chat history is preserved on the resulting app so you can keep the conversation going post-deploy.
Each modal has a “Review the plan before building” checkbox (default on). Uncheck it for the legacy single-step behaviour — useful when you already trust your setup and just want it built.
Direct Deploy in the From Gitea flow is always single-step regardless of this checkbox — that mode is explicitly opt-in for users who don’t want LLM involvement at all.
Clone a Git Repo
Containerise any public Git repository.
- On the Apps page, click + Create and choose Clone Repo.
- Enter the Git URL (e.g. `https://github.com/user/repo`).
- Optionally specify a branch.
- Give it a name.
- Leave Review the plan before building checked (default) for the planner-first flow, or uncheck it for the legacy direct build.
- Click Clone & Review → (or Clone & Build → if you unchecked review).
In the review path: Atelier clones the repo, runs the source analysis, and lands you in the chat workspace where you can refine before building. In the direct path: clone-then-build runs straight through, same as before.
Upload Files
Upload source code files directly from your browser.
- On the Apps page, click + Create and choose Upload Local.
- Drag and drop files or click to browse.
- Give it a name.
- Leave Review the plan before building checked (default) for the planner-first flow, or uncheck it for the legacy direct build.
- Click Upload & Review → (or Upload & Build → if you unchecked review).
Maximum upload size is 50MB. In the review path: files upload, the planner analyses the tree + key config files, and you land in the chat workspace. In the direct path: upload-then-build runs straight through.
Build from Gitea
Build from a repository in the internal Gitea instance.
- On the Apps page, click + Create and choose From Gitea.
- Select a repository from the dropdown.
- Select a branch.
- Optionally pick a specific commit from the history.
- Choose a build mode:
- AI Build — the AI generates Dockerfiles and manifests. The “Review the plan before building” checkbox appears in this mode (default on); uncheck for legacy direct build.
- Direct Deploy — uses the existing Dockerfile in the repo (faster, no AI). Always single-step.
- For direct deploy, set the port your app listens on.
- Click Build or Deploy.
Direct Deploy from Gitea
The quickest path to deploy — no AI involved.
- Follow the “Build from Gitea” steps above.
- Switch to Direct Deploy mode.
- The repo must already contain a `Dockerfile`.
- Set the port and click Deploy.
The pipeline skips code generation entirely. It builds the Docker image from the existing Dockerfile and deploys immediately.
Tip: Direct deploy is ideal for repos where you manage the Dockerfile yourself and just want Atelier to handle the Kubernetes deployment.
Build History & Rollback
Viewing Past Builds
- Select your app and click the Deploy tab.
- You’ll see a list of all builds, newest first.
- Each build shows:
- Build number (the current build is tagged “current”)
- Commit SHA — click to view the code in Gitea
- Commit message
- Timestamp
- Security scan badge — green “clean” or coloured with severity counts
Rolling Back to a Previous Build
- In the Deploy tab, find the build you want to restore.
- Click Rollback on that row.
- A confirmation appears — click Confirm to proceed.
The rollback restores the exact images and manifests from that build. It does not re-run the build pipeline — it re-applies the saved configuration.
Rebuilding from a Specific Commit
- Click the Commits tab.
- Find the commit you want to rebuild from.
- Click Re-deploy next to that commit.
- Confirm when prompted.
This triggers a full rebuild using the code at that specific commit.
Security
Atelier automatically scans every built image for known vulnerabilities using Trivy.
Viewing Scan Results
- Select your app and click the Security tab.
- You’ll see scan results for each image in your app.
- Vulnerabilities are grouped by severity:
- Critical (red) — should be fixed immediately
- High (orange) — should be fixed soon
- Medium (yellow) — fix when practical
- Low (grey) — informational
Click on a severity group to expand the full CVE list with package names, installed versions, and fixed versions.
Scan Badges in Build History
Each build in the Deploy tab shows a scan badge:
- Green “clean” — no critical or high vulnerabilities
- Red with count — critical vulnerabilities found (e.g. “2C”)
- Amber with count — high vulnerabilities found (e.g. “3H”)
Click the badge to jump to the full scan results.
Triggering a Rescan
- In the Security tab, click Rescan on any image.
- A new Trivy job runs against the current image.
- Results update when the scan completes.
Security Holds
Trivy runs after the container is already deployed, so the scan can never stop a rollout from happening — it only flags it once the CVE list is in. When Block on Critical is enabled in Settings and the scan finds critical CVEs, the app is marked with a security hold:
- A red “Security hold” banner appears on the app detail page naming the build and critical count.
- A `scan.blocked` event is written to the activity feed.
- The app card on the Apps page shows the same indicator.
The hold is advisory — the app keeps running and serving traffic as normal. Its purpose is to flag that a human needs to look at the findings before the next change lands.
To clear it:
- Click Override Security Block on the banner.
- Confirm the override.
A subsequent clean build (no criticals) also clears the hold automatically.
Code Review
If code review is enabled, the AI reviews your generated code after each build.
- In the Security tab, scroll down to Code Review.
- Review findings are shown as markdown with specific recommendations.
- You can trigger a new review by clicking Review.
Managing Secrets
Secrets are environment variables injected into your app’s containers at runtime.
Adding Secrets
- Select your app and click the Secrets tab.
- Enter a key and value in the input fields.
- Click Save.
Secrets are stored as Kubernetes Secrets and mounted as environment variables in your app’s containers. Your app needs to be redeployed for new secrets to take effect.
Editing Secrets
- Add a new secret with the same key — it will overwrite the existing value.
- Re-deploy your app for the change to take effect.
Deleting Secrets
- Click the delete button next to the secret you want to remove.
- The secret is removed immediately.
- Re-deploy for the change to take effect.
Note: Secret values are never shown in the UI after being saved. You’ll only see the key names.
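Inside the container, a saved secret is just an ordinary environment variable. A common pattern is to fail fast at startup if a required key was never added — a minimal sketch (the `require_env` helper and the `API_KEY` name are illustrative, not part of Atelier):

```shell
#!/bin/sh
# require_env NAME — fail if the named environment variable is unset or empty.
# (Helper name and variable names are examples, not an Atelier API.)
require_env() {
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "missing required secret: $1" >&2
    return 1
  fi
}

# Pretend the platform injected API_KEY into the container environment.
API_KEY="dummy-value"
require_env API_KEY && echo "API_KEY is present"
```

If the variable is missing, the script exits with a clear message instead of failing mysteriously later — which matters here because new secrets only appear after a re-deploy.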
App Storage
Each app can have a persistent volume for data that survives container restarts.
Viewing Storage
The storage size is shown in the app detail view. By default, apps get 1Gi of persistent storage.
Changing Storage Size
- In the app detail view, look for the Storage section.
- Enter the desired size in Gi (e.g. `5` for 5 GiB).
- Click Save.
Note: Storage can be increased but typically cannot be decreased (Kubernetes PVC limitation). The app needs to be redeployed for the change to take effect.
Webhooks & Auto-Deploy
Webhooks allow your app to automatically rebuild whenever code is pushed to Gitea.
Enabling a Webhook
- Select your app.
- In the app detail header, click the menu (three dots).
- Click Enable Webhook.
A Gitea webhook is created that triggers a rebuild whenever changes are pushed to the main branch of the app’s repository.
When a webhook is active, you’ll see a webhook indicator on the app card on the Apps page.
How It Works
- You push code to the app’s Gitea repository.
- Gitea sends a webhook notification to Atelier.
- Atelier verifies the HMAC signature.
- A rebuild pipeline starts automatically.
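The signature check in step 3 is standard HMAC-SHA256 over the raw request body. You can reproduce the computation yourself with `openssl` — a sketch with made-up values (the secret and payload here are illustrative):

```shell
#!/bin/sh
# Recompute an HMAC-SHA256 signature over a webhook payload,
# as Atelier does when verifying a push notification from Gitea.
SECRET="example-webhook-secret"      # made-up value for illustration
BODY='{"ref":"refs/heads/main"}'     # made-up payload

# openssl prints "...(stdin)= <hex digest>"; keep the last field.
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "signature: $SIG"
```

Because the signature covers the exact bytes of the body, any modification in transit produces a different digest and the request is rejected.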
Disabling a Webhook
- Click the menu and select Disable Webhook.
- The Gitea webhook is deleted and auto-deploy stops.
Pause, Resume & Archive
Pausing an App
Pausing scales your app to zero replicas — it stops running but keeps all configuration and data.
- Click the Pause button in the app detail header (or right-click the app card on the Apps page).
- The status changes to paused (grey indicator).
The app’s URL will no longer respond while paused. No compute resources are consumed.
Resuming a Paused App
- Click the Resume button.
- The app scales back to one replica and starts serving traffic again.
Archiving an App
Archiving removes all Kubernetes resources (pods, services, etc.) but preserves the app’s data in the database for future restoration.
- Click the menu and select Archive.
- Confirm in the modal dialog.
- The app disappears from the main Apps page.
Viewing & Restoring Archived Apps
- Click the Archived icon in the sidebar’s utility section.
- You’ll see a list of all archived apps.
- Click Restore to bring an app back — its latest build will be redeployed.
Permanently Deleting an App
- In the Archived apps view, click Delete Permanently.
- Confirm in the dialog. This is irreversible — all data, build history, and conversation history are deleted.
Scheduled Apps (CronJobs)
Some apps run on a schedule rather than serving traffic continuously — compliance checks, report generators, API pollers, cleanup scripts. Atelier supports this via Kubernetes CronJobs.
How CronJob Apps Are Created
When you describe an app that should run periodically, the AI generates a CronJob manifest instead of a Deployment. Atelier automatically detects the CronJob kind and extracts the schedule.
You can also include a CronJob manifest when using Direct Deploy or Upload.
CronJob apps are marked with a clock icon on their card on the Apps page.
Viewing Job History
- Open a CronJob app from the Apps page.
- Click the Jobs tab.
- You’ll see a list of recent job runs showing:
- Status — succeeded, failed, or running
- Name — the Kubernetes Job name
- Start time — when the job started
- Duration — how long it ran
- Click Logs on any job to see its output.
Editing the Schedule
You can change the cron schedule without rebuilding the app:
- In the Jobs tab, click the current schedule (displayed at the top).
- Edit the cron expression (e.g. `*/5 * * * *` for every 5 minutes).
- Click Save.
The CronJob is updated in Kubernetes immediately.
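For reference, a few common five-field cron expressions (minute, hour, day-of-month, month, day-of-week):

```
*/5 * * * *    every 5 minutes
0 * * * *      at the top of every hour
0 8 * * *      daily at 08:00
0 8 * * 1      Mondays at 08:00
0 0 1 * *      first day of each month at 00:00
```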
Running a Job Manually
Click Run Now in the Jobs tab header to trigger an immediate execution outside the schedule. Manual jobs appear in the history alongside scheduled runs.
Workflows
Workflows let you chain multiple apps into automated multi-step pipelines. Each step runs as a one-off job using an existing app’s container image, with shared storage for passing data between steps.
When to Use Workflows
Workflows are ideal for tasks that involve multiple apps working together in sequence or parallel:
- Data pipelines — scrape data, process it, then generate a report
- LLM pipelines — fetch → summarise → notify
- Scheduled batch processing — daily digest generation, cleanup routines
- Event-driven automation — webhook triggers a multi-step processing pipeline
How Workflows and Apps Relate
A workflow step is an existing Atelier app — you build each piece as a normal app first, then assemble them into a workflow. Each step’s container is launched as a Job, runs to completion, and the next step starts only when it succeeds.
This means:
- Each app is independently testable and redeployable.
- Apps can appear in multiple workflows.
- Secrets configured on each app are available to its workflow Job automatically.
Walkthrough: LLM News Digest
This example builds a three-step pipeline that fetches news, summarises it with an LLM, and posts the result to Slack.
Step 1 — Create the apps
App 1: news-fetcher
Create a new app with the description:
“Python script that fetches the top 10 headlines from NewsAPI and writes them as JSON to `/data/workflow/news.json`, then exits.”
No secrets needed — this step is pure data retrieval. The app will be detected as a CronJob-style batch container.
App 2: news-summarizer
Create a new app with the description:
“Python script that reads `/data/workflow/news.json`, calls the Claude API to write a concise markdown summary for each story, and writes the result to `/data/workflow/summary.md`, then exits.”
After the app is created, add its secret:
| Key | Value |
|---|---|
| ANTHROPIC_API_KEY | sk-ant-... |
The app code uses the `anthropic` Python SDK directly — it has its own LLM client independent of Atelier’s build pipeline.
App 3: slack-sender
Create a new app with the description:
“Python script that reads `/data/workflow/summary.md` and posts it to the #news-digest Slack channel using the Slack API, then exits.”
After the app is created, add its secret:
| Key | Value |
|---|---|
| SLACK_BOT_TOKEN | xoxb-... |
Step 2 — Create the workflow
- Click Workflows in the sidebar, then click + New.
- Fill in the form:
  - Name: `daily-news-digest`
  - Trigger: Schedule → `0 8 * * *` (runs at 8 AM daily)
- Add steps in order:
  1. `news-fetcher`
  2. `news-summarizer`
  3. `slack-sender`
- Click Create Workflow.
How it runs
At 8 AM the workflow engine:
- Creates a shared PVC for this run.
- Launches `news-fetcher` as a Job — it writes `/data/workflow/news.json` and exits.
- On success, launches `news-summarizer` with the same PVC mounted — it reads the news JSON, calls Claude, and writes `/data/workflow/summary.md`.
- On success, launches `slack-sender` — it reads the summary and posts to Slack.
- Marks the run as succeeded and cleans up the PVC.
If any step fails, subsequent steps are skipped and the run is marked failed. You can inspect per-step logs from the run detail view.
Creating a Workflow
- Click Workflows in the sidebar, then click + New.
- Enter a name and optional description.
- Choose a trigger type:
- Manual — triggered by clicking a button or calling the API
- Schedule — runs on a cron schedule
- Webhook — triggered by an external HTTP POST with HMAC authentication
- Add steps by selecting from your existing apps:
- Each step uses the container image from a deployed app (Deployment or CronJob)
- Set the step order to control sequencing
- Optionally assign a parallel group — steps in the same group run concurrently
- Set a timeout per step (default: 30 minutes)
- Click Create Workflow.
The Workflow Detail View
On the Workflows page, click any workflow to open its detail view. The view has three tabs:
Overview Tab:
- DAG visualization — shows your pipeline steps as a visual graph, colour-coded by status
- Info cards — total steps, trigger type, last run status
- Quick actions — Trigger, Pause, Resume, Archive
Runs Tab:
- Run history — a table of all past and current runs showing run number, status, trigger source, start time, and duration
- Run detail — click a run to see each step’s status, job name, and timing. Expand a step to view its container logs
Config Tab:
- Trigger configuration (cron expression for schedules, webhook secret for webhooks)
- Step details (app name, order, parallel group, timeout, dependencies, failure handler)
Triggering a Workflow
| Trigger type | How to configure | How to invoke |
|---|---|---|
| Manual | No configuration needed | Click Trigger in the workflow detail |
| Schedule | Cron expression (e.g. 0 8 * * *) | Runs automatically via a K8s CronJob |
| Webhook | Secret generated at creation time | POST /api/workflows/{id}/webhook with HMAC signature |
For webhook triggers, send an HTTP POST with an X-Atelier-Signature header:
```shell
SECRET="your-webhook-secret"
BODY='{"event":"deploy"}'
SIG=$(echo -n "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | cut -d' ' -f2)

curl -X POST http://atelier.home.arpa/api/workflows/my-workflow/webhook \
  -H "Content-Type: application/json" \
  -H "X-Atelier-Signature: sha256=$SIG" \
  -d "$BODY"
```

Parallel Steps
Steps in the same parallel group run concurrently. For example, if steps 2 and 3 are in group 1, they both start as soon as step 1 succeeds:
```
Step 1 (fetch)
├── Step 2 (summarise English)   ← parallel group 1
└── Step 3 (summarise Spanish)   ← parallel group 1
    └── Step 4 (merge & send)
```

Configure parallel groups in the workflow creation form by assigning the same group number to concurrent steps.
Monitoring Runs
The Workflows page shows a status dot next to each workflow indicating the last run’s result:
- Green — last run succeeded
- Red — last run failed
- Blue — a run is currently in progress
- No dot — no runs yet
The workflow detail view updates in real-time via Server-Sent Events. You can watch steps transition through pending → running → succeeded/failed live.
How Steps Execute
Each step runs as an isolated Kubernetes Job:
- The container image is resolved from the referenced app’s Deployment or CronJob
- A shared PVC (1Gi) is mounted at `/data/workflow/` in every step — use this to pass files between steps
- Environment variables are injected: `WORKFLOW_ID`, `WORKFLOW_RUN_ID`, `STEP_ORDER`, `STEP_APP_NAME`
- The app’s secrets are automatically available via the same K8s Secret used by the app
- Steps can set env overrides for workflow-specific configuration
- `restartPolicy: Never`, `backoffLimit: 0` — each step gets one attempt
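Putting that together, a step’s entrypoint typically reads input from the shared volume, does its work, and writes output for the next step. A minimal sketch (the `DATA_DIR` fallback exists only so the snippet runs outside the cluster; in a real step the mount is always `/data/workflow/`):

```shell
#!/bin/sh
# Sketch of a workflow step entrypoint. Inside a real step the shared
# volume is mounted at /data/workflow/; the temp-dir fallback is only
# here so this snippet can run standalone.
DATA_DIR="${DATA_DIR:-$(mktemp -d)}"

# Pretend the previous step left input behind on the shared volume.
echo "headline: example input" > "$DATA_DIR/input.txt"

# This step's work: annotate the input and hand it to the next step.
{
  cat "$DATA_DIR/input.txt"
  echo "processed by step ${STEP_ORDER:-1} of run ${WORKFLOW_RUN_ID:-local}"
} > "$DATA_DIR/output.txt"

cat "$DATA_DIR/output.txt"
```

The same pattern works in any language — the contract is simply files under the shared mount plus the injected environment variables.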
Failure Handling
When a step fails:
- All downstream steps that depend on it are skipped
- If the step has an on_failure handler configured (referencing another app), a failure handler Job is spawned automatically
- The failure handler receives the `FAILED_STEP_ID` environment variable
- The overall run is marked as failed
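A failure handler is just another app whose container receives `FAILED_STEP_ID` when spawned. A sketch of what such a handler’s entrypoint might look like (the notification command is a placeholder, not a real integration):

```shell
#!/bin/sh
# Sketch of a failure-handler entrypoint. FAILED_STEP_ID is injected by
# the workflow engine; the default exists only so this runs standalone.
FAILED_STEP_ID="${FAILED_STEP_ID:-unknown-step}"

MSG="workflow step failed: $FAILED_STEP_ID"
echo "$MSG"

# A real handler would forward the message somewhere, for example:
# curl -X POST "$SLACK_WEBHOOK_URL" -d "{\"text\": \"$MSG\"}"
```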
Cancelling a Run
- Open the workflow detail → Runs tab.
- Click Cancel on the running run.
- All running K8s Jobs are deleted and the run is marked `cancelled`.
Pausing & Archiving Workflows
- Pause — disables all triggers (manual, schedule, webhook). Existing runs continue to completion.
- Resume — re-enables triggers.
- Archive — soft-deletes the workflow. It disappears from the Workflows page but data is preserved.
Build Errors & Auto-Fix
Understanding Build Errors
When a build fails, you’ll see a red error banner in the Progress Panel with details about what went wrong.
Common causes include:
- Syntax errors in generated code
- Missing dependencies
- Docker build failures
- Deployment timeout (pod not starting)
The Auto-Fix Panel
If the error is fixable, an amber panel appears instead of a red one. This means Atelier can attempt to fix the issue automatically.
The panel shows:
- A description of the error
- The relevant build log output
- A Fix it button
- An Auto-accept future fixes checkbox
Triggering a Fix
- Review the error details.
- Click Fix it to let the AI attempt a fix.
- The pipeline re-runs with the error context.
Auto-Accept
Check Auto-accept future fixes to skip the manual approval step. When enabled, fixable errors are automatically retried without waiting for you to click “Fix it”.
This setting persists across sessions and can also be configured in Settings.
Crash Alerts
Atelier monitors your running apps for container crashes and provides diagnostic suggestions.
What Crash Alerts Look Like
When a container enters a failure state (CrashLoopBackOff, OOMKilled, ImagePullBackOff, etc.), an alert banner appears on the app’s card (Apps page) and in the app detail view.
The alert includes:
- Failure type (e.g. “CrashLoopBackOff”, “OOMKilled”)
- Diagnostic suggestion — an AI-generated analysis based on pod logs and Kubernetes events
Dismissing Alerts
Once you’ve addressed the issue, click Dismiss to clear the alert. If the problem recurs, a new alert will be generated.
Tip: Click “Open in Nova” on the alert to continue troubleshooting with the Nova agent.
Nova — AI Operations Agent
Nova is an AI assistant specialised in cluster operations and troubleshooting.
Accessing Nova
Click the Nova icon in the sidebar to open the Nova chat panel.
Asking Nova for Help
Type your question or describe a problem. Nova can:
- Diagnose why an app is crashing
- Explain error messages
- Suggest configuration changes
- Help with Kubernetes concepts
Nova has access to your cluster state and can inspect pods, logs, and events when actions are enabled.
Nova Memory
Nova can remember context across conversations. Memory entries are shown in the panel and can be:
- Viewed — see what Nova remembers
- Deleted individually — click the delete button on any memory entry
- Cleared entirely — click “Clear All Memory”
Enabling/Disabling Actions
By default, Nova can only answer questions. To let Nova inspect your cluster:
- Go to Settings > Nova.
- Enable Actions.
With actions enabled, Nova can read pod logs, check deployment status, and view Kubernetes events.
Settings
Access settings by clicking the gear icon in the sidebar’s utility section and choosing Settings from the popover.
Pipeline Profile
The pipeline profile is a single control that flips the main quality-gate toggles (code review, pre-build review + fix, vulnerability scanning, scan blocking) together. Pick one to match how much friction you want in the build path:
| Profile | Pre-build review | Pre-build auto-fix | Post-build code review | Scanning | Block on critical |
|---|---|---|---|---|---|
| Fast | off | off | off | off | off |
| Standard (default) | off | off | on | on | off |
| Hardened | on | on | on | on | on |
| Custom | — | — | — | — | — |
- Fast — skips all gates for the quickest possible iteration. Use for throwaway prototypes.
- Standard — post-build review and scanning run; nothing blocks the rollout. This is the default.
- Hardened — everything on, including a post-deploy security hold if critical CVEs are found.
- Custom — shown when the individual toggles below don’t match any named profile. Selecting a named profile overwrites the toggles to match.
Switching profile applies the toggles first, then updates the profile label. A diff preview shows exactly which toggles are about to change before you save.
Note: Scans run after deployment in every profile — scan blocking is advisory (a hold flag), not a rollout gate. See the Security Holds section for details.
LLM Profiles & Roles
Atelier drives every AI surface — app generation, planning, code review, Nova, the supervisor — through the same profile-based LLM system. You save one or more named profiles (each a provider + model + credentials), then assign a profile to each role. Roles without an explicit assignment fall back to the legacy flat llm.* settings, then to the startup env-var configuration — not to any named profile.
Note: Older versions of Atelier used a single flat LLM configuration. That flat form still works for backward compatibility, and on first startup Atelier migrates it into a named `Default` profile so it shows up in the Profiles panel like any other. This migrated `Default` profile is just a regular entry in the list — it isn’t a special fallback, and clearing a role assignment does not implicitly route to it.
Profile fields
| Field | Description |
|---|---|
| Name | Human-friendly label (e.g. Haiku Fast, Local LM Studio) |
| Provider | claude, openai, openrouter, or ollama |
| Model | Model identifier (defaults per-provider if left blank) |
| API Key | Provider API key — stored, never returned by the API. ollama doesn’t need one |
| Base URL | Only used when Provider is ollama. Point at any OpenAI-compatible local endpoint — Ollama itself, LM Studio, vLLM, llama.cpp, etc. Must include /v1 — e.g. http://192.168.0.54:1234/v1. The field is silently ignored for claude, openai, and openrouter providers. |
| Max output tokens | Optional cap on response length |
The five roles
| Role | What uses it |
|---|---|
| Build | App code generation and the iterative build loop — the main heavy-lifter |
| Plan | The planning phase that runs before a build. Short one-shot call; a smaller/faster model is fine |
| Nova | The operations assistant in the sidebar (crash diagnosis, cluster questions, action dispatch). Also used by the crash monitor for automatic post-incident analysis |
| Supervisor | Background cluster checks and auto-corrections |
| Code Review | Automated review of generated code after each build |
Assigning a profile to a role is what makes it take effect. For example, if you want Nova to run against a local LM Studio model but continue building apps with Claude, you’d have two profiles — one Claude, one Ollama-style pointing at LM Studio — and assign the Ollama profile to Nova only.
Auto-assign on create
When you add a new profile, Atelier assigns it to every role that doesn’t already have an explicit assignment. This means a first-time user who creates a single profile gets a working system immediately without hunting through the role table.
To split roles across profiles:
- Add all the profiles you want.
- Use the Role Assignments table at the bottom of the LLM Profiles panel to re-point individual roles — the dropdown under each role lets you pick any profile.
- Choosing Default (env config) for a role removes the explicit assignment; that role then falls back to the startup configuration.
Role assignments are read fresh on every LLM call — no pod restart required.
Practical sizings
- Homelab / solo dev — one profile is fine. Auto-assign fills everything with it.
- Cost-conscious — put a small model (e.g. Claude Haiku, GPT-4o-mini) on Plan and Supervisor, keep the capable model on Build and Code Review.
- Local-first Nova — point Nova at Ollama / LM Studio so operational chat doesn’t leave the network, while keeping a hosted model for Build.
- Experimentation — add a second Build profile with a different model, swap between them via the role dropdown without editing configuration files.
Diagnosing “the LLM isn’t being used”
If a role seems to ignore its assigned profile, check `kubectl logs deploy/atelier-core` for a line that starts with “No usable LLM provider could be constructed for this role”. The message names the specific failure (missing API key, unreachable base URL, etc.) and the role that tried to resolve it. See Troubleshooting for the common causes.
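For a quick check from a terminal, you can filter for that message directly. A sketch — the `-n atelier` namespace matches the kubectl examples elsewhere in this guide; adjust if your deployment differs:

```bash
# Filter recent core logs for the provider-resolution failure line.
kubectl logs -n atelier deploy/atelier-core --tail=500 \
  | grep 'No usable LLM provider'
```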
Code Review
Control AI-powered code review behaviour.
| Setting | Description |
|---|---|
| Auto-enabled | Run code review after every build |
| Pre-build review | Review generated code before building |
| Pre-build fix | Automatically fix issues found in pre-build review |
| Provider/Model | Use a different LLM for code review |
Build Settings
| Setting | Default | Description |
|---|---|---|
| Auto-remediate | On | Offer auto-fix on build failures |
| Auto-accept | Off | Automatically apply fixes without confirmation |
| Max iterations | 40 | Maximum AI tool calls per build |
Vulnerability Scanning
| Setting | Default | Description |
|---|---|---|
| Enabled | On | Run Trivy scans after each build |
| Block on critical | Off | Flag the app with an advisory security hold when critical CVEs are found. The container keeps running — see Security Holds. |
Monitoring
| Setting | Description |
|---|---|
| Crash monitoring | Detect pod failures and generate diagnostic alerts |
| Image update monitoring | Check for upstream base image updates |
| Check interval | Hours between image update checks (default: 24) |
Custom Prompts
You can customise the system prompts used by the AI for different tasks:
- Build prompt — used during app creation and updates
- Plan prompt — used during the planning phase
- Code review prompt — used for code review
Click Reset to Defaults to restore the original prompts.
Token Usage
Open the Settings popover (gear icon in the sidebar) and choose Token Usage to see a breakdown of LLM token consumption per app and globally. This helps track API costs.
MCP Servers
MCP (Model Context Protocol) servers extend the AI build pipeline with additional capabilities — web search, URL fetching, and more — without requiring custom code in Atelier. Each MCP server runs as a container in the cluster and exposes tools that the AI can use during app generation.
Accessing MCP Servers
- Open Settings from the sidebar’s gear-icon popover.
- Scroll to the MCP Servers section.
You’ll see three areas: Deployed Servers (servers currently running), Available (servers you can deploy from the catalog), and an Add custom MCP server link.
Deploying from the Catalog
- In the Available section, find the server you want (e.g. Fetch or Brave Search).
- If the server requires configuration (like an API key), fill in the required fields.
- Click Deploy.
- The server status shows deploying while the container starts up.
- Once running, the status changes to running (green badge).
Note: The Fetch and Brave Search images are included with Atelier.
Deploying a Custom MCP Server
Most community MCP servers are distributed as npm or pip packages. You can deploy them directly from the UI — no Docker builds required.
From an npm or pip Package (recommended)
- Open Settings > MCP Servers.
- Click + Add MCP server.
- Select the npm package or pip package tab.
- Fill in:
  - Name — a lowercase DNS label (e.g. `github-search`)
  - Package — the npm or pip package name (e.g. `@modelcontextprotocol/server-github`)
  - Command — the stdio binary the package installs (e.g. `mcp-server-github`)
- Optionally set a Display Name and add Environment Variables (e.g. API keys).
- Click Deploy.
The package is installed automatically when the container starts. The server will appear with a deploying status, changing to running once ready. First startup may take a minute while the package installs.
From a Custom Docker Image (advanced)
For MCP servers not available as npm/pip packages, you can deploy a custom Docker image:
- Build or pull the image (`linux/amd64`), then import it into the cluster:

```bash
docker save mcp-my-server:latest | gzip > /tmp/mcp-my-server.tar.gz
rsync -az /tmp/mcp-my-server.tar.gz atelier@<cluster-ip>:/tmp/
ssh atelier@<cluster-ip> "\
  sudo k3s ctr images import /tmp/mcp-my-server.tar.gz \
  && sudo k3s ctr images tag docker.io/library/mcp-my-server:latest \
    registry.atelier.local/mcp-my-server:latest"
```

- In Settings > MCP Servers, click + Add MCP server > Docker image tab.
- Enter the Name and Image reference (e.g. `registry.atelier.local/mcp-my-server:latest`).
- Click Deploy.
Enabling Servers for the Pipeline
By default, deployed servers are not used during builds. To enable a server:
- Find the server in the Deployed Servers list.
- Check the Pipeline checkbox.
- The server’s tools are now available to the AI during app builds.
When enabled, the AI will discover the server’s tools at the start of each build and can call them as needed. For example, with the Fetch server enabled, the AI can retrieve content from URLs while generating your app.
Discovering Available Tools
- Click Tools on a deployed server to expand its tool list.
- Each tool shows its name and a description of what it does.
Tool names are prefixed with the server name (e.g. mcp_fetch_fetch, mcp_brave-search_brave_web_search) to avoid conflicts.
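The prefixing rule is mechanical, so it can be sketched in a line of shell (names below match the Fetch and Brave Search examples; this helper is illustrative, not an Atelier API):

```shell
# Prefix scheme: "mcp_" + server name + "_" + tool name.
make_tool_name() {
  printf 'mcp_%s_%s\n' "$1" "$2"
}

make_tool_name fetch fetch                    # mcp_fetch_fetch
make_tool_name brave-search brave_web_search  # mcp_brave-search_brave_web_search
```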
Deleting an MCP Server
- Click the Delete button on the deployed server.
- Confirm when prompted.
- The server’s Kubernetes Deployment and Service are removed.
Available MCP Servers
| Server | Description | Required Config |
|---|---|---|
| Fetch | Fetches web content and converts it to markdown | None |
| Brave Search | Web and local search via Brave Search API | BRAVE_API_KEY |
External MCP Integration (Claude Desktop, Cursor, etc.)
Atelier ships with atelier-mcp, a standalone binary that exposes the full Atelier API as MCP (Model Context Protocol) tools. This lets external AI clients — Claude Desktop, Cursor, VS Code with Copilot, and others — manage your apps, view logs, deploy changes, and query Nova directly.
How It Works
atelier-mcp is a local binary that runs on your machine (not in the cluster). Your AI client launches it as a subprocess and communicates over stdin/stdout using the JSON-RPC-based MCP protocol. The binary translates MCP tool calls into Atelier REST API requests.
```
Claude Desktop / Cursor / VS Code
        |
        | stdin/stdout (JSON-RPC)
        v
atelier-mcp (local binary)
        |
        | HTTP (REST API)
        v
Atelier cluster (atelier-core)
```

Installing the Binary
atelier-mcp is a Linux/amd64 musl binary published alongside every release. Download it directly — no source build required.
```bash
curl -Lo atelier-mcp https://tryatelier.blob.core.windows.net/tryatelier/latest/atelier-mcp
chmod +x atelier-mcp
sudo mv atelier-mcp /usr/local/bin/
```

To pin a specific version, swap `latest` for the tag in the curl URL above and re-run the same `chmod +x` and `sudo mv` steps:

```bash
curl -Lo atelier-mcp https://tryatelier.blob.core.windows.net/tryatelier/v0.9.7-beta/atelier-mcp
chmod +x atelier-mcp
sudo mv atelier-mcp /usr/local/bin/
```

The binary is statically linked, so it has no runtime dependencies on the host’s libc. macOS / non-amd64 Linux users currently need to either run it inside a Linux VM or build from source (see the appendix below).
Building from source (maintainers only)
If you have access to the Atelier repository and want to build from a working tree:
```bash
cargo build --release -p atelier-mcp
cp target/release/atelier-mcp /usr/local/bin/
```

For a Linux/amd64 cross-build on macOS, use `cargo zigbuild --release --target x86_64-unknown-linux-musl -p atelier-mcp`. For everyone else, the published binary above is the supported path.
Getting an API Token
The binary requires a JWT token to authenticate with the Atelier API. Get one by logging in:
```bash
curl -s http://atelier.home.arpa/api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"your-username","password":"your-password"}' | jq -r .token
```

Alternatively, log in through the Atelier web UI and copy the token from your browser’s DevTools (Application > Local Storage).
Configuration
atelier-mcp reads two environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| `ATELIER_API_URL` | No | `http://localhost:8080` | Base URL of your Atelier instance (e.g. `http://atelier.home.arpa`) |
| `ATELIER_API_TOKEN` | Yes | — | JWT token from login |
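Before wiring the binary into an AI client, you can launch it by hand to confirm both variables are picked up. A sketch, assuming you saved the JWT from the login step; the token file path is illustrative:

```bash
export ATELIER_API_URL=http://atelier.home.arpa
export ATELIER_API_TOKEN="$(cat ~/.atelier-token)"   # illustrative token location
atelier-mcp    # speaks MCP on stdin/stdout; Ctrl-C to exit
```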
Claude Desktop Setup
Add the following to your Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "atelier": {
      "command": "/usr/local/bin/atelier-mcp",
      "env": {
        "ATELIER_API_URL": "http://atelier.home.arpa",
        "ATELIER_API_TOKEN": "eyJhbG..."
      }
    }
  }
}
```

Restart Claude Desktop after saving. You should see “atelier” listed in the MCP servers panel.
Cursor / VS Code Setup
For Cursor or VS Code with MCP support, add the same configuration to your editor’s MCP settings. The format may vary slightly — consult your editor’s MCP documentation for the exact config file location.
Available Tools
Once connected, the following 18 tools are available to your AI client:
Read-only tools:
| Tool | Description |
|---|---|
| `list_apps` | List all deployed applications with status and metadata |
| `get_app` | Get detailed info about a specific app |
| `get_build_logs` | Get recent build log entries for an app |
| `get_pod_logs` | Get runtime logs (stdout/stderr) from an app’s pods |
| `get_app_metrics` | Get CPU and memory usage metrics |
| `list_secrets` | List secret key names (values are never returned) |
| `get_settings` | Get current platform settings |
Mutation tools:
| Tool | Description |
|---|---|
| `create_app` | Create a new app from a natural language description |
| `update_app` | Update an existing app with follow-up instructions |
| `redeploy_app` | Rebuild and redeploy an app using current code |
| `pause_app` | Pause an app (scale to zero) |
| `resume_app` | Resume a paused app |
| `delete_app` | Archive an app |
| `set_secrets` | Create or update environment variable secrets |
| `delete_secrets` | Remove specific secret keys |
| `update_settings` | Update platform settings |
| `answer_question` | Answer a pending build question |
| `nova_query` | Ask Nova a question about the platform or apps |
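Under the hood, each of these is an ordinary MCP `tools/call` request over stdin/stdout. A raw call to `list_apps` looks roughly like the sketch below (the envelope shape follows the MCP JSON-RPC protocol; your client constructs this for you):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": { "name": "list_apps", "arguments": {} }
}
```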
Example Usage
Once configured, you can ask Claude Desktop things like:
- “List all my Atelier apps and their status”
- “Show me the logs for my-app — is it healthy?”
- “Create a new app: a simple REST API that returns random quotes”
- “Update my-app to add a /health endpoint”
- “Ask Nova why my-app is crashing”
The AI client will automatically call the appropriate Atelier tools behind the scenes.
Troubleshooting
“ATELIER_API_TOKEN must be set” — The binary was launched without the token env var. Check your MCP config.
Connection refused — The ATELIER_API_URL is not reachable from your machine. Make sure the cluster is accessible on your network.
401 Unauthorized — The JWT token has expired. Generate a new one by logging in again.
Tool not appearing — Restart your AI client after changing the MCP config. Check stderr output for error messages (atelier-mcp logs to stderr).
Export to Docker Compose
Atelier apps don’t have to run on Kubernetes. The atelier-compose CLI reads an app’s source from a local checkout or any Git repository and emits a docker-compose.yml + .env.example, so you can run any Atelier app on Docker Desktop, Podman, or any plain Docker host — no cluster required.
This is useful for:
- Local development — iterate on an app with `docker compose up` on your laptop.
- Sharing apps — send an app to a colleague who has Docker but no Atelier cluster.
- Portability demos — show that an Atelier-built app isn’t locked to the platform.
Getting the binary
atelier-compose ships as a pre-built Linux x86-64 binary, published alongside the platform installer.
```bash
curl -Lo atelier-compose https://tryatelier.blob.core.windows.net/tryatelier/latest/atelier-compose
chmod +x atelier-compose
sudo mv atelier-compose /usr/local/bin/   # optional — put it on your PATH
```

Note: The download is the raw binary, not a tar.gz — no extraction step. For macOS / Windows use WSL, a Linux VM, or build from source with `cargo build --release -p atelier-compose`.
Quick start
Clone the app’s Gitea repo, generate Compose files inside it, then run:
```bash
# Gitea is private by default — generate a personal access token at
# Gitea → user menu → Settings → Applications → Manage Access Tokens,
# then use your Gitea username + token as credentials.
git clone http://<user>:<token>@gitea.atelier.home.arpa/<user>/my-app.git
cd my-app
atelier-compose init
cp .env.example .env   # fill in any secrets
docker compose up --build
```

Open http://localhost:<frontend-port>/ in your browser.
Alternatively, pass --from <git-url> and atelier-compose clones into a temporary directory for you — the path is printed at the end so you can cd into it.
What gets translated
| From atelier-spec.yaml | To docker-compose.yml |
|---|---|
| Each `services[]` entry | One Compose service with the same name |
| `dockerfile` + `context` | `build: { context, dockerfile }` |
| `image` (for pre-built services like `postgres:16`) | `image: <value>` |
| `port` | `expose` for internal services, `ports` for the frontend-equivalent |
| `env[]` | One entry per var in `.env.example` (blank value for secrets); every service gets `env_file: .env` when the spec has at least one env var (omitted otherwise) |
| `storage.size_gi` | Named volume `app-data` mounted at `/data` on the primary service |
Atelier-specific K8s concerns (ingress rules, StripPrefix middlewares, atelier.io/app labels, resource limits) are deliberately dropped because they have no equivalent in Compose.
To keep existing nginx configs working, every service gets a network alias <app-name>-<role> — so proxy_pass http://my-app-backend:8000/ resolves correctly without code edits.
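As a sketch, the alias wiring in the generated Compose file looks roughly like this (service and app names are illustrative, not verbatim tool output):

```yaml
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    networks:
      default:
        aliases:
          - my-app-backend   # so proxy_pass http://my-app-backend:8000/ still resolves
```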
Limitations
- Linux images only. Atelier’s generated Dockerfiles target `linux/amd64`. On Apple Silicon you’ll hit an emulation layer; still works, just slower.
- K8s-specific app code won’t translate. If the generated code assumes `KUBERNETES_SERVICE_HOST`, in-cluster DNS, or other cluster-only facilities, those won’t resolve under Compose. Most LLM-generated apps don’t do this, but check before relying on it.
- One-way generation. `atelier-compose` emits Compose files; it doesn’t track Atelier rebuilds or sync changes back. Re-run with `--force` after pulling fresh source to regenerate.
See atelier-compose/README.md for the complete translation table and flag reference.
Backup & Restore
Protect your platform data by creating downloadable backup archives and restoring from them when needed.
Creating a Backup
- Open Settings from the sidebar’s gear-icon popover.
- Scroll to the Backup & Restore section.
- Click Create Backup.
- A progress bar shows the current step (snapshotting database, archiving each repo).
- When complete, click Download backup to save the `.tar.gz` archive.
The backup includes:
- SQLite database — all app definitions, chat history, build records, settings, Nova memory, and events.
- Gitea repositories — the generated source code for every app.
Restoring from a Backup
Warning: Restoring replaces all platform data. This action cannot be undone.
- Open Settings from the sidebar’s gear-icon popover.
- In the Backup & Restore section, click Choose File and select a backup archive.
- Click Restore.
- Read the confirmation warning carefully, then click Yes, restore from backup.
- The platform restores Gitea repositories and writes the backup database.
- The platform automatically restarts to apply the restored database.
After the restart, all apps, settings, and history from the backup will be in place. Container images will need to be rebuilt by redeploying apps.
What’s Not Included
- Container images — these are rebuilt from the Gitea source code when you redeploy.
- App persistent volumes — user data in PVCs is not included in the backup.
- Platform secrets — API keys and provider credentials should be re-entered in Settings after restoring to a new server.
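If you need the app user data too, copy it out of the volumes separately before restoring. A sketch, assuming kubectl access to the cluster; the namespace, pod name, and `/data` mount path are illustrative:

```bash
kubectl get pvc -A                                    # find the app's claim and namespace
kubectl cp <namespace>/<pod-name>:/data ./my-app-data-backup
```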
Registry Management
Browse and manage container images stored in the internal registry.
Accessing the Registry
Open the Settings popover (gear icon in the sidebar) and choose Registry.
Browsing Images
The registry page shows all stored images grouped by repository. Each entry shows:
- Repository name (e.g. `myapp-backend`, `myapp-frontend`)
- Tags (e.g. `latest`, `build-1`, `build-2`)
Deploying from the Registry
- Click Deploy next to any image.
- The Import modal opens pre-filled with the image reference.
- Set the port and click Deploy.
Deleting Images
- Click the delete button next to a specific image tag.
- The image tag is removed from the registry.
Garbage Collection
After deleting images, the underlying storage isn’t freed immediately. Click Run GC to trigger garbage collection and reclaim disk space.
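You can also inspect the registry from a terminal. This sketch assumes the internal registry speaks the standard Docker Registry v2 HTTP API and is reachable at the address below (hostname illustrative, matching the import examples earlier):

```bash
curl -s http://registry.atelier.local/v2/_catalog | jq .                 # list repositories
curl -s http://registry.atelier.local/v2/myapp-backend/tags/list | jq .  # list tags
```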
User Management (Admin)
Admin users can manage other users on the platform.
Accessing User Management
Open the Settings popover (gear icon in the sidebar) and choose Users (only visible to admins).
Creating a User
- Click Add User.
- Enter a username, password, and optionally an email.
- Select a role:
- Viewer — read-only access (view apps, logs, metrics)
- Developer — can create, update, and deploy apps
- Admin — full access including user management and settings
- Click Create.
Changing a User’s Role
- Find the user in the list.
- Use the role dropdown to select a new role.
- The change takes effect immediately.
Activating / Deactivating Users
- Toggle the Active switch for the user.
- Deactivated users cannot log in but their account is preserved.
Deleting a User
- Click the delete button next to the user.
- Confirm when prompted. The user account is permanently removed.
Changing Your Password
- Click your avatar at the bottom of the sidebar.
- Select Change Password.
- Enter your current password and the new password.
- Click Save.
Forgotten Password / Locked Out
If you’ve forgotten your password and can’t log in, reset it from the server:
```bash
# Interactive — reads password from stdin (preferred)
kubectl exec -it -n atelier deployment/atelier-core -- \
  atelier-core reset-password --user <username>
```

This requires kubectl access to the cluster (i.e. SSH access to the server). The command validates the new password against the standard policy and updates the hash directly in the database.
For other recovery scenarios (Gitea admin, K10 dashboard, expired JWTs), see the Troubleshooting guide.
Activity & System Logs
App Activity Log
Each app tracks significant events. View them in the Activity tab within the app detail view.
Events include:
- App created
- Build triggered / completed / failed
- Deployment rollback
- Secrets updated
- Webhook configured
- App paused / resumed / archived
Global Activity Feed
Click the Activity icon in the sidebar’s utility section to see events across all apps in a single timeline.
System Logs
System-level events from background services are available in Settings > System Logs.
These cover:
- Crash monitor — pod failure detection and recovery
- Image monitor — base image update checks
- Background tasks — scan jobs, code review runs
Quick Reference
| Action | Where |
|---|---|
| Create an app | Apps page > + Create > New App |
| Import a Docker image | Apps page > + Create > Import Image |
| Clone a Git repo | Apps page > + Create > Clone Repo |
| Upload source files | Apps page > + Create > Upload Local |
| Build from Gitea | Apps page > + Create > From Gitea |
| View live logs | App detail > Logs tab |
| Rollback a build | App detail > Deploy tab > Rollback |
| Add secrets | App detail > Secrets tab |
| View scan results | App detail > Security tab |
| Pause/Resume | App detail header buttons |
| Create a workflow | Sidebar Workflows > + New |
| Trigger a workflow | Workflow detail > Trigger button |
| View workflow runs | Workflow detail > Runs tab |
| Configure LLM | Sidebar gear icon > Settings |
| Choose a pipeline profile | Sidebar gear icon > Settings > Pipeline Profile |
| Deploy MCP server (catalog) | Sidebar gear icon > Settings > MCP Servers > Deploy |
| Deploy custom MCP server | Settings > MCP Servers > + Add MCP server (npm/pip package or Docker image) |
| Enable MCP for builds | Settings > MCP Servers > Pipeline checkbox |
| Ask Nova for help | Sidebar > Nova icon |
| View archived apps | Sidebar > Archived icon |
| Global activity feed | Sidebar > Activity icon |
| Manage users | Sidebar gear icon > Users (admin only) |
| Registry browser | Sidebar gear icon > Registry |
| Token usage | Sidebar gear icon > Token Usage |
| Documentation | Sidebar gear icon > Documentation ↗ |