Hosting n8n on OpenShift Local (CRC)#
This guide walks you through deploying n8n on OpenShift Local (CRC), Red Hat's tool for running a local OpenShift cluster. It mirrors AWS/EKS deployment, but runs entirely on your local machine. It's designed for testing n8n in an OpenShift environment locally, without cloud costs.
You will need a machine with significant free resources, because OpenShift itself consumes a substantial share of them.
OpenShift concepts vs standard Kubernetes#
OpenShift is built on Kubernetes but uses different terminology and has stricter security defaults. If you are familiar with standard Kubernetes, or with a guide that targets a managed Kubernetes service such as EKS, the table below maps the equivalent concepts so you know what to expect.
| Standard Kubernetes / EKS | OpenShift Local (CRC) |
|---|---|
| `kubectl` | `oc` (OpenShift CLI; also understands `kubectl` commands) |
| Namespace | Project (same concept, different command) |
| Ingress / LoadBalancer | Route (built into OpenShift, no controller needed) |
| EBS StorageClass (gp3) | CRC built-in storage provisioner (no setup needed) |
| RDS PostgreSQL | In-cluster PostgreSQL via Helm (Bitnami) |
| ElastiCache Redis | In-cluster Redis via Helm (Bitnami) |
| AWS S3 | MinIO in-cluster (S3-compatible) |
| Pod Identity / IRSA | Access keys via Kubernetes Secret |
| AWS Load Balancer Controller | Not needed (Routes are built-in) |
| OIDC / IAM | Not needed |
| ~$135–400/month | Free (runs on your machine) |
Prerequisites#
Before starting, confirm your machine has:
- CPU: 4 or more physical cores (not just threads) with virtualization support
- RAM: 32+ GB free minimum (CRC reserves 9 GB for its VM)
- Disk: 100 GB free
- OS: Ubuntu (22.04 LTS or newer)
Prepare Ubuntu#
Open a terminal#
Press Ctrl+Alt+T or search for Terminal in the Applications menu.
Every command in this guide is typed into the terminal and run by pressing Enter.
Update your system#
Start with a system update to avoid dependency issues:
```bash
sudo apt update && sudo apt upgrade -y
```
`sudo` means "run as administrator". You will be prompted for your password. Characters you type won't appear on screen; this is normal.
Check CPU virtualization support#
CRC runs a virtual machine. Your CPU must support hardware virtualization:
```bash
egrep -c '(vmx|svm)' /proc/cpuinfo
```

- Output `0`: Virtualization is disabled. Enter your BIOS/UEFI settings and enable VT-x (Intel) or AMD-V (AMD), then reboot and try again.
- Output `1` or higher: You are good to continue.
Install KVM and libvirt#
KVM is Linux’s built-in hypervisor. CRC uses it to run the OpenShift cluster VM:
```bash
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients
```
Install virtiofsd, which CRC requires to share the filesystem with the cluster VM:
```bash
sudo apt install -y virtiofsd
```
Start the libvirt service and configure it to start automatically on boot:
```bash
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
```
Verify it's running:
```bash
sudo systemctl status libvirtd
```
Look for Active: active (running) in green. Press q to exit.
Add user to required groups#
This allows you to use KVM and libvirt without typing sudo for every command:
```bash
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
```
Warning
You must log out and log back in (or reboot) for this to take effect. If you skip this step, CRC will fail with a “permission denied” error.
Reboot now:
```bash
sudo reboot
```
After logging back in, open a terminal and verify group membership:
```bash
groups
```
You should see libvirt and kvm listed.
Install NetworkManager#
CRC requires NetworkManager to manage DNS entries for the cluster’s internal domains (*.apps-crc.testing, api.crc.testing):
```bash
sudo apt install -y network-manager
sudo systemctl enable NetworkManager
sudo systemctl start NetworkManager
```
Verify it's connected:
```bash
nmcli general status
```
The STATE column should show connected.
Install tools#
Get a Red Hat account and pull secret#
CRC requires a free Red Hat account to pull container images.
- Create a free Red Hat account, if you don't already have one.
- In console.redhat.com/openshift/create/local, click Download OpenShift Local.
- Select Linux, and download the `.tar.xz` file to `~/Downloads`.
- On the same page of the Red Hat console, click Copy pull secret. Paste it into a text file and save it for later.
Install CRC#
Open a terminal in your Downloads folder:

```bash
cd ~/Downloads
```
Extract the archive.
```bash
tar -xvf crc-linux-amd64.tar.xz   # adjust if the downloaded filename differs
```
Move the crc binary to a system-wide location, so it's available in any terminal:
```bash
sudo mv crc-linux-*-amd64/crc /usr/local/bin/
```
Verify the installation:
```bash
crc version
```
A version number should print to the terminal.
Install Helm#
Helm installs n8n and supporting services into the cluster:
```bash
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
Verify:
```bash
helm version
```
Set environment variables#
Set a namespace name to use throughout this guide. The date suffix matches the examples later in this guide (e.g. `n8n-20260306`):

```bash
export NAMESPACE=n8n-$(date +%Y%m%d)
echo $NAMESPACE
```
Variable persistence
These variables only last for the current terminal session. Re-run this line whenever you open a new terminal before continuing.
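If you'd rather not re-export the variable each session, one option is to append the export to your shell profile — a sketch; the namespace value shown is an example, use your own:

```shell
# Append the export to ~/.bashrc so new terminals pick it up automatically.
# The namespace value below is an example -- replace it with your own.
echo 'export NAMESPACE=n8n-20260306' >> ~/.bashrc

# Confirm the line was added
grep 'export NAMESPACE' ~/.bashrc
```

New terminals will then have `$NAMESPACE` set without any manual step.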
Start OpenShift Local#
Run CRC setup#
You only need to run this once. It configures KVM networking, checks system requirements, and downloads the CRC bundle (~2.5 GB):
```bash
crc setup
```
This takes several minutes. If it reports any missing packages, install them with sudo apt install -y <package-name> and re-run.
Configure CRC memory and start the cluster#
CRC defaults to 9 GB of RAM for its VM. n8n and its supporting services need more headroom. Set the memory to 14 GB before starting:
```bash
crc config set memory 14336   # value in MiB (14 GiB)
```
You only need to run this once. The setting persists across crc stop / crc start cycles.
Recommended: Save your pull secret to a file first so you don’t have to paste it every time:
```bash
nano ~/pull-secret.txt
```

Paste the pull secret into the editor, press Ctrl+O then Enter to save, and Ctrl+X to exit. Then restrict the file's permissions:

```bash
chmod 600 ~/pull-secret.txt
```
Start CRC using the file:
```bash
crc start --pull-secret-file ~/pull-secret.txt
```
Alternatively, run crc start without the flag and paste the secret when prompted.
This takes 10–15 minutes. When complete you will see something like:
```
Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: <generated-password>

Log in as user:
  Username: developer
  Password: developer
```
Save the kubeadmin password now. You will need it in the next step. You can retrieve it later using crc console --credentials.
Verify DNS resolution#
On Ubuntu, CRC configures the system resolver automatically with NetworkManager and systemd-resolved. No manual /etc/hosts entries are needed.
Verify the API is reachable:
```bash
sudo ss -tlnp | grep 6443
```
You should see a process bound to 127.0.0.1:6443. If nothing appears, re-run crc start. If DNS doesn't resolve *.apps-crc.testing, see the troubleshooting section.
Configure your shell#
CRC bundles the oc CLI inside the VM. This command makes it available in your terminal:
```bash
eval $(crc oc-env)
```
To make this permanent so you don't have to run it every time you open a terminal:
```bash
echo 'eval $(crc oc-env)' >> ~/.bashrc
source ~/.bashrc
```
Verify oc works:
```bash
oc version
```
Log in to the cluster#
```bash
oc login -u kubeadmin -p <your-kubeadmin-password> https://api.crc.testing:6443
```

Replace `<your-kubeadmin-password>` with the password printed at the end of `crc start`.
Verify you are logged in:
```bash
oc whoami
```
The output should be `kubeadmin`.
Standalone deployment#
Standalone mode runs n8n as a single pod with SQLite. No external database or Redis is required. This is ideal for exploring n8n and testing workflows locally.
Create the project#
In OpenShift, a project is the same as a Kubernetes namespace: an isolated space for your resources:
```bash
oc new-project $NAMESPACE
```
Grant the required security permission#
OpenShift enforces strict security policies called Security Context Constraints (SCCs). By default, pods aren't allowed to run as a specific, fixed user ID. The n8n chart runs as user ID 1000, so you must explicitly allow this.
Use the full explicit form. The shorthand -z flag can silently fail in some OpenShift versions:
```bash
oc adm policy add-scc-to-user anyuid \
  system:serviceaccount:$NAMESPACE:default
```
Verify the binding was created:
```bash
oc get rolebindings -n $NAMESPACE
```
You should see a binding referencing system:openshift:scc:anyuid.
Create the required secret#
Generate a random encryption key and store it in a Secret. The secret and key names below are illustrative; they must match what your values file references:

```bash
oc create secret generic n8n-secrets \
  --namespace $NAMESPACE \
  --from-literal=N8N_ENCRYPTION_KEY=$(openssl rand -base64 32)
```
Back up the encryption key immediately:
```bash
# Adjust the secret name if yours differs
oc get secret n8n-secrets -n $NAMESPACE \
  -o jsonpath='{.data.N8N_ENCRYPTION_KEY}' | base64 -d
```
Copy that output and store it somewhere safe. Losing it means all stored credentials in your workflows become permanently unreadable.
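One way to keep the backup on disk with owner-only permissions — a sketch; the file path is an arbitrary choice and `KEY` stands in for the value you copied:

```shell
# Write the key to a file only your user can read.
KEY="paste-your-encryption-key-here"   # placeholder, not a real key
umask 077                              # files created now get mode 600
printf '%s\n' "$KEY" > ~/n8n-encryption-key.backup
ls -l ~/n8n-encryption-key.backup
```

Also consider a copy somewhere off this machine (password manager, encrypted drive), since the point of the backup is surviving a lost laptop.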
Create your values file#
Create a file called n8n-standalone-values.yaml. You can use nano (a simple text editor):
```bash
nano n8n-standalone-values.yaml
```
Paste the following, then press Ctrl+O to save and Ctrl+X to exit:
The exact contents depend on your chart version; the sketch below shows the settings this guide relies on (run as UID 1000, SQLite, modest resource requests), with illustrative key names — check them against your chart's `values.yaml`:

```yaml
# n8n-standalone-values.yaml (sketch; adapt key names to your chart's schema)

# Run as the UID the n8n image expects, so it can write /home/node/.n8n
securityContext:
  enabled: true
  runAsUser: 1000
  fsGroup: 1000

# SQLite (the chart default) -- no external database needed
db:
  type: sqlite

# Persist workflow data across pod restarts
persistence:
  enabled: true
  size: 2Gi

# Keep requests modest so the pod schedules on the CRC node
resources:
  requests:
    memory: 512Mi
    cpu: 250m
  limits:
    memory: 1Gi
```
Deploy n8n#
The n8n Helm chart hard-codes `seccompProfile: RuntimeDefault` in the pod spec. OpenShift 4.14+ converts this to a deprecated alpha annotation that's rejected at admission, even when the anyuid SCC is granted. The fix is to pull the chart locally, remove those two lines, and install from the patched copy.
Pull and patch the chart:
The chart reference `n8n/n8n` below is a placeholder for the chart repository you added:

```bash
helm pull n8n/n8n --untar --untardir ~/

# Delete the seccompProfile line and the type: RuntimeDefault line that follows it
sed -i '/seccompProfile/,+1d' ~/n8n/templates/*.yaml
grep -rn seccompProfile ~/n8n/templates/ || echo "patched"
```
Install from the patched chart:
```bash
helm install n8n ~/n8n/ \
  --namespace $NAMESPACE \
  -f n8n-standalone-values.yaml
```
Access n8n using port forward#
OpenShift Routes require a hostname, which adds complexity for standalone local access. Port-forward is simpler:
```bash
oc port-forward -n $NAMESPACE svc/n8n 5678:5678
```
Leave this running, then open your browser to:
```
http://localhost:5678
```
n8n will prompt you to create an owner account.
Stop tunnel
Press Ctrl+C to stop the tunnel. Re-run the port-forward command to access n8n again later.
Check deployment status#
```bash
oc get pods -n $NAMESPACE
```
Expected:
```
NAME                   READY   STATUS    RESTARTS   AGE
n8n-xxxxxxxxxx-xxxxx   1/1     Running   0          2m
```
Standalone deployment complete.
Multi-instance queue mode#
Multi-instance queue mode runs multiple n8n pods with a shared database, message queue, and object storage. It requires an n8n Enterprise license.
Instead of AWS managed services, this guide uses in-cluster equivalents that mirror what you would find in an on-premises or customer OpenShift environment:
| AWS Service | Local Equivalent |
|---|---|
| RDS PostgreSQL | PostgreSQL (Bitnami Helm chart) |
| ElastiCache Redis | Redis (Bitnami Helm chart) |
| S3 | MinIO (S3-compatible, Bitnami Helm chart) |
Install in-cluster services#
Create the Project and add Bitnami Helm repo#
```bash
oc new-project $NAMESPACE
```
Add the Bitnami chart repository (only needed once):
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```
Install PostgreSQL#
In the command below, replace YourStrongPassword123 with a suitable complex password.
The release name (`postgres`), username, and database name below are illustrative:

```bash
helm install postgres bitnami/postgresql \
  --namespace $NAMESPACE \
  --set auth.username=n8n \
  --set auth.password=YourStrongPassword123 \
  --set auth.database=n8n \
  --set global.compatibility.openshift.adaptSecurityContext=auto \
  --set primary.persistence.size=5Gi
```
Flag
The global.compatibility.openshift.adaptSecurityContext=auto flag tells Bitnami to let OpenShift assign the correct user ID automatically (avoids SCC errors).
Save the endpoint, as it's fixed for in-cluster services:
```
postgres-postgresql.YOUR_NAMESPACE.svc.cluster.local
```

(This assumes the Helm release is named `postgres`, which produces a service named `postgres-postgresql`.)
Replace YOUR_NAMESPACE with your actual $NAMESPACE value (e.g. n8n-20260306).
Install Redis#
The flags below are a sketch; this guide assumes a standalone Redis without authentication, which keeps the n8n queue configuration simple:

```bash
helm install redis bitnami/redis \
  --namespace $NAMESPACE \
  --set architecture=standalone \
  --set auth.enabled=false \
  --set global.compatibility.openshift.adaptSecurityContext=auto \
  --set master.persistence.size=2Gi
```
Redis endpoint: redis-master.$NAMESPACE.svc.cluster.local
Install MinIO (S3-compatible storage)#
In the command below, replace MinioStrongPassword123 with a suitable complex password.
```bash
helm install minio bitnami/minio \
  --namespace $NAMESPACE \
  --set auth.rootUser=minioadmin \
  --set auth.rootPassword=MinioStrongPassword123 \
  --set global.compatibility.openshift.adaptSecurityContext=auto \
  --set persistence.size=5Gi
```
MinIO endpoint: http://minio:9000 (within the same namespace, just the service name works)
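The short `http://minio:9000` form works because Kubernetes service DNS resolves bare service names within the same namespace; from anywhere else you'd use the fully qualified form. The pattern, sketched with example values:

```shell
# Kubernetes service DNS: <service>.<namespace>.svc.cluster.local
SERVICE=minio
NAMESPACE=n8n-20260306   # example namespace
echo "${SERVICE}.${NAMESPACE}.svc.cluster.local"
# -> minio.n8n-20260306.svc.cluster.local
```

This is the same pattern used for the PostgreSQL and Redis endpoints above.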
Create the n8n storage bucket in MinIO#
MinIO needs a bucket created before n8n can use it. Use the MinIO web console:
Open the MinIO console:
```bash
oc port-forward -n $NAMESPACE svc/minio 9001:9001
```
Leave this running, then open your browser to http://localhost:9001.
Log in with:
- Username: minioadmin
- Password: MinioStrongPassword123
In the console:
1. Click Buckets in the left sidebar → Create Bucket
2. Bucket Name: n8n-data
3. Click Create Bucket
Go back to the terminal and press Ctrl+C to stop the port-forward.
Deploy n8n#
Grant SCC for n8n#
```bash
oc adm policy add-scc-to-user anyuid \
  system:serviceaccount:$NAMESPACE:default
```
Verify that oc get rolebindings -n $NAMESPACE shows a binding for system:openshift:scc:anyuid.
Create required secrets#
The secret names and keys below are illustrative; they must match what your values file references. The host value is a placeholder for now and gets updated after you create the Route:

```bash
oc create secret generic n8n-secrets \
  --namespace $NAMESPACE \
  --from-literal=N8N_ENCRYPTION_KEY=$(openssl rand -base64 32)

oc create secret generic n8n-host \
  --namespace $NAMESPACE \
  --from-literal=N8N_HOST=localhost
```
Back up the encryption key immediately:
```bash
# Adjust the secret name if yours differs
oc get secret n8n-secrets -n $NAMESPACE \
  -o jsonpath='{.data.N8N_ENCRYPTION_KEY}' | base64 -d
```
Store that value somewhere safe.
In the commands below, replace YourStrongPassword123 and MinioStrongPassword123 with the passwords from the earlier steps.
The secret names and key names below are illustrative; match them to what your values file references:

```bash
oc create secret generic postgres-credentials \
  --namespace $NAMESPACE \
  --from-literal=username=n8n \
  --from-literal=password=YourStrongPassword123

oc create secret generic minio-credentials \
  --namespace $NAMESPACE \
  --from-literal=accessKey=minioadmin \
  --from-literal=secretKey=MinioStrongPassword123
```
Create values file#
Create n8n-multimain-ocp-values.yaml. Replace the 3 placeholder values marked # <-- REPLACE:
```bash
nano n8n-multimain-ocp-values.yaml
```
The exact contents depend on your chart version; the sketch below shows the settings this guide relies on (queue mode with multiple mains, PostgreSQL, Redis, MinIO, UID 1000), with illustrative key names — check them against your chart's `values.yaml`:

```yaml
# n8n-multimain-ocp-values.yaml (sketch; adapt key names to your chart's schema)

securityContext:
  enabled: true
  runAsUser: 1000
  fsGroup: 1000

# Queue mode: multiple main instances plus workers
main:
  replicaCount: 2
worker:
  replicaCount: 1

db:
  type: postgresdb
  host: postgres-postgresql.YOUR_NAMESPACE.svc.cluster.local
  port: 5432
  database: n8n
  user: n8n
  existingPasswordSecret: postgres-credentials

queue:
  bull:
    redis:
      host: redis-master.YOUR_NAMESPACE.svc.cluster.local
      port: 6379

# S3-compatible binary data storage (MinIO)
externalStorage:
  s3:
    host: minio:9000
    bucketName: n8n-data
    existingCredentialsSecret: minio-credentials

license:
  key: "<your-n8n-enterprise-license-key>"  # <-- REPLACE
```
Save and exit nano (Ctrl+O, Ctrl+X).
Before deploying, replace the two YOUR_NAMESPACE placeholders with your actual namespace value:
```bash
sed -i "s/YOUR_NAMESPACE/$NAMESPACE/g" \
  n8n-multimain-ocp-values.yaml
```
Verify the replacements:
```bash
grep svc.cluster.local n8n-multimain-ocp-values.yaml
```
Both lines should show your actual namespace name, not YOUR_NAMESPACE.
Deploy n8n#
If you didn't patch the chart previously, pull and patch it now:
```bash
# The chart reference n8n/n8n is a placeholder for the repo you added
helm pull n8n/n8n --untar --untardir ~/
sed -i '/seccompProfile/,+1d' ~/n8n/templates/*.yaml
```
Install from the patched chart:
```bash
helm install n8n ~/n8n/ \
  --namespace $NAMESPACE \
  -f n8n-multimain-ocp-values.yaml
```
Create a route for external access#
In OpenShift, a Route exposes a service to the outside world. It's the equivalent of a Kubernetes Ingress or LoadBalancer, and requires no extra controller:
```bash
oc expose service n8n-main -n $NAMESPACE
```
Get the URL:
```bash
oc get route n8n-main -n $NAMESPACE \
  -o jsonpath='http://{.spec.host}{"\n"}'
```
The URL will look like: http://n8n-main-n8n-20260306.apps-crc.testing
Update the host secret#
n8n needs to know its public URL. Update the secret with the Route hostname, then restart the pods:
The secret and deployment names below are illustrative; match them to your values file and the resources `oc get deployments -n $NAMESPACE` shows:

```bash
# Capture the Route hostname
ROUTE_HOST=$(oc get route n8n-main -n $NAMESPACE -o jsonpath='{.spec.host}')

# Update the host secret with the public URL
oc create secret generic n8n-host \
  --namespace $NAMESPACE \
  --from-literal=N8N_HOST=$ROUTE_HOST \
  --from-literal=WEBHOOK_URL=http://$ROUTE_HOST/ \
  --dry-run=client -o yaml | oc apply -f -

# Restart so the pods pick up the new value
oc rollout restart deployment/n8n-main deployment/n8n-worker -n $NAMESPACE
```
Wait for the rollout to complete:
```bash
oc rollout status deployment/n8n-main -n $NAMESPACE
```
Verify all pods are running#
```bash
oc get pods -n $NAMESPACE
```
Expected (all Running):
```
NAME                          READY   STATUS    RESTARTS   AGE
minio-xxxxxxxxxx-xxxxx        1/1     Running   0          15m
n8n-main-xxxxxxxxxx-xxxxx     1/1     Running   0          3m
n8n-main-xxxxxxxxxx-xxxxx     1/1     Running   0          3m
n8n-worker-xxxxxxxxxx-xxxxx   1/1     Running   0          3m
postgres-postgresql-0         1/1     Running   0          20m
redis-master-0                1/1     Running   0          18m
```
Open your browser to the URL printed above.
Multi-instance deployment complete.
Updating n8n#
To change configuration or upgrade the chart version, pull and re-patch the new chart version, then upgrade:
The chart reference `n8n/n8n` below is a placeholder for the chart source you used originally:

```bash
# Pull the new chart version and re-apply the seccompProfile patch
helm pull n8n/n8n --untar --untardir ~/
sed -i '/seccompProfile/,+1d' ~/n8n/templates/*.yaml

# Standalone:
helm upgrade n8n ~/n8n/ \
  --namespace $NAMESPACE \
  -f n8n-standalone-values.yaml

# Multi-instance:
helm upgrade n8n ~/n8n/ \
  --namespace $NAMESPACE \
  -f n8n-multimain-ocp-values.yaml
```
Stopping and resuming CRC#
CRC doesn't need to be deleted between sessions. You can stop and restart it:
```bash
# Stop the cluster (preserves all state)
crc stop

# Resume later
crc start
```
After restarting, re-run:
```bash
eval $(crc oc-env)
oc login -u kubeadmin -p <your-kubeadmin-password> https://api.crc.testing:6443
oc project $NAMESPACE
```
Troubleshooting#
crc setup fails with “libvirt not found”#
```bash
sudo apt install -y libvirt-daemon-system libvirt-clients
sudo systemctl enable --now libvirtd
```
Then re-run crc setup.
crc start fails with “insufficient memory”#
CRC requires at least 9 GB of free RAM. Close other applications and try again. If you followed the memory configuration step earlier, CRC is already configured to use 14 GB.
n8n pod stuck in Pending or never created (SCC error)#
Check events for the error:
```bash
oc get events -n $NAMESPACE --sort-by=.lastTimestamp
```
If you see unable to validate against any security context constraint or seccomp may not be set, the chart’s hard coded seccompProfile: RuntimeDefault is being rejected. OpenShift 4.14+ converts this to a deprecated alpha annotation that admission rejects even when anyuid SCC is granted.
1. Grant anyuid using the explicit form (the -z shorthand can silently fail):
```bash
# Remove any binding created with the -z shorthand (harmless if none exists)
oc adm policy remove-scc-from-user anyuid -z default -n $NAMESPACE

# Grant using the fully qualified service account name
oc adm policy add-scc-to-user anyuid \
  system:serviceaccount:$NAMESPACE:default
```
Verify: run oc get rolebindings -n $NAMESPACE. You should see a binding for system:openshift:scc:anyuid.
2. Pull the chart locally and remove the seccompProfile lines:
```bash
# The chart reference n8n/n8n is a placeholder for the repo you added
helm pull n8n/n8n --untar --untardir ~/

# Delete the seccompProfile line and the type: RuntimeDefault line after it
sed -i '/seccompProfile/,+1d' ~/n8n/templates/*.yaml
```
3. Uninstall and reinstall from the patched chart:
```bash
helm uninstall n8n -n $NAMESPACE

helm install n8n ~/n8n/ \
  --namespace $NAMESPACE \
  -f n8n-standalone-values.yaml
```
Route URL returns “Application not available”#
The pods may still be starting. Check:
```bash
oc get pods -n $NAMESPACE
oc get svc -n $NAMESPACE
```
Also confirm the Route exists:
```bash
oc get route -n $NAMESPACE
```
n8n pod stuck in Pending with Insufficient memory#
The CRC node doesn’t have enough free memory to schedule the pod.
Fix: Increase CRC’s VM memory and restart:
```bash
crc stop
crc config set memory 14336
crc start
```
After CRC restarts, the pod should schedule automatically. If the pod is still pending after a few minutes, delete it to force a reschedule:
```bash
oc delete pod <pod-name> -n $NAMESPACE
```
If your machine can’t spare 14 GB, you can also lower the pod’s memory request in n8n-standalone-values.yaml:
```yaml
# Values are illustrative; tune to what your machine can spare
resources:
  requests:
    memory: 512Mi
    cpu: 250m
```
Then upgrade: helm upgrade n8n ~/n8n/ -n $NAMESPACE -f n8n-standalone-values.yaml
DNS not resolving .apps-crc.testing or api.crc.testing#
On Ubuntu, CRC configures DNS automatically. If it fails, restart NetworkManager:
```bash
sudo systemctl restart NetworkManager
```
If still broken, add entries manually (CRC routes traffic through 127.0.0.1):
```bash
sudo tee -a /etc/hosts <<'EOF'
127.0.0.1 api.crc.testing
127.0.0.1 console-openshift-console.apps-crc.testing
127.0.0.1 oauth-openshift.apps-crc.testing
EOF
```
Subdomains
When you expose Routes in the multi-instance section, new *.apps-crc.testing subdomains are created. Add them to /etc/hosts pointing to 127.0.0.1 if your browser can’t reach them.
n8n pod crashes with EACCES: permission denied writing to /home/node/.n8n/#
This means the pod is running as a random OpenShift-assigned UID instead of UID 1000 (the `node` user the n8n image expects). It happens when `securityContext.enabled: false` is set in your values without `runAsUser: 1000` and `fsGroup: 1000`: OpenShift then assigns a random UID that can't write to the PVC.
Fix: Ensure securityContext.enabled: true is set in your values file, and that the chart has been patched to remove seccompProfile (see the SCC error section above). Both are required together.
View pod logs#
```bash
# Standalone
oc logs -f deployment/n8n -n $NAMESPACE

# Multi-instance main instances
oc logs -f deployment/n8n-main -n $NAMESPACE

# Multi-instance workers
oc logs -f deployment/n8n-worker -n $NAMESPACE
```
All events in the namespace#
```bash
oc get events -n $NAMESPACE --sort-by=.lastTimestamp
```
Quick Reference#
Re-export variables after reopening terminal#
```bash
export NAMESPACE=<your-namespace>   # e.g. n8n-20260306
eval $(crc oc-env)
oc project $NAMESPACE
```
Check cluster status#
```bash
crc status
```
Open the OpenShift web console#
```bash
crc console
```
Log in with kubeadmin / your password to see a graphical view of everything running.
Things to save#
| Item | Why it matters |
|---|---|
| kubeadmin password | Log in to the cluster |
| n8n encryption key | Lose this = all stored credentials unreadable |
| `n8n-standalone-values.yaml` | Required for `helm upgrade` |
| `n8n-multimain-ocp-values.yaml` | Required for `helm upgrade` |
| MinIO root password | Access the MinIO console |
| PostgreSQL password | Database access |
Next steps#
- Learn more about configuring and scaling n8n.
- Or explore using n8n: try the Quickstarts.