Troubleshooting
Common issues during and after install, and how to resolve them.
Last updated 18 Apr 2026
Most issues during an Operayde install cluster around networking, DNS, and IdP configuration. Below is a pragmatic checklist, in the order we usually walk through it with customers.
Appliance won’t enrol
- Verify outbound HTTPS to `ops.<region>.operayde.com:443` is allowed.
- Check the token hasn't expired (72 h TTL).
- Confirm the hardware serial on the docket matches the unit.
```shell
# From a laptop on the same LAN as the appliance:
curl -sv https://ops.eu-1.operayde.com/healthz | head
```
If curl succeeds but the appliance can't reach us, the firewall isn't letting the appliance itself out; ask your network team.
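If curl isn't available, a plain TCP probe answers the same question. A minimal sketch in Python (`can_reach` is our name, and `eu-1` is just the example region from above):

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout` s."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_reach("ops.eu-1.operayde.com") from the laptop, then rule the
# firewall in or out by comparing with what the appliance reports.
```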
Staff can’t sign in
- Check `appliance.<your-domain>` resolves to the appliance LAN IP.
- Confirm the OIDC client in your IdP has the appliance callback URL pinned.
- Make sure the tenant ID in your IdP's `tid` claim matches the enrolment.
429 from the gateway
The appliance is healthy; the tenant or key budget is exhausted. Check Virtual keys → Usage in the portal for the offending key, or raise the `budget:` scope on a new key.
Health checks failing
Every service exposes `/healthz` on its local port. The appliance dashboard
aggregates them into a single “green / degraded / red” indicator,
but for diagnostics:
```shell
# On the appliance, as the operayde user:
systemctl status operayde-workspace
systemctl status operayde-gateway
systemctl status operayde-audit-writer
```
Any state other than `active (running)` warrants an incident. Collect `journalctl -u <service> -n 200` and attach it to the ticket.
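The dashboard's rollup can be reasoned about locally. A sketch of the aggregation rule as we understand it (the dashboard's exact logic may differ):

```python
def overall_status(states: dict[str, str]) -> str:
    """Collapse per-service systemd states into the dashboard's single
    indicator: 'green' when every service is active, 'red' when none are,
    'degraded' otherwise."""
    active = sum(1 for state in states.values() if state == "active")
    if active == len(states):
        return "green"
    if active == 0:
        return "red"
    return "degraded"
```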
When to call us
- Tamper-evident seal broken on arrival.
- `systemctl` shows repeated restart loops.
- `operayde-workspace` OOMs on Micro during RAG ingest (indicates oversized docs; we have workarounds).
- TPM measurements fail (the appliance refuses to unlock disks).
Open an incident from Portal → Fleet → [appliance] → Open incident. Include console screenshots where possible.