Configuration
Link Cable’s runtime config lives at config/linkcable/config.json on each backend. This page documents every field. The defaults are tuned for stable networks with normal latency; sections below call out which values are worth tuning and when.
```json
{
  "debug": false,
  "role": "PARTICIPANT",
  "serverId": "server1",
  "orchestratorServerId": "server1",
  "swapWindowSeconds": 10,
  "snapshotCacheTtlSeconds": 300,
  "heartbeatIntervalSeconds": 15,
  "periodicFlushIntervalSeconds": 15,
  "serverTimeoutSeconds": 45,
  "loadTimeoutSeconds": 15,
  "saveTimeoutSeconds": 15,
  "backupMaxPerPlayer": 20,
  "backupMaxAgeDays": 30,
  "ultraSyncJoinMigrationEnabled": false,
  "ultraSyncMongoUri": "active",
  "ultraSyncMongoDatabase": "UltraSync",
  "ultraSyncMongoCollection": "users",
  "operationModeThresholds": {
    "degradedFailureCount": 3,
    "emergencyFailureCount": 3
  }
}
```

Identity Fields
| Field | Type | Default | Purpose |
|---|---|---|---|
| role | enum | PARTICIPANT | ORCHESTRATOR or PARTICIPANT. Exactly one backend in the network is the orchestrator; everyone else is a participant. |
| serverId | String | server1 | Unique identifier for this backend within the network. Conventionally the same name your proxy uses for the server (hub, survival, battles, etc.). |
| orchestratorServerId | String | server1 | The serverId of the orchestrator backend. Must match the orchestrator’s value exactly (case-sensitive). |
These three fields are the only ones that must be set per-backend. Everything else can stay at defaults for most networks.
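As a sketch, on a network whose proxy names its servers hub and survival (the names here are illustrative, not part of Link Cable), the survival backend’s identity fields might look like:

```json
{
  "role": "PARTICIPANT",
  "serverId": "survival",
  "orchestratorServerId": "hub"
}
```

On the hub backend, the same fields would read "role": "ORCHESTRATOR", "serverId": "hub", "orchestratorServerId": "hub" — the orchestrator points orchestratorServerId at itself.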
Timing Fields
The defaults work for stable networks with normal proxy and backend latency. If your network is high-latency, has slow backend startup, or runs through several proxies, raise the relevant values rather than disabling them.
| Field | Type | Default | Purpose |
|---|---|---|---|
| swapWindowSeconds | Int | 10 | How long fast-switch joins retry through a settle window before giving up. Raise this first if you see “Player state is busy, retrying” spam during otherwise-successful transfers. |
| heartbeatIntervalSeconds | Int | 15 | How often each backend publishes its heartbeat to Redis. Lower values detect orchestrator/participant outages faster but raise Redis traffic. |
| serverTimeoutSeconds | Int | 45 | How long a backend can go without publishing a heartbeat before peers consider it offline. Should be at least three times heartbeatIntervalSeconds. |
| periodicFlushIntervalSeconds | Int | 15 | How often live state is flushed to canonical storage during normal play. Do not set this to 0 in beta or production — it limits how much unsaved state is exposed during churn. |
| loadTimeoutSeconds | Int | 15 | Per-attempt timeout for loading a player’s snapshot from canonical storage. |
| saveTimeoutSeconds | Int | 15 | Per-attempt timeout for writing a snapshot to canonical storage. |
| snapshotCacheTtlSeconds | Int | 300 | How long a source-server snapshot stays cached after a player transfers, so pasture-tethered Pokémon stay resolvable on the destination. |
When to raise the timing values
- High-latency proxy or VPN-linked backends. Raise swapWindowSeconds before raising the heartbeat values. The settle window is what gives a slow proxy time to land the player on the destination cleanly.
- Slow backend startup. Raise serverTimeoutSeconds so participants don’t briefly mark a restarting orchestrator as offline.
- Frequent restart waves. Stagger restarts; if you can’t, raise serverTimeoutSeconds so peers don’t churn through “online → offline → online” thrash.
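As an illustrative sketch (the exact numbers are assumptions, not tested recommendations), a high-latency override might widen the settle window first and keep serverTimeoutSeconds at three times heartbeatIntervalSeconds:

```json
{
  "swapWindowSeconds": 20,
  "heartbeatIntervalSeconds": 20,
  "serverTimeoutSeconds": 60
}
```

Note that the 3× relationship between serverTimeoutSeconds and heartbeatIntervalSeconds is preserved; raising the heartbeat interval without raising the timeout would make peers flag each other offline after fewer missed heartbeats.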
When not to raise them
- “Players sometimes lose ownership during transfers.” That isn’t a timing problem; it’s a transfer-storm problem. See the Beta Rollout guidance on chaotic transfer storms before reaching for higher timeouts.
Backup Fields
| Field | Type | Default | Purpose |
|---|---|---|---|
| backupMaxPerPlayer | Int | 20 | Maximum number of automatic backups retained per player. Older backups beyond this count are pruned. Manual backups created via /linkcable backup create still count toward this cap. |
| backupMaxAgeDays | Int | 30 | Maximum age of an automatic backup before it’s pruned. Manual backups age the same way. |
A network with very active players may want to lower backupMaxPerPlayer (each backup is a full snapshot, so storage adds up) or shorten backupMaxAgeDays. For low-volume networks the defaults are fine.
Pre-restore backups are taken automatically before any restore overwrites state. Those count toward the cap as well.
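For a high-volume network, a tighter retention policy might look like the following (the specific values are illustrative, not a recommendation from the Link Cable team):

```json
{
  "backupMaxPerPlayer": 10,
  "backupMaxAgeDays": 14
}
```

Because pre-restore and manual backups share the same cap, leave enough headroom that routine automatic backups don’t immediately push a manually created one out.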
UltraSync Migration Fields
These fields only matter if you’re migrating off UltraSync. If you’ve never used UltraSync, leave them at defaults.
| Field | Type | Default | Purpose |
|---|---|---|---|
| ultraSyncJoinMigrationEnabled | Boolean | false | When true, players who don’t already exist in Link Cable but do exist in UltraSync are migrated on first join. Off by default; enable only after a successful bulk migration test pass. |
| ultraSyncMongoUri | String | "active" | Mongo connection string for the UltraSync database. Use "active" to reuse the same Mongo deployment Cobblemon/Ceremony already use; otherwise a full URI like "mongodb://host:27017". |
| ultraSyncMongoDatabase | String | "UltraSync" | Database name. The UltraSync exports use UltraSync — don’t change this unless you’ve manually relocated the data. |
| ultraSyncMongoCollection | String | "users" | Collection name. UltraSync stores player records in users. |
See UltraSync Migration for the command and the safety rules.
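Putting the migration fields together, a setup pointing at a Mongo deployment other than the active one might look like this (the hostname is a placeholder, not a real default):

```json
{
  "ultraSyncJoinMigrationEnabled": false,
  "ultraSyncMongoUri": "mongodb://mongo.example.internal:27017",
  "ultraSyncMongoDatabase": "UltraSync",
  "ultraSyncMongoCollection": "users"
}
```

Keep ultraSyncJoinMigrationEnabled at false until the bulk migration test pass described above succeeds; the URI, database, and collection fields are read regardless, so they can be configured and verified ahead of time.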
Operation Mode Thresholds
```json
"operationModeThresholds": {
  "degradedFailureCount": 3,
  "emergencyFailureCount": 3
}
```

| Field | Type | Default |
|---|---|---|
| degradedFailureCount | Int | 3 |
| emergencyFailureCount | Int | 3 |
These fields are reserved for future automatic mode transitions and are not currently consumed by any code path — changing them has no behavioral effect today. Leave them at defaults.
The operation mode itself (NORMAL, DEGRADED, EMERGENCY) does affect runtime behavior when set: any non-NORMAL mode causes Link Cable to reject new joins (Link Cable is not accepting joins right now), skip snapshot audits, and skip periodic flush. EMERGENCY additionally pauses automatic expiration of stuck handoffs. In normal operation the mode stays at NORMAL.
Debug
| Field | Type | Default | Purpose |
|---|---|---|---|
| debug | Boolean | false | Enables verbose [Link Cable] logging. Useful while validating a setup; noisy in production. |
When debug = true, every ownership transition, snapshot load/save, heartbeat exchange, and pasture safety check is logged. With it off, you still get [safety] log lines, ownership accept/release lines, and warnings — just without the per-tick noise.
Reload Behavior
There’s no live config reload. Edit config.json and restart the backend that owns the change. Identity fields (role, serverId, orchestratorServerId) are evaluated at startup; timing fields are read each time they’re consulted, but a clean restart is still the safe path for any change.
If you’re tuning a single backend’s timing fields under live conditions, restart that backend during a low-traffic window. The orchestrator can be restarted; in-flight transfers will retry through the settle window when it comes back.
Recommended Beta / Production Values
For a beta or first production rollout, the defaults in this page are also the recommended values:
```json
"swapWindowSeconds": 10,
"heartbeatIntervalSeconds": 15,
"serverTimeoutSeconds": 45,
"periodicFlushIntervalSeconds": 15,
"backupMaxPerPlayer": 20,
"backupMaxAgeDays": 30
```

If a high-latency proxy or slow backend startup pushes you off these defaults, raise swapWindowSeconds before raising the heartbeat values. Higher heartbeat and timeout values reduce churn but slow down crash detection; the settle window is the right knob for “transfers are slightly racy”.
Next Steps
- Validation Checklist — the test pass to run before opening the network.
- Backups — the backup command suite.
- Beta and Production Rollout — production hardening notes.
- Commands Reference — every /linkcable subcommand.