
# Configuration

Link Cable’s runtime config lives at `config/linkcable/config.json` on each backend. This page documents every field. The defaults are tuned for stable networks with normal latency; the sections below call out which values are worth tuning and when.

```json
{
  "debug": false,
  "role": "PARTICIPANT",
  "serverId": "server1",
  "orchestratorServerId": "server1",
  "swapWindowSeconds": 10,
  "snapshotCacheTtlSeconds": 300,
  "heartbeatIntervalSeconds": 15,
  "periodicFlushIntervalSeconds": 15,
  "serverTimeoutSeconds": 45,
  "loadTimeoutSeconds": 15,
  "saveTimeoutSeconds": 15,
  "backupMaxPerPlayer": 20,
  "backupMaxAgeDays": 30,
  "ultraSyncJoinMigrationEnabled": false,
  "ultraSyncMongoUri": "active",
  "ultraSyncMongoDatabase": "UltraSync",
  "ultraSyncMongoCollection": "users",
  "operationModeThresholds": {
    "degradedFailureCount": 3,
    "emergencyFailureCount": 3
  }
}
```

## Identity

| Field | Type | Default | Purpose |
| --- | --- | --- | --- |
| `role` | enum | `PARTICIPANT` | `ORCHESTRATOR` or `PARTICIPANT`. Exactly one backend in the network is the orchestrator; everyone else is a participant. |
| `serverId` | String | `server1` | Unique identifier for this backend within the network. Must be unique. Conventionally the same name your proxy uses for the server (`hub`, `survival`, `battles`, etc.). |
| `orchestratorServerId` | String | `server1` | The `serverId` of the orchestrator backend. Must match the orchestrator’s value exactly (case-sensitive). |

These three fields are the only ones that must be set per backend. Everything else can stay at defaults for most networks.
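As an illustration, take a hypothetical two-backend network with servers named `hub` and `survival` (names are placeholders). Only the identity fields differ between the two files; everything else stays at defaults. Presumably the orchestrator points `orchestratorServerId` at its own `serverId`, mirroring the shipped defaults where both are `server1`. On `hub`, the orchestrator:

```json
{
  "role": "ORCHESTRATOR",
  "serverId": "hub",
  "orchestratorServerId": "hub"
}
```

And on `survival`, a participant pointing at it:

```json
{
  "role": "PARTICIPANT",
  "serverId": "survival",
  "orchestratorServerId": "hub"
}
```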


## Timing and resilience

The defaults work for stable networks with normal proxy and backend latency. If your network is high-latency, has slow backend startup, or runs through several proxies, raise the relevant values rather than disabling them.

| Field | Type | Default | Purpose |
| --- | --- | --- | --- |
| `swapWindowSeconds` | Int | 10 | How long fast-switch joins retry through a settle window before giving up. Raise this first if you see `Player state is busy, retrying` spam during otherwise-successful transfers. |
| `heartbeatIntervalSeconds` | Int | 15 | How often each backend publishes its heartbeat to Redis. Lower values detect orchestrator/participant outages faster but raise Redis traffic. |
| `serverTimeoutSeconds` | Int | 45 | How long a backend can go without publishing a heartbeat before peers consider it offline. Should be at least three times `heartbeatIntervalSeconds`. |
| `periodicFlushIntervalSeconds` | Int | 15 | How often live state is flushed to canonical storage during normal play. Do not set this to 0 in beta or production; the periodic flush is what limits how much unsaved state is exposed during churn. |
| `loadTimeoutSeconds` | Int | 15 | Per-attempt timeout for loading a player’s snapshot from canonical storage. |
| `saveTimeoutSeconds` | Int | 15 | Per-attempt timeout for writing a snapshot to canonical storage. |
| `snapshotCacheTtlSeconds` | Int | 300 | How long a source-server snapshot stays cached after a player transfers, so pasture-tethered Pokémon stay resolvable on the destination. |
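A quick sketch of the heartbeat/timeout relationship (the values here are illustrative, not tested recommendations): if you need faster outage detection and can absorb the extra Redis traffic, lower both values together so the timeout stays at least three times the heartbeat interval.

```json
{
  "heartbeatIntervalSeconds": 5,
  "serverTimeoutSeconds": 15
}
```

Here `serverTimeoutSeconds` is exactly 3 × `heartbeatIntervalSeconds`, the minimum ratio recommended above; anything tighter risks marking healthy backends offline after a couple of missed heartbeats.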
When to tune:

- **High-latency proxy or VPN-linked backends.** Raise `swapWindowSeconds` before raising the heartbeat values; the settle window is what gives a slow proxy time to land the player on the destination cleanly. See the sketch after this list.
- **Slow backend startup.** Raise `serverTimeoutSeconds` so participants don’t briefly mark a restarting orchestrator as offline.
- **Frequent restart waves.** Stagger restarts; if you can’t, raise `serverTimeoutSeconds` so peers don’t churn through “online → offline → online” thrash.
- **“Players sometimes lose ownership during transfers.”** That isn’t a timing problem; it’s a transfer-storm problem. See the Beta Rollout guidance on chaotic transfer storms before reaching for higher timeouts.
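For the high-latency case above, a hedged first pass raises only the settle window and leaves heartbeat timing at defaults (20 is an illustrative starting point; tune until the retry spam stops):

```json
{
  "swapWindowSeconds": 20,
  "heartbeatIntervalSeconds": 15,
  "serverTimeoutSeconds": 45
}
```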

## Backups

| Field | Type | Default | Purpose |
| --- | --- | --- | --- |
| `backupMaxPerPlayer` | Int | 20 | Maximum number of automatic backups retained per player. Older backups beyond this count are pruned. Manual backups created via `/linkcable backup create` still count toward this cap. |
| `backupMaxAgeDays` | Int | 30 | Maximum age of an automatic backup before it’s pruned. Manual backups age the same way. |

A network with very active players may want to lower `backupMaxPerPlayer` (each backup is a full snapshot, so storage adds up) or shorten `backupMaxAgeDays`. For low-volume networks the defaults are fine.
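As an illustration only (the numbers are arbitrary, not recommendations), a high-volume network might tighten retention like this:

```json
{
  "backupMaxPerPlayer": 10,
  "backupMaxAgeDays": 14
}
```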

Pre-restore backups are taken automatically before any restore overwrites state. Those count toward the cap as well.


## UltraSync migration

These fields only matter if you’re migrating off UltraSync. If you’ve never used UltraSync, leave them at defaults.

| Field | Type | Default | Purpose |
| --- | --- | --- | --- |
| `ultraSyncJoinMigrationEnabled` | Boolean | `false` | When `true`, players who don’t already exist in Link Cable but do exist in UltraSync are migrated on first join. Off by default; enable only after a successful bulk migration test pass. |
| `ultraSyncMongoUri` | String | `"active"` | Mongo connection string for the UltraSync database. Use `"active"` to reuse the same Mongo deployment Cobblemon/Ceremony already use; otherwise a full URI like `"mongodb://host:27017"`. |
| `ultraSyncMongoDatabase` | String | `"UltraSync"` | Database name. The UltraSync exports use `UltraSync`; don’t change this unless you’ve manually relocated the data. |
| `ultraSyncMongoCollection` | String | `"users"` | Collection name. UltraSync stores player records in `users`. |
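For illustration, a migration setup pointing at a dedicated Mongo deployment might look like the sketch below. The host and port are placeholders, and `ultraSyncJoinMigrationEnabled` should stay `false` until the bulk migration test pass has succeeded:

```json
{
  "ultraSyncJoinMigrationEnabled": true,
  "ultraSyncMongoUri": "mongodb://mongo.internal:27017",
  "ultraSyncMongoDatabase": "UltraSync",
  "ultraSyncMongoCollection": "users"
}
```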

See UltraSync Migration for the command and the safety rules.


"operationModeThresholds": {
"degradedFailureCount": 3,
"emergencyFailureCount": 3
}
| Field | Type | Default |
| --- | --- | --- |
| `degradedFailureCount` | Int | 3 |
| `emergencyFailureCount` | Int | 3 |

These fields are reserved for future automatic mode transitions and are not currently consumed by any code path — changing them has no behavioral effect today. Leave them at defaults.

The operation mode itself (`NORMAL`, `DEGRADED`, `EMERGENCY`) does affect runtime behavior when set: any non-`NORMAL` mode causes Link Cable to reject new joins (`Link Cable is not accepting joins right now`), skip snapshot audits, and skip the periodic flush. `EMERGENCY` additionally pauses automatic expiration of stuck handoffs. In normal operation the mode stays at `NORMAL`.


## Debug logging

| Field | Type | Default | Purpose |
| --- | --- | --- | --- |
| `debug` | Boolean | `false` | Enables verbose `[Link Cable]` logging. Useful while validating a setup; noisy in production. |

When `debug` is `true`, every ownership transition, snapshot load/save, heartbeat exchange, and pasture safety check is logged. With it off, you still get `[safety]` log lines, ownership accept/release lines, and warnings, just without the per-tick noise.


## Applying changes

There’s no live config reload. Edit `config.json` and restart the backend that owns the change. Identity fields (`role`, `serverId`, `orchestratorServerId`) are evaluated at startup; timing fields are read each time they’re consulted, but a clean restart is still the safe path for any change.

If you’re tuning a single backend’s timing fields under live conditions, restart that backend during a low-traffic window. The orchestrator can be restarted; in-flight transfers will retry through the settle window when it comes back.


## Recommended rollout values

For a beta or first production rollout, the defaults on this page are also the recommended values:

"swapWindowSeconds": 10,
"heartbeatIntervalSeconds": 15,
"serverTimeoutSeconds": 45,
"periodicFlushIntervalSeconds": 15,
"backupMaxPerPlayer": 20,
"backupMaxAgeDays": 30

If a high-latency proxy or slow backend startup pushes you off these defaults, raise `swapWindowSeconds` before raising the heartbeat values. Higher heartbeat values reduce churn but slow down crash recovery; the settle window is the right knob for “transfers are slightly racy”.