Beta and Production Rollout

Link Cable is ready for closed beta and production traffic on a properly-configured network. The defaults are tuned for stability over latency; the patterns on this page cover everything that matters once the setup is past validation.

If you haven’t run the Validation Checklist yet, do that first. The notes here assume that cross-server switching, every sync lane, backups, and (if applicable) a one-account UltraSync migration have all been confirmed working.


For a beta or first production rollout, use these values:

```json
"swapWindowSeconds": 10,
"heartbeatIntervalSeconds": 15,
"serverTimeoutSeconds": 45,
"periodicFlushIntervalSeconds": 15,
"backupMaxPerPlayer": 20,
"backupMaxAgeDays": 30
```

If you have a high-latency proxy or slow server startup, raise swapWindowSeconds before raising the heartbeat values. The settle window is the right knob for “transfers are slightly racy”; higher heartbeats reduce churn but slow down crash recovery.
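For example, a starting point for a high-latency proxy might raise only the settle window and leave the heartbeat values alone (the values below are illustrative, not official recommendations):

```json
"swapWindowSeconds": 20,
"heartbeatIntervalSeconds": 15,
"serverTimeoutSeconds": 45
```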

See Configuration for the full field reference.


Run through this list before opening to players.

  • Run Redis and MongoDB on stable hosts, not ephemeral test containers. Both are part of the critical path for every transfer; their failure modes are your failure modes.
  • Keep all Link Cable backends on the same Redis deployment. Cross-deployment Redis isn’t supported; coordination needs a shared keyspace.
  • Keep the orchestrator on the most stable backend in the network. If it goes down, in-flight transfers retry through the settle window; live players on participants stay online but can’t switch backends cleanly until it returns.
  • Confirm Ceremony is using MongoDB (not the file-system fallback) as canonical storage on every backend.
  • Confirm every participant is up and configured for the right orchestrator. Each backend logs Loaded Link Cable config for role X on Y at startup. Cross-server transfers are the real test that participants are reaching the orchestrator — if transfers work, heartbeats and presence are working.
  • Run linkcable backup create <player> and linkcable backup list <player> on a test account before live testing. If backups fail, the canonical storage path isn’t writable and live sync will hit the same wall.
  • Run the dev tooling on staging before opening the beta wider. linkcable dev mutate-bounce, linkcable dev audit, and linkcable dev pasture-audit catch issues that cosmetic test passes miss.

Watch these log patterns. Most of them cut both ways: a few occurrences under stress are expected, while sustained spam means something is wrong.

Watch these first. They surface pasture safety violations and other guardrail trips. A small number around player flow with tethered Pokémon is normal (the guards are there to catch admin mistakes and edge cases). Repeated [safety] lines from the same player or the same operation almost always indicate a real bug — file it, don’t suppress it.

Normal under transfer stress. Sustained spam means the network is thrashing.

The most common cause is a proxy plugin or script firing multiple /server connect attempts for the same player in rapid succession. Audit your proxy-side scripts for retry storms before raising any timeout values.

If the spam is genuinely from real player traffic (manual /server mashing), rate-limit at the proxy. Don’t loosen Link Cable’s ownership checks to make the spam “work” — that opens doors to actual bugs.

This line only appears with debug = true. With debug off, the same recovery still happens silently — the visible symptom is “transfers seem fine after a crash”. Turn debug on if you suspect stale-owner churn is happening repeatedly during normal play.

Repeated stale-owner releases during normal play mean heartbeat or proxy timing is off relative to your network’s actual latency. If you see this pattern persistently:

  1. Confirm serverTimeoutSeconds is at least three times heartbeatIntervalSeconds.
  2. Raise swapWindowSeconds if transfers are landing right at the edge of the window.
  3. Audit the proxy for “ghost” disconnect/reconnect cycles — some plugins recycle connections in ways that look like rapid offline/online state changes.
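The timing rules above are easy to encode in a quick sanity check. A hedged sketch follows: the field names match the Link Cable config, but the checker itself is illustrative, not something the plugin ships or enforces.

```python
def check_timings(cfg):
    """Flag timing combinations this page warns against (rules of thumb only)."""
    problems = []
    # Rule 1: the server timeout should be at least three heartbeat intervals,
    # so a single missed heartbeat never looks like a dead backend.
    if cfg["serverTimeoutSeconds"] < 3 * cfg["heartbeatIntervalSeconds"]:
        problems.append("serverTimeoutSeconds should be at least 3x heartbeatIntervalSeconds")
    # Rule 2: the periodic flush must never be disabled (see below); 0 turns it off.
    if cfg["periodicFlushIntervalSeconds"] <= 0:
        problems.append("keep periodicFlushIntervalSeconds above 0; 0 disables the flush")
    return problems

# The recommended beta values from this page pass cleanly.
beta = {
    "swapWindowSeconds": 10,
    "heartbeatIntervalSeconds": 15,
    "serverTimeoutSeconds": 45,
    "periodicFlushIntervalSeconds": 15,
}
```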

Failed to start handoff, Failed to mark player … active, Failed verified persistence


Concrete failures in the transfer or save path. These aren’t recoverable by retrying inside Link Cable — they almost always indicate a Mongo connectivity issue, a permission problem with the storage user, or a Redis lock contention pattern that needs investigation. Check Mongo and Redis health first, then the Ceremony storage configuration on the affected backend.

Errors during backup create or canonical storage writes usually point at the same underlying cause as the persistence failures above — the storage backend isn’t writable from this backend. Check Mongo connectivity and the Ceremony configuration.


```json
"periodicFlushIntervalSeconds": 15
```

It’s tempting to set this to 0 to “reduce Mongo write pressure”. Don’t. The periodic flush is what limits how much unsaved state is exposed during churn. Without it, an unexpected backend crash could lose state that hadn’t yet flushed.

If Mongo write pressure is genuinely a problem, the right answer is faster Mongo, not less flushing.

When you bring backends down for maintenance, stagger the restarts so handoff and reconnect storms don’t stack on top of each other. A network with five backends restarting simultaneously will produce more transfer churn than the same five backends restarting one minute apart.

If you can, restart participants first and the orchestrator last. The orchestrator coming back triggers a settle pass that’s cheaper if participants are already online.
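The restart ordering can be sketched as a small plan-and-run helper. Everything here is hypothetical: the backend names are placeholders, and the `restart` callable stands in for whatever your process manager actually does.

```python
import time

def restart_plan(participants, orchestrator):
    """Restart order: participants first, orchestrator last (cheaper settle pass)."""
    return list(participants) + [orchestrator]

def run(plan, stagger_seconds=60, restart=print):
    """Walk the plan, pausing between backends so reconnect storms don't stack."""
    for i, backend in enumerate(plan):
        restart(f"restart {backend}")      # swap in your real restart command
        if i < len(plan) - 1:
            time.sleep(stagger_seconds)    # ~one minute apart, per the advice above

# Example with placeholder backend names:
plan = restart_plan(["survival-1", "survival-2", "hub"], "orchestrator")
```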

Keep players sticky to one logical destination


Link Cable handles bouncing players between backends correctly, but bouncing is still expensive. If your gameplay has a “natural home” for players (a survival server, a hub, etc.), keep them there as the default and only move them on demand. Avoid proxy logic that bounces players through lobbies for every minor action.

If a single player is firing /server commands many times per second, rate-limit at the proxy. Link Cable’s per-player serialization will keep them from corrupting state, but the spam still produces churn that costs CPU and Redis.


The important mitigation for transfer storms is not a longer timeout. It’s keeping per-player transitions serialized and bounded.

Link Cable already does this:

  • Per-player orchestrator requests are serialized behind a distributed lock.
  • Fast-switch joins retry for a short settle window instead of immediately kicking.
  • Owner epoch fencing stops stale writers from landing after ownership changes.
  • Periodic flush reduces the amount of unsaved state exposed during churn.

What you should do operationally:

  • Avoid proxy plugins or scripts that spam multiple connect attempts for the same player.
  • Keep players sticky to one logical destination when possible instead of bouncing them through lobbies.
  • Stagger restart waves so handoff and reconnect storms don’t stack.
  • Rate-limit manual /server spam at the proxy if needed.

If a network eventually needs stricter protection, the next step is a per-player transfer rate limit in front of the join path, not looser ownership checks.
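Such a limiter can be a simple per-player sliding window in front of the join path. A minimal sketch, assuming illustrative limits (three transfers per ten seconds is a made-up number, not a Link Cable default):

```python
import time

class TransferLimiter:
    """Per-player sliding window: at most `max_transfers` per `per_seconds`."""

    def __init__(self, max_transfers=3, per_seconds=10.0, clock=time.monotonic):
        self.max_transfers = max_transfers
        self.per_seconds = per_seconds
        self.clock = clock
        self._recent = {}  # player -> timestamps of recently allowed transfers

    def allow(self, player):
        now = self.clock()
        # Keep only transfers still inside the window.
        recent = [t for t in self._recent.get(player, []) if now - t < self.per_seconds]
        if len(recent) >= self.max_transfers:
            self._recent[player] = recent
            return False  # hold the transfer at the proxy; don't relax ownership checks
        recent.append(now)
        self._recent[player] = recent
        return True
```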


The full backup surface is part of the production runtime, not just a beta feature:

```
/linkcable backup create <player> [reason]
/linkcable backup list <player>
/linkcable backup browse <player>
/linkcable backup inspect <backupId>
/linkcable backup restore <player> <backupId>
/linkcable backup export <backupId> <path>
/linkcable backup import <path>
```

Restores stay offline-only and take pre-restore backups automatically. See Backups for the full operational guide.

The two patterns most useful in production:

  • Take a manual backup before any controversial admin action (bulk economy changes, mass item grants, and so on); the cap of 20 manual backups per player leaves plenty of headroom for these.
  • Use inspect and browse before restoring. Confirm the backup is the one you want before overwriting state. The browser GUI shows party and PC contents directly; you don’t have to guess from a timestamp.

| Symptom | Where to look |
| --- | --- |
| Players disconnect during transfers | Proxy logs first. If the proxy is fine, turn debug = true on the orchestrator briefly and watch for stale-owner releases or Failed to start handoff warnings. |
| Party / PC missing on destination after a transfer | Cobblemon Mongo connectivity on the destination. Compare [Link Cable] logs around the transfer with the destination’s Cobblemon storage logs. |
| Inventory missing on destination after a transfer | Player-core lane sync issue. Check [Link Cable] logs around the transfer for player-core warnings. Often points at a Ceremony storage issue rather than Link Cable. |
| [safety] lines from the same player repeatedly | Real bug or genuine corruption. File a report, get a backup of the player, and restore them to their last known-good state if needed. |
| Orchestrator is offline | Live players stay online. New transfers stall in the settle window. Get the orchestrator back; in-flight transfers will resume. |
| Both Redis and Mongo are offline | Network is functionally down. Bring services back, restart backends, and verify with the validation pass before reopening. |

For anything that smells like data corruption, take a manual backup of the affected player before attempting any fix. The backup is your rollback.