Installation Guide

Link Cable installs as a single jar on every Cobblemon backend in your network. There’s no separate proxy plugin — the proxy stays vanilla. Coordination happens at the backend level over Redis and Mongo.

This guide gets you from “I have the jar” to “transfers actually move state”.


Required:

  • Minecraft : 1.21.1 (Fabric) on every backend
  • Java : 21 or higher
  • Cobblemon : Compatible build, configured for MongoDB persistence
  • Ceremony : Compatible version (provides the persistence layer)
  • Fabric Loader : Latest stable
  • Fabric API : Matching your Minecraft version
  • Fabric Language Kotlin : Required
  • Redis : Reachable from every backend
  • MongoDB : Reachable from every backend (Cobblemon already needs this)

Optional:

  • LuckPerms or fabric-permissions-api — gates the linkcable.command.admin permission node. Falls back to operator level 2 when absent.

If Ceremony is already bundled in your mod stack, you don’t need to install it separately.


Link Cable assumes the entire network shares one Redis and one MongoDB. This is non-negotiable for cross-server sync to work — the locks, owner-epoch fences, and snapshot storage all live in those two services.

Redis : Used for distributed locks, owner-epoch fencing, hot-cache handoffs, heartbeat publishing, and Ceremony’s distributed services. Run it on a stable host, not an ephemeral test container. Memory footprint is small; the dataset is mostly short-lived TTL keys.
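The lock-plus-epoch pattern behind owner-epoch fencing can be sketched as follows. This is an illustration of fencing in general, not Link Cable's actual implementation — the class, method names, and in-memory store standing in for Redis are all assumptions made for the example:

```python
import time

class FencedStore:
    """Toy in-memory stand-in for Redis: one lock key with a TTL, plus a
    monotonically increasing owner epoch used to fence out stale owners."""

    def __init__(self):
        self.lock = None   # (server_id, expires_at) or None
        self.epoch = 0     # bumped on every ownership change

    def acquire(self, server_id, ttl_seconds):
        now = time.monotonic()
        if self.lock is None or self.lock[1] <= now:
            self.lock = (server_id, now + ttl_seconds)
            self.epoch += 1          # new owner -> new epoch (the fence token)
            return self.epoch
        return None                  # another server holds the lock

    def write(self, server_id, epoch):
        # A write is accepted only with the current epoch, so a server that
        # lost ownership (and holds an old epoch) can no longer clobber state.
        return (self.lock is not None
                and self.lock[0] == server_id
                and epoch == self.epoch)

store = FencedStore()
e1 = store.acquire("hub", ttl_seconds=10)   # hub becomes the player's owner
ok = store.write("hub", e1)                 # accepted: current owner + epoch
stale = store.write("survival", e1)         # rejected: not the owner
```

Against real Redis the lock would typically be a `SET key value NX PX ttl` and the epoch an atomically incremented counter; the key layout Link Cable actually uses is not documented here.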

MongoDB : Canonical storage. Cobblemon’s existing collections hold the party, PC, Cobblemon general, and Pokedex lanes; Link Cable adds its own collections alongside for ownership state, backups, the player-core lane (inventory, hunger, XP, abilities, recipe book, Cardinal Components and Fabric Attachment API data, etc.), Molang sync, and migration metadata.

Confirm before going further:

  • Every backend server can reach Redis on the same host:port.
  • Every backend server can reach Mongo on the same connection string.
  • Cobblemon is already using MongoDB as its canonical storage backend (not the file-system fallback).
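The first two checks can be scripted with a plain TCP probe. A minimal sketch in Python; the hostnames and ports below are placeholders, not values Link Cable ships with — substitute whatever your backends are actually configured to use:

```python
import socket

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints -- substitute the exact host:port every backend uses.
for name, (host, port) in {"redis": ("redis.internal", 6379),
                           "mongo": ("mongo.internal", 27017)}.items():
    print(name, "ok" if reachable(host, port) else "UNREACHABLE")
```

Run it once per backend host; every backend must report the same services as reachable, since a single unreachable backend is enough to stall handoffs.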

  1. Stop every backend server.
  2. Place linkcable-<version>.jar in each backend’s mods/ directory, alongside Cobblemon, Ceremony, Fabric API, and Fabric Language Kotlin.
  3. Don’t start the servers yet — the config needs to be set first.

Exactly one backend in the network is the orchestrator. Every other backend is a participant. The orchestrator is the authority for cross-server coordination decisions; participants ask the orchestrator before completing handoffs.

Pick the most stable backend in your network for the orchestrator role. The hub or lobby server is usually a good choice — it has the highest uptime and is the cheapest to leave running. The orchestrator does not need to be where players spawn, just where the network’s coordination lives.

If the orchestrator goes down, in-flight transfers will retry and stall during the settle window. Live players on participants stay online but can’t change servers cleanly until the orchestrator comes back.
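The retry-and-stall behavior described above can be sketched as a bounded backoff loop. This is a generic illustration of the pattern, not the mod's code; the function names and parameters are made up for the example:

```python
import time

def complete_handoff_with_retry(ask_orchestrator, settle_window_s=10.0,
                                base_delay_s=0.5, max_delay_s=4.0):
    """Retry the orchestrator handshake with exponential backoff until the
    settle window elapses; True on success, False if it never answered."""
    deadline = time.monotonic() + settle_window_s
    delay = base_delay_s
    while time.monotonic() < deadline:
        if ask_orchestrator():       # e.g. "may I finalize this transfer?"
            return True
        # Sleep the backoff delay, but never past the settle deadline.
        time.sleep(min(delay, max(0.0, deadline - time.monotonic())))
        delay = min(delay * 2, max_delay_s)
    return False                     # stalled: orchestrator still unreachable
```

Under this model a participant keeps the player's state parked until the call returns True, which is why players can stay online but cannot change servers cleanly while the orchestrator is down.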

Start each backend once. Link Cable will create:

config/linkcable/config.json

Stop the server again. Open the file. The defaults look like this:

{
  "debug": false,
  "role": "PARTICIPANT",
  "serverId": "server1",
  "orchestratorServerId": "server1",
  "swapWindowSeconds": 10,
  "snapshotCacheTtlSeconds": 300,
  "heartbeatIntervalSeconds": 15,
  "periodicFlushIntervalSeconds": 15,
  "serverTimeoutSeconds": 45,
  "loadTimeoutSeconds": 15,
  "saveTimeoutSeconds": 15,
  "backupMaxPerPlayer": 20,
  "backupMaxAgeDays": 30,
  "ultraSyncJoinMigrationEnabled": false,
  "ultraSyncMongoUri": "active",
  "ultraSyncMongoDatabase": "UltraSync",
  "ultraSyncMongoCollection": "users",
  "operationModeThresholds": {
    "degradedFailureCount": 3,
    "emergencyFailureCount": 3
  }
}
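Before editing, it can help to sanity-check a few field relationships. A hedged sketch — the rules below are my own rules of thumb derived from the defaults above, not validation the mod performs:

```python
import json
from pathlib import Path

def check_config(cfg):
    """Return warnings for values that look internally inconsistent."""
    warnings = []
    if cfg["heartbeatIntervalSeconds"] * 2 > cfg["serverTimeoutSeconds"]:
        warnings.append("serverTimeoutSeconds should exceed two heartbeat "
                        "intervals, or a healthy backend may be declared dead")
    if cfg["role"] not in ("ORCHESTRATOR", "PARTICIPANT"):
        warnings.append("unknown role: " + cfg["role"])
    if cfg["role"] == "ORCHESTRATOR" and cfg["serverId"] != cfg["orchestratorServerId"]:
        warnings.append("an orchestrator's serverId must equal its orchestratorServerId")
    return warnings

path = Path("config/linkcable/config.json")
if path.exists():
    for w in check_config(json.loads(path.read_text())):
        print("WARN:", w)
```

Run from the backend's working directory so the relative path resolves; the defaults shipped with the mod pass cleanly.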

You’ll edit at minimum role, serverId, and orchestratorServerId on each backend.

On the chosen orchestrator backend, edit config.json:

{
  "role": "ORCHESTRATOR",
  "serverId": "hub",
  "orchestratorServerId": "hub"
}

The serverId is this backend’s unique identifier in the network. The orchestratorServerId matches it because this backend is the orchestrator.

On each other backend, edit config.json:

{
  "role": "PARTICIPANT",
  "serverId": "survival",
  "orchestratorServerId": "hub"
}

serverId must be unique per backend. orchestratorServerId must match the orchestrator’s serverId exactly.

The conventional naming is short, lowercase, hyphen-free strings that match what the proxy calls the server (hub, survival, battles, creative, etc.).
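Given copies of every backend's config.json, the invariants above (unique serverId, exactly one orchestrator, every orchestratorServerId matching it) can be checked mechanically. A sketch; collecting the files from each backend is left to you, and the example network below is illustrative:

```python
from collections import Counter

def validate_network(configs):
    """Check cross-backend invariants for a list of parsed config.json dicts."""
    errors = []
    for sid, n in Counter(c["serverId"] for c in configs).items():
        if n > 1:
            errors.append(f"serverId {sid!r} used by {n} backends")
    orchestrators = [c for c in configs if c["role"] == "ORCHESTRATOR"]
    if len(orchestrators) != 1:
        errors.append(f"expected exactly 1 orchestrator, found {len(orchestrators)}")
    else:
        hub_id = orchestrators[0]["serverId"]
        for c in configs:
            if c["orchestratorServerId"] != hub_id:
                errors.append(f"{c['serverId']!r} points at "
                              f"{c['orchestratorServerId']!r}, not {hub_id!r}")
    return errors

network = [
    {"role": "ORCHESTRATOR", "serverId": "hub", "orchestratorServerId": "hub"},
    {"role": "PARTICIPANT", "serverId": "survival", "orchestratorServerId": "hub"},
]
print(validate_network(network) or "network config OK")
```

An empty result means the network-level settings are consistent; any error corresponds directly to one of the troubleshooting cases later in this guide.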

6. Configure Redis and Mongo Through Ceremony

Link Cable doesn’t have its own Redis or Mongo configuration — it uses whatever Ceremony is configured with. Open Ceremony’s configuration on each backend and point it at the shared Redis and shared MongoDB. The Ceremony docs cover the format; the only requirement Link Cable adds is “every backend points at the same services”.

Bring up the orchestrator first, then the participants. Each backend logs a single confirmation line at config load:

Loaded Link Cable config for role ORCHESTRATOR on hub
Loaded Link Cable config for role PARTICIPANT on survival

If you see a different role or serverId than you expected, the file you edited isn’t the one this backend loaded — confirm config/linkcable/config.json on that backend.

With the orchestrator and at least one participant up, send a test player from the proxy to the orchestrator backend. Then /server <participant> (or whatever your proxy command is).

The player’s Cobblemon party, PC, Pokedex, plus their Minecraft inventory, ender chest, hunger, XP, recipe book, abilities, and any third-party mod data attached via Cardinal Components or the Fabric Attachment API should all be present on the participant. Switch back. Verify state on the orchestrator. If anything is missing, head to the Validation Checklist for the structured test pass.

If a transfer fails or the player loses state, watch the source and destination consoles for [safety] lines and any Failed … warnings. Those are the real signal — successful transfers are silent at the default log level (set "debug": true in config/linkcable/config.json for the per-step trace).


Every admin command sits behind:

linkcable.command.admin

With LuckPerms:

/lp group default permission unset linkcable.command.admin
/lp group admin permission set linkcable.command.admin true

Without a permissions plugin, Link Cable falls back to operator level 2.

The dev-only commands (/linkcable dev …) are only registered when running in a development environment. They are not available on production servers regardless of permission level.


Backends start cleanly, but a transfer hangs or rejects : Confirm Redis is reachable from both backends and points at the same instance. Confirm orchestratorServerId on every participant matches the orchestrator’s serverId exactly (case-sensitive). The participant needs to see the orchestrator’s presence in Redis to complete a handoff.

Loaded config (role=ORCHESTRATOR, serverId=hub, orchestrator=hub) on more than one backend : Two backends both think they’re the orchestrator. Pick one, set the other to PARTICIPANT, and restart the participant.

Players join, but party/PC is empty after a transfer : Confirm Cobblemon is using MongoDB persistence on every backend (not the file-system fallback). Run /linkcable backup create <player> on the backend they were last on — if the resulting backup is empty, the source server didn’t have the data either, which means Cobblemon storage isn’t shared.

Repeated Released stale owner lines during normal play (only visible with "debug": true) : The settle window or heartbeat timing is off for your network’s latency. Raise swapWindowSeconds first, then serverTimeoutSeconds if needed. See Configuration for the field semantics.

Player state is busy, retrying log spam during transfers : A few of these under stress is normal. Sustained spam usually means a proxy plugin or script is firing multiple /server connect attempts for the same player. Audit proxy-side scripts for retry storms before raising timeouts.


  1. Read Configuration for the full field reference.
  2. Run the Validation Checklist before opening the network.
  3. Skim Beta and Production Rollout for production hardening guidance.