# Installation Guide
Link Cable installs as a single jar on every Cobblemon backend in your network. There’s no separate proxy plugin — the proxy stays vanilla. Coordination happens at the backend level over Redis and Mongo.
This guide gets you from “I have the jar” to “transfers actually move state”.
## Requirements

| Requirement | Details |
|---|---|
| Minecraft | 1.21.1 (Fabric) on every backend |
| Java | 21 or higher |
| Cobblemon | Compatible build, configured for MongoDB persistence |
| Ceremony | Compatible version (provides the persistence layer) |
| Fabric Loader | Latest stable |
| Fabric API | Matching your Minecraft version |
| Fabric Language Kotlin | Required |
| Redis | Reachable from every backend |
| MongoDB | Reachable from every backend (Cobblemon already needs this) |
Optional:

- LuckPerms or fabric-permissions-api — gates the `linkcable.command.admin` permission node. Falls back to operator level 2 when absent.
If Ceremony is already bundled in your mod stack, you don’t need to install it separately.
## Network Services

Link Cable assumes the entire network shares one Redis and one MongoDB. This is non-negotiable for cross-server sync to work — the locks, owner-epoch fences, and snapshot storage all live in those two services.
**Redis**: Used for distributed locks, owner-epoch fencing, hot-cache handoffs, heartbeat publishing, and Ceremony’s distributed services. Run it on a stable host, not an ephemeral test container. Memory footprint is small; the dataset is mostly short-lived TTL keys.

**MongoDB**: Canonical storage. Cobblemon’s existing collections hold the party, PC, Cobblemon general, and Pokedex lanes; Link Cable adds its own collections alongside for ownership state, backups, the player-core lane (inventory, hunger, XP, abilities, recipe book, Cardinal Components and Fabric Attachment API data, etc.), Molang sync, and migration metadata.
Confirm before going further:
- Every backend server can reach Redis on the same host:port.
- Every backend server can reach Mongo on the same connection string.
- Cobblemon is already using MongoDB as its canonical storage backend (not the file-system fallback).
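Before moving on, it can save a restart cycle to verify reachability mechanically from each backend host. A minimal sketch using plain TCP connects (the hostnames here are placeholders for your own shared Redis and MongoDB endpoints, not anything Link Cable defines):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints: substitute your shared Redis and MongoDB hosts.
for name, host, port in [("redis", "redis.internal", 6379),
                         ("mongo", "mongo.internal", 27017)]:
    status = "ok" if can_reach(host, port) else "UNREACHABLE"
    print(f"{name} {host}:{port} -> {status}")
```

A TCP connect only proves the port is open; if anything looks off, follow up with a real client ping (`redis-cli ping`, `mongosh --eval 'db.runCommand({ping: 1})'`) from the same host.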
## Installation Steps

### 1. Drop the Jar on Every Backend

- Stop every backend server.
- Place `linkcable-<version>.jar` in each backend’s `mods/` directory, alongside Cobblemon, Ceremony, Fabric API, and Fabric Language Kotlin.
- Don’t start the servers yet — the config needs to be set first.
### 2. Pick the Orchestrator

Exactly one backend in the network is the orchestrator. Every other backend is a participant. The orchestrator is the authority for cross-server coordination decisions; participants ask the orchestrator before completing handoffs.
Pick the most stable backend in your network for the orchestrator role. The hub or lobby server is usually a good choice — it has the highest uptime and is the cheapest to leave running. The orchestrator does not need to be where players spawn, just where the network’s coordination lives.
If the orchestrator goes down, in-flight transfers will retry and stall during the settle window. Live players on participants stay online but can’t change servers cleanly until the orchestrator comes back.
### 3. First Boot Generates the Config

Start each backend once. Link Cable will create:

`config/linkcable/config.json`

Stop the server again. Open the file. The defaults look like this:

```json
{
  "debug": false,
  "role": "PARTICIPANT",
  "serverId": "server1",
  "orchestratorServerId": "server1",
  "swapWindowSeconds": 10,
  "snapshotCacheTtlSeconds": 300,
  "heartbeatIntervalSeconds": 15,
  "periodicFlushIntervalSeconds": 15,
  "serverTimeoutSeconds": 45,
  "loadTimeoutSeconds": 15,
  "saveTimeoutSeconds": 15,
  "backupMaxPerPlayer": 20,
  "backupMaxAgeDays": 30,
  "ultraSyncJoinMigrationEnabled": false,
  "ultraSyncMongoUri": "active",
  "ultraSyncMongoDatabase": "UltraSync",
  "ultraSyncMongoCollection": "users",
  "operationModeThresholds": {
    "degradedFailureCount": 3,
    "emergencyFailureCount": 3
  }
}
```

You’ll edit at minimum `role`, `serverId`, and `orchestratorServerId` on each backend.
### 4. Configure the Orchestrator

On the chosen orchestrator backend, edit `config.json`:

```json
{
  "role": "ORCHESTRATOR",
  "serverId": "hub",
  "orchestratorServerId": "hub"
}
```

The `serverId` is this backend’s unique identifier in the network. The `orchestratorServerId` matches it because this backend is the orchestrator.
### 5. Configure Each Participant

On each other backend, edit `config.json`:

```json
{
  "role": "PARTICIPANT",
  "serverId": "survival",
  "orchestratorServerId": "hub"
}
```

`serverId` must be unique per backend. `orchestratorServerId` must match the orchestrator’s `serverId` exactly.
The conventional naming is short, lowercase, hyphen-free strings that match what the proxy calls the server (`hub`, `survival`, `battles`, `creative`, etc.).
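With more than a couple of backends, it’s easy to miss a mismatched `orchestratorServerId` or a duplicated `serverId`. A small sketch that checks the invariants above across a set of parsed `config.json` files (the `configs/` directory layout is purely illustrative, not part of Link Cable):

```python
import json
from pathlib import Path

def validate_network(configs: dict[str, dict]) -> list[str]:
    """Check role/serverId/orchestratorServerId invariants across backends.

    configs maps a label (e.g. the proxy's server name) to that backend's
    parsed config.json. Returns human-readable problems; empty means consistent.
    """
    problems = []
    ids = [c["serverId"] for c in configs.values()]
    if len(ids) != len(set(ids)):
        problems.append("serverId values are not unique across backends")
    orchestrators = [c for c in configs.values() if c["role"] == "ORCHESTRATOR"]
    if len(orchestrators) != 1:
        problems.append(f"expected exactly 1 ORCHESTRATOR, found {len(orchestrators)}")
    else:
        hub_id = orchestrators[0]["serverId"]
        for label, c in configs.items():
            # Comparison is deliberately case-sensitive, matching Link Cable.
            if c["orchestratorServerId"] != hub_id:
                problems.append(f"{label}: orchestratorServerId != {hub_id!r}")
    return problems

# Example: validate copies of each backend's config collected into ./configs/
configs = {p.stem: json.loads(p.read_text()) for p in Path("configs").glob("*.json")}
print(validate_network(configs) or "configs look consistent")
```

Run it anywhere you can gather the config files; an empty problem list means the network topology is well-formed before you ever start a server.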
### 6. Configure Redis and Mongo Through Ceremony

Link Cable doesn’t have its own Redis or Mongo configuration — it uses whatever Ceremony is configured with. Open Ceremony’s configuration on each backend and point it at the shared Redis and shared MongoDB. The Ceremony docs cover the format; the only requirement Link Cable adds is “every backend points at the same services”.
### 7. Start the Network

Bring up the orchestrator first, then the participants. Each backend logs a single confirmation line at config load:

```
Loaded Link Cable config for role ORCHESTRATOR on hub
Loaded Link Cable config for role PARTICIPANT on survival
```

If you see a different role or `serverId` than you expected, the file you edited isn’t the one this backend loaded — confirm `config/linkcable/config.json` on that backend.
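If you manage many backends, this check can be scripted against the console output. A small sketch that parses the confirmation line format shown above (it assumes the wording stays exactly as printed; adjust the pattern if the log format changes):

```python
import re

# Matches the single confirmation line Link Cable emits at config load.
LINE = re.compile(r"Loaded Link Cable config for role (\S+) on (\S+)")

def check_boot_line(log_line: str, expected_role: str, expected_id: str) -> bool:
    """Return True if the line reports the role and serverId we expect."""
    m = LINE.search(log_line)
    return bool(m) and m.group(1) == expected_role and m.group(2) == expected_id

print(check_boot_line("Loaded Link Cable config for role ORCHESTRATOR on hub",
                      "ORCHESTRATOR", "hub"))
```

Feed it each backend’s startup log alongside the role and `serverId` you intended for that backend; a `False` means the backend loaded a config you didn’t expect.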
### 8. First Player Test

With the orchestrator and at least one participant up, send a test player from the proxy to the orchestrator backend. Then `/server <participant>` (or whatever your proxy command is).
The player’s Cobblemon party, PC, Pokedex, plus their Minecraft inventory, ender chest, hunger, XP, recipe book, abilities, and any third-party mod data attached via Cardinal Components or the Fabric Attachment API should all be present on the participant. Switch back. Verify state on the orchestrator. If anything is missing, head to the Validation Checklist for the structured test pass.
If a transfer fails or the player loses state, watch the source and destination consoles for `[safety]` lines and any `Failed …` warnings. Those are the real signal — successful transfers are silent at the default log level (set `"debug": true` in `config.json` for the per-step trace).
## Permissions

Every admin command sits behind:

`linkcable.command.admin`

With LuckPerms:

```
/lp group default permission unset linkcable.command.admin
/lp group admin permission set linkcable.command.admin true
```

Without a permissions plugin, Link Cable falls back to operator level 2.
The dev-only commands (`/linkcable dev …`) are only registered when running in a development environment. They are not available on production servers regardless of permission level.
## Troubleshooting Setup

Backends start cleanly, but a transfer hangs or rejects
: Confirm Redis is reachable from both backends and that both point at the same instance. Confirm `orchestratorServerId` on every participant matches the orchestrator’s `serverId` exactly (case-sensitive). The participant needs to see the orchestrator’s presence in Redis to complete a handoff.
`Loaded config (role=ORCHESTRATOR, serverId=hub, orchestrator=hub)` on more than one backend
: Two backends both think they’re the orchestrator. Pick one, set the other to `PARTICIPANT`, and restart the participant.
Players join, but party/PC is empty after a transfer
: Confirm Cobblemon is using MongoDB persistence on every backend (not the file-system fallback). Run `/linkcable backup create <player>` on the backend they were last on — if the resulting backup is empty, the source server didn’t have the data either, which means Cobblemon storage isn’t shared.
Repeated `Released stale owner` lines during normal play (only visible with `"debug": true`)
: The settle window or heartbeat timing is off for your network’s latency. Raise `swapWindowSeconds` first, then `serverTimeoutSeconds` if needed. See Configuration for the field semantics.
`Player state is busy, retrying` log spam during transfers
: A few of these under stress is normal. Sustained spam usually means a proxy plugin or script is firing multiple `/server connect` attempts for the same player. Audit proxy-side scripts for retry storms before raising timeouts.
## Next Steps

- Read Configuration for the full field reference.
- Run the Validation Checklist before opening the network.
- Skim Beta and Production Rollout for production hardening guidance.