
Ceremony Integration

Journey 3.0 requires Ceremony 4.1.4+ as its foundation. Ceremony provides far more than just cross-server communication — it powers the region engine (used by zones), the cutscene engine, HUD management, display entities, sound sequencing, and more.


Journey zones are built on Ceremony’s region system. Ceremony handles spatial detection, enter/exit events, and containment checks. Journey adds its own layer (scripts, functions, task integration) on top.

The Cutscene System is powered by Ceremony. Journey registers 10 custom Ceremony extensions for cutscene actions (camera control, entity management, dialogue, etc.).

Ceremony’s HUD system replaces the old notification system. Journey uses it for:

  • Global task boss bars
  • Buff display overlays
  • Zone entry/exit titles
  • Quest progress notifications

Ceremony provides floating text displays and hologram entities that Journey uses for:

  • NPC name plates
  • Quest objective markers
  • Zone labels
  • Floating popups and tooltips

The sound sequencer handles timed audio playback in cutscenes and timelines.

Ceremony’s particle system provides the per-player particle effects used in cutscenes, zone borders, and visual feedback.

Ceremony also provides the cross-server infrastructure Journey depends on: a message bus (NATS or Redis) for real-time communication between servers, and database backends for player data persistence across servers.

The message bus keeps the following in sync across servers:

  • Global task progress, start, and stop commands
  • Party invites, joins, leaves, and kicks
  • Party task progress for cross-server parties
  • Player data (when using the DATABASE backend)
  • Entity visibility state

Requirements:

  • Ceremony 4.1.4+ installed on all servers
  • Journey installed on all servers
  • A message broker: NATS or Redis (for cross-server features)
  • Optionally: a shared database for persistent storage

Ceremony uses HOCON format. The config file is at config/ceremony/config.conf.

The message bus enables real-time communication between servers.

messageBus {
  type = "NATS"
  # Unique server identifier
  serverId = "survival"
  # NATS server address (only used when type = NATS)
  natsAddress = "nats://localhost:4222"
  # Redis server address (only used when type = REDIS)
  redisAddress = "redis://localhost:6379"
  # Redis password (leave empty if no authentication)
  redisPassword = ""
}
  • type — "NATS", "REDIS", or "NONE" (single-server mode)
  • serverId — Unique name for this server (e.g., "survival", "creative", "lobby-1")
  • natsAddress — NATS server URL (only when type = "NATS")
  • redisAddress — Redis server URL (only when type = "REDIS")
  • redisPassword — Redis auth password, empty if none

NATS vs Redis:

  • NATS — Lightweight, fast, easy setup. Best for pure messaging.
  • Redis — More features (caching, persistence). Best if you already run Redis.
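If you run a single server and don’t need cross-server features, type = "NONE" disables the bus entirely. A minimal sketch (the serverId value here is illustrative):

```hocon
messageBus {
  # NONE = single-server mode; no broker required
  type = "NONE"
  # serverId is illustrative; any unique name works
  serverId = "main"
}
```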

Storage backend for persistent data.

storageType = "MONGO_DB"
storageUrl = "mongodb://localhost:27017"
storageUser = ""
storagePassword = ""
  • JSON — File-based JSON storage (single-server only)
  • SQLITE — SQLite database (single-server)
  • MARIA_DB — MariaDB/MySQL (multi-server)
  • MONGO_DB — MongoDB (multi-server, recommended)
  • SCYLLA_DB — ScyllaDB (high-performance, multi-server)

For single-server setups, JSON or SQLITE works fine. For multi-server networks, use a shared database like MONGO_DB or MARIA_DB.
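For the recommended MONGO_DB backend, a Docker launch mirrors the NATS and Redis commands shown for the message broker. A sketch, assuming Docker is installed; the official mongo image listens on 27017 by default, matching storageUrl = "mongodb://localhost:27017":

```shell
# Sketch: start MongoDB in Docker (official image, default port 27017).
# Falls back to a message if Docker is not installed.
if command -v docker >/dev/null 2>&1; then
  msg=$(docker run -d --name mongo -p 27017:27017 mongo:latest 2>&1 || true)
else
  msg="docker not installed"
fi
echo "mongo launch: $msg"
```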

Configure how player data is stored and synced.

playerData {
  backend = "CARDINAL_COMPONENTS"
  caching = true
  autoSave = true
  autoSaveInterval = 300
  autoMigrate = true
  cleanupEnabled = false
  cleanupDays = 90
  safetyEnabled = true
  optimisticLocking = true
  maxRetries = 3
  saveTimeoutSeconds = 30
  maxPendingSaves = 5
}
  • backend — "CARDINAL_COMPONENTS" (per-world, client-side) or "DATABASE" (shared across servers)
  • caching — Enable in-memory caching for performance
  • autoSave — Periodically save player data
  • autoSaveInterval — Seconds between auto-saves (default: 300)
  • autoMigrate — Migrate data between backends automatically on join
  • safetyEnabled — Enable data safety features for the DATABASE backend
  • optimisticLocking — Prevent concurrent save conflicts

For multi-server setups, use backend = "DATABASE" with a shared database.


Setting up NATS:

Docker (recommended):

docker run -d --name nats -p 4222:4222 -p 8222:8222 nats:latest
  • Port 4222 — Client connections
  • Port 8222 — HTTP monitoring
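Port 8222 exposes NATS’s HTTP monitoring routes (such as /healthz and /varz). A quick probe, assuming the server above is running locally:

```shell
# Probe the NATS monitoring endpoint; prints "unreachable" if NATS
# is not running (or curl is unavailable).
status=$(curl -s --max-time 2 http://localhost:8222/healthz 2>/dev/null || echo "unreachable")
echo "NATS monitor: $status"
```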

Standalone binary:

  1. Download from https://nats.io/download/
  2. Run: ./nats-server

Edit config/ceremony/config.conf on each Minecraft server:

messageBus {
  type = "NATS"
  serverId = "survival"
  natsAddress = "nats://192.168.1.100:4222"
}

If you prefer Redis over NATS:

Docker:

docker run -d --name redis -p 6379:6379 redis:latest

With password:

docker run -d --name redis -p 6379:6379 redis:latest --requirepass yourpassword

messageBus {
  type = "REDIS"
  serverId = "survival"
  redisAddress = "redis://192.168.1.100:6379"
  redisPassword = "yourpassword"
}

Network:

  • Message broker: 192.168.1.100
  • Database: MongoDB at 192.168.1.100:27017
  • Three Minecraft servers: Survival, Creative, Skyblock

Survival (config/ceremony/config.conf):

messageBus {
  type = "NATS"
  serverId = "survival"
  natsAddress = "nats://192.168.1.100:4222"
}

storageType = "MONGO_DB"
storageUrl = "mongodb://192.168.1.100:27017"

playerData {
  backend = "DATABASE"
  caching = true
  autoSave = true
  safetyEnabled = true
}

Creative (config/ceremony/config.conf):

messageBus {
  type = "NATS"
  serverId = "creative"
  natsAddress = "nats://192.168.1.100:4222"
}

storageType = "MONGO_DB"
storageUrl = "mongodb://192.168.1.100:27017"

playerData {
  backend = "DATABASE"
  caching = true
  autoSave = true
  safetyEnabled = true
}

Skyblock (config/ceremony/config.conf):

messageBus {
  type = "NATS"
  serverId = "skyblock"
  natsAddress = "nats://192.168.1.100:4222"
}

storageType = "MONGO_DB"
storageUrl = "mongodb://192.168.1.100:27017"

playerData {
  backend = "DATABASE"
  caching = true
  autoSave = true
  safetyEnabled = true
}

Each server has a unique serverId. All share the same NATS address and MongoDB URL.
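A duplicated serverId is an easy mistake when copying configs between servers, and a grep over all the config files catches it. A sketch — the directory layout below is fabricated purely for illustration:

```shell
# Build three sample configs in a temp dir (illustration only), then
# check that no serverId line appears more than once across them.
dir=$(mktemp -d)
for id in survival creative skyblock; do
  mkdir -p "$dir/$id/config/ceremony"
  printf 'messageBus {\n  serverId = "%s"\n}\n' "$id" \
    > "$dir/$id/config/ceremony/config.conf"
done
dupes=$(grep -rh 'serverId' "$dir" | sort | uniq -d | wc -l)
if [ "$dupes" -eq 0 ]; then
  echo "all serverIds unique"
else
  echo "duplicate serverIds found"
fi
```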


Check your server console after starting for:

[Ceremony] Successfully connected to NATS server at nats://192.168.1.100:4222

Or for Redis:

[Ceremony] Successfully connected to Redis at redis://192.168.1.100:6379

If you see connection errors:

  1. Verify the broker is running (docker ps or your process manager)
  2. Check that the port is accessible (firewall rules)
  3. Confirm the address in your config is correct
  4. Test connectivity: telnet 192.168.1.100 4222 (NATS) or redis-cli -h 192.168.1.100 ping (Redis)
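If telnet and redis-cli aren’t installed, bash’s /dev/tcp pseudo-device can run the same reachability check with no extra tools; the host and port below are this page’s example values:

```shell
# TCP reachability probe using bash's /dev/tcp (bash-specific feature).
HOST=192.168.1.100
PORT=4222
if timeout 2 bash -c "echo > /dev/tcp/$HOST/$PORT" 2>/dev/null; then
  result="open"
else
  result="closed or unreachable"
fi
echo "$HOST:$PORT is $result"
```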

When you run /globaltask start journey:example_task on Survival:

  1. Survival broadcasts the start command via the message bus
  2. All connected servers receive it and start the task locally
  3. Boss bars appear for all players on all servers
  4. Any player’s matching actions on any server contribute to shared progress
  5. Progress updates sync in real-time across all servers

For cross-server parties:

  1. Party data lives on the server where the party was created
  2. Invites are sent via the message bus to players on other servers
  3. Member changes (join/leave/kick) sync across servers
  4. Task progress from any member on any server is shared with all members

If the message broker goes down:

  • Global tasks and parties keep working locally on each server
  • Cross-server sync pauses until the broker reconnects
  • When the broker comes back, sync resumes automatically

If global tasks aren’t syncing, check that:

  1. The broker is running and reachable from all servers
  2. All servers have the message bus configured with the same broker address
  3. global_tasks_enabled: true is set in Journey’s config.json
  4. Server logs show successful connection messages
  5. No firewall is blocking the broker port

If party invites aren’t arriving, check that:

  1. The message bus is connected on both servers
  2. party_enabled: true is set in Journey’s config.json
  3. The target player is online on the other server
  4. Server logs show no connection errors
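Both checklists reference flags in Journey’s config.json. Assuming a flat JSON layout (the full structure of that file isn’t shown on this page), the relevant entries would look like:

```json
{
  "global_tasks_enabled": true,
  "party_enabled": true
}
```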

“Failed to connect to NATS server”: Check that the NATS server is running and the address is correct.

“Failed to connect to Redis”: Check that Redis is running. If authentication is enabled, include the password in the config.
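For the Redis case, redis-cli can confirm connectivity and the password in one step (-a passes the password; --no-auth-warning suppresses the CLI’s warning). A sketch using this page’s example values, assuming redis-cli is available:

```shell
# Ping Redis with authentication; degrade gracefully if redis-cli is absent.
if command -v redis-cli >/dev/null 2>&1; then
  reply=$(redis-cli -h 192.168.1.100 -a yourpassword --no-auth-warning ping 2>/dev/null)
  [ -z "$reply" ] && reply="no reply (is Redis running and the password correct?)"
else
  reply="redis-cli not installed"
fi
echo "Redis: $reply"
```

A healthy, correctly authenticated server answers PONG.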


Best practices:

  • Keep the message broker on the same network as your Minecraft servers for low latency
  • Don’t expose broker ports to the internet — keep them internal
  • For production, consider running the broker on a dedicated machine
  • Monitor NATS health via HTTP at http://your-nats-server:8222
  • Redis can double as a cache for other plugins if you’re already running it