
ML Camera System

The ML (Machine Learning) camera system learns from player preferences to provide personalized camera experiences.

The ML system:

  • Tracks player viewing preferences
  • Evaluates camera quality metrics
  • Detects obstructions and issues
  • Adapts camera positioning over time
  • Personalizes per player

Location: config/witness/config.json

{
  "mlCamera": {
    "enabled": true,
    "learningRate": 0.01,
    "saveInterval": 300,
    "minDataPoints": 100
  }
}
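
If you consume this file from mod code, here is a minimal loading sketch, assuming Gson as the JSON parser; the class names are illustrative, not the mod's API:

import com.google.gson.Gson;
import java.nio.file.Files;
import java.nio.file.Path;

public class MlCameraConfigLoader {
    // Mirrors the "mlCamera" block above; defaults match the example values.
    static class MlCamera {
        boolean enabled = true;     // master switch for the ML system
        double learningRate = 0.01; // step size for preference updates
        int saveInterval = 300;     // seconds between model saves
        int minDataPoints = 100;    // samples required before predicting
    }

    static class Root {
        MlCamera mlCamera = new MlCamera();
    }

    public static void main(String[] args) throws Exception {
        String json = Files.readString(Path.of("config/witness/config.json"));
        Root root = new Gson().fromJson(json, Root.class);
        System.out.printf("ML enabled: %s, learning rate: %.3f%n",
                root.mlCamera.enabled, root.mlCamera.learningRate);
    }
}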

The system tracks:

  1. Camera Positions: Where cameras are placed
  2. Player Actions: How players interact with camera
  3. Quality Metrics: Obstruction, distance, angles
  4. Preferences: Which angles players view longest
  5. Context: Battle format, Pokemon species, moves

The learning cycle runs in five stages, sketched in code below:

  1. Observation: System observes camera usage
  2. Evaluation: Rates camera positions by quality
  3. Training: Updates model with new data
  4. Prediction: Suggests optimal camera positions
  5. Refinement: Continuous improvement
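
A minimal sketch of this cycle in Java; the types, method names, and update rule are illustrative assumptions, not the mod's actual internals:

import java.util.ArrayList;
import java.util.List;

class LearningCycle {
    record Sample(double distance, double quality) {}

    private final List<Sample> batch = new ArrayList<>();
    private double preferredDistance = 8.0; // current model estimate
    private final double learningRate = 0.01;

    // Stages 1-2: observe a camera position together with its quality score.
    void observe(double distance, double quality) {
        batch.add(new Sample(distance, quality));
        if (batch.size() >= 10) train(); // batchSize from ml_config.json
    }

    // Stage 3: nudge the estimate toward quality-weighted observations.
    private void train() {
        for (Sample s : batch) {
            preferredDistance += learningRate * s.quality()
                    * (s.distance() - preferredDistance);
        }
        batch.clear();
    }

    // Stages 4-5: predict the current best estimate; refinement happens
    // automatically as further batches arrive.
    double predictDistance() {
        return preferredDistance;
    }
}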

Cameras are scored on:

  • Visibility: Is action clearly visible?
  • Obstruction: Are there blocks in the way?
  • Distance: Is camera distance appropriate?
  • Angle: Is viewing angle good?
  • Framing: Are Pokemon well-framed?

For each player, the system learns a preference profile like this:

{
  "player_uuid": {
    "preferred_distance": 8.5,
    "preferred_height": 2.8,
    "preferred_angle": 12.0,
    "camera_sensitivity": 0.05,
    "orbit_preference": 0.6,
    "format_preferences": {
      "singles": {
        "distance": 7.0,
        "height": 2.5
      },
      "doubles": {
        "distance": 11.0,
        "height": 3.5
      }
    }
  }
}

Players influence ML through:

  • View Duration: Longer views indicate good angles
  • Camera Adjustments: Manual adjustments teach preferences
  • Battle Replay: Watching replays shows preferred angles
  • Format Selection: Format-specific preferences
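
For intuition, here is a sketch of how a manual adjustment might nudge a learned preference using the configured learningRate; the exponential-moving-average rule and the view-duration weighting are assumptions:

class PreferenceUpdate {
    public static void main(String[] args) {
        double learningRate = 0.01;     // from config.json
        double preferredDistance = 8.5; // current learned value
        double observedDistance = 10.0; // player manually zoomed out

        // Weight the observation by view duration: longer views are
        // treated as stronger signals (30 s caps the weight at 1.0).
        double viewSeconds = 12.0;
        double weight = Math.min(1.0, viewSeconds / 30.0);

        preferredDistance += learningRate * weight
                * (observedDistance - preferredDistance);
        System.out.printf("updated preferred distance: %.3f%n", preferredDistance);
    }
}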

Location: config/witness/ml_config.json

{
  "features": {
    "trackPlayerPreference": true,
    "trackCameraQuality": true,
    "trackObstructions": true,
    "adaptToFormat": true,
    "learnFromReplays": false
  },
  "weights": {
    "distance": 0.3,
    "angle": 0.25,
    "height": 0.2,
    "orbit": 0.15,
    "focus": 0.1
  },
  "training": {
    "batchSize": 10,
    "epochs": 5,
    "validationSplit": 0.2,
    "minBatchSize": 5
  },
  "quality": {
    "obstructionPenalty": 0.5,
    "boundaryPenalty": 0.3,
    "distancePenalty": 0.2,
    "framingBonus": 0.4
  },
  "persistence": {
    "saveInterval": 300,
    "autoBackup": true,
    "backupCount": 5
  }
}

The feature flags in detail:

{
  "trackPlayerPreference": true
}

Learns individual player preferences for camera positioning.

{
  "trackCameraQuality": true
}

Evaluates and scores camera positions for quality.

{
  "trackObstructions": true
}

Detects when terrain blocks the view and avoids those positions.

{
  "adaptToFormat": true
}

Learns different preferences for singles, doubles, etc.

Adjust how much each factor influences decisions:

{
  "weights": {
    "distance": 0.3,  // Camera distance importance
    "angle": 0.25,    // Viewing angle importance
    "height": 0.2,    // Camera height importance
    "orbit": 0.15,    // Orbit movement importance
    "focus": 0.1      // Focus target importance
  }
}

Higher values give a factor more influence on decisions; as configured, the weights sum to 1.0.
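
A sketch of how these weights could combine per-factor scores (assumed to lie in [0, 1]) into one decision score:

import java.util.Map;

class WeightedScore {
    public static void main(String[] args) {
        Map<String, Double> weights = Map.of(
                "distance", 0.3, "angle", 0.25, "height", 0.2,
                "orbit", 0.15, "focus", 0.1);
        Map<String, Double> factorScores = Map.of(
                "distance", 0.9, "angle", 0.7, "height", 0.8,
                "orbit", 0.5, "focus", 1.0);

        // Weighted sum; because the weights sum to 1.0, the result
        // stays in the same [0, 1] range as the factor scores.
        double score = weights.entrySet().stream()
                .mapToDouble(e -> e.getValue() * factorScores.get(e.getKey()))
                .sum();
        System.out.printf("candidate score: %.2f%n", score); // 0.78
    }
}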

{
  "batchSize": 10
}

Number of data points per training batch. Higher = more stable, slower.

{
  "epochs": 5
}

Training iterations per batch. Higher = better learning, slower.

{
  "validationSplit": 0.2
}

Fraction of data held out for validation (0.0-1.0).
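
Putting the three training settings together, here is a sketch of one training pass; the sample data and the training step itself are placeholders:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class TrainingSplit {
    public static void main(String[] args) {
        int batchSize = 10;           // data points per batch
        int epochs = 5;               // passes over the training portion
        double validationSplit = 0.2; // fraction held out

        // Placeholder batch of collected samples.
        List<Integer> batch = new ArrayList<>();
        for (int i = 0; i < batchSize; i++) batch.add(i);
        Collections.shuffle(batch);

        // Hold out the last 20% for validation; train on the rest.
        int trainCount = (int) Math.round(batch.size() * (1.0 - validationSplit));
        List<Integer> train = batch.subList(0, trainCount);
        List<Integer> validate = batch.subList(trainCount, batch.size());

        for (int epoch = 1; epoch <= epochs; epoch++) {
            // ... update model parameters on `train`,
            //     then measure error on `validate` ...
        }
        System.out.printf("train=%d validate=%d epochs=%d%n",
                train.size(), validate.size(), epochs);
    }
}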

The system detects:

  • Blocks between camera and Pokemon
  • Terrain blocking view
  • Other entities in the way
  • Weather/particle interference

A penalty is applied when an obstruction is found:

{
  "obstructionPenalty": 0.5
}
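
One way to detect obstructions is to sample points along the camera-to-target line and test each against the world, as in this sketch; isSolid() is a hypothetical stand-in for the game's actual world query or raycast:

class ObstructionCheck {
    record Vec3(double x, double y, double z) {
        Vec3 lerp(Vec3 to, double t) {
            return new Vec3(x + (to.x - x) * t,
                            y + (to.y - y) * t,
                            z + (to.z - z) * t);
        }
    }

    // Hypothetical world query; a mod would call the game's raycast here.
    static boolean isSolid(Vec3 p) {
        return false; // placeholder
    }

    // Sample 16 points between camera and target; any solid hit applies
    // the configured obstructionPenalty of 0.5.
    static double scoreWithObstruction(Vec3 camera, Vec3 target, double baseScore) {
        int samples = 16;
        for (int i = 1; i < samples; i++) {
            if (isSolid(camera.lerp(target, i / (double) samples))) {
                return baseScore * (1.0 - 0.5);
            }
        }
        return baseScore;
    }
}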

The boundary check flags a camera that is:

  • Outside the battle area
  • In an invalid position
  • Too close to boundaries

{
  "boundaryPenalty": 0.3
}

Rates camera distance:

  • Too close: Penalty
  • Too far: Penalty
  • Optimal range: Bonus

{
  "distancePenalty": 0.2
}

Rewards good framing:

  • Pokemon centered
  • Action visible
  • Appropriate zoom

{
  "framingBonus": 0.4
}
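
A sketch of how these four terms might compose into a single quality score; the additive composition and the clamping are assumptions:

class QualityScore {
    static double score(boolean obstructed, boolean nearBoundary,
                        boolean badDistance, boolean wellFramed) {
        double score = 1.0;
        if (obstructed)   score -= 0.5; // obstructionPenalty
        if (nearBoundary) score -= 0.3; // boundaryPenalty
        if (badDistance)  score -= 0.2; // distancePenalty
        if (wellFramed)   score += 0.4; // framingBonus
        return Math.max(0.0, Math.min(1.0, score));
    }

    public static void main(String[] args) {
        // Clear view, good distance, well framed: 1.4 clamps to 1.0.
        System.out.println(score(false, false, false, true));
        // Obstructed and badly placed: 1.0 - 0.5 - 0.2 = 0.3.
        System.out.println(score(true, false, true, false));
    }
}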

Models are stored in config/witness/ml/:

ml/
├── global_model.json        # Server-wide model
├── players/
│   ├── player_uuid_1.json   # Per-player models
│   └── player_uuid_2.json
└── backups/
    ├── global_backup_1.json
    └── global_backup_2.json

Each model file stores per-parameter statistics and confidence:

{
  "version": "1.0",
  "lastUpdated": "2025-10-01T12:00:00Z",
  "dataPoints": 523,
  "parameters": {
    "distance": {
      "mean": 8.2,
      "variance": 1.4,
      "confidence": 0.85
    },
    "angle": {
      "mean": 12.5,
      "variance": 2.1,
      "confidence": 0.78
    }
  },
  "formatSpecific": {
    "singles": { ... },
    "doubles": { ... }
  }
}
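
The mean and variance fields can be maintained incrementally as data points arrive; here is a sketch using Welford's online algorithm (whether the mod uses this exact method is an assumption):

class RunningStats {
    private long n = 0;
    private double mean = 0.0;
    private double m2 = 0.0; // running sum of squared deviations

    void add(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }

    double getMean()     { return mean; }
    double getVariance() { return n > 1 ? m2 / (n - 1) : 0.0; }

    public static void main(String[] args) {
        RunningStats distance = new RunningStats();
        for (double d : new double[] {7.5, 8.0, 8.5, 9.0}) distance.add(d);
        System.out.printf("mean=%.2f variance=%.2f%n",
                distance.getMean(), distance.getVariance());
    }
}
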
/witness ml status

Shows:

  • Learning enabled/disabled
  • Data points collected
  • Model confidence levels
  • Recent learning activity

/witness ml preferences [player]

Display learned preferences for a player.

/witness ml reset [player]

Reset ML data for a player, or globally if no player is given.

/witness ml train

Force immediate training cycle (admin only).

The ML system uses additional memory:

  • ~1KB per player model
  • ~10KB for global model
  • ~5KB per backup

Monitor with: /witness ml memory

Training has minimal performance impact:

  • Runs asynchronously
  • Batched processing
  • Configurable intervals

Models are saved periodically:

  • Default: Every 5 minutes
  • Configurable via saveInterval
  • Automatic backups

A recommended rollout:

  1. Start with ML disabled
  2. Collect baseline data with static cameras
  3. Enable ML after 100+ battles
  4. Monitor initial learning phase

Learning rate guidance:

  • Higher (0.05-0.1): Fast learning, less stable
  • Medium (0.01-0.05): Balanced
  • Lower (0.001-0.01): Slow learning, more stable

Start with 0.01 and adjust based on results.

Minimum data points for reliable learning:

  • Global Model: 100+ battles
  • Player Model: 20+ battles
  • Format Model: 50+ battles of that format

Regularly check:

/witness ml quality

Look for:

  • High confidence scores (>0.7)
  • Consistent preferences
  • Low obstruction rates

Common issues and fixes:

Symptoms: Preferences not changing

Solutions:

  • Check minDataPoints threshold
  • Verify enabled: true
  • Check file permissions
  • Review logs for errors

Symptoms: ML suggests bad angles

Solutions:

  • Increase minDataPoints
  • Lower learningRate
  • Reset and retrain
  • Adjust quality weights

Symptoms: Memory growing over time

Solutions:

  • Reduce backupCount
  • Increase saveInterval
  • Clean old player models
  • Limit batchSize

Define custom quality evaluation:

{
  "customQuality": {
    "enabled": true,
    "function": "weighted_average",
    "factors": {
      "visibility": 0.4,
      "distance": 0.3,
      "angle": 0.2,
      "aesthetic": 0.1
    }
  }
}
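
A sketch of dispatching on the configured function name; only the weighted_average case from the example is handled, and the dispatch mechanism itself is an assumption:

import java.util.Map;

class CustomQuality {
    static double evaluate(String function, Map<String, Double> factors,
                           Map<String, Double> scores) {
        return switch (function) {
            case "weighted_average" -> factors.entrySet().stream()
                    .mapToDouble(e -> e.getValue() * scores.getOrDefault(e.getKey(), 0.0))
                    .sum();
            default -> throw new IllegalArgumentException("unknown function: " + function);
        };
    }

    public static void main(String[] args) {
        Map<String, Double> factors = Map.of(
                "visibility", 0.4, "distance", 0.3, "angle", 0.2, "aesthetic", 0.1);
        Map<String, Double> scores = Map.of(
                "visibility", 1.0, "distance", 0.8, "angle", 0.6, "aesthetic", 0.9);
        System.out.printf("%.2f%n", evaluate("weighted_average", factors, scores)); // 0.85
    }
}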

Different learning rates per format:

{
  "formatLearning": {
    "singles": {
      "learningRate": 0.02,
      "minDataPoints": 50
    },
    "doubles": {
      "learningRate": 0.015,
      "minDataPoints": 75
    }
  }
}
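
A sketch of resolving format-specific parameters with a fallback to the global defaults from config.json; the record and lookup are illustrative:

import java.util.Map;

class FormatParams {
    record Params(double learningRate, int minDataPoints) {}

    static final Params GLOBAL = new Params(0.01, 100); // from config.json
    static final Map<String, Params> PER_FORMAT = Map.of(
            "singles", new Params(0.02, 50),
            "doubles", new Params(0.015, 75));

    static Params forFormat(String format) {
        return PER_FORMAT.getOrDefault(format, GLOBAL);
    }

    public static void main(String[] args) {
        System.out.println(forFormat("doubles")); // Params[learningRate=0.015, minDataPoints=75]
        System.out.println(forFormat("triples")); // falls back to GLOBAL
    }
}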