
MYSTICA

AI-POWERED GEO-RPG GAME

React Native · PostgreSQL · Redis · Spring · Fine-tuning

THE STORY

As CTO of Tricky Djinn LLC, I was tasked with making an AI act as the game engine for a geo-RPG. Unlike AR games, we focused on the combat experience itself, aiming to recreate the feel of text adventures and tabletop roleplaying games. From a technical standpoint, this asks a lot of an AI. Despite the 'magic' of LLMs, getting them to behave like a game engine (returning structured data, not just text) while staying cost-effective at scale was the core technical problem I solved.

QUICK STATS

Timeline: Aug 2024 - Dec 2024
Role: CTO & Lead Developer
Team: 5 people
Cost Savings: 30% operating cost reduction

DEVELOPMENT INSIGHTS

"Despite the 'magic' of AI, we're asking a lot of it—we're asking it to act as a game engine. The key insight was that you can use expensive, smart models to teach cheaper, smaller models how to behave. Fine-tuning isn't just about customization—it's about cost optimization at scale. This approach of using larger models to create training data for smaller models is similar to how I believe OpenAI's reasoning models work internally."

TECHNICAL CHALLENGES

LLM as Game Engine

LLMs return text, not data, and are trained to behave like chatbots, not game engines. Initially, I used a sequence of prompts: each step wraps the user's content in specific instructions on how to process it and what format to output, emits XML, and repeats until all steps are complete. The final step produces a narrative response for the outcome, along with metadata that updates the player's health and the state of the encounter. This works, but it is expensive.
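A minimal sketch of that kind of step-wise pipeline is below. The step prompts, the `callModel` helper, and the XML attribute names are illustrative stand-ins, not the production prompts or schema.

```typescript
// Sketch of a step-wise "LLM as game engine" pipeline (hypothetical names).
// callModel stands in for whatever chat-completion client the backend uses.
declare function callModel(prompt: string): Promise<string>;

interface CombatOutcome {
  narration: string;       // text shown to the player
  healthDelta: number;     // change applied to player health
  encounterState: string;  // e.g. "ongoing" | "victory" | "defeat"
}

// Each step wraps the player's action in its own instructions and a strict
// output-format contract, so the model returns XML we can parse, not chat.
const STEPS = [
  "Classify the player's action and list its mechanical effects as <effects>...</effects>.",
  "Resolve those effects against the enemy's stats and return <resolution>...</resolution>.",
  "Update player health and encounter state; return <state health_delta=\"...\" encounter=\"...\"/>.",
  "Narrate the full round of combat inside <narration>...</narration>.",
];

export async function runCombatRound(playerAction: string): Promise<CombatOutcome> {
  let context = `Player action: ${playerAction}`;
  let lastXml = "";

  for (const instructions of STEPS) {
    // Every step re-sends the accumulated context plus heavy formatting
    // instructions -- this is exactly why the prompt chain is expensive.
    lastXml = await callModel(`${instructions}\n\n${context}`);
    context += `\n${lastXml}`;
  }

  // Naive XML extraction for the sketch; a real parser would validate the schema.
  const healthDelta = Number(/health_delta="(-?\d+)"/.exec(context)?.[1] ?? 0);
  const encounterState = /encounter="(\w+)"/.exec(context)?.[1] ?? "ongoing";
  const narration = /<narration>([\s\S]*?)<\/narration>/.exec(lastXml)?.[1] ?? lastXml;

  return { narration, healthDelta, encounterState };
}
```

Because every step re-sends the accumulated context plus heavy formatting instructions, the input token count grows quickly, which is what motivated the fine-tuning work described next.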

Fine-Tuning for Cost Reduction

I realized we could use the larger model, with its massive, hyper-engineered prompts, to create training data for a smaller model. The smaller model wouldn't need instructions on what to do or how to format its output. We ran the larger model through the same prompts the smaller model would later handle, took its structured JSON output, and paired it with a much simpler input for the smaller model. Even though fine-tuned models cost more per token, this cut our input token count by over 50%, saving 30% on projected operating costs.
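As a rough illustration of that distillation step, here is a sketch that pairs a simple production-style input with the larger model's structured output in chat-style JSONL, the format most hosted fine-tuning services accept. `buildTeacherPrompt`, `callTeacherModel`, and the system message are hypothetical stand-ins.

```typescript
// Sketch of the "teacher generates training data for the student" step.
import { writeFileSync } from "node:fs";

declare function buildTeacherPrompt(playerAction: string): string;  // huge engineered prompt
declare function callTeacherModel(prompt: string): Promise<string>; // returns structured JSON

// Each training example pairs a *simple* input (what the fine-tuned model will
// actually see in production) with the teacher's structured JSON output.
async function buildTrainingSet(playerActions: string[]): Promise<string> {
  const lines: string[] = [];
  for (const action of playerActions) {
    const teacherJson = await callTeacherModel(buildTeacherPrompt(action));
    lines.push(JSON.stringify({
      messages: [
        { role: "system", content: "You are the Mystica combat engine. Respond with JSON only." },
        { role: "user", content: action },            // no formatting instructions needed
        { role: "assistant", content: teacherJson },  // the behavior the student should learn
      ],
    }));
  }
  return lines.join("\n"); // JSONL, one example per line
}

buildTrainingSet(["I swing my axe at the troll's knee"]).then((jsonl) =>
  writeFileSync("combat-finetune.jsonl", jsonl),
);
```

The key point is that the student model only ever sees the short input on the `user` turn; all of the expensive instructions live with the teacher at data-generation time.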

Real-Time Multiplayer Infrastructure

I set up multiplayer using WebSocket connections and Redis-backed 'sessions', balancing lightning-fast communication between players with a scalable backend and database. The combat system required four different AI steps to turn user actions into a full round of combat, each with its own prompt that was reduced via fine-tuning.
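A minimal sketch of that session layer, assuming the `ws` and `redis` npm packages; the key names, message shapes, and port are illustrative rather than the production schema.

```typescript
// Sketch of WebSocket combat sessions backed by Redis.
import { WebSocketServer, WebSocket } from "ws";
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });
redis.connect().catch(console.error); // connect eagerly; handlers assume it succeeded

const wss = new WebSocketServer({ port: 8080 });

// In-process map of encounterId -> connected sockets; Redis holds the shared
// session state so another backend instance could pick the encounter up.
const rooms = new Map<string, Set<WebSocket>>();

wss.on("connection", (socket, req) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const encounterId = url.searchParams.get("encounter") ?? "lobby";

  if (!rooms.has(encounterId)) rooms.set(encounterId, new Set());
  rooms.get(encounterId)!.add(socket);

  socket.on("message", async (raw) => {
    const action = JSON.parse(raw.toString());

    // Persist the latest session state in a Redis hash keyed by encounter.
    await redis.hSet(`encounter:${encounterId}`, {
      lastAction: JSON.stringify(action),
      updatedAt: Date.now().toString(),
    });

    // Broadcast to everyone in the encounter; in production the AI combat
    // resolution from the pipeline above would run before this send.
    const payload = JSON.stringify({ type: "round", encounterId, action });
    for (const peer of rooms.get(encounterId) ?? []) {
      if (peer.readyState === WebSocket.OPEN) peer.send(payload);
    }
  });

  socket.on("close", () => rooms.get(encounterId)?.delete(socket));
});
```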

KEY FEATURES

  • AI Combat Engine
    4-step AI pipeline converts player actions into structured game outcomes
  • Fine-Tuned Models
    Custom fine-tuning reduced prompts by 50% for 30% cost savings
  • Location-Based Encounters
    Google Maps API integration for seeding encounters at real-world locations (see the sketch after this list)
  • WebSocket Multiplayer
    Real-time combat sessions with Redis for scalable player connections
  • Multi-Environment Deployment
    Development, testing, and live environments with effortless deployment
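For the location-based encounters above, here is a hedged sketch of how real-world places could seed encounters, assuming the legacy Google Places Nearby Search endpoint; the `Encounter` shape and tier heuristic are invented for illustration.

```typescript
// Sketch of seeding encounters from nearby real-world places.
interface Encounter {
  placeId: string;
  name: string;
  lat: number;
  lng: number;
  monsterTier: number;
}

export async function seedEncounters(
  lat: number,
  lng: number,
  apiKey: string,
): Promise<Encounter[]> {
  // Legacy Places API Nearby Search: returns places within the given radius.
  const url =
    "https://maps.googleapis.com/maps/api/place/nearbysearch/json" +
    `?location=${lat},${lng}&radius=1500&type=park&key=${apiKey}`;

  const res = await fetch(url);
  const data = await res.json();

  // Turn each nearby place into an encounter; the tier is a placeholder heuristic.
  return (data.results ?? []).map((place: any, i: number) => ({
    placeId: place.place_id,
    name: place.name,
    lat: place.geometry.location.lat,
    lng: place.geometry.location.lng,
    monsterTier: (i % 3) + 1,
  }));
}
```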

IMPACT & RESULTS

  • 50%+ reduction in input token count
  • 30% savings on projected operating costs
  • Real-time multiplayer with sub-second latency
  • Encounters regenerate daily/weekly for replay value
  • Novel fine-tuning architecture applicable to other LLM cost optimization problems

RELATED WORK

Check out Mystica and Mercury Notes for related projects
