Thursday, June 5, 2025

Marek's Dev Diary: June 5, 2025

What is this

Every Thursday, I will share a dev diary about what we've been working on over the past few weeks. I'll focus on the interesting challenges and solutions that I encountered. I won't be able to cover everything, but I'll share what caught my interest.

Why am I doing it

I want to bring our community along on this journey, and I simply love writing about things I'm passionate about! This is my unfiltered dev journal, so please keep in mind that what I write here are my thoughts at the time of writing, and they may already be outdated by the time you read this - so many things change quickly. Any plans I mention aren't set in stone, and everything is subject to change. Also, if you don't like spoilers, don't read this.

A few months ago, I restructured my schedule to alternate weekly between Space Engineers 2 and AI People. This approach lets me maintain a deep focus on each project. I'm really someone who needs to dig deep into something and work on it until it's done, rather than switching context every few hours.

AI People

This week's focus was AI People, specifically exploring AI-assisted programming (what we call "vibe coding") to accelerate development of our AI NPCs. However, instead of diving into planned features, we spent the week refining our methodology.

Our experience with Cursor + Opus 4 and Gemini 2.5 Pro revealed a frustrating pattern. Initial progress feels promising, but then you hit a wall: you request a change, the AI modifies the code, you skip review and test directly, you discover new bugs or no improvement, and you repeat. Hours later, you realize you're going nowhere.

The core issue? Current AI agents approach software engineering fundamentally differently than experienced developers.

How AI agents work today: You describe a feature → AI reasons briefly → finds files → implements changes → declares completion.

How expert developers actually work:
  1. Fully understand requirements and context
  2. Study relevant code thoroughly (nothing more, nothing less)
  3. Break complex changes into testable chunks
  4. Implement with constant awareness of ripple effects
  5. Review for edge cases and unintended consequences
  6. Update all affected elements - comments, references, documentation, architecture diagrams
  7. Write comprehensive tests and iterate based on results
  8. Access running systems for real-time debugging and observation

Current tools can't replicate this workflow. They also lack game runtime access, can't insert diagnostic traces, and miss the holistic view that makes great code.
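
To make that contrast concrete, here's a minimal sketch of what an agent loop shaped like the expert workflow could look like. This is not how Cursor or Claude Code actually work internally; the llm() and run_tests() helpers are placeholders I'm inventing purely for illustration.

```python
# Sketch of an "expert developer" loop for an SWE agent.
# llm() and run_tests() are made-up placeholders, not a real API;
# the point is the shape of the loop, not the calls.
from dataclasses import dataclass, field


@dataclass
class Task:
    requirement: str                                          # step 1: what was asked
    context_files: list[str] = field(default_factory=list)    # step 2: relevant code
    chunks: list[str] = field(default_factory=list)           # step 3: testable pieces


def llm(prompt: str) -> str:
    """Placeholder for a call to whatever model drives the agent."""
    return ""


def run_tests() -> tuple[bool, str]:
    """Placeholder: run the project's test suite, return (passed, log)."""
    return True, ""


def solve(task: Task, max_iterations: int = 5) -> None:
    # Steps 1-2: understand the requirement and study only the relevant code.
    task.context_files = llm(f"Which files matter for: {task.requirement}").splitlines()
    # Step 3: break the change into independently testable chunks.
    task.chunks = llm(f"Split into testable steps: {task.requirement}").splitlines()

    for chunk in task.chunks:
        for _ in range(max_iterations):
            # Step 4: implement with the studied context in the prompt.
            patch = llm(f"Implement: {chunk}\nContext: {task.context_files}")
            # Steps 5-6: review for ripple effects, stale comments, docs, diagrams.
            review = llm(f"Review this patch for side effects and stale docs:\n{patch}")
            # Step 7: test and iterate on failures instead of declaring completion.
            passed, log = run_tests()
            if passed and "looks good" in review:
                break
            task.requirement += f"\nPrevious attempt failed:\n{log}\n{review}"
    # Step 8 - debugging against the running game - is the part that neither
    # current tools nor this sketch cover yet.
```

The important part is that review and test results feed back into the next attempt, instead of the agent declaring completion after the first patch.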

This realization led us to explore building our own SWE agent. We're studying Claude Code, which implements some of these concepts plus additional capabilities like sub-agents.

Key insights from this exploration:

LLMs feel superhuman in their domains. Yes, they still have gaps and can only handle minute-long tasks rather than day-long projects, but within their scope? Opus 4 writes a complete Tetris game in seconds - a task that would take me days. The bottleneck isn't intelligence; it's the scaffolding.

When AI programming fails, it's rarely the model's fault - it's the inadequate tooling around it. I'm convinced 2025 will bring revolutionary improvements: a correct SWE loop; specialized agents for exploration, coding, testing, reviewing, evaluating, and validating; sophisticated code search and indexing; intelligent test automation; and multimodal feedback loops. Imagine Gemini analyzing gameplay video to fix bugs autonomously.
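
On the multimodal point, here's a rough sketch of what such a feedback loop might look like. Every helper below is a stand-in I made up - recording a scenario, asking a vision model to compare the footage against the design intent, handing the diagnosis back to the coding agent - not a real Gemini or game API.

```python
# Hypothetical multimodal feedback loop: capture gameplay, let a vision
# model judge it against the spec, and loop the diagnosis back to the
# coding agent. All three helpers are placeholders.
from pathlib import Path


def record_gameplay(scenario: str) -> Path:
    """Placeholder: launch the game, run a scripted scenario, save a video."""
    return Path(f"/tmp/{scenario}.mp4")


def vision_model_review(video: Path, expected: str) -> str:
    """Placeholder: a multimodal model compares the recording to the spec."""
    return "looks correct"


def coding_agent_fix(diagnosis: str) -> None:
    """Placeholder: hand the diagnosis back to the SWE agent as a new task."""
    pass


def autonomous_bug_loop(scenario: str, expected: str, attempts: int = 3) -> bool:
    for _ in range(attempts):
        video = record_gameplay(scenario)
        diagnosis = vision_model_review(video, expected)
        if "looks correct" in diagnosis:
            return True                   # behavior matches the design intent
        coding_agent_fix(diagnosis)       # otherwise iterate without a human in the loop
    return False
```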

We're validating our approach on smaller codebases and design documents. Design docs are particularly revealing - text changes are far easier to track than code modifications, exposing flawed agent behavior immediately.

Case in point: I asked Cursor to reformat log specifications in our design doc. It updated one section, missed another, left duplicates, never reviewed its work. In a text document, these mistakes jump out instantly. In code, they'd hide among thousands of lines. Classic junior developer behavior - making changes without verifying impact.
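
One cheap way to catch exactly this failure mode is to diff the document before and after the agent's edit and review the changed hunks. A minimal sketch using only Python's standard library (the file names are just examples):

```python
# Diff a design doc before and after an agent's edit. A unified diff makes
# a half-finished edit (section A updated, duplicate in section B untouched)
# visible at a glance.
import difflib
from pathlib import Path


def show_agent_edit(before_path: str, after_path: str) -> str:
    before = Path(before_path).read_text().splitlines(keepends=True)
    after = Path(after_path).read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(before, after,
                                        fromfile=before_path,
                                        tofile=after_path))


if __name__ == "__main__":
    print(show_agent_edit("design_doc_before.md", "design_doc_after.md"))
```

The same before/after check works for code too; it's just that in a short design doc the unreviewed leftovers are impossible to miss.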

How about costs? Sure, spending $100 on tokens in a day might seem expensive. But if the AI delivers in one day what would take me two weeks? That $100 is cheap. Plus you're iterating in hours instead of weeks. Clear win.

We're not quite there yet, but I'm confident this year will bring the breakthrough. Once we crack this, we'll accelerate AI People development dramatically, running parallel experiments and iterating at unprecedented speed.

Space Engineers 2

Given my AI agent focus this week, my SE2 time went into writing the SE2 Vision document - a comprehensive guide defining requirements, constraints, and KPIs for the team.

Our north star: Make SE2 mainstream while delivering 10x improvements across every dimension - art, code, design, quality, performance.


The key insight: SE2 will match or exceed SE1's complexity, but we're wrapping it in an accessibility layer. New players start with intuitive, manageable systems. Complexity reveals itself progressively as skills develop.

We're also prioritizing engaging gameplay loops and meaningful progression. The complexity remains - it just becomes fun to master rather than overwhelming to encounter.