This project started with a very simple question: Can I build my own programming language with ChatGPT?
What began as a small experiment quickly turned into something much bigger. On the first day, I managed to build a working interpreter. Over the following weeks, that interpreter evolved into MiniLang, along with a real compiler written in
Python. Today, MiniLang still compiles through that Python compiler (mlc_win64.py) to native Windows x64 executables. A self-hosted compiler (that is, the compiler rewritten in MiniLang and able to compile itself) is the next major goal, but it is still under development and not running yet.

MiniLang became my own small, dynamically typed programming language, currently compiled by a Python-based compiler into native Windows x64 executables. Even though the language is intentionally compact, it already supports integers, floats, booleans, strings, arrays, and mutable byte buffers, along with classic control flow, first-class and nested functions with closures, structs, enums, namespaces, packages, and imports for multi-file programs. It also comes with a source-based standard library for things like strings, arrays, math, formatting, file I/O, time, networking, and basic data structures, and it can call external DLL functions through extern. Error handling uses a lightweight model based on error(...) values plus try(...) instead of exceptions. In practice, it already feels less like a toy language and more like a real platform for building larger native applications such as MiniDoom.
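To make the error model a little more concrete, here is a rough sketch of the general shape of error(...) values plus try(...). This is not code from the repository: safe_div is a made-up helper, and the exact return convention of try(...) is an assumption based on the description above, not the documented API.

// Sketch only — illustrates the error-value style, not the exact stdlib API.
function safe_div(a, b)
    // Instead of throwing an exception, return an error(...) value on failure.
    if b == 0 then
        return error("division by zero")
    end if
    return a / b
end function

function main(args)
    // try(...) is the language's way to observe error values;
    // the precise calling pattern here is assumed for illustration.
    result = try(safe_div(10, 2))
    print "10 / 2 = " + result
end function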
Here is some MiniLang example code to compute Fibonacci numbers (iterative algorithm and recursive algorithm):
// =====================================
// Fibonacci Computation in MiniLang
// =====================================

function fibo_recursive(a)
    if a <= 1 then
        return a
    end if
    return fibo_recursive(a - 1) + fibo_recursive(a - 2)
end function

function main(args)
    if len(args) == 0 then
        print "Please provide a number."
        return
    end if
    if len(args) > 1 then
        print "Please provide only one number."
        return
    end if
    number = toNumber(args[0])
    if typeof(number) != "int" then
        print "The provided argument must be an integer."
        return
    end if
    for i = 1 to number
        print "fibo_recursive(" + i + ") = " + fibo_recursive(i)
    end for
    fibo_list(number)
end function

function fibo_list(n)
    list = [0, 1]
    for i = 0 to n - 1
        // Append the next Fibonacci number, then print the freshly usable entry.
        list = list + [list[i] + list[i + 1]]
        print "fibo_list(" + (i + 1) + ") = " + list[i + 1]
    end for
end function
From a one-day interpreter to a real compiler
The first version was mostly about proving that the idea worked at all. I wanted to see whether the AI (ChatGPT 4.5) could help me get from zero to a usable language quickly.
The interesting part started afterwards. What looked like a quick prototype turned into weeks of iteration: language features, parser work, code generation, a growing standard library, and more and more infrastructure around the compiler. At that point, MiniLang stopped being “just a fun experiment” and became a real systems-oriented project. The repository today reflects that structure clearly, with separate compiler, stdlib, tests, and tools directories.
Self-hosting is the goal — but not the current state
One important clarification is needed here: MiniLang is not self-hosted yet.
That is the direction I am working toward, but right now the compiler is still the Python implementation. The self-hosted compiler is an active development goal, not a finished milestone.
Then came the classic question: Can it run DOOM?
At some point, the project naturally reached the most obvious benchmark: Can it run DOOM?

That question led to MiniDoom. My first attempt was done with ChatGPT (5.4) in the usual manual back-and-forth workflow. That failed. The project was simply too large, and copying code by hand between prompts became too messy.
The second attempt was to let ChatGPT do more of the work more autonomously. That failed too, for basically the same reason: too much manual synchronization, too much copy-paste, and not enough reliable iteration.
The third attempt was different. I switched to Codex and let it work much more directly inside the project. That still involved a lot of testing, discussion, debugging, and mistakes, but this time it actually worked very well. MiniDoom became a functioning port, and from there I even pushed it further by adding a new multiplayer mode with a server and a new menu to control it.

MiniDoom is not just a gimmick
MiniDoom is not just there for the joke value of “running DOOM”. It turned into a serious stress test for MiniLang.
MiniDoom is a full MiniLang port of the original DOOM engine, aiming for gameplay parity, classic behavior, and native Windows execution. It keeps the original module split, maps the original C/H codebase module-by-module into MiniLang, uses native bindings for platform-specific pieces like video, audio, input, and window handling, and builds through a Python script that also handles EXE icon injection. It also explicitly depends on the Python implementation of the MiniLang compiler.
That is exactly why it was such a valuable test. DOOM forces you to deal with real engine problems: rendering, game logic, input, resource loading, sound, and all the ugly edge cases that small demo programs never expose. Besides implementing the original DOOM, the AI and I created a new multiplayer mode (UDP instead of the original IPX) which runs more or less smoothly. Singleplayer runs very well 🙂
What I learned from building a compiler with AI
The biggest lesson was simple: For compiler work, you need a huge test framework.
My process eventually became very strict:
- Have the AI implement a new feature.
- Run all existing tests — they all still have to pass.
- Have the AI add new tests for that new feature.
- Run everything again — the new tests must pass, and the old ones must still pass.
- Repeat the same cycle for the next feature.
Without that loop, the project becomes unstable very quickly. The AI forgets parts or invents stupid things. That lesson is visible in the repository too: MiniLangCompilerPy has a dedicated tests tree, a large language_suite.ml, stdlib_unit_tests.ml, and an especially large run_tests.py harness coordinating the suite.
By the numbers
I counted the current main branch contents of both repositories directly from the repo files.
MiniLangCompilerPy:
- 65 code files total
- 40,192 lines / 33,951 LOC
- Excluding tests, the compiler + stdlib + tooling alone account for 41 code files, 33,879 lines, and 28,724 LOC
- Across the whole repo, that splits into 24 Python files and 41 MiniLang files
The biggest files in the current snapshot are codegen_expr.py (5,145 lines), codegen_stmt.py (4,608), run_tests.py (2,703), asm.py (2,641), and compiler.py (1,948).
MiniDoom:
- 81 code files total
- 60,363 lines / 53,839 LOC
- Of that, the src tree alone accounts for 79 MiniLang source files, 59,690 lines, and 53,248 LOC
The biggest files in the current snapshot are d_net.ml (6,646 lines), info.ml (5,389), m_menu.ml (2,835), mp_platform.ml (2,148), and i_sound.ml (1,835).
Why this project matters to me
What makes this interesting to me is not just that I built a language, and not just that I got DOOM running on it.
What matters is the full Human-AI chain:
idea → interpreter → compiler → tooling → tests → real application
MiniLang and MiniDoom became a practical way to explore how far AI-assisted development can really go today. AI was incredibly useful, but only when combined with structure, iteration, and relentless testing. Without that, things broke apart. With that, it became possible to build something much larger than the original idea.
It started as a question about ChatGPT. It turned into a full x64 compiler project. And eventually, it turned into a language that was strong enough to take on DOOM.
Where to get it?
Of course, everything is open-source. You can download it and use it, modify it, and play with it 🙂
Get MiniLang here: https://github.com/MiniLangProject/MiniLangCompilerPy
Get MiniDoom here: https://github.com/MiniLangProject/MiniDoom