
Image by Author
# Introduction
It seems as if nearly every week, a brand-new model claims to be state-of-the-art, beating existing AI models on all benchmarks.
I get free access to the latest AI models at my full-time job within weeks of release. I typically don't pay much attention to the hype and just use whichever model is auto-selected by the system.
However, I know developers and friends who want to build software with AI that can be shipped to production. Since these projects are self-funded, their challenge lies in finding the best model for the job. They want to balance cost with reliability.
As a result, after the release of GPT-5.2, I decided to run a practical test to understand whether this model was worth the hype, and whether it really was better than the competition.
Specifically, I chose to test flagship models from each provider: Claude Opus 4.5 (Anthropic's most capable model), GPT-5.2 Pro (OpenAI's latest extended-reasoning model), and DeepSeek V3.2 (one of the latest open-source alternatives).
To put these models to the test, I asked each one to build a playable Tetris game with a single prompt.
These were the metrics I used to evaluate each model's success:
| Criteria | Description |
|---|---|
| First-Attempt Success | With just one prompt, did the model deliver working code? Multiple debugging iterations lead to higher cost over time, which is why this metric was chosen. |
| Feature Completeness | Were all the features mentioned in the prompt built by the model, or was anything missed? |
| Playability | Beyond the technical implementation, was the game actually smooth to play? Or were there issues that created friction in the user experience? |
| Cost-effectiveness | How much did it cost to get production-ready code? |
# The Prompt
Here is the prompt I entered into each AI model:
Build a fully functional Tetris game as a single HTML file that I can open directly in my browser.
Requirements:
GAME MECHANICS:
– All 7 Tetris piece types
– Smooth piece rotation with wall kick collision detection
– Pieces should fall automatically, increasing the speed progressively as the user's score increases
– Line clearing with visual animation
– "Next piece" preview box
– Game over detection when pieces reach the top
CONTROLS:
– Arrow keys: Left/Right to move, Down to drop faster, Up to rotate
– Touch controls for mobile: Swipe left/right to move, swipe down to drop, tap to rotate
– Spacebar to pause/unpause
– Enter key to restart after game over
VISUAL DESIGN:
– Gradient colours for each piece type
– Smooth animations when pieces move and lines clear
– Clean UI with rounded corners
– Update scores in real time
– Level indicator
– Game over screen with final score and restart button
GAMEPLAY EXPERIENCE AND POLISH:
– Smooth 60fps gameplay
– Particle effects when lines are cleared (optional but impressive)
– Increase the score based on the number of lines cleared simultaneously
– Grid background
– Responsive design
Make it visually polished and feel satisfying to play. The code should be clean and well-organized.
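As a reference point for the scoring requirement, the classic convention rewards simultaneous clears super-linearly. The prompt only asks that multi-line clears score higher; the exact values below are the well-known NES-era table, one common choice rather than anything the prompt mandates:

```python
# Classic (NES-style) Tetris line-clear scoring: clearing more lines
# at once is worth disproportionately more points.
BASE_POINTS = {1: 40, 2: 100, 3: 300, 4: 1200}

def line_clear_score(lines_cleared: int, level: int) -> int:
    """Points awarded for clearing `lines_cleared` rows at `level`."""
    if lines_cleared == 0:
        return 0
    return BASE_POINTS[lines_cleared] * (level + 1)

# A four-line clear ("Tetris") at level 0 is worth 1200 points,
# versus only 4 * 40 = 160 for four separate single-line clears.
print(line_clear_score(4, 0))
```

Any of the three models could reasonably satisfy the requirement with a table like this, or with a different super-linear curve of its own.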
# The Results
// 1. Claude Opus 4.5
The Opus 4.5 model built exactly what I asked for.
The UI was clean, and instructions were displayed clearly on the screen. All the controls were responsive, and the game was fun to play.
The gameplay was so smooth that I actually ended up playing for quite a while and got sidetracked from testing the other models.
Also, Opus 4.5 took less than 2 minutes to provide me with this working game, leaving me impressed on the first try.


Tetris game built by Opus 4.5
// 2. GPT-5.2 Pro
GPT-5.2 Pro is OpenAI's latest model with extended reasoning. For context, GPT-5.2 has three tiers: Instant, Thinking, and Pro. At the time of writing this article, GPT-5.2 Pro is their most intelligent model, providing extended thinking and reasoning capabilities.
It is also 4x more expensive than Opus 4.5.
There was a lot of hype around this model, leading me to go in with high expectations.
Unfortunately, I was underwhelmed by the game this model produced.
On the first try, GPT-5.2 Pro produced a Tetris game with a layout bug. The bottom rows of the game were outside of the viewport, and I couldn't see where the pieces were landing.
This made the game unplayable, as shown in the screenshot below:


Tetris game built by GPT-5.2
I was especially surprised by this bug, since it took around 6 minutes for the model to produce this code.
I decided to try again with this follow-up prompt to fix the viewport problem:
The game works, but there's a bug. The bottom rows of the Tetris board are cut off at the bottom of the screen. I can't see the pieces when they land, and the canvas extends beyond the visible viewport.
Please fix this by:
1. Making sure the entire game board fits in the viewport
2. Adding proper centering so the full board is visible
The game should fit on the screen with all rows visible.
After the follow-up prompt, the GPT-5.2 Pro model produced a functional game, as seen in the screenshot below:


Tetris second try by GPT-5.2
However, the gameplay wasn't as smooth as that of the Opus 4.5 version.
When I pressed the "down" arrow for a piece to drop, the next piece would sometimes plummet instantly at high speed, not giving me enough time to think about how to place it.
The game ended up being playable only if I let each piece fall on its own, which wasn't the best experience.
(Note: I tried the GPT-5.2 Standard model too, which produced similarly buggy code on the first try.)
// 3. DeepSeek V3.2
DeepSeek's first attempt at building this game had two issues:
- Pieces started disappearing when they hit the bottom of the screen.
- The "down" arrow that's used to drop the pieces faster ended up scrolling the entire webpage rather than just moving the game pieces.


Tetris game built by DeepSeek V3.2
I re-prompted the model to fix these issues, and the gameplay controls ended up working correctly.
However, some pieces still disappeared before they landed. This made the game completely unplayable even after the second iteration.
I'm sure this issue could be fixed with 2–3 more prompts, and given DeepSeek's low pricing, you could afford 10+ debugging rounds and still spend less than one successful Opus 4.5 attempt.
# Summary: GPT-5.2 vs Opus 4.5 vs DeepSeek V3.2
// Cost Breakdown
Here is a cost comparison between the three models:
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| DeepSeek V3.2 | $0.27 | $1.10 |
| GPT-5.2 | $1.75 | $14.00 |
| Claude Opus 4.5 | $5.00 | $25.00 |
| GPT-5.2 Professional | $21.00 | $84.00 |
DeepSeek V3.2 is the cheapest alternative, and you can also download the model's weights for free and run it on your own infrastructure.
GPT-5.2 is almost 7x more expensive than DeepSeek V3.2, followed by Opus 4.5 and GPT-5.2 Pro.
For this specific task (building a Tetris game), we consumed roughly 1,000 input tokens and 3,500 output tokens.
For each additional iteration, we'll estimate an extra 1,500 tokens per round. Here is the total cost incurred per model:
| Model | Total Cost | Outcome |
|---|---|---|
| DeepSeek V3.2 | ~$0.005 | Game isn't playable |
| GPT-5.2 | ~$0.07 | Playable, but poor user experience |
| Claude Opus 4.5 | ~$0.09 | Playable and good user experience |
| GPT-5.2 Pro | ~$0.41 | Playable, but poor user experience |
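If you want to reproduce these figures, or plug in your own token counts, the per-attempt arithmetic is simple. This sketch uses the per-1M-token prices from the pricing table above; note that the totals in the results table also fold in the extra ~1,500 tokens per debugging round, which this single-attempt calculation omits:

```python
# Per-1M-token prices from the comparison table above: (input, output).
PRICES = {
    "DeepSeek V3.2": (0.27, 1.10),
    "GPT-5.2": (1.75, 14.00),
    "Claude Opus 4.5": (5.00, 25.00),
    "GPT-5.2 Pro": (21.00, 84.00),
}

def attempt_cost(input_price, output_price, in_tokens=1_000, out_tokens=3_500):
    """Dollar cost of one prompt/response round at the given token counts."""
    return in_tokens / 1e6 * input_price + out_tokens / 1e6 * output_price

for model, (inp, out) in PRICES.items():
    print(f"{model}: ${attempt_cost(inp, out):.4f} per attempt")
```

For example, Opus 4.5's single successful attempt comes out to about $0.09, matching the table, while GPT-5.2 Pro's first (flawed) attempt already costs roughly $0.32 before any debugging rounds.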
# Takeaways
Based on my experience building this game, I'd stick with the Opus 4.5 model for day-to-day coding tasks.
Although GPT-5.2 is cheaper than Opus 4.5, I personally wouldn't use it to code, since the iterations required to reach the same result would likely cost about the same amount of money.
DeepSeek V3.2, however, is far more affordable than the other models on this list.
If you're a developer on a budget and have time to spare on debugging, you'll still end up saving money even if it takes you over 10 tries to get working code.
I was surprised by GPT-5.2 Pro's inability to produce a working game on the first try, since it took around 6 minutes to think before coming up with flawed code. After all, this is OpenAI's flagship model, and Tetris should be a relatively simple task.
That said, GPT-5.2 Pro's strengths lie in math and scientific research, and it's specifically designed for problems that don't rely on pattern recognition from training data. Perhaps this model is over-engineered for simple day-to-day coding tasks, and is instead best reserved for building something complex that requires novel architecture.
The practical takeaways from this experiment:
- Opus 4.5 performs best at day-to-day coding tasks.
- DeepSeek V3.2 is a budget alternative that delivers reasonable output, although it requires some debugging effort to reach your desired result.
- GPT-5.2 (Standard) didn't perform as well as Opus 4.5, while GPT-5.2 (Pro) is probably better suited to complex reasoning than quick coding tasks like this one.
Feel free to replicate this test with the prompt I've shared above, and happy coding!
 
 
Natassha Selvaraj is a self-taught data scientist with a passion for writing. Natassha writes on everything data science-related, a true master of all data topics. You can connect with her on LinkedIn or check out her YouTube channel.
