Grant and Kyle dive into a comprehensive review and live test of the newly released Claude Opus 4.7, a cutting-edge large language model. This session explores its capabilities for coding and game dev, specifically referencing the "Renaissance / Plan Final Fantasy Tactics RPG Game" project. Discover how this AI model performs under pressure and its potential impact on game design workflows.
🔴 LIVE at 9:30AM PT / 12:30PM ET
Anthropic just dropped Claude Opus 4.7, and we’re putting it through the gauntlet in real time.
Join Grant Harvey (Lead Writer at The Neuron) for an unscripted, warts-and-all test of Anthropic’s newest flagship model.
What we’re testing
- Advanced coding on tasks Opus 4.6 struggled with
- New higher-resolution vision support for images up to ~3.75 megapixels
- File system-based memory across multi-session work
- The new xhigh effort level, which sits between high and max
- Claude Code’s new /ultrareview slash command
- Auto mode for longer, less-interrupted agent runs
Why this matters
Opus 4.7 is the first model Anthropic is releasing with its new automatic cyber safeguards, following last week’s Project Glasswing announcement.
It’s also the direct upgrade path from Opus 4.6 at the same price:
- $5 per million input tokens
- $25 per million output tokens
If you build on Claude, this is likely the model you’ll be using next.
What’s changing under the hood
- New tokenizer: the same input can map to more tokens depending on content type, roughly 1.0x to 1.35x the previous count
- State-of-the-art score on GDPval-AA, a third-party evaluation of economically valuable knowledge work
- Better instruction following, which means prompts written for earlier models may now behave differently
- Improvements across finance agent evals, document reasoning, and long-context tasks
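The tokenizer change interacts directly with the per-token pricing above: a 1.0x–1.35x multiplier on token counts is also a 1.0x–1.35x multiplier on cost. A minimal Python sketch of that arithmetic, assuming the $5/$25 per-million prices and the multiplier range from the announcement; the function name and the token counts are made-up illustrations, not measurements:

```python
# Illustrative cost arithmetic for the tokenizer change.
# Prices ($5/M input, $25/M output) and the 1.0x-1.35x multiplier
# range come from the announcement; everything else is hypothetical.

INPUT_PRICE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 25.00 / 1_000_000  # dollars per output token

def estimated_cost(input_tokens, output_tokens, tokenizer_multiplier=1.0):
    """Estimate request cost after scaling token counts by the multiplier."""
    scaled_in = input_tokens * tokenizer_multiplier
    scaled_out = output_tokens * tokenizer_multiplier
    return scaled_in * INPUT_PRICE + scaled_out * OUTPUT_PRICE

# The same hypothetical request at the best-case and worst-case multipliers:
best = estimated_cost(10_000, 2_000)         # 10k in, 2k out at 1.0x
worst = estimated_cost(10_000, 2_000, 1.35)  # same request at 1.35x
print(f"${best:.4f} vs ${worst:.4f}")
```

In the worst case, a prompt that cost $0.10 under the old tokenizer would cost about $0.135, so migration budgets should assume up to a 35% bump rather than identical bills at identical prices.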
Bring your hardest prompts. We’ll run them live and show you what breaks, what shines, and whether it’s worth migrating today.
Watch part two, where Grant covers Codex for (almost) anything: https://youtube.com/live/OiRkwm3-og0
📰 Full writeup in tomorrow’s newsletter:
🐱 Subscribe to The Neuron (700K+ readers): https://www.theneuron.ai