This Week's Focus ⤵️

Good afternoon, everyone (or maybe morning by the time this gets to you). Sorry I've been MIA on the newsletter for a few weeks. I've been running multiple Claude Code (Opus 4.6) and OpenAI Codex (GPT-5.3) agents non-stop, and the days have kind of blurred together.

Managing lots of agents is fun, but as a former boss of mine used to say about managing extremely smart engineers and scientists, it's sometimes like herding cats. It's part art, part science, and it takes a lot of patience and humility to let the bots do what they do best while you stay close enough to watch the details and keep everything moving in the right direction.

Anyway, I'm back for a brief update.

What I'd like to focus on in today's issue are some of the practical realities we're seeing and dealing with in this incredible new period of AI innovation. I've been repeating on conference calls lately that I haven't been this excited about technology since I was applying computer vision to brand campaigns well over ten years ago. Maybe more — I've lost track of time. The years stop adding up in my mind at a certain point.

Let's get into it.

Diving Deeper 🤿

For those of you who are regular readers, you know that for well over a year now Sprinklenet has been building the Knowledge Spaces platform — a managed services platform that acts as a control layer between enterprise data (documents, plus streaming data from enterprise applications that connect to Spaces via APIs) and LLMs. You know that Spaces is LLM-agnostic and supports a growing list of the major models — OpenAI's GPT, xAI's Grok, Google Gemini, Anthropic's Claude, and more. And you know that the end users can be anything from consumers using chatbots powered by Spaces to internal employees and partners doing the same. This middleware application layer is becoming more and more important to how enterprises apply the necessary control, configuration, and compliance as AI gets integrated and rolled out across their businesses.
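To make the "LLM-agnostic control layer" idea concrete, here's a minimal sketch of provider-agnostic routing. The class and provider names are purely illustrative — this is not the actual Knowledge Spaces API, just the shape of the pattern:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of an LLM-agnostic control layer: one routing
# interface sitting in front of interchangeable model providers.
# Names are illustrative, not the real Knowledge Spaces API.

@dataclass
class ModelRouter:
    providers: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        self.providers[name] = complete

    def ask(self, provider: str, prompt: str) -> str:
        if provider not in self.providers:
            raise KeyError(f"unknown provider: {provider}")
        # In a real middleware layer, control/compliance hooks
        # (logging, redaction, policy checks) would wrap this call.
        return self.providers[provider](prompt)

router = ModelRouter()
router.register("claude", lambda p: f"[claude] {p}")
router.register("gpt", lambda p: f"[gpt] {p}")
print(router.ask("claude", "Summarize Q3 churn"))  # → [claude] Summarize Q3 churn
```

The point of the pattern is that swapping or adding a model is a registration call, not a rewrite — which is what makes the compliance and configuration layer the stable part of the stack.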

Now — like any software company — we use code repositories, and we have a variety of contributors that work on the platform in different ways. I traditionally don't contribute code directly. But I oversee a lot in terms of design, system architecture, and integration touchpoints, because these things are so critical to the user experience that partners and end users have. I keep a close eye on structure, folders, content, changes. It's just part of what I do.

What's changed dramatically in the span of a month is my ability to contribute directly to code in a very systematic and scalable way.

I now use both Claude Code (Opus 4.6) and OpenAI Codex (GPT-5.3) to run my own local reviews of massive codebases. I find areas I want to work on and improve directly. We're still working some kinks out of the workflow and tooling, although I think we've largely got it dialed in — our speed and accuracy are higher than ever right now. Basically, I can sit between business requirements and suggestions from users (especially power users, who are incredibly important) and effect changes in the platform directly.

It's not perfect. And anyone who says these coding agents are going to put software engineers out of jobs is not really using these things themselves.

Here's what I think is actually happening: low-level and relatively unskilled software developers have big reasons to be scared. They either need to upskill themselves immediately or find new lines of work. But the strongest engineers — the ones who can keep the big picture in their heads while simultaneously operating and orchestrating multiple agents — these people are going to be worth a lot more than they already are.

I'm not the first person to say this. The research backs it up. According to Karat's engineering research, 73% of SVPs and CTOs now say strong engineers are worth at least 3x their total compensation, while 59% say weak engineers deliver net zero or negative value in the AI era. The delta between mediocre talent and top talent is widening. The mediocre performers won't just be unproductive — they'll be hindrances. They'll actively slow down the teams and systems around them. Meanwhile, the top human performers who work in harmony with the best AI tools will grow immensely in their productivity and ability to contribute to enterprise value.

The AI skills gap is now the steepest talent gap in technology. More than half of IT leaders face AI talent shortages, up from just 28% in 2023. For the first time, AI skills are harder to find than cybersecurity or big data talent. And the biggest gap in 2026 isn't prompt engineering — it's agentic engineering: the discipline of designing, governing, and operating AI agents that survive in production.

Core Principles in Practice 💡

Be curious. In order to learn, you have to dive in. It helps to have some background and understanding of what's going on with these tools, but at the end of the day there is no substitute for jumping in the water and learning how to swim. Download Claude Code. Download Codex. Start working on something. Deploy it, test it, refine it, iterate.

Security is no joke. You've heard about Moltbook — the social network where AI agents have been communicating with each other, forming communities, even attempting to create new religions and languages. Tens of thousands of bots talking to each other about their relationships with "their humans." Elon Musk called it the early stages of the singularity. Security researchers have already found hundreds of exposed instances leaking API keys and credentials. Palo Alto Networks identified what they call a "lethal trifecta": access to private data, exposure to untrusted content, and the ability to communicate externally. So yes — experiment, have fun, but be mindful that depending on the level of access you give these agents, they can quickly take on a life of their own.
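The "lethal trifecta" framing lends itself to a simple mechanical audit: flag any agent configured with all three capabilities at once. A toy sketch (the capability names are my own invention, not a real policy engine):

```python
# Sketch of a "lethal trifecta" audit for agent configurations: flag
# any agent that combines private-data access, exposure to untrusted
# content, and outbound communication. Capability names are illustrative.

TRIFECTA = {"private_data", "untrusted_content", "external_comms"}

def audit(agent_capabilities: set[str]) -> list[str]:
    """Return warnings; the full trifecta is the dangerous combination."""
    present = TRIFECTA & agent_capabilities
    if present == TRIFECTA:
        return ["LETHAL TRIFECTA: restrict at least one capability"]
    return [f"has {c}" for c in sorted(present)]

print(audit({"private_data", "untrusted_content", "external_comms"}))
print(audit({"private_data"}))
```

Any one or two of these capabilities can be fine; it's the combination that lets a prompt-injected agent exfiltrate data, so the check is on the intersection, not the individual grants.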

It's going to cost money. This stuff isn't free. OpenAI, Anthropic, xAI, and others are incredibly valuable because they're contributing enormous value to the economy. It costs money to run these tools, and the premium plans for power users are well worth it. If you want to play around, experiment, and build toys, you can do that for $20/month. But if you want to build enterprise systems and go into production, be prepared to spend significantly more.

It never stops. The power of agents allows you to move fast and the accuracy is incredible, but software and technology are living, breathing things (well, not literally — but you get my point). They require constant updates, improvements, and attention. This isn't a "set it and forget it" world.

A conductor doesn't make a sound. He depends, for his power, on his ability to make other people powerful.

Benjamin Zander

AI Trends & News 📰

Both Anthropic and OpenAI Dropped New Coding Models on the Same Day. On February 5th, Anthropic released Claude Opus 4.6 and OpenAI released GPT-5.3-Codex — the most capable agentic coding models to date from each company. Opus 4.6 features a 1M token context window (a first for Opus-class models) and leads Terminal-Bench 2.0. GPT-5.3-Codex is the first OpenAI model that was instrumental in its own creation. Both represent a massive leap in what's possible with AI-assisted development.

Moltbook: AI Agents Build Their Own Social Network. An AI social network called Moltbook exploded in late January, with over 32,000 AI agent users forming communities, upvoting posts, and interacting without human participation. Born from the OpenClaw project, it has sparked major conversations around AI autonomy, security, and what happens when agents can communicate freely.

Sonnet 4.6 Follows 12 Days Later. Anthropic followed up Opus 4.6 with Claude Sonnet 4.6 on February 17th — promising Opus-level coding performance at Sonnet pricing. The pace of releases is relentless.

Legacy Spotlight 🔧

Most enterprises aren't building from scratch. They're sitting on years — sometimes decades — of accumulated code, legacy frameworks, and technical debt. And this is exactly where AI coding agents can deliver the most value, or cause the most damage.

The opportunity is real. An agent like Claude Code can ingest a massive legacy codebase, map dependencies, identify dead code, and surface refactoring opportunities faster than any human team. I've seen it firsthand with our own repositories. You point the agent at a folder you haven't touched in months, and within minutes it's flagged inconsistencies, outdated patterns, and integration gaps you didn't know existed.
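The kind of repository triage described above can be crudely approximated in a few lines. This sketch just flags Python modules that no other module in the repo ever imports — a stand-in for the far deeper dependency mapping the agents actually do, and a rough heuristic only (it misses dynamic imports, packages, and entry points):

```python
import re
import tempfile
from pathlib import Path

def unimported_modules(repo: Path) -> set[str]:
    """Crude dead-code triage: module names never named in an import
    statement anywhere in the repo. A heuristic, not real analysis."""
    files = list(repo.rglob("*.py"))
    names = {f.stem for f in files if f.stem != "__init__"}
    imported: set[str] = set()
    for f in files:
        for m in re.finditer(r"^\s*(?:from|import)\s+([\w.]+)",
                             f.read_text(), re.MULTILINE):
            imported.update(m.group(1).split("."))
    return names - imported

# Tiny demo repo: a.py imports b; nothing imports a or c.
demo = Path(tempfile.mkdtemp())
(demo / "a.py").write_text("import b\n")
(demo / "b.py").write_text("VALUE = 1\n")
(demo / "c.py").write_text("VALUE = 2\n")
print(sorted(unimported_modules(demo)))  # → ['a', 'c']
```

Note that `a.py` gets flagged too, even though it's the entry point — exactly the kind of false positive that makes human review of agent-surfaced "dead code" non-negotiable.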

But here's the risk: legacy systems are full of context that doesn't live in the code. Business rules buried in obscure functions. Workarounds that exist for reasons nobody documented. Edge cases that only surface in production. An agent doesn't know any of that unless you tell it — and if you let it refactor aggressively without experienced engineers reviewing the output, you'll introduce subtle bugs that won't show up until they're in front of customers.

The rule of thumb we follow at Sprinklenet: agents propose, humans approve. Let the agents do the heavy lifting on analysis, documentation, and draft refactors. But keep your strongest engineers in the review loop, especially on anything that touches core business logic, data pipelines, or customer-facing systems. The agents are fast and they're accurate on syntax — but they don't understand your business the way your best people do. Not yet.
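"Agents propose, humans approve" can be enforced mechanically rather than by convention. A toy sketch of a review gate that queues anything touching sensitive areas for a human and auto-applies only low-risk paths — the path prefixes and labels here are invented for illustration, not Sprinklenet's actual rules:

```python
from dataclasses import dataclass

# Toy "agents propose, humans approve" gate: agent-generated patches
# to sensitive areas are queued for human review instead of being
# auto-applied. Path rules are invented for illustration.

SENSITIVE_PREFIXES = ("core/", "billing/", "pipelines/")

@dataclass
class Patch:
    path: str
    diff: str

def route(patch: Patch) -> str:
    """'needs-human-review' for sensitive paths, else 'auto-apply'."""
    if patch.path.startswith(SENSITIVE_PREFIXES):
        return "needs-human-review"
    return "auto-apply"

print(route(Patch("billing/invoice.py", "...")))  # → needs-human-review
print(route(Patch("docs/readme.md", "...")))      # → auto-apply
```

The design choice is that the gate defaults open for documentation-level changes and closed for business logic — review effort goes where your strongest engineers' context actually matters.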

Closer to Alignment

If you're an enterprise leader watching all of this unfold, the question isn't whether to adopt AI coding agents — it's how to integrate them without creating chaos.

The alignment challenge right now is organizational, not technical. The tools work. What's harder is getting your best engineers, your product managers, and your business stakeholders rowing in the same direction on how these agents get deployed, what guardrails are in place, and who is accountable for what the agents produce.

My recommendation: start with your strongest people. Give them the tools, give them room to experiment, and build your playbook from the ground up based on what actually works in your environment. Don't try to boil the ocean. Pick a meaningful project, let your best people run with it, and iterate from there.

Balanced & Insightful ⚖️

We're now in a world where two major AI coding agents dominate the landscape, and both dropped on the same day. Here's how they stack up:

| Feature | Claude Code (Opus 4.6) | OpenAI Codex (GPT-5.3) |
| --- | --- | --- |
| Release Date | February 5, 2026 | February 5, 2026 |
| Context Window | 1M tokens (beta) | Not publicly disclosed |
| Agentic Coding Benchmark | #1 on Terminal-Bench 2.0 | Top-tier (close competitor) |
| Speed | Optimized for long-horizon tasks (14.5 hr 50% time horizon) | 25% faster than GPT-5.2-Codex |
| Key Differentiator | Agent Teams — assemble multiple agents to collaborate on tasks | Interactive — steer and interact while it works without losing context |
| Cybersecurity | Found high-severity vulnerabilities in well-tested codebases | First OpenAI model rated "High" for cybersecurity under Preparedness Framework |
| Pricing (API) | $5 / $25 per million tokens (input/output) | API access coming soon; currently via Codex app, CLI, IDE |
| Self-Improvement | N/A | First model instrumental in its own creation |
| Notable Add-on | Adaptive thinking with effort controls | Codex-Spark (real-time coding at 1000+ tokens/sec via Cerebras) |
| Best For | Deep codebase analysis, long-running agentic tasks, enterprise workflows | Interactive development, research + tool use, rapid iteration |

The bottom line: these tools are not interchangeable — they have different strengths. The smartest teams are using both. Claude Code excels at deep, sustained work across large codebases. Codex shines when you need interactivity and rapid feedback loops. If you're only using one, you're leaving capability on the table.

A Note from Jamie

As I mentioned at the top, I haven't been this excited about the state of technology in over ten years. I see a window of opportunity right now to reinvent and dramatically improve the processes of so many businesses.

In the past I used to get excited about consumer applications. Now I'm excited about being the middleware — the layer that helps enterprises make all of this work optimally. I find it genuinely energizing to be in the details of model selection, model configurations, and working on the tools that will power how every business needs to improve their processes.

As we all know, there's the daily work you have to do just to keep the trains moving. But the real progress happens when you can sit down for sessions that are several hours long and focus without interruption. When you can get into the groove with the bots and run multiple agents in parallel — let Claude run several agentic processes, use Codex to verify and cross-check, jam to some house music on repeat, have a few coffees and some water with electrolytes, and just focus like a hound dog on building.

That's what I've been doing. And it's going well.

- Jamie Thompson

Need Expert Guidance?

Book a focused 1-hour strategy session with Jamie.

Evaluate current architecture and readiness
Identify quick wins and hidden risks
Get tailored, actionable next steps

👇🏼 Book a Strategy Call

Jamie’s Weekly Spotify Mix

This week's playlist is called Claude Code'ing Energy and it's only two tracks — on purpose.

When I'm deep in a multi-hour coding session with the agents, I don't want variety. I want rhythm.

I put "Dancing in the Moonlight" and "Gypsy" on repeat and let them become the background frequency for the work.

All energy goes into the project. No skipping, no browsing, no decisions about what to listen to next. Just momentum.

🎺🎧 Note: Web Edition Only
