This Week's Focus ⤵️

Good morning. This week I want to talk about two things: a new open source tool that changed how I work, and a growing conviction that the real productivity unlock isn't more agents. It's sharper focus with fewer, better ones.

Google quietly released an open source CLI for Google Workspace called GWS CLI. It gives you full programmatic access to Gmail, Drive, Calendar, Docs, Sheets, everything in Google Workspace, directly from your terminal. No browser. No clicking. Just commands.

I started running it inside Claude Code last week, and the effect was immediate. I can tell my AI agent to search my inbox, draft a response, pull a file from Drive, update a spreadsheet, and compose an email, all in one continuous session without ever switching context. The agent reads my mail, understands what needs a response, drafts it, and I review and send. The entire loop happens in the terminal. It's far more efficient, and produces higher-quality results, than doing the same work in a browser or app.
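To make that loop concrete, here's a minimal sketch of one triage pass. Every `gws` subcommand and flag below is an assumption for illustration, not the tool's actual interface; check the CLI's own help output for the real syntax. The `dry_run` default just composes the command strings so you can inspect what would run.

```python
# Hypothetical sketch of the inbox-triage loop. The `gws` subcommands and
# flags are invented for illustration; consult the real CLI for its syntax.
import shlex
import subprocess

def gws(args, dry_run=True):
    """Compose (and optionally execute) a hypothetical `gws` invocation."""
    cmd = ["gws"] + args
    if dry_run:
        # Return the shell-quoted command string instead of running it.
        return " ".join(shlex.quote(a) for a in cmd)
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# One pass of the loop: search mail, pull a file, draft a reply.
steps = [
    gws(["gmail", "search", "is:unread newer_than:1d"]),
    gws(["drive", "get", "Q3-roadmap.pdf"]),
    gws(["gmail", "draft", "--to", "client@example.com", "--subject", "Re: roadmap"]),
]
for s in steps:
    print(s)
```

The point isn't the specific commands; it's that every step stays in one session, so nothing forces a context reload.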

Here's the thing people don't talk about enough: humans have context windows too. 

Just like LLMs have a limited number of tokens they can hold in memory, your brain has a limited capacity for active focus. Every time you switch tabs, switch windows, switch devices, you're flushing your human context window. You're forcing a reload. And just like with an LLM, that reload is expensive.

The more I can do right in the terminal, the better and faster I am.

No switching tabs. No switching windows. No switching devices. It's getting closer to something like a neural interface. My thoughts flow directly into the tools. When I can think fast, I can usually type fast too. Maybe thanks to Mavis Beacon Teaches Typing on my Apple IIGS as a kid. I don't think my parents ever imagined that those rapid typing skills would someday help me interface with AI at 126 words per minute. But here we are.

This is what I mean when I say the terminal is my office. It's not about being a power user or showing off. It's that context breaks are the single biggest tax on knowledge work, and eliminating them changes everything.

Sponsor of This Week's AI Newsletter:

This edition is brought to you by Sprinklenet Knowledge Spaces: the control layer that sits between your guardrails and compliance requirements and your ability to rapidly integrate and swap out multiple LLMs across your organization, for both internal employees and external users.

Diving Deeper 🤿

Here's something I've been wrestling with, and I think it's worth saying out loud: the "just throw more agents at it" approach is overrated.

There's a lot of talk right now about running five, ten, twenty agents in parallel. Agent swarms. Autonomous pipelines. Let the AI do everything. And look, I've built some of those systems. They have their place. But after weeks of operating this way in production, with real work, real deadlines, and real money on the line, I've landed somewhere different.

The highest-value sessions are not the ones where I have the most agents running. They're the ones where my mind is sharp, I've identified the one or two things that actually matter, and I'm working with a single powerful terminal at maximum model intelligence on those things. A few agents can be running in parallel with the right tools connected, but you don’t need more than a few most of the time.

Think about it like this: you can have ten people in a room generating ideas, or you can have two people who deeply understand the problem working intensely on the solution. The ten-person room feels productive. The two-person room actually ships.

The same principle applies to AI-augmented work. Time is compressed dramatically. Output is through the roof. But you still have to decide what matters. You still have to prioritize ruthlessly. The tools don't do that for you, and if you let them run wild, you'll end up with a lot of motion and not a lot of progress.

I find myself constantly checking priorities. Am I working on the thing that creates the most value right now? Is this agent doing something that moves the needle, or is it just busy work that feels productive? That discipline, the human judgment layer, is the actual competitive advantage.

When I'm super clear on what I need to do and I can merge my imagination with the AI tools, maxed out on access and tokens, I can move fast. And that's the operating model I'm building Sprinklenet around.

Core Principles in Practice 💡

  • Focus beats volume. A sharp human with one great agent on the right problem will outperform a distracted human running ten agents on ten things. The bottleneck was never compute. It's attention and judgment.

  • CLI access is a multiplier. Tools like the GWS CLI remove the context-switching tax entirely. When your AI agent can read your email, check your calendar, and draft a document without you leaving the terminal, the workflow becomes a single continuous thought. That's not a small optimization. It's a different way of working.

  • We're building this into Knowledge Spaces. Inspired by what GWS CLI and MCP have done for my personal workflow, we're building CLI and MCP server access directly into Sprinklenet Knowledge Spaces. The vision: you tell your AI agent "create a new organization, add a knowledge space, upload these documents, configure a bot with this system prompt" and it handles all the API calls programmatically. No UI clicking. Knowledge Spaces already serves as the middleware control layer across multiple LLMs. Adding CLI/MCP access makes it even more powerful as the orchestration point for enterprise AI operations.

  • The CI/CD pipeline for knowledge work is real. This is where I see our development going at Sprinklenet. I go directly from the front lines, understanding business needs and objectives of clients, to creating really targeted, high-value problem-solving tools. Then we have internal reviews and staging, then it gets to production. We're launching something this week that we basically pulled off in a matter of weeks. Something that would have previously taken a year or more to develop. That compression is only possible when the human context is sharp and the tools are dialed in.

  • The max-value window is real and it's narrow. There's a sweet spot where the human mind is rested, focused, and locked in, and the AI tools are dialed to max intelligence on the right problem. That window produces more value in two hours than an unfocused eight-hour day with every tool running. Protect that window. Structure your day around it.
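The Knowledge Spaces provisioning flow described in the bullets above can be sketched as a handful of programmatic calls. To be clear: none of these class, method, or field names come from the real Knowledge Spaces API; they're assumptions standing in for whatever the CLI/MCP layer will eventually expose, modeled in memory so the shape of the flow is visible.

```python
# Hypothetical sketch of the "create org -> add space -> upload docs ->
# configure bot" flow. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class KnowledgeSpacesClient:
    orgs: dict = field(default_factory=dict)

    def create_org(self, name):
        self.orgs[name] = {"spaces": {}}

    def add_space(self, org, space):
        self.orgs[org]["spaces"][space] = {"docs": [], "bot": None}

    def upload_docs(self, org, space, paths):
        self.orgs[org]["spaces"][space]["docs"].extend(paths)

    def configure_bot(self, org, space, system_prompt):
        self.orgs[org]["spaces"][space]["bot"] = {"system_prompt": system_prompt}

# The agent would translate one natural-language instruction into these calls:
client = KnowledgeSpacesClient()
client.create_org("acme")
client.add_space("acme", "support-kb")
client.upload_docs("acme", "support-kb", ["faq.md", "runbook.md"])
client.configure_bot("acme", "support-kb", "Answer from the uploaded docs only.")
```

The design point is that each step is idempotent-shaped and scriptable, which is exactly what lets an agent chain them without UI clicking.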

Beware the barrenness of a busy life.

Socrates

AI Trends & News 📰

🔒 Pentagon Labels Anthropic a Supply Chain Risk

In a dramatic escalation, the Department of War formally designated Anthropic, maker of Claude, as a supply chain risk, requiring defense contractors to certify they don't use Anthropic's models in Pentagon work. Anthropic is the first American company to receive this designation, previously reserved for foreign adversaries. The dispute centers on Anthropic's refusal to grant the DoW unrestricted access to Claude. CEO Dario Amodei is back at the negotiating table while challenging the designation in court. Public sentiment has swung in Anthropic's favor. Claude app downloads surged while ChatGPT saw uninstallation spikes.

🚀 OpenAI Releases GPT-5.4 and Strikes DoW Deal

OpenAI launched GPT-5.4 (available as "Thinking" and "Pro" tiers) alongside a deal with the Department of War, announced just hours after the Anthropic blacklisting. GPT-5.4 supports up to 1M token context windows and scored a record 83% on OpenAI's GDPval benchmark. CEO Sam Altman later admitted the deal "looked opportunistic and sloppy" and amended the contract to include surveillance protections.

🔬 Google Ships Gemini 3.1 Flash-Lite and Deep Research Agent

Google released Gemini 3.1 Flash-Lite, the fastest model in the Gemini 3 series with 2.5x faster time-to-first-token. The standout: the Gemini Deep Research Agent, which autonomously plans and synthesizes multi-step research tasks, achieving 46.4% on Humanity's Last Exam. Google also rolled out Canvas in AI Mode to all US Search users.

Another Weekend Claude Sesh

Closer to Alignment 🏆

Last week I talked about decisiveness and the human capital question. This week I want to build on that with something more personal.

I've been in this terminal-first mode for weeks now, and the productivity gains are real. But I've also noticed the trap: it's easy to confuse activity with progress. When you can spin up agents, draft responses, scan opportunities, and publish content at machine speed, you start to feel like you should be doing all of it, all the time.

That's wrong. And it took me a few bruising weeks to internalize it.

The right model is not "do everything faster." It's "identify the two or three things that create maximum leverage, and execute on those at maximum intelligence." Let the rest wait. Let some things not get done at all. The discipline to say "not now" to a good opportunity because you're locked into a great one. That's the skill that matters most in 2026.

This is the same lesson from last week's Block story, just applied at the individual level. Jack Dorsey didn't cut 4,000 people so the remaining employees could do 4,000 people's worth of busy work with AI. He cut them so a focused team could do the right work at a higher level.

Balanced & Insightful ⚖️

There's a seductive narrative in the AI productivity space that goes: "just automate everything, run a hundred agents, scale yourself infinitely." It sounds great in a newsletter. But the reality is more nuanced.

The tools are extraordinary. I can do things in hours that used to take weeks. But the days where the most actual value gets produced, the code that ships, the conversations that land, are the days where the human is rested, focused, and locked onto the right problem. Not the days with the most agents running.

The GWS CLI, MCP integrations, Knowledge Spaces, Claude Code. These are all force multipliers. But a multiplier only works if you're multiplying the right thing. Zero times a thousand is still zero.

The organizations that win in 2026 won't be the ones with the most AI tools. They'll be the ones with the sharpest judgment about where to point them.

This edition was mostly written in the terminal.

The workflow: I draft and edit content in Claude Code, use a headless browser agent to push it into my newsletter templates via the API, and generate images on the fly with an OpenAI API key. When an image comes back and I don't like it, I tell the agent to try again with different parameters. Most of the creative decisions still happen in real time between me and the tools.
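The "try again with different parameters" step can be sketched as a tiny feedback-to-parameters mapping. The model name, size values, and feedback keywords below are all illustrative assumptions; in practice the adjusted parameters would be passed to a real image-generation API call with a valid key, which is omitted here.

```python
# Hedged sketch of the regenerate-until-approved loop. Parameter names and
# values are assumptions, not a real API contract.
def next_params(params, feedback):
    """Adjust generation parameters from simple feedback keywords (illustrative)."""
    updated = dict(params)
    if "bigger" in feedback:
        updated["size"] = "1792x1024"
    if "simpler" in feedback:
        updated["prompt"] = updated["prompt"] + ", minimalist, flat style"
    return updated

params = {"model": "gpt-image-1", "prompt": "terminal window at dawn", "size": "1024x1024"}
params = next_params(params, "simpler and bigger please")
```

In the real workflow the "feedback" is my natural-language reaction and the agent decides the adjustment; this just shows that the loop is a pure parameter transform plus a retry.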

Final editing is still a bit annoying. I have to hop into the browser for that last pass in the Beehiiv editor, tweaking layout, adjusting image placement, making sure everything renders correctly. It's tedious. It breaks the flow. And it's a reminder that this stuff isn't perfect yet.

But honestly, if it were perfect, I probably wouldn't be writing this newsletter at all. There would be no need. The fact that there's still a gap between what AI can do and what a human needs to do is exactly where the value lives right now. That gap is closing fast, but today, it's still where the interesting work happens.

Have a good week, everyone. If you need help or have an interesting enterprise AI situation that needs some human + AI support, just reply to this email. I read every one.

Need Expert Guidance?

Book a focused 1-hour strategy session with Jamie.

Evaluate current architecture and readiness
Identify quick wins and hidden risks
Get tailored, actionable next steps

👇🏼 Book a Strategy Call

Jamie’s Weekly Spotify Mix 🎵

These are the last four songs I listened to before publishing this edition. I suppose that makes them the playlist. That said, I have a feeling I'll be back to Fleetwood Mac and the workout remixes for most of the week.

🎺🎧 Note: Web Edition Only

