
Why Zero Trust for AI Now

Hello everyone, and welcome to this week's edition of the Sprinklenet AI Newsletter.

AI adoption is accelerating rapidly, accompanied by heightened risks of data exposure through prompts and outputs.

The risks are real, but the rewards of getting AI right are far too big to justify doing nothing.

I wanted to address a topic that frequently comes up in discussions with enterprises integrating AI into their existing IT systems and business processes: the application of zero trust protocols.

Zero Trust principles offer vital safeguards, verifying every interaction while empowering teams to innovate with confidence.

In this edition, we explore how cryptographic techniques, such as split-key sharing, can enhance AI controls, aligning them with established industry security standards.

But first, a word from this week's sponsor, Skej.

An AI scheduling assistant that lives up to the hype.

Skej is an AI scheduling assistant that works just like a human. You can CC Skej on any email, and watch it book all your meetings. It also handles scheduling, rescheduling, and event reminders.

Imagine life with a 24/7 assistant who responds so naturally, you’ll forget it’s AI.

  • Smart Scheduling
    Skej handles time zones and can scan booking links.

  • Customizable
    Create assistants with their own names and personalities.

  • Flexible
    Connect to multiple calendars and email addresses.

  • Works Everywhere
    Write to Skej on email, text, WhatsApp, and Slack.

Whether you’re scheduling a quick team call or coordinating a sales pitch across the globe, Skej gets it done fast and effortlessly. You’ll never want to schedule a meeting yourself, ever again.

The best part? You can try Skej for free right now.

From Crypto to Controls

Implementing the concepts of Shamir's Secret Sharing, separation of duties, and advanced encryption techniques in AI workflows may initially appear burdensome due to the added layers of security and coordination.

However, in practice, these approaches are increasingly feasible in enterprise IT environments through the use of mature tools, cloud services, and automated pipelines. They enable organizations to protect intellectual property while leveraging large language models (LLMs) for tasks such as querying proprietary data or engaging in conversational applications.

Below, I outline how this operates in a real-world enterprise setting, focusing on retrieval-augmented generation (RAG) pipelines with vector databases and streaming data integration.

The explanation draws on established practices and technologies to ensure clarity and applicability.

Core Principles in Practice

Threshold Keys In Production

(Skip if you only need the high-level view.)

A two-of-three key scheme protects a RAG chatbot without slowing it down. All data is encrypted before it reaches the vector store. The master key is split with Shamir’s Secret Sharing:

  • Share A lives in a cloud KMS.

  • Share B sits in a hardware security module (HSM).

  • Share C stays offline for disaster recovery.

When a user submits a prompt, an API gateway checks identity and policy. A short-lived microservice then pulls just the needed chunks, assembles Shares A and B in memory, decrypts, and sends the redacted context to the external LLM. The reply is scanned for policy violations, re-encrypted if needed, and returned. Logs record which shares were used, so audits are simple. Calls to the KMS and HSM take only milliseconds, and the microservice is destroyed after each request, leaving no plaintext at rest.
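The two-of-three split described above can be sketched in a few lines. This is a minimal illustration of Shamir's Secret Sharing over a prime field, not the production scheme: real deployments keep shares in a KMS and an HSM and never handle the key in plain Python, and the share labels and key values here are invented for the example.

```python
# Minimal 2-of-3 Shamir's Secret Sharing sketch (illustrative only).
import secrets

PRIME = 2**127 - 1  # prime field large enough for a 16-byte key

def split_2_of_3(secret: int) -> dict:
    """Split `secret` into shares A, B, C; any two recover it."""
    a1 = secrets.randbelow(PRIME)  # random degree-1 coefficient
    def f(x):
        return (secret + a1 * x) % PRIME
    return {"A": (1, f(1)), "B": (2, f(2)), "C": (3, f(3))}

def recover(share1: tuple, share2: tuple) -> int:
    """Lagrange interpolation at x=0 from any two shares."""
    (x1, y1), (x2, y2) = share1, share2
    inv = pow(x2 - x1, -1, PRIME)  # modular inverse of (x2 - x1)
    # f(0) = y1 * x2/(x2-x1) + y2 * x1/(x1-x2)  (mod PRIME)
    return (y1 * x2 * inv - y2 * x1 * inv) % PRIME

master_key = secrets.randbelow(PRIME)
shares = split_2_of_3(master_key)
# Shares A (cloud KMS) + B (HSM) are combined in memory; C stays offline.
assert recover(shares["A"], shares["B"]) == master_key
```

The point of the threshold: share C never needs to come online for normal traffic, yet losing any single share (or any single custodian) never exposes or destroys the key.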

Zero Trust AI Stack At A Glance

Zero Trust verifies every request. In an AI workflow that means:

  1. Identity First: SSO and MFA protect users and service accounts.

  2. Data Centric: Classify, tokenize, and encrypt before any model call.

  3. Context Aware: A gateway evaluates risk signals in real time.

Key acronyms you will see: ZTNA (session-level network access based on identity) and PDP / PEP (runtime policy decision and enforcement points). Implementing them gives you inline control over prompts, outputs, and routing.
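To make the PDP/PEP split concrete, here is a hypothetical sketch of a gateway decision function. The policy rules, data classes, and risk thresholds are invented for illustration; a real deployment would pull these from a policy engine and live risk signals.

```python
# Hypothetical PDP/PEP sketch for an AI gateway (rules are illustrative).
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # SSO/MFA passed
    data_class: str            # e.g. "public", "internal", "restricted"
    risk_score: float          # 0.0 (low) .. 1.0 (high), from context signals

def decide(req: Request) -> str:
    """Policy Decision Point: returns allow / redact / deny."""
    if not req.user_authenticated:
        return "deny"                  # identity first
    if req.data_class == "restricted":
        return "deny"                  # restricted data never leaves
    if req.risk_score > 0.7 or req.data_class == "internal":
        return "redact"                # strip sensitive fields first
    return "allow"

def enforce(req: Request, prompt: str):
    """Policy Enforcement Point: applies the decision inline."""
    verdict = decide(req)
    if verdict == "deny":
        return None
    if verdict == "redact":
        prompt = prompt.replace("ACME Corp", "[CLIENT]")  # placeholder DLP
    return prompt
```

Keeping the decision (PDP) separate from the enforcement (PEP) is what lets one policy definition govern many gateways and model routes.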

Executive Checklist

  • Enforce least-privilege roles with just-in-time elevation.

  • Route all model traffic through a policy gateway with immutable logs.

  • Apply DLP and PII detection on both prompts and responses.

  • Require vendors to honor zero data retention and clear IP terms.

This combination of split keys, granular encryption, and policy gateways keeps IP safe while end users enjoy a seamless chatbot experience.
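The checklist item on DLP deserves one concrete note: the same scrubber should run on both sides of the model boundary. A minimal sketch, assuming regex-based detection (production DLP uses classifiers plus curated pattern libraries; the patterns and `call_model` wrapper here are illustrative):

```python
# Symmetric DLP sketch: scrub the outbound prompt and the inbound response.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace detected PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def call_model(prompt: str, model_fn) -> str:
    """Apply DLP on both sides of the external model call."""
    safe_prompt = scrub(prompt)    # prompts: before the call
    response = model_fn(safe_prompt)
    return scrub(response)         # responses: after the call
```

Scrubbing responses matters because retrieval can surface PII the prompt never contained.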

In God we trust. All others must bring data.

W. Edwards Deming

📰 AI Trends & News

Zero-Click Prompt Injection Shown at Black Hat 2025. Researchers demonstrated attacks that silently embed rogue prompts in popular AI agents, extracting data from connected knowledge bases and defeating simple guardrails.

Poisoned Document Exploit Leaks Data via ChatGPT Connectors. A Wired investigation revealed that a single crafted file can force ChatGPT to exfiltrate “secret” content pulled from linked repositories, underscoring the need for strong Zero Trust boundaries around LLM integrations.

🔧 Legacy Spotlight

Many enterprises still run ERP, CRM, and data-warehouse platforms designed long before AI. If a language model hooks in directly, long-lived trust assumptions break and blind spots appear.

Control the edge. Put an API gateway in front of each legacy system to verify identity, enforce least-privilege scopes, and strip or mask sensitive fields.

Expose only what is needed. Offer the LLM pre-approved, cached, or aggregated views rather than raw tables.

Encrypt early. Tokenize or encrypt critical columns before data leaves the source so no PII or trade secrets ever reach the model.
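"Encrypt early" can be as simple as tokenizing sensitive columns inside the source system, so records reach the LLM with opaque tokens instead of plaintext. A sketch under stated assumptions: the column names, token key, and HMAC-based scheme below are hypothetical, and a production setup would keep the key in a vault and the detokenization map inside the legacy boundary.

```python
# Sketch: tokenize sensitive columns before a record leaves the source system.
import hashlib
import hmac

TOKEN_KEY = b"per-environment-secret"        # held only by the source system
SENSITIVE_COLUMNS = {"customer_name", "tax_id"}

def tokenize(value: str) -> str:
    """Deterministic keyed token: stable for joins, opaque to the model."""
    digest = hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"tok_{digest[:16]}"

def prepare_for_llm(record: dict) -> dict:
    """Replace sensitive fields with tokens; pass the rest through."""
    return {k: tokenize(v) if k in SENSITIVE_COLUMNS else v
            for k, v in record.items()}

row = {"customer_name": "Jane Doe", "tax_id": "12-3456789", "region": "EU"}
safe = prepare_for_llm(row)   # only "region" remains in plaintext
```

Deterministic tokens preserve joins and grouping in downstream queries while keeping the actual values inside the legacy boundary.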

Closer to Alignment

Policy, architecture, and adoption often drift. Bring them together with a single control plane. One place to define data classes and handling rules. One gateway for model access and routing. One log of record for prompts, outputs, and retrieval events. One risk review cadence that includes security, legal, privacy, and business owners.

When policy and plumbing match, teams move faster with less risk.

🕚 Balanced & Insightful

Innovation stalls when every idea must clear production-level hurdles, yet risk spikes when security rules come after launch.

A practical middle ground is to give teams three clearly marked lanes:

  • Explore: fast experiments and prompt tinkering.
    Guardrails: synthetic or scrubbed data only; no external model keys; automatic deletion after 30 days.

  • Pilot: proof-of-concepts on real use cases.
    Guardrails: DLP and tokenization on by default; all prompts and outputs logged; weekly review with Security and Data owners.

  • Production: customer-facing or business-critical workloads.
    Guardrails: policy routing, least-privilege roles, human sign-off on sensitive actions, continuous tests for injection and leakage.
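The three lanes above are most useful when they are machine-readable rather than a slide. A minimal sketch, assuming invented flag names, of lanes as policy plus one gate check:

```python
# Sketch: the three lanes as machine-readable policy (flag names invented).
LANES = {
    "explore":    {"real_data": False, "external_keys": False,
                   "retention_days": 30,   "logging": False},
    "pilot":      {"real_data": True,  "external_keys": True,
                   "retention_days": None, "logging": True},
    "production": {"real_data": True,  "external_keys": True,
                   "retention_days": None, "logging": True},
}

def allowed(lane: str, uses_real_data: bool) -> bool:
    """Gate a workload: e.g. real data never enters the explore lane."""
    policy = LANES[lane]
    return policy["real_data"] or not uses_real_data
```

Checks like this can run in CI or at the gateway, so a workload cannot drift into a lane whose guardrails it violates.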

Leadership Mandate

Questions for executives this week:

  1. Which data classes can exit our environment, and under what conditions?

  2. Are prompts and outputs logged immutably, and are those logs auditable?

  3. Do policies control model choice, version, and region?

  4. Can we revoke access or rotate keys in minutes?

  5. Are sandboxes separated from production with distinct controls?

  6. Do vendors ensure zero retention and customer IP ownership?

  7. Is sensitive data tokenized or encrypted before external calls?

  8. Are filters automated for injections, exfiltration, and unsafe outputs?

  9. Who handles AI incident response, and what is the playbook?

  10. How do we measure value against risk?

Executive Checklist

  • Data classified and mapped to controls.

  • Gateway with routing, policy, and kill switches.

  • Logging to SIEM for reviews.

  • DLP at boundaries.

  • Tokenization with managed keys.

  • Vendor terms protecting IP.

  • Quarterly red teaming for threats.

A Note from Jamie:

AI should extend your reach, not your risk.

Zero Trust is the discipline that keeps us honest—defined rules, strong boundaries, and proof that they work.

When identity, data, and context stay at the center, you can scale AI with confidence while protecting the assets that make your business unique.

Need Expert Guidance?

Book a focused 1-hour strategy session with Jamie.

Evaluate current architecture and readiness
Identify quick wins and hidden risks
Get tailored, actionable next steps

👇🏼 Book a Paid Strategy Call

AI leaders only: Get $100 to explore high-performance AI training data.

Train smarter AI with Shutterstock’s rights-cleared, enterprise-grade data across images, video, 3D, audio, and more—enriched by 20+ years of metadata. With 600M+ assets and scalable licensing, we help AI teams improve performance and simplify data procurement. If you’re an AI decision maker, book a 30-minute call—qualified leads may receive a $100 Amazon gift card.

For complete terms and conditions, see the offer page.

Jamie’s Spotify Playlist

This week I’m spinning an 80s soft-rock mix with favorites like Elton John, Julio Iglesias, and Nana Mouskouri. It’s soft rock I can keep on in the background while grinding through the day. Lots to do. Thank God for music! 🙂

🎺🎧 Note: Web edition only.

What did you think of this week's edition?

Help us shape topics

