The Sovereignty Protocol
platform · BYOK · models · foundation

IDE & Model Agnostic: Build With What You Have

The Sovereignty Protocol does not care which IDE you use, which model you prefer, or which cloud provider you trust. It runs on your tools, with your keys, at your cost. Here is what true model-agnostic AI governance looks like in practice.

30 April 2026 · 5 min read · The Sovereignty Protocol Team

The Lock-In Problem

Every AI platform wants to be your everything. Your model, your interface, your runtime, your storage. The value proposition is convenience: one place for everything. The cost is dependency.

When a provider updates their model, your agents may behave differently. When pricing changes, your costs change. When a service goes down, your workflows stop. You traded control for convenience, and you may not have realised the full price of that trade.

The Sovereignty Protocol takes the opposite position. It is designed to be the governance layer you own, running on top of the models and tools you choose. Nothing about the platform forces you into a specific provider, a specific model, or a specific deployment environment.


Any IDE

The Sovereignty Protocol is not tied to any development environment. It is a set of files, conventions, and services, not an application that requires a specific editor.

You can use it with:

  • VS Code, Cursor, or any fork
  • JetBrains IDEs (IntelliJ, PyCharm, WebStorm)
  • Neovim or Emacs with LSP support
  • The terminal, directly
  • Any web-based editor that can access your files

The governance layer (laws, flows, templates, skills) is defined in YAML and Markdown. Any editor that can open text files can work with it. Your CI/CD pipeline can lint it. Your version control can track every change. There is no proprietary format to learn or export.
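To make that concrete, a law file might look something like the sketch below. The schema here is purely hypothetical (the article does not show the real format); it only illustrates that the governance layer is plain, diffable text:

```yaml
# laws/workspace_boundary.yaml — hypothetical example; the platform's
# actual law schema is not shown in this article.
law:
  id: workspace-boundary
  description: Agents may not write files outside the project workspace.
  severity: block          # a violation halts the operation
  applies_to: [all_agents]
```

Because it is just YAML, a linter in CI can validate it and every change shows up as an ordinary diff in version control.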


Any Model

The platform ships with a full routing layer that supports 348+ models across NVIDIA NIM and OpenRouter. But it does not require you to use any of them.

The Bring Your Own Key (BYOK) system lets individual users configure their own provider account and preferred model. When configured, that model takes precedence over platform defaults for every operation that user initiates. You can mix and match: some users on platform defaults, others on their own Claude or GPT-4o or Gemini key.

The model routing configuration is a YAML file, model_routing.yaml, that you own and version-control. Want to change which model handles research tasks? Edit one line. Want to add a fallback chain for when your primary model is unavailable? Add it to the config. No redeployment required.
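A sketch of what such a routing file might contain follows. The task names, field names, and model identifiers are illustrative assumptions, not the platform's actual schema:

```yaml
# model_routing.yaml — illustrative only; real field names may differ.
tasks:
  research:
    model: anthropic/claude-sonnet-4      # edit this one line to swap models
    fallbacks:                            # tried in order if the primary fails
      - openai/gpt-4o
      - meta/llama-3.1-70b-instruct
  background:
    model: meta/llama-3.1-8b-instruct     # cheaper model for bulk tasks
```

The point of the shape is that routing decisions live in one owned file rather than in provider-side settings.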


Any Provider

The platform's LLM layer currently routes through NVIDIA NIM (primary) and OpenRouter (secondary). But the provider abstraction is clean: adding a new provider is a matter of implementing the Provider interface in Go and registering it in the routing config.

For self-hosted deployments, you can point the routing layer at any OpenAI-compatible endpoint: Ollama, LocalAI, vLLM, or any private model server. The governance layer does not change. Your laws, your memory, your audit logs all remain intact, regardless of what sits underneath.
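For example, pointing the routing layer at a local Ollama instance (which exposes an OpenAI-compatible API under /v1) might look like the fragment below; the provider-config keys are assumptions:

```yaml
# Hypothetical provider block; key names are illustrative.
providers:
  local:
    kind: openai-compatible
    base_url: http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
    api_key_env: LOCAL_API_KEY            # many local servers accept any key
```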


Any Cloud, Any Machine

The backend is a single Go binary. The frontend is a Next.js static export. The database is SQLite via PocketBase. The entire stack runs on a single machine with no external dependencies.

You can deploy it on:

  • A local laptop for personal use
  • A VPS or dedicated server
  • A Docker container in any cloud (AWS, GCP, Azure, Hetzner, or anything else)
  • A bare-metal server in your own infrastructure
  • A Raspberry Pi, if you want to be creative

There is no mandatory cloud service, no required SaaS dependency, no "call home" telemetry. The platform is yours.
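As one example of how small the deployment surface is, a container image for this kind of single-binary stack could be as minimal as the sketch below. The binary name, flags, and directory layout are assumptions, loosely following PocketBase conventions:

```dockerfile
# Illustrative only: binary name, flags, and paths are assumed.
FROM gcr.io/distroless/static-debian12
COPY sovereignty /app/sovereignty   # the single Go binary
COPY pb_public   /app/pb_public     # Next.js static export
WORKDIR /app
VOLUME /app/pb_data                 # SQLite data (PocketBase)
EXPOSE 8090
ENTRYPOINT ["/app/sovereignty", "serve", "--http=0.0.0.0:8090"]
```

The same binary runs unchanged on a laptop, a VPS, or bare metal; the container is a packaging convenience, not a requirement.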


What This Means for Your Costs

When you run your own models with your own keys, your costs are transparent and directly controlled. You see exactly how many tokens each operation consumes. You can configure cost caps per operation type. You can run cheaper models for bulk background tasks and reserve expensive models for high-stakes interactions.

The platform's usage tracking in Sentinel's BYOK dashboard shows you per-request cost estimates, provider usage, and token counts. You are not paying a markup on API calls; you are paying your provider directly, and you can see exactly what you are paying for.


The Principle Behind the Design

The reason the Sovereignty Protocol is designed this way is not just commercial. It is philosophical.

A governance framework that creates dependency is not really governance; it is another form of control being exerted over your systems, just by a vendor instead of an agent. True sovereignty means the framework you use to govern your AI is itself something you own, understand, and can modify.

You should be able to read the laws that govern your agents. You should be able to change them. You should be able to run the entire system on a machine with no internet connection if you choose to.

That is what model-agnostic, IDE-agnostic, cloud-agnostic design actually means. Not just compatibility, but ownership.
