MCP, Skill, Agent Catalog

One registry for models,
agents, and MCP servers.

The industry is converging on OCI artifacts for packaging AI. Jozu Hub is the enterprise registry purpose-built for it — with security scanning, cryptographic signing, policy gating, and audit trails for every model, agent skill, and MCP server you deploy. Runtime governance is enforced by Jozu Agent Guard.

THE PROBLEM

351,000 agent skills. No governance for any of them.

Agent skills went from a few thousand to over 350,000 in three months. MCP servers are multiplying just as fast. Developers share them via git repos and zip files — with no versioning, no scanning, no signing, and no audit trail.

The same supply chain chaos that plagued container images before OCI registries is now playing out with AI artifacts. Except this time, the artifacts take actions on your behalf.

  • Unvetted skills and MCPs

    Employees download agent skills from public repos with no provenance, no scanning, and no cryptographic signing. Your security team never sees them, and attackers have already used malicious public skills to exfiltrate data and deliver malware inside enterprises.

  • No versioning or rollback

    When something breaks in production, there's no reliable way to roll back to a known-good state, or even confirm which version is running. Skills shared as git repos or zip files have no immutable versioning.

  • Fragmented toolchains

    Models in one registry, MCP servers on GitHub, agent skills in zip files, policies in YAML repos. No single system governs them all — and no connected audit trail links what was approved to what actually ran.

  • Compliance blind spots

    EU AI Act and NIST AI RMF require artifact provenance, SBOMs, and audit trails for every AI component in production. You can't generate any of that from a git clone or a shared zip file.

WHY OCI

The industry agrees: OCI artifacts are the answer.

Docker, JFrog, Red Hat, and the CNCF are all converging on OCI artifacts as the standard for packaging AI components. The MCP Registry itself is adding OCI as a first-class distribution channel.

The reasoning is straightforward: enterprises already run OCI registries with scanning, signing, access controls, and compliance workflows. Why build parallel infrastructure for AI?

Jozu uses KitOps to package AI artifacts as OCI:

Existing Infrastructure

Use Jozu Hub, or the registries you already run — Harbor, ECR, GCR, Artifactory, Docker Hub. No new tools to learn.

Immutable Versioning

Every change creates a new tagged, immutable artifact. Historical lineage is preserved. Rollbacks are reliable. No more “which version was that?”

Supply Chain Security

Cryptographic signatures via Cosign, SBOMs, vulnerability scanning, and provenance attestations. The same toolchain you use for container images.

Content-Addressable

SHA-256 digests for every component. What you pulled is exactly what was pushed. Tamper detection is built into the format.
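The digest check here is the same one every OCI client performs on pull. A minimal sketch of the idea (the layer bytes and digest below are placeholders, not real artifact content):

```python
import hashlib

def verify_digest(blob: bytes, expected: str) -> bool:
    """Recompute the SHA-256 digest of pulled content and compare it to the
    digest recorded in the artifact manifest (OCI convention: 'sha256:<hex>').
    Any change to the bytes changes the digest and fails the check."""
    actual = "sha256:" + hashlib.sha256(blob).hexdigest()
    return actual == expected

# Example: a layer whose manifest-recorded digest we trust.
layer = b"model weights bytes"
expected = "sha256:" + hashlib.sha256(layer).hexdigest()

assert verify_digest(layer, expected)              # untouched content verifies
assert not verify_digest(layer + b"x", expected)   # tampering is detected
```

Because the tag resolves to a digest and the digest is recomputed from the content itself, "what you pulled is what was pushed" is a mathematical property of the format rather than a trust assumption.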

Vendor Neutral

OCI is an open standard governed by the Linux Foundation. Jozu uses KitOps ModelKits, a CNCF project that is open source and portable across any compliant registry.

Compliance Ready

Signed artifacts with SBOMs and attestations map directly to EU AI Act, NIST AI RMF, and ISO 42001 requirements. Audit-ready from day one.
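In practice, packaging starts with a Kitfile, the KitOps manifest that declares what goes into a ModelKit. A minimal sketch, with illustrative names and paths:

```yaml
manifestVersion: "1.0"
package:
  name: fraud-detector
  version: 1.2.0
  description: Fraud scoring model with its agent skill and serving code
model:
  name: fraud-detector
  path: ./model.onnx
code:
  - path: ./skills/triage-skill
datasets:
  - name: training-data
    path: ./data/train.parquet
```

From there, `kit pack . -t jozu.ml/acme/fraud-detector:1.2.0` builds the ModelKit and `kit push` publishes it to any OCI-compliant registry (the repository path above is illustrative). Each push produces a new tagged, content-addressed artifact.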

The Solution

Jozu Hub: The Enterprise
MCP & Skill Registry

Jozu Hub does what general-purpose OCI registries can't: purpose-built security scanning for AI artifacts, policy gating before promotion, cryptographic audit trails, and a native MCP Registry API that VS Code, Cursor, and Claude Desktop connect to directly.

Everything you need to govern
models, agents, and MCP servers.

JOZU HUB — SUPPLY CHAIN

MCP, Skill, Agent Catalog

Implements the MCP registry specification. VS Code, Cursor, and Claude Desktop point directly to Jozu Hub for a centrally curated catalog of security-scanned MCP servers and agent skills.
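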

Comprehensive Security Scanning

ModelScan, LLM Guard, Garak, Promptfoo, and ART assess 10+ vulnerability types — from serialization attacks to prompt injection. Scan results are signed attestations on the artifact itself.

Single Artifact Management

Package models, datasets, code, agent skills, MCP server configs, guardrail configs, and policies as a single versioned OCI artifact. Selective pull lets you grab only what you need.

Tamper-Evident Packaging

Every component protected by SHA-256 digests. Cryptographic signatures via Cosign. Self-verifying artifacts — everything needed to confirm provenance travels with the artifact.

Policy Gating Before Production

ArtifactPolicy evaluates signature requirements, scan thresholds, license compliance, and provenance before any artifact reaches production. Policies are themselves signed OCI artifacts.
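The shape of such a gate can be illustrated with a hypothetical policy document. The field names below are illustrative assumptions for the purpose of the sketch, not Jozu's published schema:

```yaml
# Hypothetical ArtifactPolicy sketch -- field names are illustrative,
# not the actual Jozu schema.
kind: ArtifactPolicy
metadata:
  name: prod-promotion-gate
rules:
  signatures:
    required: true
    trustedIdentities: ["platform-team@example.com"]
  scans:
    maxSeverity: medium        # block artifacts with high/critical findings
  licenses:
    allow: ["Apache-2.0", "MIT"]
  provenance:
    requireAttestation: true
```

Because the policy is itself a signed OCI artifact, the gate that approved a deployment is versioned and auditable alongside the artifacts it evaluated.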

AI Bill of Materials

SPDX 3 SBOMs with dependency tracking, training lineage, and license compliance. Cryptographically chained audit logs. Ready for EU AI Act, NIST AI RMF, and ISO 42001.

JOZU AGENT GUARD — RUNTIME

Tool-Level Policy Enforcement

ToolPolicy controls which agents can call which MCP tools, with what parameters, under what conditions. Per-tool, per-agent, per-user granularity. Every invocation is evaluated against policy before execution.
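The granularity described above can be sketched with a hypothetical policy. As with the supply-chain example, the field names are illustrative assumptions, not Jozu's actual schema:

```yaml
# Hypothetical ToolPolicy sketch -- field names are illustrative,
# not the actual Jozu schema.
kind: ToolPolicy
metadata:
  name: limit-filesystem-tools
rules:
  - agent: support-triage-agent
    tool: filesystem.read
    allow: true
    parameters:
      path: { prefix: "/data/tickets/" }   # constrain arguments, not just access
  - agent: "*"
    tool: filesystem.delete
    allow: false                           # destructive ops denied by default
```

The key property is that policy operates on parameters as well as tool names: an agent may be permitted to read files, but only under a path prefix.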

Isolated Execution Environment

Agents, models, and MCP servers run inside a protected runtime — micro-VM or Kata container isolation. Contains blast radius if an agent or MCP server is compromised. Only SHA-verified artifacts load.

Runtime Guardrails

GuardrailPolicy enforces content safety thresholds, PII detection, prompt injection defense, and toxicity filtering on every inference request and response. Thresholds are policy-driven, not hardcoded.

Human-in-the-Loop

ToolPolicy triggers approval workflows for high-risk actions — destructive operations, sensitive data access, high-cost invocations. Approvals are signed attestations in the audit trail.

Cryptographic Audit Trail

Every policy decision logged. Tamper-evident, cryptographically chained. Syncs to Hub when connected; operates autonomously when disconnected. Full traceability from artifact to action.
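Hash chaining, the general technique behind tamper-evident logs, can be sketched in a few lines. This illustrates the concept only, not Jozu's implementation:

```python
import hashlib
import json

def append_entry(chain: list, event: dict) -> None:
    """Append an audit event whose hash covers the previous entry's hash,
    so editing or deleting any earlier entry breaks every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "event": event, "hash": entry_hash})

def verify_chain(chain: list) -> bool:
    """Walk the chain from the start, recomputing every hash."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"decision": "allow", "tool": "filesystem.read"})
append_entry(log, {"decision": "deny", "tool": "filesystem.delete"})
assert verify_chain(log)

log[0]["event"]["decision"] = "deny"   # tamper with history
assert not verify_chain(log)           # every later link now fails
```

Because verification needs only the log itself, an agent on a disconnected edge device can keep appending entries and prove their integrity later when it syncs.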

Desktop to Edge to Air Gap

Agent Guard enforces policy locally on laptops, servers, edge devices, and air-gapped networks. No phone-home to a central server required. Fail closed by design, not by configuration.


Management Plane

Jozu Hub

Store, scan, sign, and govern every AI artifact in one registry. MCP servers and agent skills get the same enterprise treatment as your models.

Explore Jozu Hub

Secure Runtime

Jozu Agent Guard

Governs what your agents can do at runtime. Tool-level policy enforcement, isolated execution, and cryptographic audit trails — from desktop to edge.

Explore Agent Guard

Govern Your AI Artifacts
Like Your Container Images

We're building a vendor-agnostic MLOps platform and KitOps ModelKits align perfectly with that vision. They work wherever our containers do — on-prem or in the cloud — giving us the freedom to store and deploy ML artifacts without being tied to a specific infrastructure.
Tomasz Bochenski
External, Lead Machine Learning Platform Engineer, MLOps
DSV

BUILT ON OPEN STANDARDS

CNCF-backed. OCI-native.
No proprietary formats.

Jozu created KitOps and donated it to the CNCF. Our CTO wrote the ModelPack specification — the only openly governed standard for AI/ML packaging. Both are built on OCI artifacts.

CNCF Sandbox Project

KitOps

Open source CLI for packaging AI/ML projects into OCI-compliant ModelKits. 235,000+ downloads. Works with any OCI registry.

kitops.org

CNCF Specification

ModelPack

Vendor-neutral AI/ML interchange format. Contributors include Red Hat, PayPal, ANT Group, and ByteDance. KitOps is the reference implementation.

modelpack.org

Request your free Jozu trial

Interested in testing Jozu in your private environment? Download the Helm chart and start your 2-week trial.

  • STEP 1

    Install

    Jozu Hub installs in your environment in about an hour, with no disruption to existing workflows. We suggest taking a baseline measurement of your current deployment times and security gaps to benchmark against.

  • STEP 2

    Evaluate

    Once installed, you can run real-world tests with your models and infrastructure for up to 2 weeks. This lets you measure Jozu's performance against your existing tools and processes.

  • STEP 3

    Review

    At the end of your 2-week trial, our team will work with you to review your results and quantify improvements and ROI. This includes an implementation and roadmap discussion.