Know what your AI
is really saying

FR-OS (Frame-Relative Operating System) is a formally verified evaluation engine built around a general-purpose relative calculus: give it a center, give it objects, give it invariants, and it evaluates the structure. The center could be an AI prompt, a document, a contract clause, a medical record, or a transaction. The same guarantees hold at every scale. Shellfinity's first application: LLM governance. Every AI response verified against your policies before it reaches your users.

# Define your rules in plain English
$ fros policy create "block harmful content, limit escalation to 2"

# FR-OS checks the AI's response
$ fros evaluate --policy safety-01 --input response.txt

PASS  policy: safety-01
     result: all rules satisfied
     tokens checked: [user_query, response, context]

# When it catches a violation:
FAIL  policy: safety-01
     violation: "exploit" blocked by policy
     fix: remove "exploit" to pass

Today's AI safety tools
are just more AI

How it works today

Most AI guardrails use another AI model to judge the output. That second model has its own blind spots, its own failure modes, and returns vague confidence scores instead of clear answers. "The filter probably caught it" isn't good enough.

How FR-OS works

The Bubble Engine checks AI output against your rules using mathematically proven logic. You get a clear yes/no verdict, plus a detailed report showing exactly what violated your policy and how to fix it.

Three steps. Zero ambiguity.

01

Write your rules

Define policies in plain English: "block harmful content", "limit sensitive topics to 3", "require safety disclaimers". FR-OS compiles them into formal rules with machine-checked guarantees.
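As a rough illustration of what "plain English in, formal rules out" could look like, here is a toy compiler sketch. The clause patterns and rule record shapes below are hypothetical, invented for this example; they are not FR-OS's actual grammar or internal representation.

```python
import re

def compile_policy(text):
    """Toy policy compiler: turn comma-separated plain-English clauses
    into structured rule records (hypothetical shapes, for illustration)."""
    rules = []
    for clause in (c.strip() for c in text.split(",")):
        m = re.fullmatch(r"limit (.+) to (\d+)", clause)
        if m:  # e.g. "limit escalation to 2"
            rules.append({"type": "limit", "subject": m.group(1), "max": int(m.group(2))})
            continue
        m = re.fullmatch(r"block (.+)", clause)
        if m:  # e.g. "block harmful content"
            rules.append({"type": "block", "subject": m.group(1)})
            continue
        m = re.fullmatch(r"require (.+)", clause)
        if m:  # e.g. "require safety disclaimers"
            rules.append({"type": "require", "subject": m.group(1)})
    return rules

print(compile_policy("block harmful content, limit escalation to 2"))
```

The point of the structured form is that each rule becomes a checkable object rather than a suggestion buried in prose.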

02

AI generates freely

Your AI model produces output without restriction: no prompt-engineering workarounds, no quality trade-offs, no interference with what the model does best.

03

FR-OS judges

The Bubble Engine evaluates the output against your rules, then returns "pass" or "fail" with a report naming exactly what was flagged and what to fix. Deterministic, consistent, and final.
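In spirit, the judging step is a pure function from (output, rules) to a verdict plus a report: the same input always yields the same answer. A minimal sketch, assuming an invented rule format and report wording (this is not the Bubble Engine's actual logic):

```python
def evaluate(output, rules):
    """Toy deterministic judge. Rules are hypothetical records like
    {"type": "block", "term": "exploit"} or
    {"type": "limit", "term": "escalation", "max": 2}."""
    text = output.lower()
    violations = []
    for rule in rules:
        if rule["type"] == "block" and rule["term"] in text:
            violations.append(f'"{rule["term"]}" blocked by policy')
        elif rule["type"] == "limit" and text.count(rule["term"]) > rule["max"]:
            violations.append(f'"{rule["term"]}" exceeds limit of {rule["max"]}')
    verdict = "PASS" if not violations else "FAIL"
    return verdict, violations

# Same input, same verdict, every run: the function holds no hidden state.
print(evaluate("here is an exploit", [{"type": "block", "term": "exploit"}]))
```

Because evaluation is a plain function over the text, there is no confidence score to interpret: the report names each violated rule directly.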

Not another AI checking AI

vs. AI-based moderation

Clear answers, not scores

Other tools return confidence percentages you have to interpret. FR-OS returns a definitive yes or no, with a detailed report you can audit and act on.

vs. Keyword blocklists

Smart rules that compose

Keyword lists are brittle and miss context. FR-OS policies understand categories and relationships: block one term and related terms are covered automatically.
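One way to picture rules that compose: each term maps to a category, and blocking any member enforces the whole category. The category table below is a made-up example for illustration, not FR-OS's actual ontology or matching logic:

```python
# Hypothetical category table: every term belongs to a named category.
CATEGORIES = {
    "exploit": "attack techniques",
    "payload": "attack techniques",
    "shellcode": "attack techniques",
}

def expand(blocked_terms):
    """Blocking one term covers every term in the same category."""
    hit = {CATEGORIES[t] for t in blocked_terms if t in CATEGORIES}
    return sorted(t for t, cat in CATEGORIES.items() if cat in hit)

print(expand(["exploit"]))  # siblings in the category are covered too
```

A flat keyword list would need every variant spelled out by hand; category-level rules keep coverage consistent as terms are added.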

vs. Prompt instructions

Enforcement, not suggestions

System prompts are instructions the AI can ignore or be tricked into bypassing. FR-OS checks output after generation, so there is nothing to jailbreak: any response that violates a condition is caught and sent back for reassessment before it reaches your users.

Mathematically proven

Same result, every time

The Bubble Engine is built on machine-checked mathematical proofs. No matter how you run it, you get the same verdict. A proof, every time.

Get on the waitlist

Be the first to know when FR-OS launches. We'll notify you when API access is available.