Coming soon
Current language models learn from raw text. They see words in context
and absorb statistical patterns. They can't explain what they learned,
and they can't distinguish correct reasoning from confident guessing.
CMVL is a different approach to model training. The engine produces
structured proofs for every determination it makes. A model trained
on these proofs learns not just WHAT the answer is, but WHY -- with
token-level attribution that no existing training corpus provides.
The idea
Billions of parameters trained on trillions of tokens. The training signal is "predict the next word." The model learns patterns, but the patterns are opaque. When the model is wrong, retrain on more data and hope. The feedback loop is statistical, slow, and expensive.
The FR-OS engine evaluates inputs and produces structured proofs. Each proof records what was evaluated, what evidence supported the determination, and what was ruled out. A model trained on these proofs learns the reasoning process, not just the answer. When the model disagrees with the engine, the disagreement becomes the next training example.
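The proof record described above can be sketched as a small data structure. This is a minimal illustration; the class and field names are assumptions for the sketch, not the engine's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured proof record. Field names are
# illustrative assumptions, not the real FR-OS format.
@dataclass
class ProofRecord:
    input_text: str                                      # what was evaluated
    determination: str                                   # the engine's conclusion
    evidence: list[str] = field(default_factory=list)    # spans supporting it
    ruled_out: list[str] = field(default_factory=list)   # alternatives rejected

    def as_training_example(self) -> dict:
        # A proof record doubles as a supervised training example:
        # the target is the full reasoning trace, not just the label.
        return {
            "input": self.input_text,
            "target": {
                "answer": self.determination,
                "evidence": self.evidence,
                "ruled_out": self.ruled_out,
            },
        }

record = ProofRecord(
    input_text="Patient reports chest pain after exertion.",
    determination="flag-for-review",
    evidence=["chest pain", "after exertion"],
    ruled_out=["routine"],
)
example = record.as_training_example()
print(example["target"]["answer"])  # → flag-for-review
```

Because each record carries its evidence and exclusions, a disagreement between model and engine points directly at which part of the reasoning diverged.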
The loop
FR-OS processes inputs and produces determinations with complete proof records. Each record is a structured training example with full attribution.
A small, efficient model learns to predict what the engine would determine. It runs in milliseconds where the engine runs in seconds. It extends the engine's reach to inputs the engine hasn't seen.
The engine spot-checks the model's predictions. Agreements confirm the model is learning correctly. Disagreements become new training examples. The model improves continuously without human annotation.
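The three steps above can be sketched as a single loop. Everything here is a stand-in: `engine_evaluate` and `model_predict` are toy placeholders for the real engine and distilled model, and the spot-check schedule is a deliberate simplification:

```python
# Illustrative sketch of the engine/model verification loop.
# All functions are hypothetical stand-ins, not real FR-OS APIs.

def engine_evaluate(text):
    """Slow, authoritative engine: returns a determination plus its proof."""
    determination = "long" if len(text) > 5 else "short"
    proof = {"input": text, "evidence": [f"length={len(text)}"], "ruled_out": []}
    return determination, proof

def model_predict(text):
    """Fast distilled model; deliberately imperfect to create disagreements."""
    return "long" if len(text) > 3 else "short"

def verification_loop(inputs, spot_check_every=2):
    """Engine spot-checks every Nth prediction; disagreements become data."""
    new_training_examples = []
    for i, text in enumerate(inputs):
        prediction = model_predict(text)
        if i % spot_check_every == 0:           # engine spot-checks a sample
            determination, proof = engine_evaluate(text)
            if prediction != determination:     # disagreement found
                new_training_examples.append(proof)
    return new_training_examples

examples = verification_loop(["hi", "hello", "considered", "abcd"],
                             spot_check_every=1)
print(len(examples))  # → 2  (two disagreements become training examples)
```

The loop needs no human annotation: the engine's proof record for each disagreement is itself the corrective training example.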
Early results
Standard approach: the model gets "right" or "wrong" as feedback. In controlled experiments, models trained with binary reward achieved 0% accuracy on held-out evaluation tasks. The signal is too sparse to learn from.
CMVL approach: the model gets the full proof record showing what was evaluated and why. In the same controlled experiments, models trained with these proof certificates achieved 62% accuracy. The structured signal enables learning where binary reward cannot.
This is not just faster learning. It is qualitatively different. The certificate carries the reasoning structure. The model doesn't just learn to produce the right answer -- it learns the pattern of evaluation that produces right answers across domains.
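The contrast between the two signals can be made concrete. For the same input, binary reward yields a single bit, while a certificate yields a structured target the model can imitate field by field. The values below are illustrative assumptions, not data from the experiments:

```python
input_text = "Patient reports chest pain after exertion."

# Binary reward: one bit of feedback per episode.
# It says the answer was right, but nothing about why.
binary_signal = 1

# Certificate: a structured target (hypothetical fields) that
# supervises every component of the reasoning, not just the outcome.
certificate_signal = {
    "answer": "flag-for-review",
    "evidence": ["chest pain", "after exertion"],
    "ruled_out": ["routine"],
}

# The certificate carries supervision for the answer, each evidence
# span, and each excluded alternative.
supervised_targets = (1 + len(certificate_signal["evidence"])
                        + len(certificate_signal["ruled_out"]))
print(supervised_targets)  # → 4, versus 1 bit from binary reward
```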
Vision
The engine already operates across domains: natural language, medical reasoning, policy enforcement. A model trained on the engine's proofs across all domains would learn a general pattern of evidence evaluation that transfers to new domains without retraining.
The engine stays authoritative. The model stays fast. Together, they produce a system that reasons verifiably at the speed of a neural network.
Early access
CMVL is in active development. Join the waitlist to follow progress and get early access.