Frequently Asked Questions
About Verified AI
What does “verified AI” mean?
Verified AI means every AI-generated answer is independently checked through a completely separate computational path before it is used. If the independent verification confirms the AI’s answer, a proof certificate is issued. If there is a discrepancy, the error is caught before it reaches production. This is a structural guarantee — not a statistical improvement.
How is this different from AI guardrails or prompt engineering?
Guardrails and prompt engineering reduce the frequency of errors but cannot eliminate them. They work by constraining the AI’s behavior — which helps, but the AI is still probabilistic. AMX takes a fundamentally different approach: independent re-derivation from source evidence. The verification engine contains zero neural components, so it cannot hallucinate or make the same class of errors as the AI it verifies.
Does AMX replace my existing AI / LLM?
No. AMX is a symbolic verification engine; it is not an LLM and does not replace LLMs. When natural language understanding is needed, our LLM Bridge connects to external LLMs (GPT-4, Claude, Llama, Mistral, or your own models) and AMX verifies their outputs. Think of AMX as “Intel Inside” for AI: we do not compete with LLMs; we make them trustworthy.
What is a proof certificate?
A proof certificate follows the eXtensible Formal Proof Certificate (XFPC) open standard. It is a machine-verifiable record of how a verified answer was derived from source evidence. It is independently replayable (any auditor can verify it on their own hardware), tamper-evident (cryptographic integrity), and self-contained (includes everything needed for verification). It is not a confidence score, a log file, or a model explanation.
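The tamper-evident property can be illustrated with a short sketch. This is not the XFPC schema (the field names and hashing scheme below are hypothetical), but it shows the mechanism: the certificate body is hashed canonically, so any auditor can recompute the digest on their own hardware, and any alteration is detectable.

```python
import hashlib
import json

def certificate_digest(cert: dict) -> str:
    """Hash the certificate body canonically so any tampering is detectable."""
    body = {k: v for k, v in cert.items() if k != "digest"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def is_tamper_evident(cert: dict) -> bool:
    """An auditor recomputes the digest independently and compares."""
    return cert.get("digest") == certificate_digest(cert)

# Illustrative certificate; these field names are hypothetical, not XFPC.
cert = {
    "claim": "HS code 8471.30 applies",
    "evidence": ["tariff-capsule v2.3, rule 8471.30(a)"],
    "derivation": ["rule 8471.30(a) matches: portable, keyboard and display"],
}
cert["digest"] = certificate_digest(cert)

assert is_tamper_evident(cert)       # intact certificate verifies
cert["claim"] = "HS code 9999.99 applies"
assert not is_tamper_evident(cert)   # any edit breaks the digest
```

A real certificate also carries the full derivation chain for replay; the digest shown here covers only the integrity property.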
Can you really guarantee zero wrong answers?
AMX guarantees zero wrong verified answers. If an answer passes verification, the proof certificate documents a complete, independently verifiable derivation from source evidence to conclusion. If the verification engine cannot confirm correctness, no proof certificate is issued and the discrepancy is flagged. The guarantee is structural — the verification path contains no neural components and cannot hallucinate.
Why can’t we just wait for better LLMs to solve the accuracy problem?
The transformer architecture — the foundation of every modern LLM — is hitting three fundamental walls: the data wall (publicly available training text is running out), the compute wall (each generation costs 10x more to train), and the architecture wall (transformers remain probabilistic sequence predictors regardless of scale). Even a perfect LLM produces output based on statistical likelihood, not logical derivation. Verification is not a temporary stopgap — it is a permanent architectural necessity. See our deep analysis of the Transformer Ceiling.
How is symbolic verification different from running multiple LLMs (ensemble methods)?
Ensemble methods combine the outputs of multiple probabilistic models — but multiple probabilistic models remain probabilistic. If three LLMs agree on a wrong answer (which happens frequently with common misconceptions), the ensemble has high confidence in an incorrect result. AMX verification uses a completely separate computational approach: deterministic symbolic reasoning with zero neural components. It does not poll models for consensus — it independently re-derives the answer from source evidence through logic.
Does AMX have any copyright exposure from training data?
No. AMX’s verification engine has zero training data and therefore zero copyright exposure. It does not learn from copyrighted text, does not reproduce copyrighted content, and does not inherit the intellectual property risks of the LLMs it verifies. Knowledge Capsules are authored from primary sources with explicit provenance. For enterprises in publishing, media, legal, and education, this is a critical distinction: using an LLM for generation creates copyright risk; using AMX for verification creates copyright immunity.
Technology
How does the verification engine work without neural networks?
AMX is a symbolic reasoning engine that re-derives answers from source evidence using deterministic logic. Same input always produces the same output and the same proof. Because it contains no neural networks, it is immune to hallucination, prompt injection, and the probabilistic errors that affect LLMs. Fast-path verification takes approximately 10ms for known patterns; full derivation of complex queries takes approximately 500ms.
What does “deterministic” mean in this context?
Deterministic means no randomness. If you give AMX the same input twice, you get the same output and the same proof certificate both times. This is critical for regulatory compliance — regulators require reproducible results. LLMs are non-deterministic by design (temperature, sampling), which is why they cannot provide proofs.
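A minimal illustration of the property, using a stand-in for the real engine (the function below is hypothetical, not AMX's actual verifier): because the derivation is a pure function of its inputs, replaying it yields a byte-identical result.

```python
import hashlib

def verify(query: str, capsule_version: str) -> str:
    """Deterministic stand-in for a symbolic verifier: no randomness, no
    sampling. The proof id is a pure function of the inputs."""
    derivation = f"{query}|{capsule_version}|derived-by-fixed-rules"
    return hashlib.sha256(derivation.encode()).hexdigest()

# Same input, run twice: identical output, unlike temperature-sampled LLMs.
a = verify("Is drug X safe with drug Y?", "interactions-v4.1")
b = verify("Is drug X safe with drug Y?", "interactions-v4.1")
assert a == b
```

Note that changing the capsule version changes the result, which is exactly what a regulator wants: the proof is tied to the knowledge it was derived from.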
What are Knowledge Capsules?
Knowledge Capsules are modular, versioned packages that contain the domain knowledge used by the verification engine. For example, a financial regulations capsule, a drug interaction capsule, or a tariff classification capsule. They are hot-swappable — you can update knowledge without retraining or redeploying the system. The upcoming Knowledge Copilot tool will allow domain experts to author and refine capsules without engineering skills.
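The hot-swap model can be sketched as a versioned registry. Everything below is a hypothetical sketch, not the actual capsule format; the point is that a knowledge update is an atomic swap rather than a retrain or redeploy.

```python
class CapsuleRegistry:
    """Hypothetical sketch of hot-swappable Knowledge Capsules."""

    def __init__(self):
        self._capsules = {}  # name -> (version, rules)

    def install(self, name: str, version: str, rules: dict):
        self._capsules[name] = (version, rules)  # atomic replacement

    def lookup(self, name: str, key: str):
        version, rules = self._capsules[name]
        return version, rules.get(key)

reg = CapsuleRegistry()
reg.install("tariffs", "2024.1", {"8471.30": "0% duty"})
assert reg.lookup("tariffs", "8471.30") == ("2024.1", "0% duty")

# A regulation changes: swap in the new version with no downtime.
reg.install("tariffs", "2024.2", {"8471.30": "2% duty"})
assert reg.lookup("tariffs", "8471.30") == ("2024.2", "2% duty")
```

Because each lookup returns the capsule version alongside the answer, every verification can record exactly which knowledge it was derived from.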
What is the Living Knowledge Graph?
Unlike LLMs that have fixed knowledge cutoffs, AMX maintains a Living Knowledge Graph through hot-swappable Knowledge Capsules. When regulations change, new drugs are approved, or market conditions shift, you update the relevant capsule and the verification engine picks up the new knowledge immediately — no retraining, no redeployment, no downtime. This means your verification always reflects current reality, not a frozen snapshot from the last training run.
How do Knowledge Capsules compose?
Knowledge Capsules are designed for composition: a trade compliance capsule can reference a tariff classification capsule, which links to a currency conversion capsule, creating verified multi-domain reasoning chains. The platform tracks which capsule versions contributed to each verification and guarantees consistency across linked capsules. This composability is a fundamental advantage over LLMs — capsules compose; GPT-5 does not. You cannot combine two LLMs and get guaranteed correctness. You can combine capsules and get verified multi-domain answers with proof certificates.
What languages does AMX support?
Language support is tiered by production readiness. English is production-grade with full verification support across all Knowledge Capsules. Chinese (Mandarin) and Malay (Bahasa Malaysia) have strong support for business document processing and compliance workflows. Tamil has foundational coverage with ongoing expansion. Cross-language verification is supported — you can input in one language and receive verified output in another, with proof integrity preserved. The architecture is designed for rapid addition of new languages.
Deployment and Integration
How does AMX integrate with my existing systems?
AMX offers multiple integration modes: sidecar (alongside existing AI systems, zero code changes), API gateway (centralized verification), embedded (10ms fast-path verification for cached derivations), and asynchronous (post-hoc audit). It supports MCP, A2A, and WebMCP agent protocols and connects to existing ERP, CRM, and business systems through standard enterprise connectors.
Can AMX run on-premise?
Yes. AMX supports cloud (AWS, Azure, GCP), on-premise, hybrid, and edge deployment. For on-premise, the entire platform runs on your infrastructure with zero data egress. AMX is purpose-built for air-gapped environments — no cloud dependency, no phone-home, no external data transfer.
Does AMX require GPUs?
No. AMX is CPU-only. It runs on commodity hardware — including a $500 laptop for edge deployments. This means no GPU procurement, no GPU rental costs, and compatibility with standard enterprise IT infrastructure. It also makes AMX up to 1000x more energy efficient than neural approaches.
How many devices can AMX run on?
AMX is CPU-only, which means it can run on any of the 15 billion CPU-bearing devices deployed worldwide — from cloud servers and enterprise workstations to edge laptops and embedded systems. Typical energy consumption is 0.001-0.01 Wh per verification, compared with 0.3-1.0 Wh per LLM query, an advantage of roughly 30x to 1000x depending on workload. This energy advantage makes AMX viable for deployments where GPU-based AI is impractical: remote locations, tropical climate data centers with cooling constraints, mobile field operations, and resource-constrained environments.
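The quoted energy figures can be sanity-checked directly (converted to milliwatt-hours so the arithmetic stays exact):

```python
# Energy figures from the text, in milliwatt-hours per operation.
amx_mwh = (1, 10)       # 0.001-0.01 Wh per verification
llm_mwh = (300, 1000)   # 0.3-1.0 Wh per LLM query

best_case = llm_mwh[1] // amx_mwh[0]   # heaviest LLM query vs lightest check
worst_case = llm_mwh[0] // amx_mwh[1]  # lightest LLM query vs heaviest check

assert best_case == 1000
assert worst_case == 30
```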
How does AMX work in air-gapped environments?
AMX is purpose-built for air-gapped deployment. The entire platform — verification engine, Knowledge Capsules, proof certificate generation — operates with zero external dependencies. No cloud callbacks, no phone-home telemetry, no external data transfer. Knowledge Capsules are updated through secure transfer mechanisms that your security team controls. This makes AMX the only AI verification platform deployable in classified, sovereign, and high-security environments where network isolation is mandatory.
How long does deployment take?
AMX can be deployed in days, not months. Sidecar mode requires zero changes to your existing AI systems. More complex integrations (API gateway, embedded mode, custom Knowledge Capsules) take longer but are measured in weeks, not quarters.
Pricing and Economics
How much does AMX cost?
$0.10 per verification in the verification-as-a-service model. Enterprise licensing is also available for high-volume deployments. Contact us for details.
How does AMX compare to human review costs?
AMX is substantially cheaper than human review and up to 180x faster. For a process where human reviewers cost $50/hour and review 10 items per hour ($5 per item), AMX verifies at $0.10 per item, a 50x cost reduction, while being available 24/7 with zero fatigue.
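The example arithmetic, worked in integer cents to avoid rounding:

```python
# Worked example from the text, in cents to keep the arithmetic exact.
reviewer_rate_cents = 50 * 100   # $50 per hour
items_per_hour = 10
human_cost_cents = reviewer_rate_cents // items_per_hour  # 500 cents per item
amx_cost_cents = 10                                       # $0.10 per check

assert human_cost_cents == 500
assert human_cost_cents // amx_cost_cents == 50           # 50x cost reduction
```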
What is the ROI calculation?
The ROI depends on your current error rates, compliance costs, and review processes. For most enterprise deployments, the cost of AMX verification is a fraction of the cost of a single compliance violation, regulatory fine, or error-related business loss. Contact us for an ROI analysis specific to your operations.
Why is the market opportunity for verified AI so large?
Enterprise AI spending has crossed $200 billion annually, yet only 30-40% of AI project value is actually realized. The gap — driven by trust deficits, manual review overhead, and regulatory constraints — represents an enormous market for verification. In financial services alone, broken AI trust costs an estimated $78 billion per year in compliance failures, operational errors, and forgone opportunities. As regulators worldwide mandate explainable, auditable AI (with penalties up to 7% of global revenue under the EU AI Act), verification transitions from “nice to have” to “regulatory requirement.”
Compliance and Security
Does AMX help with EU AI Act compliance?
Yes. The EU AI Act requires high-risk AI systems to be explainable and auditable. AMX proof certificates provide machine-verifiable explainability and independent auditability — exactly what the regulation demands. Proof certificates can be archived and presented to regulators as evidence of compliance.
What about MAS, SEC, and other financial regulators?
AMX proof certificates satisfy the model risk management and decision traceability requirements of MAS (Singapore), SEC/OCC (US), and similar financial regulatory frameworks. Every AI decision is documented with a verifiable reasoning chain from source evidence to conclusion.
How does AMX handle data privacy (PDPA, GDPR, PIPL)?
AMX processes data where it lives — zero data egress in on-premise and edge deployments. For cloud deployments, data residency controls ensure data stays within specified jurisdictions. The platform supports the right to explanation under GDPR and PDPA — proof certificates document how AI decisions were made without exposing model internals.
Is AMX SOC 2 / ISO 27001 certified?
AMX follows SOC 2 and ISO 27001 aligned security practices. Role-based access control, encryption at rest and in transit, immutable audit logging, and complete access event tracking are built into the platform.
Marketplace and Ecosystem
What is AMX Central?
AMX Central is the upcoming marketplace for verified Knowledge Capsules — think “GitHub for enterprise knowledge.” Domain experts and organizations can publish capsules for others to discover, evaluate, and deploy. Quality signals, usage metrics, and verification status help consumers find the right capsules. Capsule authors retain 85-95% of revenue, creating an incentive for experts to formalize their domain knowledge into reusable, verified packages.
What is the Knowledge Refinery?
The Knowledge Refinery automatically transforms organizational documents, policies, and procedures into verified Knowledge Capsules. It uses a hybrid pipeline where LLMs extract knowledge from unstructured text, and AMX verifies the extracted knowledge for correctness and consistency. This solves the cold-start problem — organizations with years of institutional knowledge locked in documents, manuals, and policies can convert that knowledge into verified, machine-usable capsules without starting from scratch.
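The extract-then-verify split can be sketched as below. The functions are hypothetical stand-ins (a real extraction pass would call an LLM, and the real checker is AMX's symbolic engine); the structural point is that only facts the deterministic checker can ground in the source survive into a capsule.

```python
def llm_extract(document: str) -> list[str]:
    # Stand-in for an LLM extraction pass; may include hallucinated facts.
    return ["refund window is 30 days", "refund window is 90 days"]

def symbolic_check(fact: str, document: str) -> bool:
    # Stand-in verifier: a fact is admitted only if grounded in the source.
    return fact in document

def refine(document: str) -> list[str]:
    """Keep only the extracted facts the checker can ground in the source."""
    return [f for f in llm_extract(document) if symbolic_check(f, document)]

policy = "Returns policy: the refund window is 30 days from delivery."
assert refine(policy) == ["refund window is 30 days"]
```

The hallucinated 90-day fact is dropped, which is the cold-start guarantee: the LLM accelerates authoring, but nothing enters a capsule unverified.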
Getting Started
How do I evaluate AMX?
Contact us for a proof-of-concept engagement. We will work with your team to verify a representative set of AI decisions from your existing systems, demonstrating the value of proof certificates in your specific domain.
What AI models work with AMX?
AMX verifies outputs from any LLM — GPT-4, Claude, Llama, Mistral, Gemini, or your own fine-tuned models. The LLM Bridge connects to external models for natural language tasks, and AMX’s symbolic verification engine independently verifies all outputs. If your model generates an answer, AMX can verify it.
Do I need to change my existing AI systems?
In sidecar mode, no changes are required. AMX intercepts AI outputs and verifies them transparently. Other integration modes (API gateway, embedded) may require minimal configuration but do not require changes to your AI models.
Still Have Questions?
We are happy to discuss how verified AI applies to your specific use case.