How we use AI
AI is at the core of every product CASM Labs builds. But it is never the starting point. We use deep industry experience to identify the problems that matter, then apply AI where it can make the biggest difference.
This page explains how we use AI, what happens to your data, and how we approach AI regulation.
What the AI does
CASM Labs products use AI to automate high-effort, high-stakes tasks in professional workflows. Our first product, ReqFit, uses AI to read a proposal and an RFP side by side, identify every requirement, assess how well each one has been addressed, and produce a structured assessment report.
The AI does not write content for you. It does not make decisions for you. It gives you the information you need to make better decisions yourself, faster and with greater confidence.
Every product we build follows the same principle: AI handles the analysis and the heavy processing. Your team keeps the judgement and the final say.
What happens to your data
Our products process your documents in real time using enterprise-grade AI infrastructure. Once processing is complete, your uploaded files are discarded. We do not store your documents on our servers.
Your documents are never used to train AI models. They are never shared with third parties. They are never accessible to other users.
This is not a feature we added after the fact. It is how we designed the architecture from day one, because the documents our users upload are commercially sensitive and we have no business holding onto them.
Payments across our products are handled by Paddle, our merchant of record. CASM Labs never sees or stores payment details.
AI regulation
AI regulation is evolving quickly. We monitor developments across the jurisdictions where our users operate and build our products to meet or exceed current requirements.
EU AI Act
The EU AI Act classifies AI systems by risk level: unacceptable, high, limited, and minimal. High-risk categories include AI used in recruitment, credit scoring, law enforcement, and critical infrastructure.
CASM Labs products do not fall into any high-risk category. They are business productivity tools that analyse documents and produce structured assessment reports. They do not make decisions about people, assess creditworthiness, or process biometric data. Under the Act's framework, our products sit in the limited or minimal risk tier.
Where we do have obligations, we meet them. The AI Act requires that people using AI systems know they are interacting with AI. We make this clear at every stage, from product descriptions to the reports themselves. Our terms of service include explicit AI output disclaimers, and our processing architecture is designed to exceed the data-handling expectations the Act sets for systems at our risk level.
UK regulation
The UK is developing its own AI regulatory framework through a principles-based approach rather than a single piece of legislation. CASM Labs is a UK-registered company and we align our practices with the core principles of transparency, fairness, and accountability that underpin the UK's approach to AI governance.
What this means for your compliance team
If you are evaluating a CASM Labs product and need to satisfy internal compliance or procurement requirements:
- Our products are not classified as high risk under the EU AI Act
- Documents are processed in real time and not retained
- No customer data is used for model training
- AI involvement is disclosed transparently throughout every product
- Payments are handled by a PCI DSS-compliant merchant of record
- A data processing addendum is available on request for enterprise buyers
Technical safeguards
Our products run on enterprise-grade cloud infrastructure with encryption in transit and at rest. AI processing happens in isolated sessions with no cross-contamination between users.
For full details, see our Privacy Policy and Cookie Policy.