VAARHAFT’s AI Strategy: Proprietary Computer Vision Forensics, Not LLMs
Oct 2, 2025
- Team VAARHAFT

A finance employee joined a video meeting, saw familiar faces on screen and approved a transfer worth roughly 25.6 million US dollars. Only later did investigators confirm what was obvious in hindsight: the executives had been synthetically cloned, and the call was a deepfake. The case, reported from Hong Kong in 2024, shows how quickly image and video deception turns into real losses (Financial Times). This is the world our customers operate in, and it raises a direct question about our approach: does VAARHAFT build its own AI models? Yes. Are they large language models? No. We build proprietary computer vision forensics, because images and documents demand different AI than text.
The problem: why images and documents demand different AI than text
Large language models learn statistical patterns over text tokens. They excel at reasoning over text and generating language. Image and document forensics requires something else entirely: detection hinges on faint pixel- and sensor-level cues such as compression artifacts, demosaicing patterns, recapture traces and editing discontinuities.
This is why the question is not academic: if a fraud team tries to use a text model for pixel-level authenticity analysis, it will miss the forensically relevant signals. It will also struggle to localize manipulated regions with heatmaps that an auditor or claims handler can act on. Our customers need fast, evidence-based answers on whether an image was AI generated or edited, whether a document contains spliced elements and whether provenance data corroborates the story. That work belongs to computer vision forensics, not LLMs.
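As a toy illustration of what "localization" means here, the sketch below divides a grayscale image into blocks, estimates local noise from neighbouring-pixel differences, and flags blocks whose noise level deviates from the image-wide median. This is not VAARHAFT's detector: the block size, the noise statistic and the threshold are illustrative placeholders for far richer forensic features.

```python
from statistics import median

def noise_score(img, x0, y0, size):
    """Mean absolute difference between each pixel and its right/bottom
    neighbours inside one block: a crude local-noise estimate."""
    total, count = 0, 0
    for y in range(y0, y0 + size - 1):
        for x in range(x0, x0 + size - 1):
            total += abs(img[y][x] - img[y][x + 1])
            total += abs(img[y][x] - img[y + 1][x])
            count += 2
    return total / count

def anomaly_heatmap(img, block=8, threshold=3.0):
    """Return a grid of booleans: True where a block's noise level
    deviates strongly from the image-wide median (toy localization)."""
    h, w = len(img), len(img[0])
    scores = [[noise_score(img, x, y, block)
               for x in range(0, w - block + 1, block)]
              for y in range(0, h - block + 1, block)]
    flat = [s for row in scores for s in row]
    m = median(flat)
    return [[abs(s - m) > threshold for s in row] for row in scores]
```

On a 16x16 image that is uniform except for one noisy 8x8 region, only the block covering that region is flagged. Production forensics replaces this single statistic with learned features, but the output shape, a region map an analyst can inspect, is the same idea.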
Why VAARHAFT builds proprietary computer vision and forensics models
We build and train our own models because the risk landscape evolves weekly and because off-the-shelf systems rarely expose the forensic depth that insurers, banks and marketplaces require. First, domain-specific training is non-negotiable. Effective detectors learn from data that reflects real manipulations, recapture attempts and anti-forensics.
Second, robustness and explainability matter as much as raw accuracy. Fraud operations need pixel-level heatmaps, interpretable confidence scores and repeatable reports instead of black-box judgments. Third, the threat is adaptive. New generators, new upscalers and new recapture tricks keep arriving, and in-house models allow continuous retraining, hardening and monitoring. Finally, privacy is part of the design. Our models are developed in Germany, hosted in Germany and operate under GDPR principles. We do not retain customer media for training, and we delete submitted files after analysis. Customers get evidence without giving up data control.
What proprietary models enable in real workflows
Purpose-built forensics unlocks capabilities that generic AI cannot match. Our detectors separate AI-generated from camera-captured content, flag software-based edits and visualize suspicious regions on images and documents with pixel-level heatmaps. They read metadata, extract and verify C2PA content credentials where available, and combine this with reverse-image signals to check whether an upload has appeared elsewhere on the web. Provenance does not replace detection; it complements it. For a deeper dive into the benefits and gaps of provenance, see our explainer (C2PA under the microscope).
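For readers curious what "reading metadata" can look like at the byte level, here is a minimal sketch that scans a JPEG's marker segments for common provenance carriers: an EXIF or XMP APP1 segment, and the APP11 (JUMBF) segments in which C2PA manifests are embedded. Note the hedge: finding an APP11 segment says nothing about validity. Real C2PA verification parses the JUMBF boxes and checks the cryptographic signature chain, so treat this purely as a presence check, not as our product's implementation.

```python
def jpeg_segments(data: bytes):
    """Yield (marker, payload) pairs for the metadata segments of a JPEG
    byte stream. Stops at start-of-scan (0xFFDA), where image data begins."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: entropy-coded data follows
            break
        # the two length bytes include themselves, so payload is length - 2
        length = int.from_bytes(data[i + 2:i + 4], "big")
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def metadata_signals(data: bytes) -> dict:
    """Rough presence check for common provenance carriers in a JPEG."""
    found = {"exif": False, "xmp": False, "jumbf_app11": False}
    for marker, payload in jpeg_segments(data):
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            found["exif"] = True
        if marker == 0xE1 and payload.startswith(b"http://ns.adobe.com/xap/1.0/\x00"):
            found["xmp"] = True
        if marker == 0xEB:  # APP11: JUMBF container, used by C2PA manifests
            found["jumbf_app11"] = True
    return found
```

Attackers can strip or forge any of these segments, which is exactly why the paragraph above pairs metadata reading with pixel-level detection rather than relying on either alone.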
These capabilities flow into practical tools that risk teams can operate inside existing processes. For fast authenticity checks of claim photos and uploaded documents, see the image and document analysis powered by our models in Fraud Scanner. For secure capture, when you must be certain that a claimant is photographing a real three-dimensional scene rather than a screen or a printout, our SafeCam web app performs layered verification after capture and blocks recaptures. Together, detection and secure capture reduce both fraud and false positives with minimal user friction.
Signals, standards and regulation that shape model design
Trust infrastructure is improving across the ecosystem. Content provenance through C2PA is gaining adoption in cameras, creative tools and platforms, which helps verifiers check whether an image carries a cryptographically signed history. Yet provenance will be absent for much of the risky media your teams see. Attackers do not sign their work. That is why forensic detection remains essential alongside provenance checks. Our view aligns with industry practice. Use provenance when it is present. Detect manipulations when it is not. And always correlate both signals.
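That correlation rule, use provenance when present and detect manipulations when it is not, can be sketched as simple triage logic. The thresholds, field names and routing labels below are hypothetical illustrations, not VAARHAFT's actual decision policy.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signals:
    detection_score: float          # 0.0 (clean) .. 1.0 (manipulated), from a forensic model
    provenance_valid: Optional[bool]  # True/False if a manifest was checked, None if absent
    seen_elsewhere: bool            # reverse-image hit on the open web

def triage(s: Signals, hi: float = 0.8, lo: float = 0.2) -> str:
    """Illustrative routing: provenance corroborates, detection decides,
    and disagreement between the two goes to a human reviewer."""
    if s.provenance_valid is False:
        return "review"  # tampered or broken credentials always get eyes on them
    if s.detection_score >= hi:
        # strong detection signal; valid provenance contradicts it, so escalate
        return "review" if s.provenance_valid else "reject"
    if s.detection_score <= lo and not s.seen_elsewhere:
        return "accept"
    return "review"
```

The point of the sketch is the structure, not the numbers: neither signal overrides the other silently, and every conflict surfaces to a person.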
Regulation also drives design choices. The EU Artificial Intelligence Act introduces transparency and governance requirements across AI systems. For authenticity tooling that informs underwriting or compliance, the emphasis on traceability and risk management favors explainable outputs and auditable processes. Building proprietary models lets us implement these principles end to end, from data handling to report generation.
Short practical takeaways for risk teams
- Prioritize pixel-level explainability and provenance metadata extraction in any image or document authenticity check. Look for clear region maps and machine-readable content credentials where available.
- Ask vendors about benchmarks and evaluation. Participation in challenges that measure manipulation detection and localization helps validate generalization beyond a single dataset.
- Combine proactive capture controls with reactive forensic detection to reduce both fraud and false positives across claims, onboarding and marketplace uploads.
Conclusion
So does VAARHAFT build its own AI models? Yes. And they are not LLMs. Image and document forensics is not a text problem. It is a computer vision discipline where pixel-level cues, metadata and provenance intersect. Proprietary models let us keep pace with attackers, provide actionable heatmaps and PDF reports, and meet GDPR expectations with hosting and development in Germany. If you want to see how pixel-level forensics and secure capture fit into a layered defense against deepfakes, recaptures and edited documents, explore the tools above or reach out to our team to walk through your workflow.