Is this image AI-generated?
Arda's AI Detector gives you a five-layer evidence report — C2PA provenance, IPTC/XMP metadata, EXIF forensics, pixel-level vision, and invisible watermark detection. See why each verdict was reached, not just what it is.
Five independent layers. One verdict.
Each layer runs on its own and reports its own finding. The full report shows you every signal — so the verdict is auditable, not a black box.
Content Credentials (C2PA)
Reads cryptographically signed provenance. If an image was signed by OpenAI, Adobe Firefly, Google, or another C2PA-compliant pipeline, the chain of custody is verifiable — and conclusive.
IPTC / XMP metadata
Checks for the IPTC-standard digitalSourceType = trainedAlgorithmicMedia tag and aiSystemUsed fields embedded by newer generation pipelines.
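As an illustration of what this layer checks (a minimal sketch, not Arda's implementation), the IPTC digitalSourceType value for fully AI-generated media can be spotted by scanning the file's embedded XMP packet:

```python
import re

# IPTC digitalSourceType value for fully AI-generated media
# (illustrative check only, not Arda's actual code).
TRAINED_ALGORITHMIC_MEDIA = b"trainedAlgorithmicMedia"

def xmp_flags_ai(file_bytes: bytes) -> bool:
    """Return True if the embedded XMP packet declares the image AI-generated."""
    # XMP packets are delimited by <?xpacket begin ...?> ... <?xpacket end ...?>
    match = re.search(rb"<\?xpacket begin.*?<\?xpacket end[^>]*\?>",
                      file_bytes, re.DOTALL)
    if not match:
        return False
    return TRAINED_ALGORITHMIC_MEDIA in match.group(0)
```

A real scanner would parse the XMP as RDF/XML rather than pattern-match bytes, but the signal itself is this simple: one standardized value in one standardized field.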
EXIF forensics
Flags AI-generation software names in EXIF Software tags, common AI output dimensions (512×512, 1024×1024), and PNG tEXt chunks carrying Stable Diffusion parameters — prompt, sampler, seed, model hash.
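The heuristics above can be sketched as a simple signal collector (illustrative only; the hint list and signal names are assumptions, not Arda's actual rules):

```python
# Assumed software-tag hints and the common AI output sizes named above.
AI_SOFTWARE_HINTS = ("stable diffusion", "midjourney", "dall-e", "comfyui")
COMMON_AI_SIZES = {(512, 512), (1024, 1024)}

def exif_signals(software: str, size: tuple[int, int],
                 png_text: dict[str, str]) -> list[str]:
    """Collect weak EXIF/PNG signals suggesting AI generation."""
    signals = []
    if any(hint in software.lower() for hint in AI_SOFTWARE_HINTS):
        signals.append("software-tag")
    if size in COMMON_AI_SIZES:
        signals.append("ai-dimensions")
    # Stable Diffusion writes prompt, sampler, seed, and model hash
    # into a PNG tEXt chunk keyed "parameters".
    if "parameters" in png_text:
        signals.append("sd-parameters")
    return signals
```

Each signal alone is weak (plenty of authentic photos are 1024×1024), which is why this layer reports findings rather than a verdict.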
Pixel-level vision
Sends the image to Gemini 2.5 Flash with a vision prompt looking for known AI hallmarks — overly smooth skin, symmetry artifacts, hand and ear malformations, and brushstroke inconsistency.
Invisible watermark detection
Runs the Stable Diffusion invisible-watermark decoder. When Stable Diffusion generates an image, it embeds a near-invisible signature — finding it is conclusive proof.
Five outcomes, each with a confidence score.
Arda rolls the five layers into one of five verdicts — with a confidence percentage and a plain-English explanation listing the decisive signals.
AI detection is probabilistic. Not a legal or forensic-grade determination.
What you walk away with.
Every scan produces a report that names every signal, states the verdict, and includes a SHA-256 hash of the file for tamper-evidence.
JSON report
Machine-readable record with every layer's finding, the verdict, the confidence, the scan timestamp, and a SHA-256 hash.
PDF report (Everyone)
Branded, shareable PDF with the verdict, the decisive reasons, and the full evidence trail — ready to send to a client or editor.
Library audit (Premium)
Sweep your entire Arda library in the background, batched in groups of 100. See a verdict breakdown for every image in one view.
Tamper-evident (Premium)
Every report includes a SHA-256 hash of the scanned file. If the file changes by a single byte, the hash changes — the report is pinned to this exact image.
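The pinning described above is a plain content hash. A minimal sketch of the idea:

```python
import hashlib

def file_fingerprint(file_bytes: bytes) -> str:
    """SHA-256 of the exact scanned bytes; changing any single byte
    produces a completely different 64-character hex digest."""
    return hashlib.sha256(file_bytes).hexdigest()
```

Re-hash the file you hold and compare against the hash in the report: a match proves the report refers to this exact image, byte for byte.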
Unlimited standard scans, free.
Every Arda account includes the AI Detector. Standard scans are unlimited on Free. Premium unlocks unlimited Deep scans, PDF reports, and the library audit.
Premium
- Unlimited standard & Deep scans
- Branded PDF reports
- Whole-library audit
- Everything in Arda Premium
Common questions.
- How accurate is Arda's AI Detector?
- Accuracy depends on the evidence layers available for the specific image. When a C2PA signature or an invisible watermark is present, the verdict is conclusive. For images without those signals, Arda combines metadata forensics with vision analysis — both probabilistic. The report shows every layer independently so you can judge the strength of the evidence, not just the verdict.
- Is this legal or forensic-grade?
- No. AI detection is probabilistic and Arda's reports are not legal or forensic-grade determinations. They're a defensible, auditable evidence trail — useful for newsrooms, insurance, education, and internal verification, not for court.
- What if the five layers disagree?
- The report shows each layer's independent finding. The overall verdict weighs conclusive signals (C2PA, invisible watermark) heaviest, metadata next, and vision analysis as a supporting signal. When they disagree you'll see the verdict roll toward "Inconclusive" with the decisive reasons called out in plain English.
- Which AI generators can you detect?
- Any generator that signs images with C2PA (OpenAI DALL·E, Adobe Firefly, Google Imagen, and more), any pipeline that writes the IPTC digitalSourceType=trainedAlgorithmicMedia tag, Stable Diffusion images that carry the invisible watermark or PNG tEXt parameters, and anything with telltale EXIF or filename signatures. Pixel analysis covers generators that don't leave metadata at all.
- Can you detect edited real photos?
- The detector is tuned for "AI-generated" vs. "authentic." A real photo edited in Photoshop will typically come back "Likely Authentic" or "Authentic." Heavy AI-assisted edits (generative fill, AI upscaling) may surface as "Inconclusive" since they share some signals with fully generated images.
- What's the difference between Standard and Deep scans?
- Standard runs the first four layers — C2PA, IPTC/XMP metadata, EXIF forensics, and pixel-level vision. Deep adds the invisible watermark decoder, which can conclusively identify Stable Diffusion-generated images. Standard is unlimited on Free; Deep is 5/month on Free and unlimited on Premium.
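The layer weighting described in the FAQ above (conclusive signals heaviest, metadata next, vision supporting) can be sketched like this. The weights and thresholds here are invented for illustration, not Arda's actual scoring:

```python
# Assumed weights: conclusive layers dominate, metadata next, vision supporting.
WEIGHTS = {"c2pa": 1.0, "watermark": 1.0, "iptc_xmp": 0.6, "exif": 0.5, "vision": 0.3}

def verdict(findings: dict[str, str]) -> str:
    """findings maps layer name -> 'ai', 'authentic', or 'none'."""
    # A conclusive layer decides outright.
    for layer in ("c2pa", "watermark"):
        if findings.get(layer) == "ai":
            return "AI-Generated"
    score = sum(WEIGHTS[layer] * (1 if finding == "ai" else -1)
                for layer, finding in findings.items()
                if finding != "none" and layer in WEIGHTS)
    if score > 0.5:
        return "Likely AI-Generated"
    if score < -0.5:
        return "Likely Authentic"
    return "Inconclusive"
```

Note how a lone EXIF signal contradicted by vision analysis lands near zero, which is exactly the "roll toward Inconclusive" behavior the FAQ describes.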
Drop in an image. Get an evidence trail.
No install. Works in your browser. Free accounts get unlimited Standard scans and 5 Deep scans every month.