[Model Release] AIRealNet — Detecting AI‑Generated vs Real Images
Hello community,

We're excited to introduce AIRealNet, a vision model fine-tuned to distinguish AI-generated images from real photographs. With the rapid growth of generative models, we wanted to provide a lightweight, open-source tool for researchers, developers, and educators working on authenticity detection.
Key Features
- Architecture: Built on SwinV2-Tiny for efficiency and strong performance.
- Dataset: ~200k balanced images (AI-generated vs real), curated with a privacy-first approach.
- Use Cases:
  - Deepfake / AI-content detection
  - Research on authenticity and trust in media
  - Educational demos for computer vision courses
- Format: Available on the Hugging Face Hub for direct use with `transformers` or `timm`.
Quick Stats
- Classes: AI vs Real
- Balanced dataset for fair training
- Rows: [insert row count here after running wc -l or pandas]
- License: MIT
Try It Out
```python
from transformers import pipeline

pipe = pipeline("image-classification", model="XenArcAI/AIRealNet")
pipe("https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/677fcdf29b9a9863eba3f29f/eVkKUTdiInUl6pbIUghQC.png")  # example image
```
Links
- Dataset: Parveshiiii/AI-vs-Real on the Hugging Face Hub (note: this is not the full fine-tuning dataset)
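If you want to inspect the linked dataset yourself, here is a hedged sketch using the `datasets` library. It assumes the repo follows the standard image-classification layout with a `label` column and a `train` split; adjust the column and split names if the actual repo differs. The `class_balance` helper is illustrative.

```python
from collections import Counter

def class_balance(labels):
    """Return {label: fraction} for a sequence of labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def load_and_check(repo_id="Parveshiiii/AI-vs-Real"):
    """Download the dataset (network required) and report its class balance."""
    from datasets import load_dataset
    ds = load_dataset(repo_id, split="train")
    return class_balance(ds["label"])
```

For a balanced two-class dataset, `load_and_check()` should return fractions close to 0.5 for each class.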
Call for Feedback
We’d love to hear your thoughts on:
- Benchmarks on different datasets
- Potential integrations (e.g., moderation pipelines, authenticity checkers)
- Ideas for extending to multi-class detection (AI model source classification)
Let’s collaborate to make AI detection more transparent and accessible.