[Model Release] AIRealNet — Detecting AI‑Generated vs Real Images

Hello community :waving_hand:! We’re excited to introduce AIRealNet, a vision model fine‑tuned to distinguish AI‑generated images from real photographs. With the rapid growth of generative models, we wanted to provide a lightweight, open‑source tool for researchers, developers, and educators working on authenticity detection.

:key: Key Features

  • Architecture: Built on SwinV2‑Tiny for efficiency and strong performance.

  • Dataset: ~200k balanced images (AI‑generated vs real), curated with a privacy‑first approach.

  • Use Cases:

    • Deepfake / AI‑content detection

    • Research on authenticity and trust in media

    • Educational demos for computer vision courses

  • Format: Available on Hugging Face Hub for direct use with transformers or timm.

:bar_chart: Quick Stats

  • Classes: AI vs Real

  • Balanced dataset for fair training

  • Rows: [insert row count here after running wc -l or pandas]

  • License: MIT
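The balance claim above is easy to sanity-check before training. A minimal sketch, assuming the labels have been loaded into a plain Python list (the loading step and tolerance are illustrative assumptions, not part of the released pipeline):

```python
from collections import Counter

def check_balance(labels, tolerance=0.05):
    """Return True if no class deviates from a uniform split
    by more than `tolerance` (as a fraction of the total)."""
    counts = Counter(labels)
    total = sum(counts.values())
    expected = total / len(counts)
    return all(abs(c - expected) / total <= tolerance for c in counts.values())

# Toy example: a perfectly balanced two-class split
labels = ["AI"] * 100 + ["Real"] * 100
print(check_balance(labels))  # True
```

The same check generalizes to a multi-class setup, since it only compares each class count against a uniform split.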

:high_voltage: Try It Out

```python
from transformers import pipeline

pipe = pipeline("image-classification", model="XenArcAI/AIRealNet")
pipe("https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/677fcdf29b9a9863eba3f29f/eVkKUTdiInUl6pbIUghQC.png")  # example image
```
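If you want raw scores rather than the pipeline’s formatted output, the post-processing step is just a softmax over the model’s two logits. A minimal sketch with NumPy (the `id2label` mapping shown is an assumption for illustration; check the model’s config on the Hub for the actual labels):

```python
import numpy as np

def postprocess(logits, id2label={0: "AI", 1: "Real"}):  # hypothetical label map
    """Convert raw logits to (label, probability) pairs, highest first."""
    shifted = logits - np.max(logits)      # subtract max for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    order = np.argsort(probs)[::-1]        # sort indices by descending probability
    return [(id2label[int(i)], float(probs[i])) for i in order]

print(postprocess(np.array([2.0, 0.5])))
```

This mirrors what `pipeline("image-classification")` does internally after the forward pass, so it is handy when you run the model directly and only have logits.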

:globe_with_meridians: Links

:raising_hands: Call for Feedback

We’d love to hear your thoughts on:

  • Benchmarks on different datasets

  • Potential integrations (e.g., moderation pipelines, authenticity checkers)

  • Ideas for extending to multi‑class detection (AI model source classification)

Let’s collaborate to make AI detection more transparent and accessible.
