
AI or Not? Using AI to Detect AI


In the past few months, I’ve been spending a lot of time exploring the practical side of AI: building agents, playing with prompt engineering, and integrating LLMs into real-world systems. One of the recurring thoughts I had during this journey was:
“There’s so much AI-generated content out there… but can we reliably spot it?”

This curiosity led me to build something small and focused: a web application called AI or Not?. It’s based on a very simple idea:

If humans find it hard to tell whether something was generated by AI… why not ask the AI itself?


🧠 The Premise

Whether it’s a perfectly structured paragraph or a hyper-realistic image, distinguishing human-created content from AI-generated output has become increasingly difficult. While tools exist to detect AI-written text or deepfake-style images, most of them are fragmented, overly complex, or simply not accessible to regular users.

So I decided to build a unified interface, something minimal and useful, that lets anyone paste a text snippet or upload an image and get a prediction of whether it’s AI-generated. But more importantly, I wanted the output to include two critical things (a rough sketch of the response follows the list):

  • A confidence score (because nothing is 100% certain in this space)
  • A short reasoning or explanation for the result
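
To make that concrete, here’s a rough sketch of the kind of response I had in mind. The field names and values below are illustrative, not the app’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    """Illustrative response shape; not the app's published schema."""
    verdict: str       # "ai" or "human"
    confidence: float  # 0.0 to 1.0, never presented as certainty
    reasoning: str     # short, human-readable explanation of the call

# e.g. DetectionResult(verdict="ai", confidence=0.82,
#                      reasoning="Very uniform sentence structure and generic phrasing.")
```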


🛠️ Behind the Scenes

The core of AI or Not? is an agent-based backend system that uses a combination of models and heuristic rules to determine the origin of the content. It’s powered by LLMs for reasoning and interpretability, and backed by image analysis models fine-tuned on common AI-generated datasets.
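
To give a flavour of the approach (the prompt, thresholds, and helper names below are all illustrative; `ask_llm` stands in for whatever chat-completion client the backend uses), the text path boils down to an LLM “judge” plus a couple of cheap statistical signals:

```python
import json

# Hypothetical judge prompt; the app's real prompts aren't published.
DETECTOR_PROMPT = (
    "You are a content-provenance analyst. Decide whether the text below was "
    "written by an AI model. Reply with JSON only, using the keys "
    '"verdict" ("ai" or "human"), "confidence" (0 to 1), and "reasoning" (one sentence).'
)

def heuristic_signals(text: str) -> dict:
    """Cheap statistical signals computed alongside the LLM call (illustrative, not the app's real rules)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences] or [0]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return {"sentence_count": len(sentences), "sentence_length_variance": variance}

def detect_text(text: str, ask_llm) -> dict:
    """Ask the LLM for a verdict and attach the heuristic signals.

    `ask_llm` is a placeholder for the chat-completion client; it is assumed
    to take a prompt string and return the model's raw JSON reply.
    """
    reply = ask_llm(DETECTOR_PROMPT + "\n\nText:\n" + text)
    result = json.loads(reply)
    result["signals"] = heuristic_signals(text)
    return result
```

The heuristics are deliberately simple; the interesting part is asking the LLM to justify its own verdict, which is what makes the explanation in the output possible.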

The application blends a few of my long-time interests (a rough sketch of how they fit together follows the list):

  • Python and backend architecture (yes, still a Pythonista at heart)
  • Frontend in React (for a snappy, no-friction interface)
  • Serverless architecture for easy scalability
  • And of course, hands-on experimentation with LLMs and AI agents
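
Roughly how these pieces could fit together (a sketch of the shape only, not the actual deployment): the React frontend POSTs the input to a serverless Python function, which runs the detection and returns the verdict as JSON.

```python
import json

def ask_llm(prompt: str) -> str:
    """Placeholder for the real chat-completion client; swap in your provider's SDK."""
    raise NotImplementedError

def handler(event, context):
    """Hypothetical AWS-Lambda-style entry point for the text detector."""
    body = json.loads(event.get("body") or "{}")
    text = body.get("text", "")
    if not text:
        return {"statusCode": 400, "body": json.dumps({"error": "no text provided"})}

    # detect_text() is the sketch from the previous section.
    result = detect_text(text, ask_llm)
    return {"statusCode": 200, "body": json.dumps(result)}
```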


🧪 Why I Built It

Building AI or Not? was more than a weekend hack. It was a way for me to think through code, to test out ideas around AI interpretability, reasoning chains, and how far we can trust AI to critique itself.

It also feeds into a larger question I’ve been exploring recently:

Can AI play the role of an “introspective verifier” in systems where human oversight is limited?

This is especially relevant in the context of fast-growing generative ecosystems, where content authenticity and trust become serious concerns.


🕵️ Try It Yourself

The app is live at:
👉 https://ai-or-not.shalabhaggarwal.com/

It’s a simple, no-frills interface: paste a piece of text or upload an image and see what the AI thinks. And if you’re curious about how it works or want to discuss it further, I’m always up for a conversation.


✍️ Closing Thoughts

We’re entering an era where AI not only creates, but also interprets, critiques, and moderates. Tools like AI or Not? are small steps toward building that self-reflective loop where AI can help us understand AI better.

This project was a mix of fun, curiosity, and some solid engineering, and I hope it sparks a few ideas for others exploring similar problems.

If you’re working on something in this space or thinking about practical applications of LLMs and agents, I’d love to connect.
