Saturday, December 6, 2025

The Invisible Citations: When Academic Integrity Meets the Emperor’s New Clothes

In an era where AI can write entire research summaries in seconds, here’s a scenario about a postdoc in Italy facing a problem as old as Hans Christian Andersen’s emperor.

Dr. Bandello is not a real person, but represents a dilemma many early-career researchers face: when everyone else seems to see evidence you cannot verify, do you admit you’re looking at an empty screen?

6:30 AM at Policlinico Tindari

Dr. Bandello stares at her laptop screen in the hospital’s quiet research wing, thirty minutes before morning rounds begin. At 32, she’s preparing her first independent grant proposal—a make-or-break moment that could launch her career in cardiovascular imaging.

The AI research assistant she’d discovered last week had seemed like a godsend. Within seconds, it generated comprehensive literature summaries with impressive specificity: “Recent breakthrough by Müller et al. (2024) demonstrated 43% improvement in early detection using novel biomarker combinations, published in *European Heart Journal*. The Copenhagen cohort (n=2,847) showed statistical significance at p<0.001 with 94% sensitivity.”

Perfect. Exactly the novel angle she needed to distinguish her proposal from established labs with decades more resources. The AI had even provided what appeared to be cutting-edge preprint citations, claiming they were already published in top-tier journals.

The Search That Returned Nothing

But when Dr. Bandello tried to access the full papers through PubMed—the gold standard database she’d relied on throughout her PhD—something troubling emerged. The Müller study returned zero results. So did the Copenhagen cohort data. Even expanding her search to include preprint servers and specialty databases yielded nothing.

She tried different search strategies, thinking perhaps she’d missed something. MeSH terms, author combinations, journal-specific searches—all empty. The detailed statistics that had seemed so promising, so ready-made for her grant application, appeared to exist only in the AI’s summary.

Other citations in the AI-generated report were real—legitimate papers with verifiable DOIs that she could access immediately. But these were mixed seamlessly with phantom references that looked equally credible. Author names followed proper conventions. Journal titles matched real publications. The statistics were formatted exactly like genuine research findings. Without DOIs to click, without papers to download, she was essentially citing air.
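(An aside for readers who script their own checks: the kind of verification Dr. Bandello was doing by hand can be sketched in a few lines against Crossref’s public REST API, which answers 404 for DOIs that were never registered. The DOI strings in the snippet are placeholders, not real references; this is a minimal illustration under those assumptions, not a description of how any particular tool works.)

```python
# Minimal sketch: ask Crossref whether a DOI is actually registered.
# The DOI strings below are placeholders, not real citations.
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_is_registered(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False otherwise."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
        # Registered DOIs come back with metadata (title, journal, authors).
        return "message" in record
    except urllib.error.HTTPError as err:
        if err.code == 404:   # never registered: a phantom citation
            return False
        raise                 # other errors: don't guess, surface them

if __name__ == "__main__":
    for doi in ["10.1000/placeholder-real", "10.1000/placeholder-phantom"]:
        status = "found" if doi_is_registered(doi) else "not found"
        print(f"{doi}: {status}")
```

A similar lookup can be run against PubMed’s E-utilities for PMIDs. Either way, a reference that no registry can resolve deserves a second look before it goes into a grant proposal.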

The Pressure to See What Isn’t There

“Why are you still manually searching everything?” asked Dr. Rossi, her office mate, glancing over during their brief coffee break. “I’ve been using AI tools for months. Everyone is. It saves hours of literature review time.”

Dr. Bandello hesitated. “I just like to verify the citations myself.”

“Verify? Why would you doubt the AI? It’s pulling from the same databases you are, just faster.” Dr. Rossi shrugged. “Maybe those papers are too recent for PubMed indexing. Or they’re in specialty journals. Trust me, if the tool found them, they exist.”

The pressure was subtle but real. In their competitive research environment, admitting you couldn’t find papers that an AI had supposedly located felt like confessing incompetence. Other postdocs discussed AI-generated insights confidently in meetings. Principal investigators nodded approvingly at the speed of modern literature reviews.

Was she the only one actually checking? Or was everyone else also pretending to verify what they couldn’t access, afraid to be the one who admitted the emperor might be naked?

The Supervisor’s Simple Question

The awakening came during her grant proposal review meeting. Dr. Marcelli, her supervisor, scrolled through the reference list with practiced efficiency. “Interesting findings from this Müller study,” he noted. “Can you send me the DOI? I’d like to review their methodology before we submit.”

Dr. Bandello’s stomach dropped. “I’ll… I’ll get that to you this afternoon.”

That afternoon, she called Dr. Marcelli’s office. “About that Müller paper—I’m having trouble locating the exact DOI.”

“Ah,” he said quietly. “The invisible citations.”

“The what?”

“It’s becoming more common. AI tools that generate plausible-sounding references to papers that don’t exist. Like Hans Christian Andersen’s emperor—everyone pretends to see the magnificent clothes until someone asks to touch the fabric.”

Dr. Bandello felt simultaneously relieved and embarrassed. “So you’ve seen this before?”

“Dr. Bandello, you’re not the first researcher to come to me with phantom references. And you won’t be the last. The question is: what do we do now?”

Back to Verifiable Ground

That evening, Dr. Bandello returned to PubMed with a different approach. Instead of trying to verify AI-generated citations, she built her literature review from the ground up using verified sources. Each paper came with a clickable DOI. Each finding could be traced to its original publication.

The process was slower, certainly. But as she worked through legitimate cardiovascular imaging research, she discovered something unexpected: the real papers revealed research gaps and methodological opportunities that were actually more promising than the phantom findings had been. Her grant proposal timeline stretched by two days. But every citation was verifiable. Every statistic was traceable. Every claim was backed by papers she could access, download, and thoroughly review.

The night before submission, she felt something she hadn’t experienced with the AI-generated draft: complete confidence in her references.

Dr. Bandello’s experience is fictional, but the dilemma is universal. Hans Christian Andersen wrote about an emperor who paraded naked because no one dared admit they couldn’t see his “magnificent clothes.”

In modern research, the invisible clothes are citations without DOIs. The courtiers are colleagues who pretend to verify what they cannot.
The child who states the obvious is the researcher who asks:

“But where’s the actual paper?”

KlastroHeron was built on that child’s simple question. Every citation. Every DOI. Every time. Because academic integrity starts with verification.

14-day trial. No credit card required.

🔬 Request Your 2-Week Free Trial

No credit card. No auto-subscription. Just test and decide.

Who Can Request?

  • Biomedical researchers
  • Clinical practitioners
  • PhD candidates
  • Biomedical students
  • Pharmaceutical researchers
  • Hospital administrators

How to Apply:

Send an email to: contact@klastrovanie.com

Include:

  1. Your institution and country
  2. Your role (e.g., Postdoc, Clinical Researcher, Biomedical Student)
  3. Your research field
  4. Why you want to test KlastroHeron (2-3 sentences)

We’ll review your request and send you a 2-week license within 24 hours.
Full access to all features (300 searches/month) during the trial.

After 2 weeks? Decide if it fits your workflow. No pressure. No auto-renewal.

Note: This is a fictional scenario based on common challenges in medical research. Names, institutions, and specific details are illustrative. The situations described reflect real pain points many professionals face.

Featured image generated using Midjourney for illustrative purposes.
