*Source: “Scientists are failing to disclose their use of AI despite journal mandates” (Physics World)*

# The Hidden Hand: Why Scientists Are Secretly Using AI (and Why It Matters)

In the world of science, honesty is everything. Whether you are measuring the temperature of a distant star or counting the cells in a Petri dish, the goal is to find the truth. But recently, a shadow has fallen over the world of academic publishing.

According to reports, including those highlighted by *Physics World*, a growing number of scientists are using Artificial Intelligence (AI) to write their research papers, but they aren’t telling anyone about it. Even though most major journals now have strict mandates requiring researchers to disclose their use of AI, many authors are keeping their digital assistants a secret.

This isn’t just a case of “cheating on homework.” It’s a trend that threatens the very foundation of how we trust scientific discovery.

## The Rulebook: What Journals Actually Want

To understand the problem, we first have to understand the rules. Most scientific journals, the peer-reviewed publications where research is officially vetted and released, have updated their guidelines over the last two years.

The consensus is generally this: **AI is okay to use, but you must be transparent.**

Journals like *Nature*, *Science*, and those published by IOP Publishing (the publishing arm of the Institute of Physics) generally allow AI to help with things like:
* Polishing grammar and language.
* Checking code for errors.
* Summarizing large batches of data.

However, they almost all agree that AI cannot be listed as an “author.” Why? Because an author must take responsibility for the work. If a human makes a mistake, they can be held accountable. You can’t sue or fire an algorithm. Therefore, the rule is simple: If you used AI to help write or analyze your work, you must include a statement explaining exactly what the AI did.

The problem is that many scientists are simply ignoring this rule.

## The “Smoking Guns”: How We Know They’re Hiding It

If scientists are hiding their AI use, how do we know they are doing it? The answer is often found in the most embarrassing way possible: they forget to delete the evidence.

In recent months, the internet has been flooded with examples of published scientific papers that contain “telltale” AI phrases. These are the equivalent of a student turning in an essay that still has the Wikipedia “edit” buttons visible.

### Example 1: The “As an AI Language Model” Slip-up
Several papers have been found in peer-reviewed journals containing the phrase: *“As an AI language model, I do not have personal opinions, but…”* This happens when a researcher asks ChatGPT to write a conclusion or an abstract, and then simply copies and pastes the entire response without reading it carefully.
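Catching these slip-ups doesn’t require fancy detection software; a plain text search does the job. Here is a minimal Python sketch of the idea (the phrase list is illustrative, not any journal’s official screening list):

```python
# Minimal sketch: flag known AI boilerplate phrases in a manuscript.
# The phrase list below is illustrative, not an official screening list.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i do not have personal opinions",
    "as of my last knowledge update",
    "regenerate response",
]

def flag_telltale_phrases(text: str) -> list[str]:
    """Return every telltale phrase found in the text, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

abstract = ("As an AI language model, I do not have personal opinions, "
            "but the results suggest a significant correlation.")
print(flag_telltale_phrases(abstract))
# -> ['as an ai language model', 'i do not have personal opinions']
```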

### Example 2: The Hallucinated Reference
AI tools often “hallucinate,” which means they make things up that sound real. Scientists have been caught submitting papers with bibliographies full of fake sources. The AI creates a title like *“Quantum Gravity and Its Effects on Carbon Structures (2022)”* and assigns it a real-sounding author. When editors try to look up the paper, they realize it doesn’t exist.
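Fake references can also be caught by machine. The sketch below shows one way to do it in Python, assuming the `requests` library and Crossref’s public REST API (a free index of published papers). The matching logic is deliberately simple and illustrative, not a production screening tool:

```python
import requests

def closest_real_title(cited_title: str) -> str | None:
    """Ask Crossref's public API for the best bibliographic match."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if items and items[0].get("title"):
        return items[0]["title"][0]
    return None

cited = "Quantum Gravity and Its Effects on Carbon Structures"
print("Cited:  ", cited)
print("Closest:", closest_real_title(cited))
# If the closest real title is unrelated to the citation, the
# reference deserves a manual check: it may well be a hallucination.
```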

### Example 3: The Bizarre Graphics
Recently, a paper in the journal *Frontiers in Cell and Developmental Biology* went viral because it included an AI-generated image of a rat with impossibly large, anatomically nonsensical features. The labels on the diagram were gibberish. The researchers used an AI image generator to create a “professional-looking” diagram, but didn’t check whether the diagram actually made scientific sense. The paper was later retracted.

## Why Are Scientists Keeping it Secret?

If the journals allow AI use (as long as it’s disclosed), why are so many researchers staying silent? There are three main reasons:

### 1. The Fear of Stigma
Science is an old-fashioned world in many ways. There is a strong “do it yourself” ethic. Many researchers fear that if they admit an AI helped them write their paper, their peers will think they are lazy or that their brainpower isn’t up to the task. They worry that admitting to AI use will make their hard work seem “cheap.”

### 2. The Language Barrier
Science is global, and English is the primary language of research. For many brilliant scientists in Asia, South America, or Europe, English is a second or third language. AI is a godsend for these researchers because it helps them compete with native English speakers. However, they may feel that admitting to using a “translator” or “editor” makes them look less professional, so they hide the tool they used to level the playing field.

### 3. Laziness and the “Publish or Perish” Culture
In modern science, your career depends on how many papers you publish. This pressure is intense. Some researchers use AI to “mass-produce” papers, churning out as much content as possible. In these cases, the missing disclosure isn’t just an oversight; it’s an attempt to hide the fact that the human didn’t do much of the work at all.

## The Danger to Science: Why This Matters

You might wonder, “If the science is correct, does it really matter if a robot wrote the words?” The answer is a resounding **yes**.

### The Problem of Reproducibility
The golden rule of science is that someone else should be able to read your paper, follow your steps, and get the same result. This is called “reproducibility.” If an AI is used to analyze data or write a methodology section, it might omit a small, crucial detail that a human would have known to include. If we don’t know AI was involved, we don’t know where the “logic” might have gaps.

### The Breakdown of Peer Review
Scientific papers are checked by other experts (peer reviewers) before they are published. These reviewers are often volunteers who give up their time to ensure the quality of science. If journals are flooded with AI-generated submissions, the peer-review system risks being overwhelmed: human volunteers simply cannot keep pace with the speed at which a bot can produce “scientific-sounding” text.

### The “Black Box” of Data
In physics specifically, AI is often used to find patterns in massive datasets. If a physicist uses an AI to find a new particle but doesn’t disclose how the AI was trained, other scientists can’t verify if the discovery is real or just a “glitch” in the algorithm’s training.
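What would adequate disclosure look like in practice? One lightweight option is a machine-readable record of the training setup, published alongside the paper. The Python sketch below is purely illustrative: the field names and values are hypothetical, not a format any journal currently requires.

```python
import json

# Hypothetical example values: a real record would be filled in
# from the actual training run, not typed in by hand.
training_record = {
    "model": "gradient-boosted decision trees",
    "training_data": "name, version, and DOI of the dataset used",
    "split": {"train": 0.8, "test": 0.2, "random_seed": 42},
    "hyperparameters": {"n_estimators": 500, "learning_rate": 0.05},
    "code": "URL of the exact analysis code",
}

# Save the record so reviewers and readers can inspect it.
with open("ai_disclosure.json", "w") as handle:
    json.dump(training_record, handle, indent=2)
```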

## The Road Ahead: Transparency is the Key

The solution isn’t to ban AI. That would be like banning calculators in a math class; it’s an impossible and unnecessary battle. AI can be a powerful tool for good. It can help summarize years of research in seconds or help find a needle of a discovery in a haystack of data.

However, the culture of science must change.
* **Normalizing Disclosure:** Journals need to make it clear that using AI for grammar or formatting is totally fine and won’t hurt your chances of publication.
* **Better Tools:** Editors need better software to detect AI-generated content, not to “punish” it, but to ensure it is properly labeled.
* **Education:** Universities must teach young researchers how to use AI ethically—as a tool, not a ghostwriter.

***

## FAQ: AI and Scientific Publishing

**Q: Is it illegal for a scientist to use AI?**
**A:** No, it’s not illegal. However, it often violates the ethical guidelines and editorial policies of scientific journals. In the professional world of science, that can lead to papers being retracted (removed from the record) and can damage a researcher’s reputation.

**Q: Can’t we just use AI detectors to catch them?**
**A:** AI detectors are not perfect. They often give “false positives,” accusing humans of being bots, or “false negatives,” missing actual bot-written text. Even a detector with a 1% false-positive rate would wrongly flag about 100 honest authors for every 10,000 human-written papers it scanned. They are a helpful tool but not a complete solution.

**Q: Does AI actually do the experiments?**
**A:** Usually, no. In most cases, the AI is used to write the paper or create the charts *after* the lab work is done. However, some labs are starting to use “AI-driven robotics” to run experiments, which makes disclosure even more important.

**Q: What happens if a paper is found to have undisclosed AI?**
**A:** If the errors are small, the journal might issue a correction. If the AI created fake data or the authors lied about how the work was done, the paper is usually retracted. This is a “black mark” on a scientist’s career.

**Q: Is AI use always bad for science?**
**A:** Not at all! AI can help scientists process data from the Large Hadron Collider or predict how proteins fold. The problem isn’t the *use* of the tool; it’s the *secrecy* surrounding it.

## Conclusion

Science relies on a “chain of trust.” We trust the researcher, the journal trusts the peer reviewers, and the public trusts the final result. When AI is used in secret, that chain begins to rust. To keep science moving forward, researchers need to step out of the shadows and be honest about the digital hands helping them write the future. Transparency isn’t just a rule—it’s the heart of discovery.
