New AI Detects Questionable Scientific Journals - 1

Image by National Cancer Institute, from Unsplash

New AI Detects Questionable Scientific Journals

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Scientists developed an AI detection system for open-access journals with shady practices, revealing integrity threats in science and the need for human assessment

In a rush? Here are the quick facts:

  • AI trained on 12,000 reputable and 2,500 low-quality journals.
  • AI flagged over 1,000 previously unknown suspect journals.
  • The current AI false positive rate is 24%, requiring human oversight.

Open-access journals enable free access to research for scientists worldwide, boosting their global exposure. However, the open-access model has created an environment where questionable journals now proliferate. These outlets often charge authors fees, promise fast publication, but lack proper peer review, putting scientific integrity at risk.

Researchers recently published their findings testing a new AI tool which aims to tackle this problem. They trained the AI using more than 12,000 high-quality journals, together with 2,500 low-quality or questionable publications removed from the Directory of Open Access Journals (DOAJ).

The AI learned to identify red flags by analyzing editorial board gaps, unprofessional website design, and minimal citation activity.

It identified more than 1,000 previously unknown suspicious journals from a dataset of 93,804 open-access journals on Unpaywall, which collectively publish hundreds of thousands of articles. Many of the iffy journals come from developing countries.

“Our findings demonstrate AI’s potential for scalable integrity checks, while also highlighting the need to pair automated triage with expert review,” the researchers write.

The researchers point out that the system is not perfect. It currently produces 24% false positives, meaning one in four genuine journals may be incorrectly flagged. Human experts are still required for final evaluation.

The AI system assesses journal credibility by analyzing website content, design elements, and bibliometric data, including citation patterns and author affiliations. Indicators of questionable journals include high self-citation rates and lower author h-index values, while established institutional diversity and broad citation networks indicate reliability.

The research team expects future development will improve the AI system’s ability to detect deceptive publisher strategies. By combining automated tools with human oversight, the scientific community can better protect research integrity and guide authors toward trustworthy journals.

PromptLock: How AI Could Supercharge Ransomware Attacks - 2

Image by Max Bender, from Unsplash

PromptLock: How AI Could Supercharge Ransomware Attacks

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Cybersecurity firm ESET has announced the discovery of what its researchers call “the first known AI-powered ransomware.”

In a rush? Here are the quick facts:

  • PromptLock can steal, encrypt, and potentially destroy data.
  • It uses AI to generate malicious scripts automatically on the target machine.
  • AI could allow ransomware to adapt, scale, and attack faster than before.

The malicious software, called PromptLock, shows how AI can be used in dangerous cyberthreats. Researchers at ESET explain that PromptLock can steal data, while encrypting files, and destroy data. However, the researchers say that this destructive function does not seem to be active yet.

In other words the ransomware does not seem to have been deployed in real-world attacks. As a result, ESET believes that the software is either an unfinished proof-of-concept, or a project still under development.

“The PromptLock malware uses the gpt-oss-20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly, which it then executes. PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption,” said ESET researchers.

They added: “The PromptLock ransomware is written in Golang, and we have identified both Windows and Linux variants uploaded to VirusTotal.” Golang is a flexible programming language often used by malware developers because it can run across different platforms.

Experts have long warned that AI could give hackers new tools. “AI models have made it child’s play to craft convincing phishing messages, as well as deepfake images, audio and video,” ESET noted. With these tools widely available, even attackers with limited technical skills can launch more advanced attacks .

For example, researchers at CloudSek recently discovered that hackers can embed ransomware instructions inside documents via AI summarizers. “A novel adaptation of the ClickFix social engineering technique has been identified, leveraging invisible prompt injection to weaponize AI summarization systems,” they said.

These infected AI summarizers can produce dangerous instructions through invisible text tricks and repeated hidden commands, leading users to unknowingly execute malicious tasks automatically.

Ransomware has evolved into a major cybersecurity threat , often used by both criminals and advanced hacking groups. The discovery of PromptLock technology indicates that AI systems could enhance these ransomware attacks, automating file scanning, data theft, and adjusting tactics in real time.

While PromptLock may not yet be in active use, researchers say it highlights a future of cyberattacks powered by artificial intelligence.