
Image by Nanshu Lu, from EurekAlert

New E-Tattoos To Replace Traditional EEG Systems

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

Researchers have developed a groundbreaking liquid ink that can be printed directly onto the scalp, creating electronic tattoos (e-tattoos) to measure brain activity.

In a Rush? Here are the Quick Facts!

  • Researchers developed liquid ink for printing e-tattoos to monitor brain activity.
  • E-tattoos eliminate discomfort and complexity of traditional EEG setups.
  • E-tattoos maintain stable brainwave monitoring for over 24 hours.

This innovation, detailed in Cell Biomaterials on Monday, eliminates the discomfort and complexity of traditional electroencephalography (EEG) setups, offering new possibilities for neurological diagnostics and brain-computer interfaces.

Traditional EEG systems rely on labor-intensive electrode placement, conductive gels, and bulky cables, often leading to signal degradation and patient discomfort during extended use.

The new e-tattoo technology, designed by a team led by Nanshu Lu from the University of Texas at Austin, simplifies the process by using biocompatible ink that flows through hair to adhere directly to the scalp.

Using an inkjet printer, researchers can precisely apply the e-tattoo ink to predetermined locations on the scalp. Once dried, the ink forms thin-film sensors capable of capturing brainwaves without the need for adhesives or lengthy preparation.

The new approach addresses those challenges by combining advanced materials with non-contact, on-body digital printing to create the e-tattoos.

This breakthrough technology provides a comfortable and efficient solution for long-term, high-quality brain activity monitoring, eliminating the drawbacks of conventional EEG systems.

“Our innovations in sensor design, biocompatible ink, and high-speed printing pave the way for future on-body manufacturing of electronic tattoo sensors, with broad applications both within and beyond clinical settings,” said Nanshu Lu, the paper’s co-corresponding author at the University of Texas at Austin, as reported by EurekAlert.

Tests on participants demonstrated that e-tattoos performed as well as conventional electrodes, with superior durability. While gel-based electrodes dried out and failed after six hours, e-tattoos maintained stable connectivity for over 24 hours.

The ink’s adaptability also allows for the integration of signal-conducting lines, replacing standard wires in EEG setups. This adjustment significantly reduces noise interference. In future iterations, researchers aim to embed wireless transmitters directly into the e-tattoos, paving the way for a fully wireless EEG system.

E-tattoos are already used on various parts of the body to measure signals like heart activity and muscle fatigue, but applying them to hairy areas like the scalp posed a challenge until now, as noted by EurekAlert.

The breakthrough ink formulation overcomes this hurdle, broadening the applications of e-tattoo technology. This innovation not only streamlines EEG tests but also opens doors for more practical and widespread use of brain-monitoring technologies in clinical and non-clinical settings.


Image by Kireyonok_Yuliya, from Freepik

Stanford Expert Accused Of Using AI To Fabricate Court Statement

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

Stanford professor Jeff Hancock is accused of using AI to fabricate citations in a court declaration defending Minnesota’s deepfake law.

In a Rush? Here are the Quick Facts!

  • Jeff Hancock is accused of using AI to fabricate court declaration citations.
  • Two of Hancock’s cited academic articles are untraceable and may not exist.
  • Attorney Berdnarz claims the errors resemble AI-generated “hallucinations.”

Stanford communication professor and misinformation expert Jeff Hancock is under scrutiny after being accused of using artificial intelligence (AI) to fabricate parts of a court statement, as reported by The Stanford Daily.

Hancock, the founding director of Stanford’s Social Media Lab, submitted a 12-page declaration in November to a Minnesota court in defense of the state’s 2023 law criminalizing the use of deepfakes to influence elections, as reported by The Daily.

His testimony supported Minnesota Attorney General Keith Ellison, emphasizing the dangers of AI-generated media in spreading misinformation. Hancock argued deepfakes undermine traditional fact-checking and increase the persuasiveness of false information, says The Daily.

The declaration, which included 15 citations, has come under fire after two cited academic articles could not be located. These works were untraceable through their reported digital object identifiers or the journals cited, according to The Daily.

Attorney Frank Berdnarz, representing the plaintiffs, Republican State Representative Mary Franson and conservative social media satirist Christopher Kohls, challenged the integrity of Hancock’s testimony, as noted by The Daily.

Berdnarz argued that the questionable citations resembled AI-generated “hallucinations” and called for the document to be excluded from judicial consideration.

He stated, “The existence of a fictional citation Hancock (or his assistants) didn’t even bother to click calls into question the quality and veracity of the entire declaration,” as reported by The Daily.

Hancock, who was paid $600 per hour for his testimony, made his declaration under penalty of perjury. The professor has not commented on the allegations, reported The Daily.

Hancock, who teaches courses on communication and technology, was featured in a 2024 Netflix documentary with Bill Gates discussing AI’s future. He is slated to teach Truth, Trust, and Tech in spring 2025, exploring deception in digital media, noted The Daily.

Kohls, known online as Mr. Reagan, has previously opposed California laws targeting deceptive political media, including one addressing a manipulated campaign video of Vice President Kamala Harris, as noted by The Daily.

If proven true, the allegations would cast a shadow over the credibility of academic expertise in high-stakes legal matters, where integrity is paramount. They would also highlight a profound contradiction: an expert on misinformation fabricating evidence in a legal declaration. Hancock’s role as a scholar who warns against the dangers of AI-generated deception stands in stark contrast to the accusation that he used the very tools he critiques to bolster his testimony.