
Photo by Kelly Sikkema on Unsplash
Google Launches Audio Overview, an AI Feature That Creates Podcasts From Your Notes
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
Google announced yesterday a new experimental feature called Audio Overview, which generates audio discussions between two AI hosts about the topics in users’ notes on NotebookLM—another experimental tool launched last summer and expanded globally this summer.
According to the announcement, Audio Overview runs on NotebookLM’s newest model, Gemini 1.5, using its multimodal capabilities to create engaging audio discussions based on users’ documents and the information in their personal notes.
“With one click, two AI hosts start up a lively ‘deep dive’ discussion based on your sources,” explained Biao Wang, Product Manager at Google Labs, in the announcement. “They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.”
To demonstrate the new feature, Google provided an 8-minute audio sample of what the tool can do.
Two hosts, who sound like a man and a woman with very human-like voices, explain that the new AI tool can “do the reading for you and just tell you the good stuff.” The update is aimed at people who learn better by listening to information than by reading it. “It’s like having a personalized expert on whatever you are researching,” explains the female host.
Wang also noted that the new AI tool has limitations: “For example, for large notebooks, it can take several minutes to generate an Audio Overview.” The hosts speak only English, users can’t interrupt them, and the content might include inaccurate information. Another consideration is that the “podcast” is not “objective,” as it reflects only the sources provided by users.
To access the new feature, users have to open an existing notebook on NotebookLM, then open the Notebook guide, and click on “Generate” to create the AI podcast.
Google recently shared another AI update, an enhancement feature to “polish” emails for Google One AI Premium or Gemini Workspace add-on users.

Image by Mariia Vitkovska, from iStock Photos
AI Voice App Can Detect High Blood Pressure
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
Klick Labs announced on Tuesday a new method for detecting chronic high blood pressure (hypertension) using voice recordings. Their research presents a non-invasive approach that could improve early hypertension detection.
In the study, 245 participants recorded their voices up to six times a day for two weeks using a Klick Labs mobile app. The app, which uses machine learning, analyzed vocal biomarkers to identify hypertension with up to 84 percent accuracy for females and 77 percent for males. Key features analyzed included pitch variability, speech energy distribution, and sound sharpness.
Yan Fossat, Senior Vice President at Klick Labs, noted, “we discovered a more accessible way to detect hypertension, which we hope will lead to earlier intervention for this widespread global health issue. Hypertension can lead to a number of complications, from heart attacks and kidney problems to dementia.”
Hypertension affects over 35% of the global population and is dubbed the “silent killer” by the World Health Organization, because many people are unaware they have it. Traditional blood pressure measurements, like arm cuffs, require specialized equipment and expertise, making them less accessible in some areas.
“Voice technology has the potential to exponentially transform healthcare, making it more accessible and affordable, especially for large, underserved populations,” said Jaycee Kaufman, Klick Labs research scientist and co-author of the study.
Klick Labs’ research builds on previous studies that have linked speech characteristics to heart failure symptoms and pulmonary hypertension. However, the researchers argue that this is the first study to directly investigate the relationship between speech and arterial blood pressure.
While the study’s results are promising, the researchers acknowledge several limitations. They pointed out that the number of hypertensive cases was limited, and that the participant pool was predominantly of Indian ethnicity.
They emphasized the need for a larger and more diverse participant pool to include individuals with a broader range of hypertension symptoms and varied ethnic backgrounds.
Although the proposed model demonstrated high performance with single-recording tests, the researchers observed that optimal results were achieved when data from multiple recordings per participant were used, necessitating data collection over several sessions.
They called for further research to explore methods for reducing the number of required recordings, ideally transitioning to a one-shot approach.
The researchers also noted that participants received moderate training in intonation, articulation, and the speech corpus, and suggested that more naturalistic speech-collection scenarios should be investigated.
As the availability and accuracy of AI-powered health tools continue to grow, it’s essential to approach them with caution. While these tools can be valuable resources, they should not yet replace the advice of a qualified medical professional.