
Google And Harvard Experts Use AI To Create Groundbreaking Brain Images

  • Written by Andrea Miliani, Former Tech News Expert

Harvard neuroscientists and Google researchers worked together using artificial intelligence to create interactive 3D mappings of a small portion of the human brain—the size of half a grain of rice—and published never-before-seen images for the scientific community and the general public.

The research and the images were shared in a paper in Science on May 9 and have already helped scientists better understand brain structures. For example, one discovery is something called “axon whorls,” a cell structure whose purpose is still unknown. The results are still being analyzed and could help future researchers unravel open questions about the brain.

According to the MIT Technology Review, the images represent “the highest-resolution picture of the human brain ever created.” The interactive mappings, data, and findings are available for free on the Neuroglancer platform.
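The released volume is far too large to download in full, but small cutouts can be pulled programmatically. Below is a minimal Python sketch using the open-source cloud-volume library; the gs:// layer path and the pixel coordinates are assumptions for illustration, not details given in the article.

```python
# Sketch: fetch a small EM cutout from the public H01 release.
# The bucket path and coordinates below are assumed for illustration.
from cloudvolume import CloudVolume

vol = CloudVolume(
    "gs://h01-release/data/20210601/4nm_raw",  # assumed public layer path
    mip=0,                # full resolution
    use_https=True,       # anonymous HTTPS access to the public bucket
)

# One 256x256-pixel tile from a single slice of the volume.
cutout = vol[100000:100256, 100000:100256, 1000:1001]
print(cutout.shape)  # (256, 256, 1, 1): x, y, z, channel
```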

The tiny portion of brain tissue—collected from a woman with epilepsy during surgery—contains around 57,000 cells, 150 million synapses, 230 millimeters of blood vessels, and represents 1.4 petabytes of data.

To extract as much information as possible, the scientists cut the 3 mm-long piece of healthy brain tissue into 5,000 slices, scanned each one with an electron microscope, and created digital images. Google’s machine-learning experts then digitally stitched the images together into interactive 3D views.
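At its simplest, stitching thousands of slices together means registering each digitized slice against its neighbor before cells and synapses are traced. The toy sketch below shows that alignment step with scikit-image on synthetic data; the team’s actual pipeline, built on machine-learning segmentation, is far more sophisticated.

```python
# Toy sketch: align two adjacent "slices" by estimating their offset.
# Synthetic data stands in for real electron-microscope images.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
slice_a = rng.random((512, 512))          # stand-in for one EM slice
slice_b = nd_shift(slice_a, (3, -2))      # "next" slice, slightly offset

# Estimate the shift needed to register slice_b onto slice_a.
offset, error, _ = phase_cross_correlation(slice_a, slice_b)
aligned_b = nd_shift(slice_b, offset)     # apply the correction
print(offset)  # approximately [-3.  2.]
```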

According to The Guardian, Jeff Lichtman, a professor of molecular and cellular biology at Harvard and part of the research team, explained that “the reason we haven’t done it before is that it is damn challenging. It really was enormously hard to do this.”

The scientists expect to keep working with Google to reconstruct a similar digital map of a mouse brain, since reconstructing an entire human brain would be extremely difficult. According to Google, the new findings and future research could help experts understand diseases like Alzheimer’s and neurological conditions like autism, and shed light on how memories are created in our brains.


Google Gemini Launches New Models And Features To Compete With ChatGPT

  • Written by Andrea Miliani, Former Tech News Expert

Google launched its most efficient AI model, Gemini 1.5 Flash, and a new AI agent, Project Astra, on Tuesday at Google I/O, the company’s annual developer conference. During the two-hour event, Google’s team walked through the new models’ capabilities and the AI-powered features coming to existing services and devices.

The new Gemini 1.5 Flash will be faster, cheaper, and more efficient than the previous model, 1.5 Pro. According to Google, 1.5 Flash can process large amounts of data, summarize conversations, and caption videos and images quickly. Both 1.5 Flash and 1.5 Pro will be available for users of Vertex AI and Google AI Studio.
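For developers, calling 1.5 Flash looks much like calling any other Gemini model. Below is a minimal sketch using the google-generativeai Python SDK; the API key and prompt are placeholders, and the model identifier string is an assumption based on Google’s naming.

```python
# Minimal sketch: one text request to Gemini 1.5 Flash via the
# google-generativeai SDK. API key and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model ID
response = model.generate_content("Summarize this conversation: ...")
print(response.text)
```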

Project Astra, on the other hand, was described as an “advanced seeing and talking responsive agent.” This new AI agent can process multimodal information, understand context, and interact with humans.

Google shared a two-minute demo to show how the AI assistant can work. In the video, a Google worker in London uses her smartphone’s camera to ask Project Astra to describe her surroundings, describe the code her coworker is working on in the office, recognize her geographical location, and come up with a creative name for a band. The AI agent answers correctly and creatively in what sounds like a “natural” conversation.
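Project Astra itself isn’t publicly available, but the style of interaction in the demo, asking questions about what the camera sees, can be approximated with the existing Gemini API’s multimodal input. A sketch, with the image file and prompt as illustrative placeholders:

```python
# Sketch: a multimodal (image + text) request with the Gemini API,
# approximating the camera-based questions from the Astra demo.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")   # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")
frame = PIL.Image.open("desk.jpg")        # illustrative camera frame
response = model.generate_content([frame, "Where did I leave my glasses?"])
print(response.text)
```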

Project Astra even remembers where the worker left her glasses. These are not regular glasses, however, but what appear to be Google Glass-style smart glasses with the AI assistant built in. Google didn’t provide details about the glasses, hinting only that they are part of a future development.

After the video presentation, Demis Hassabis, CEO of Google DeepMind, said, “It’s easy to envision a future where you can have an expert assistant by your side, through your phone or new exciting form factors like glasses.”

Project Astra was announced only a few hours after OpenAI launched GPT-4o, its advanced version of ChatGPT. Both AI products offer similar features: real-time conversations, “vision” through a device’s camera, and simultaneous processing of text, images, and audio.

Users on X have already started comparing the two virtual assistants, highlighting the advantages and disadvantages of each. “Astra has slightly longer latency,” said one user, with “strong text-to-speech, but hasn’t shown as much emotional range as GPT4o.”

Project Astra is still in an early stage, and Google expects to release it later this year, while GPT-4o will be available for all users within the next few weeks.