
Photo by Alexander Shatov on Unsplash

YouTube Wants Record Labels’ Permission To Train AI Music Tools

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

YouTube has been approaching major record labels, including Sony, Universal, and Warner, to agree on licensing deals to train new AI music tools. According to the Financial Times, YouTube is developing AI tools that will launch later this year and wants permission to train them on the labels’ music catalogs, allowing the tools to clone artists’ music.

The tech giant wants to reach financial agreements to avoid legal repercussions, just as OpenAI has been doing with major publications and media corporations like Time and News Corp.

YouTube has also reached out to artists. Last year, ten artists—including Troye Sivan, John Legend, and Charli XCX—agreed to participate in “Dream Track,” a test product for creating music clips. In May, YouTube shared updates to the tool, adding AI-generated instrumental music. “We’re excited to continue to ideate more Dream Track features that we hope enable deeper engagement between music fans and artists on YouTube,” the announcement states.

New projects beyond Dream Track that involve licensed music are also in development. “We’re not looking to expand Dream Track but are in conversations with labels about other experiments,” Google told the Financial Times.

However, YouTube’s strategy could also upset artists, especially those who have already spoken out against this type of technology. According to The Guardian, just two months ago a group of over 200 artists—including Stevie Wonder, Billie Eilish, J Balvin, and Nicki Minaj—signed an open letter against the use of AI technologies similar to the ones YouTube is developing. “We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem,” the letter states.

A few Reddit users have already expressed concern about YouTube’s attempted deals. “This represents a significant disadvantage for artists, who receive minimal compensation while enduring corporations have the ability to market their art—or a convincing imitation of ‘new’ art in their style—indefinitely, even after the demise of the artist’s descendants,” wrote one user.

Painters and photographers have also shared their concerns over AI taking over human art, and have created new platforms like Cara to protect human-created content from major tech giants.


Image by DMC-FZ18 on Pxhere

Living Skin on Robots: A Leap Forward in Bioengineering

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Humanoid robots could soon look and act more like us, thanks to a new method for attaching lab-grown biological skin tissue, published this week by researchers at the University of Tokyo.

The Engineer reports Professor Takeuchi, who led the research team, highlighting the robot skin’s potential to self-heal. Unlike chemical-based self-healing, he notes, “Biological skin repairs minor lacerations as ours does, and nerves and other skin organs can be added for use in sensing and so on.”

The research team drew inspiration from the structure of human skin ligaments. They created V-shaped perforations in the robot’s face, then applied a layer of skin tissue that adhered to the perforations through a collagen gel. Previous approaches to attaching skin tissue to robots relied on miniature anchors or hooks, which limited the types of surfaces that could be covered and could damage the skin during movement. The new perforation-based method can be applied to any shape, offering greater flexibility and durability.

The press release from the University of Tokyo discussing Takeuchi’s research suggests that sensors could be embedded beneath the skin. This would allow robots to gather information about their environment through touch, much as humans do.

CNN reports that Takeuchi and his team aim to incorporate additional sensory functions in the upcoming research phase, “to make the skin more responsive to environmental stimuli,” says Takeuchi. This richer sensory data could help a robot’s AI better understand the world around it and make more informed decisions.
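
To make this concrete, here is a minimal, purely hypothetical Python sketch of how readings from a grid of pressure sensors embedded beneath such skin might be turned into a simple contact signal. The grid size, threshold, and function names are illustrative assumptions, not details from Takeuchi’s research.

    # Hypothetical illustration only: the sensor grid, threshold, and
    # responses below are assumptions, not part of the Tokyo study.
    import random

    GRID_ROWS, GRID_COLS = 4, 4  # imagined 4x4 patch of sub-skin pressure sensors
    CONTACT_THRESHOLD = 0.6      # normalized pressure treated as a deliberate touch

    def read_sensors():
        """Stand-in for real hardware: one normalized pressure value per cell."""
        return [[random.random() for _ in range(GRID_COLS)] for _ in range(GRID_ROWS)]

    def detect_contact(frame):
        """Return the coordinates of cells whose pressure exceeds the threshold."""
        return [(r, c)
                for r, row in enumerate(frame)
                for c, value in enumerate(row)
                if value > CONTACT_THRESHOLD]

    if __name__ == "__main__":
        touches = detect_contact(read_sensors())
        if touches:
            print(f"Contact at {touches}: soften motion, adjust expression")
        else:
            print("No contact detected: continue current task")

In a real system, the random readings would be replaced by hardware drivers, and the contact map would feed into the robot’s control and AI stack rather than a print statement.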

As the research progresses, the potential to create robots that can heal themselves, sense their environment on a more biological level, and perform tasks with human-like dexterity becomes increasingly tangible. While Takeuchi et al.’s research focuses on the physical capabilities of robots, advancements in AI are happening simultaneously. For instance, a humanoid robot named Figure 01 recently demonstrated impressive conversational intelligence using visual inputs. Figure 01’s capabilities highlight the potential for future AI to interact with the world in a more human-like way.

Takeuchi said: “Realistic facial expressions enhance the robot’s ability to communicate and interact with humans more naturally and effectively […] This is particularly important in applications such as healthcare, where empathy and emotional connection can significantly impact patient care.” In this context, ethical considerations become more important. How will we ensure that robots with advanced sensory perception and potentially self-preservation instincts interact with the world in an ethical manner? These are complex questions that AI developers and ethicists will need to address.