
Photo by Prometheus 🔥 on Unsplash

Elon Musk Says Tesla Will Use Humanoid Robots In Production Next Year

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Elon Musk, Tesla’s CEO, said on social media that his electric car company will begin using and producing humanoid robots by next year.

“Tesla will have genuinely useful humanoid robots in low production for Tesla internal use next year and, hopefully, high production for other companies in 2026,” wrote Musk on X, in a reply to another user’s post about former OpenAI worker Daniel Kokotajlo’s predictions on the evolution of artificial general intelligence.

According to the BBC, Musk’s statements tend to be very ambitious, and the CEO does not always deliver the expected results. It is also not the first time Musk has talked about producing humanoid robots. He has previously said that Tesla is building these robots and plans to mass-produce them to sell for less than $20,000.

In 2019, Musk also promised robotaxis for 2020, a promise that has not been fulfilled. According to CNBC, investors have been asking questions and requesting more information on these self-driving vehicles, named CyberCabs, and on the humanoid robots, named Optimus.

A few weeks ago, in April, Musk assured stakeholders that Optimus would be in “limited production in the natural factory itself, doing useful tasks before the end of this year.” He added that Tesla will probably be selling them next year.

According to Forbes, he recently said that Optimus robots would significantly increase the company’s valuation, currently estimated at $788 billion by Google Finance, to $25 trillion.

Musk and other Tesla executives are expected to discuss the challenges and expectations for the business and share financial results today.

Despite Musk’s unfulfilled promises and the company’s recent troubles, from laying off over 10% of its staff a few months ago to leaving investors in the dark, stakeholders still believe in Tesla’s CEO. Last month, shareholders approved the largest pay package in U.S. history for Musk.


Image by Adisorn, from Adobe Stock

Study Reveals Growing Data Restrictions Impacting AI Training

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A new study led by an MIT research group reveals a growing trend of websites limiting the use of their data for AI training. The study examined 14,000 web domains and found that restrictions have been placed on 5% of all data, and on over 28% of data from the highest-quality sources across three commonly used AI training datasets. It is the first large-scale longitudinal audit of consent protocols for web domains used in AI training corpora.
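The consent signals such an audit examines are typically expressed in a site’s robots.txt file. As a rough illustration only (not the study’s actual tooling), Python’s standard `urllib.robotparser` module can evaluate these directives; the robots.txt content and the choice of the GPTBot crawler below are illustrative assumptions:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt showing the kind of AI-crawler restriction
# the study measured: the site refuses one AI-training crawler entirely
# while leaving generic crawlers mostly unrestricted.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The AI-training crawler is disallowed everywhere on the site...
print(parser.can_fetch("GPTBot", "https://example.com/article"))  # False

# ...while a generic crawler may still read public pages,
# but not the explicitly restricted path.
print(parser.can_fetch("*", "https://example.com/article"))        # True
print(parser.can_fetch("*", "https://example.com/private/page"))   # False
```

Counting how many domains carry directives like the `GPTBot` block above, year over year, is essentially how a longitudinal audit of this kind quantifies the growth in restrictions.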

Generative AI systems, like ChatGPT, Gemini, and Claude, rely heavily on vast amounts of data to function effectively. The quality of these AI tools’ outputs depends significantly on the quality of the data they are trained on. Historically, gathering this data was relatively straightforward, but the recent surge in generative AI has led to tensions with data owners. Many data owners are uneasy about their content being used for AI training without compensation or proper consent.

The consequences of this data squeeze are multifaceted. It will make developing AI systems more difficult, as they rely heavily on this data for training. The restrictions may also bias AI models by limiting them to less diverse data sets. Additionally, copyright issues could arise if AI models are trained on data that websites don’t want used for that purpose.

The restrictions are already having a measurable effect: in just one year, a substantial portion of data from important websites has become restricted, and the trend is expected to continue.

Shayne Longpre, the study’s lead author, states: “We’re seeing a rapid decline in consent to use data across the web that will have ramifications not just for A.I. companies, but for researchers, academics and noncommercial entities.”

This means that smaller AI companies and academic researchers who depend on freely available datasets could be disproportionately affected, as they often lack the resources to license data directly from publishers.

For example, Common Crawl, a dataset comprising billions of pages of web content and maintained by a nonprofit, has been cited in over 10,000 academic studies, illustrating its critical role in research.

The study highlights the need for new tools that give website owners more control over how their data is used. Ideally, these tools would allow them to differentiate between commercial and non-commercial uses, permitting access for research or educational purposes.

The situation also serves as a reminder to big AI companies. For years, they have treated the internet as an “all-you-can-eat data buffet” without giving much in return to data owners. Longpre emphasized that this approach is unsustainable: as data owners become more protective of their content, AI companies will need to collaborate with them and offer value in return for access, to ensure a continued supply of high-quality training data.