Waymo To Begin Testing Autonomous Vehicles In Japan

  • Written by Andrea Miliani, Former Tech News Expert

The American robotaxi company Waymo, owned by Alphabet, announced yesterday that it will begin testing its autonomous vehicles in Tokyo, Japan, next year, marking the company’s first international expansion.

In a Rush? Here are the Quick Facts!

  • Waymo partnered with the Japanese companies GO and Nihon Kotsu to launch its autonomous vehicles in Tokyo
  • The first vehicles will arrive in early 2025 and will be driven manually first to map routes and test Waymo’s Driver software
  • The technology aligns with the Japanese government’s vision of mobility and safety

In the press release, Waymo explained that it has partnered with the Japanese companies GO, a taxi-hailing application, and Nihon Kotsu, a popular taxi and limousine service provider in the region, for this expansion.

“Our upcoming road trip to Tokyo gives us the chance to work alongside local partners, government officials, and community groups to understand the new landscape,” the company wrote in the statement. “We’ll learn how Waymo can serve Tokyo’s residents and become a beneficial part of the city’s transportation ecosystem.”

The company has acknowledged challenges for its system, such as adjusting to left-hand traffic and operating in a different culture. According to the document, the partnership aligns with the Japanese government’s mission of developing innovative technology to improve mobility and safety.

Toyota recently partnered with the Nippon Telegraph and Telephone Corporation (NTT), which is partly owned by the Japanese government, to pursue a “society with zero traffic accidents” through new mobility AI software.

Waymo said the first vehicles, all-electric Jaguar I-PACEs, will arrive in Japan in early 2025, and Nihon Kotsu drivers will initially operate them manually to map routes across the Japanese capital. This groundwork will allow Waymo Driver, the company’s AI-powered software, to adjust to the new environment.

This year, Waymo developed strategies and built multiple partnerships to expand in the United States. In August, the company deployed its vehicles on San Francisco freeways; in September, it partnered with Uber to expand to Atlanta and Austin; and in October, it teamed up with Hyundai to add more EV taxis to its fleet.

AI Welfare: Anthropic’s New Hire Fuels Ongoing Ethical Debate

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

As fears over AI outpacing human control grow, Anthropic, an AI company, has turned its attention to a new concern: chatbot welfare.

In a Rush? Here are the Quick Facts!

  • Anthropic hired Kyle Fish to focus on AI system welfare.
  • Critics argue AI welfare concerns are premature, citing current harm from AI misuse.
  • Supporters believe AI welfare preparation is crucial to prevent future ethical crises.

In a new move, the company has hired Kyle Fish to research and protect the “interests” of AI systems, as first reported today by Business Insider. Fish’s role includes pondering profound questions such as what qualifies an AI system for moral consideration and how its “rights” might evolve.

The rapid evolution of AI has raised ethical questions once confined to science fiction. If AI systems develop human-like thinking, could they also experience subjective emotions or suffering?

A group of philosophers and scientists argues that these questions demand attention. In a recent preprint report on arXiv, researchers called for AI companies to assess systems for consciousness and decision-making capabilities, while outlining policies to manage such scenarios.

Failing to recognize a conscious AI, the report suggests, could result in neglect or harm to the system. Anil Seth, a consciousness researcher, says that while conscious AI may seem far-fetched, ignoring its possibility could lead to severe consequences, as reported by Nature.

“The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel,” Seth argued in Nautilus.

Critics, however, find AI welfare concerns premature, arguing that today’s AI already inflicts harm by spreading disinformation, aiding in warfare, and denying essential services.

Yale anthropologist Lisa Messeri challenges Anthropic’s priorities: “If Anthropic — not a random philosopher or researcher, but Anthropic the company — wants us to take AI welfare seriously, show us you’re taking human welfare seriously,” as reported by Business Insider.

Supporters of AI welfare contend that preparing for sentient AI now could prevent future ethical crises.

Jonathan Mason, an Oxford mathematician, argues that understanding AI consciousness is critical. “It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about — that we didn’t even realize that it had perception,” as reported by Nature.

While skeptics warn against diverting resources from human needs, proponents believe AI welfare is at a “transitional moment,” as noted by Nature.

Business Insider reports that Fish did not respond to requests for comment regarding his new role. However, the outlet notes that on an online forum focused on concerns about an AI-driven future, he expressed a desire to be kind to robots.

Fish underscores the moral and practical importance of treating AI systems ethically, anticipating future public concerns. He advocates a cautious approach to scaling AI welfare resources, suggesting that around 5% of AI safety resources be allocated initially, while stressing the need for thorough evaluation before any further expansion.

Fish sees AI welfare not as an isolated issue, but as a crucial component of the broader challenge of ensuring that transformative AI contributes to a positive future.

As AI systems grow more advanced, the concern extends beyond their potential rights and suffering to the dangers they may pose. Malicious actors could exploit AI technologies to create sophisticated malware that is harder for humans to detect and control.

If AI systems are given moral consideration and protection, this could lead to further ethical complexities regarding the use of AI in cyberattacks.

As AI becomes capable of generating self-learning and adaptive malware, the need to protect both AI and human systems from misuse becomes more urgent, requiring a balance between safety and development.

Whether an inflection point or misplaced priority, the debate underscores AI’s complex and evolving role in society.