
Apple Maps Beta is Now Available on the Web
- Written by Shipra Sanganeria, Cybersecurity & Tech Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
This week, Apple announced the launch of a web version of Apple Maps, albeit in public beta form. Per the July 24 announcement, the beta can be accessed from any desktop or mobile browser, regardless of operating system.
According to Apple, the web interface of Apple Maps closely resembles its iOS app and includes features such as directions, guides, reviews, businesses, and more.
“Now, users can get driving and walking directions; find great places and useful information including photos, hours, ratings, and reviews; take actions like ordering food directly from the Maps place card; and browse curated Guides to discover places to eat, shop, and explore in cities around the world,” the press release stated.
In the press release, Apple also stated that new features, including Look Around, which offers 360-degree panoramic views of certain locations, will be available in the coming months.
Additionally, developers using the MapKit JS tool can link to Maps on the web, enabling their users to access driving directions, detailed place information, and more.
Accessible through the beta.maps.apple.com site, the web-based version of Apple Maps is currently available only in English. It works on Safari and Chrome for Mac and iPad, as well as Edge and Chrome for Windows PCs.
Apple announced that support for additional languages, browsers, and platforms will be rolled out in the future.
Coming more than a decade after Apple Maps’ initial release in 2012, the launch of Maps on the web may signal a direct challenge to Google in this space, while also addressing the long-standing user request for browser and cross-device access. Although some websites, such as DuckDuckGo, have used MapKit JS to integrate an Apple Maps web view into their search results since 2019, this move represents a significant step in expanding Apple Maps’ accessibility.

Image by frimufilms, from Freepik
AI Model Degradation: New Research Shows Risks of AI Training on AI-Generated Data
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
According to a study published on July 24, the quality of AI model outputs is at risk of degradation as more AI-generated data floods the internet.
The researchers found that AI models trained on AI-generated data produce increasingly nonsensical results over time, a phenomenon known as “model collapse.” Ilia Shumailov, lead author of the study, compares the process to repeatedly copying a photograph: “If you take a picture and you scan it, and then you print it, and you repeat this process over time, basically the noise overwhelms the whole process […] You’re left with a dark square.”
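The photocopy analogy can be made concrete with a toy simulation (this is an illustration, not the study's actual method): repeatedly fit a simple distribution to samples, then draw the next generation's "training data" from that fit. Estimation error compounds, and the fitted spread tends to shrink until the distribution's tails are lost.

```python
import random

def collapse_demo(generations: int = 100, n_samples: int = 20, seed: int = 0) -> list[float]:
    """Toy model-collapse illustration: each generation is 'trained'
    (a Gaussian is fitted) only on the previous generation's outputs.
    Returns the fitted spread (std) at every generation."""
    rng = random.Random(seed)
    mean, std = 0.0, 1.0  # the "real" data distribution
    history = [std]
    for _ in range(generations):
        samples = [rng.gauss(mean, std) for _ in range(n_samples)]
        mean = sum(samples) / n_samples
        std = (sum((x - mean) ** 2 for x in samples) / n_samples) ** 0.5
        history.append(std)
    return history

stds = collapse_demo()
print(f"initial spread: {stds[0]:.3f}, after {len(stds) - 1} generations: {stds[-1]:.3f}")
```

In repeated runs the fitted spread drifts toward zero, which is the analogue of the noise "overwhelming the whole process" until only a dark square remains.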
This degradation poses a significant risk to large AI models like GPT-3, which rely on vast amounts of internet data for training. GPT-3, for example, was partly trained on data from Common Crawl, an online repository containing over 3 billion web pages. The problem is exacerbated as AI-generated junk content proliferates online, and it could be further amplified by the findings of a new study indicating growing restrictions on the data available for AI training.
The research team tested the effects by fine-tuning a large language model (LLM) on Wikipedia data and then retraining it on its own outputs over nine generations. They measured the output quality using a “perplexity score,” which indicates the model’s confidence in predicting the next part of a sequence. Higher scores reflect less accurate models. They observed increased perplexity scores in each subsequent generation, highlighting the degradation.
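Perplexity is the exponential of the average negative log-probability a model assigns to the tokens that actually occur, which is why higher values mean a worse model. A minimal sketch of the computation (the probability lists are made-up inputs, not the study's data):

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """Perplexity = exp(mean negative log-probability) of the
    probabilities a model assigned to the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that is certain of every token scores 1 (best possible);
# one that guesses uniformly over 4 choices scores ~4.
print(perplexity([0.9, 0.8, 0.95]))
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Rising perplexity across generations, as the researchers observed, means each retrained model is less and less sure of real text.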
This degradation could slow down improvements and impact performance. For instance, in one test, after nine generations of retraining, the model produced complete gibberish.
One idea to help prevent degradation is to ensure the model gives more weight to the original human-generated data. In one part of Shumailov’s study, later generations were allowed to sample 10% of the original dataset, which mitigated some of the negative effects.
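That mitigation amounts to anchoring each generation's training set with a slice of the original human-generated data. A rough sketch, assuming a simple list-based dataset (the function and its parameters are illustrative, not the study's code):

```python
import random

def build_training_set(original: list[str], synthetic: list[str],
                       original_frac: float = 0.10, seed: int = 0) -> list[str]:
    """Replace a fraction of the synthetic training data with samples
    drawn from the original human-generated dataset."""
    rng = random.Random(seed)
    n_original = int(len(synthetic) * original_frac)
    kept_synthetic = synthetic[: len(synthetic) - n_original]
    anchored = rng.sample(original, min(n_original, len(original)))
    data = kept_synthetic + anchored
    rng.shuffle(data)
    return data

human = [f"human_{i}" for i in range(100)]
machine = [f"model_{i}" for i in range(100)]
mixed = build_training_set(human, machine)  # 90 synthetic + 10 human examples
```

Even a small, persistent share of original data gives every generation a fixed reference point, which is what slowed the drift in the study.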
The study’s discussion highlights the importance of preserving high-quality, diverse, human-generated data for training AI models. Without careful management, the increasing reliance on AI-generated content could lead to a decline in AI performance and fairness. To address this, researchers and developers need to collaborate on tracking the origin of data (data provenance) and ensuring that future AI models have access to reliable training materials.
However, implementing such solutions requires effective data provenance methods, which are currently lacking. Although tools exist to detect AI-generated text, their accuracy is limited.
Shumailov concludes, “Unfortunately, we have more questions than answers […] But it’s clear that it’s important to know where your data comes from and how much you can trust it to capture a representative sample of the data you’re dealing with.”