
Image by AbsolutVision, from Unsplash

Publishers Block AI Bots To Protect Content

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

News publishers continue to fight AI bots, suing tech companies and warning that scraping threatens journalism, fair compensation, and the future of the open web.

In a rush? Here are the quick facts:

  • AI tools like ChatGPT reduce traffic to news sites.
  • Cloudflare launched tools to help block unauthorized AI scrapers.
  • Reddit and iFixit have sued or blocked AI companies like Anthropic.

According to a new report by The Wall Street Journal (WSJ), news publishers are fighting back against AI companies that scrape their websites for content without compensation. As AI tools like ChatGPT and Google’s Gemini grow, many media companies are trying to block bots that use their work without permission.

“You want humans reading your site, not bots, particularly bots that aren’t returning any value to you,” said Nicholas Thompson, CEO of The Atlantic, which has a licensing deal with OpenAI but plans to block other AI companies, as reported by WSJ.

This practice, known as “scraping,” has existed since the early days of Google. Back then, search engines drove traffic to publishers’ websites. Now, AI chatbots serve up news summaries that keep readers from ever visiting the original sources. The resulting loss of traffic and advertising revenue has become a common problem for publishers.

To fight back, publishers are turning to tech companies like Cloudflare, which recently launched tools to let websites control whether AI bots can access content. Dotdash Meredith CEO Neil Vogel, whose company also licenses content to OpenAI, said, “People who create intellectual property need to be protected or no one will make intellectual property anymore,” as reported by WSJ.
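Most of these blocking tools rest on the same basic mechanism: identifying a crawler by the user agent it declares and denying it access. As an illustrative sketch (not taken from the WSJ report), a publisher might add entries like the following to its robots.txt file, using the crawler names that OpenAI, Anthropic, and Google publish for their bots:

  User-agent: GPTBot
  Disallow: /

  User-agent: ClaudeBot
  Disallow: /

  User-agent: Google-Extended
  Disallow: /

Because robots.txt is only advisory and a scraper can choose to ignore it, services like Cloudflare also enforce such rules at the network level, challenging or blocking requests whose user agent or behavior matches known AI crawlers.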

Some companies, like Reddit and iFixit, have taken legal action or blocked scrapers outright. Reddit sued AI company Anthropic, alleging its bots accessed the site more than 100,000 times despite requests to stop. iFixit said Anthropic hit its servers one million times in a single day.

The fight is also playing out in court. The New York Times is suing Microsoft and OpenAI, while News Corp and its subsidiaries are taking on Perplexity. The BBC has also threatened legal action against Perplexity, accusing the startup of scraping BBC content to train its default model.

Meanwhile, some worry that stricter anti-scraping rules could block legitimate uses like academic research, as noted by WSJ.

As Shayne Longpre of the Data Provenance Initiative warned, “The web is being partitioned to the highest bidder. That’s really bad for market concentration and openness,” as reported by WSJ.


Image by Mario Gogh, from Unsplash

Can AI Understand Color Without Seeing It?

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The research demonstrates that ChatGPT understands common color metaphors but fails to grasp novel ones.

In a rush? Here are the quick facts:

  • AI struggles with novel or reversed color metaphors.
  • Colorblind and color-seeing people interpret metaphors similarly.
  • Painters outperformed others on new color metaphors.

The research shows that ChatGPT and other AI tools excel at processing basic color metaphors yet fail to understand creative ones. Scientists compared human and ChatGPT responses to metaphors such as “feeling blue” and “seeing red” to gauge the language-processing capabilities of AI systems, as first reported by Neuroscience News (NN).

The study, led by Professor Lisa Aziz-Zadeh at the USC Center for the Neuroscience of Embodied Cognition, found that color-seeing and colorblind people performed similarly when interpreting metaphors, suggesting that seeing color isn’t necessary to grasp their meaning.

However, people with hands-on color experience, such as painters, were better at interpreting novel metaphors like “the meeting made him burgundy.”

ChatGPT, which processes huge amounts of written text, did well on common expressions and offered culturally informed explanations. For example, NN reports a case where the bot described a “very pink party” as being “filled with positive emotions and good vibes.” But it often stumbled on unfamiliar metaphors or when asked to reverse associations, such as figuring out “the opposite of green.”

“ChatGPT uses an enormous amount of linguistic data to calculate probabilities and generate very human-like responses,” said Aziz-Zadeh, as reported by NN. “But what we are interested in exploring is whether or not that’s still a form of secondhand knowledge, in comparison to human knowledge grounded in firsthand experiences,” she added.

The study was a collaboration among neuroscientists, computer scientists, and artists from institutions including USC, Google DeepMind, and UC San Diego. As AI develops, researchers say combining sensory input with language might help it better understand the world in human-like ways.