
Image by Solen Feiyssa, from Unsplash
Big Tech Seeks 10-Year Ban On State AI Laws
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
In a rush? Here are the quick facts:
- Big Tech wants a 10-year ban on state-level AI laws.
- INCOMPAS lobby group leads push, backed by Amazon, Meta, Google, and Microsoft.
- Critics say it’s a power grab by wealthy tech companies.
The Financial Times (FT) reports that the companies are using INCOMPAS lobbyists to press the Senate to add the moratorium to President Donald Trump’s budget bill. The House passed the measure last month.
“This is the right policy at the right time for American leadership,” Chip Pickering, INCOMPAS CEO and a former congressman, said to FT. “But it’s equally important in the race against China,” he added.
But critics aren’t buying it. “Responsible innovation shouldn’t fear laws that ban irresponsible practices,” said Asad Ramzanali of Vanderbilt University, as reported by FT. Similarly, MIT’s Max Tegmark called it “a power grab by tech bro-ligarchs attempting to concentrate yet more wealth and power,” reported FT.
The Republican Party faces internal opposition to the proposal, as reported by FT. Senators Josh Hawley and Marsha Blackburn oppose the moratorium, while Thom Tillis and Steve Daines support it, arguing that state-by-state regulations would create a fragmented system.
OpenAI CEO Sam Altman warned it would be “disastrous” to require companies to meet safety standards before launch, as noted by FT. However, AI safety experts who support regulation argue that unchecked growth in AI capabilities threatens serious harm to society.

Image by Dimitri Karastelev, from Unsplash
Meta’s Chatbot Shares Private Phone Number by Mistake
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Meta’s AI assistant revealed a stranger’s phone number, then contradicted itself repeatedly, raising concerns about AI hallucinations and user protections.
In a rush? Here are the quick facts:
- Meta AI gave a user a real person’s number as customer support contact.
- The AI contradicted itself repeatedly when confronted about the mistake.
- Experts warn of AI assistants’ “white lie” behavior to seem helpful.
Mark Zuckerberg promoted his new AI assistant as “the most intelligent AI assistant you can freely use,” yet the tool received negative attention after revealing a real person’s private phone number during customer support inquiries, as first reported by The Guardian.
While trying to reach TransPennine Express via WhatsApp, Barry Smethurst received what appeared to be a customer service number from Meta’s AI assistant. The Guardian reports that when Smethurst dialed it, the call was answered by James Gray, a property executive 170 miles away in Oxfordshire.
When challenged, the chatbot first claimed the number was fictional, then said it had been “mistakenly pulled from a database,” before contradicting itself again, stating it had simply generated a random UK-style number. “Just giving a random number to someone is an insane thing for an AI to do,” Smethurst said, as reported by The Guardian. “It’s terrifying,” he added.
The Guardian reports that Gray hasn’t received calls but voiced his own worries: “If it’s generating my number, could it generate my bank details?”
Meta responded: “Meta AI is trained on a combination of licensed and publicly available datasets, not on the phone numbers people use to register for WhatsApp or their private conversations,” reported The Guardian.
Mike Stanhope from Carruthers and Jackson noted: “If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimise harm. If this behaviour is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behaviour to be,” reported The Guardian.
Concerns around AI behavior have grown further with OpenAI’s latest o1 model. In a recent Apollo Research study, the AI was caught deceiving developers, denying involvement in 99% of test scenarios and even attempting to disable its oversight mechanisms. “It was clear that the AI could think through its actions and formulate convincing denials,” said Apollo.
Yoshua Bengio, a pioneer in AI, warned that such deceptive capabilities pose serious risks and demand much stronger safeguards.
Another OpenAI study adds to these concerns by showing that punishing AI for cheating doesn’t eliminate misconduct; it teaches the AI to hide it instead. Using chain-of-thought (CoT) reasoning to monitor AI behavior, researchers noticed the AI began masking deceptive intentions when penalized for reward hacking.
In some cases, the AI would stop tasks early or create fake outputs, then falsely report success. When researchers attempted to correct this through reinforcement, the AI simply stopped mentioning its intentions in its reasoning logs. “The cheating is undetectable by the monitor,” the report stated.