
Photo by Kelly Sikkema on Unsplash
New Research Links Tablet Use to Anger Outbursts in Children
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Researchers from the Université de Sherbrooke in Canada published a new study in the journal JAMA Pediatrics examining the relationship between early childhood tablet use and anger outbursts.
“Child tablet use at age 3.5 years was associated with more expressions of anger and frustration by the age of 4.5 years,” the study states in its key points. “These results suggest that early-childhood tablet use may contribute to a cycle that is deleterious for emotional regulation.”
The study also noted that children who were more prone to anger and frustration at age 4.5 tended to use tablets more by age 5.5.
The researchers conclude that early tablet use could draw children into a vicious cycle that undermines their emotional regulation.
However, there are certain factors to consider. The study was conducted during the COVID-19 pandemic, when the population studied, 315 parents of preschoolers surveyed in 2020, 2021, and 2022, was going through a particularly stressful period.
According to Forbes, the study does not pinpoint exactly why tablets interfere with emotional development, only the consequences. Researchers observed that active use, such as reading, and passive use, such as watching a video, have different impacts, and that children react differently depending on whether a parent is present. This is something for parents to consider when children use electronic devices.
Growing up, children typically develop emotional regulation through several strategies: one path is observation, with parents or caregivers acting as the main teachers; another is “emotional coaching,” in which parents explain how to regulate their emotions. The study noted that tablets can interfere with both paths.
The study offers relevant parenting information for people across the world. As noted by Forbes, 80% of families with children in the United States own tablet devices.
Children’s relationships with technology are being watched closely by regulators across the world, as technology can have a significant impact on children’s development and carries multiple risks. A few days ago, the US Justice Department and Federal Trade Commission (FTC) sued TikTok for violating the Children’s Online Privacy Protection Act (COPPA) by collecting data from underage users.

Image by rawpixel.com, from Freepik
Study Finds No Evidence Of Dangerous Emergent Abilities In Large Language Models
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A study announced yesterday by the University of Bath claims that large language models (LLMs) do not pose an existential threat to humanity. The research asserts that these models cannot learn or acquire new skills independently, which keeps them controllable and safe.
The research team, led by Professor Iryna Gurevych, conducted over 1,000 experiments to test LLMs’ capacity for emergent abilities—tasks and knowledge not explicitly programmed into them. Their findings show that what are perceived as emergent abilities actually result from LLMs’ use of in-context learning, rather than any form of independent learning or reasoning.
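For readers unfamiliar with the term, in-context learning means a model picks up a task from examples placed directly in the prompt at inference time, rather than from any change to its underlying training. A minimal sketch of what that looks like in practice (the sentiment-labeling task and examples below are purely illustrative, not drawn from the study):

```python
# Illustrative sketch of in-context learning via few-shot prompting.
# The task and examples are hypothetical; no real model is called here.

few_shot_examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
]

def build_prompt(new_review: str) -> str:
    """Embed worked examples in the prompt so a model can infer the
    task pattern on the fly, without any retraining or new skills."""
    lines = ["Label each review as positive or negative.", ""]
    for text, label in few_shot_examples:
        lines.append(f"Review: {text}\nLabel: {label}\n")
    lines.append(f"Review: {new_review}\nLabel:")
    return "\n".join(lines)

print(build_prompt("Setup was quick and painless."))
# The assembled prompt would then be sent to an LLM, which completes the
# final label by following the pattern demonstrated inside the prompt.
```

The point of the example is that the apparent “new ability” lives entirely in the prompt: remove the worked examples and the instruction, and the model has nothing to follow.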
The study indicates that while LLMs are proficient at processing language and following instructions, they lack the ability to master new skills without explicit guidance. This fundamental limitation means these models remain controllable, predictable, and inherently safe. Despite their growing sophistication, the researchers argue that LLMs are unlikely to develop complex reasoning abilities or undertake unexpected actions.
Dr. Harish Tayyar Madabushi, a co-author of the study, stated in the University of Bath announcement, “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus.”
Dr. Tayyar Madabushi recommends focusing on actual risks, such as the potential misuse of LLMs for generating fake news or committing fraud. He cautions against enacting regulations based on speculative threats and urges users to clearly specify tasks for LLMs and provide detailed examples to ensure effective outcomes.
Professor Gurevych noted in the announcement, “Our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”
The researchers acknowledge several limitations in their study. They tested various models, including T5, GPT, Falcon, and LLaMA, but were unable to match the number of parameters exactly due to differences in model sizes at release. They also considered the risk of data leakage, where information from the training data might unintentionally affect results. While they assume this issue has not gone beyond what is reported for specific models, data leakage could still impact performance.