
Image by Moses Malik Roldan, from Unsplash
U.S. Requires Foreign Students To Make Social Media Public For Visas
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The U.S. State Department now requires foreign students to disclose their social media content for visa applications.
In a rush? Here are the quick facts:
- The U.S. now requires student visa applicants to make social media profiles public.
- Private accounts may be seen as hiding anti-American sentiment.
- The new policy applies to F, M, and J visa applicants.
The U.S. State Department announced that consular officers will examine social media content to detect anti-American sentiments, while treating private and hidden accounts as possible attempts to “evade or hide certain activity,” as reported by the Wall Street Journal.
“The enhanced social media vetting will ensure we are properly screening every single person attempting to visit our country,” a senior State Department official told the WSJ. Applications for F, M, and J visas, which include academic and cultural exchange programs, can now resume under these new standards.
The department had suspended visa interviews while it developed the new screening procedures, which include broader social media evaluation, as noted by the WSJ. The application process now requires applicants to set their profiles to public visibility.
The WSJ reports that the Trump administration established the policy as part of a broader push to tighten student visa regulations. The government has recently targeted pro-Palestinian student demonstrations and threatened to revoke visas for students linked to the Chinese Communist Party, as well as those studying sensitive subjects.
The WSJ also reported that last month the Department of Homeland Security attempted to suspend Harvard University’s ability to enroll foreign students, claiming the school had failed to ensure campus safety for Jewish students. The department alleged that many “anti-American, pro-terrorist agitators” were foreign nationals, but a federal judge temporarily blocked the suspension.
The new policy nonetheless raises legitimate privacy concerns and may have a chilling effect on academic freedom by compelling students to disclose details of their personal online behavior.
The policy guidance requires staff to evaluate applicants’ social media profiles for evidence of anti-American views or national security threats. While the stated intent is to safeguard national interests, the criteria are broad and open to interpretation, as noted by the Washington Post.
As Stuart Anderson, executive director of the National Foundation for American Policy, noted, it remains unclear how narrowly or broadly these guidelines will be enforced. “I don’t think any American would want to be judged by their worst tweet,” he told The Post, warning that a wide-reaching interpretation could unfairly bar students who otherwise merit a visa.
NPR further noted that the more than one million international students in the U.S. contribute $40 billion to the economy each year, but new Trump-era policies are driving interest down sharply. Colleges that rely on their tuition and cultural presence may face serious challenges.
Still, the administration maintains that the rules are needed for national security. Visa applicants who fail to comply may face delays or denials in their approval process.

Photo by Glenn Carstens-Peters on Unsplash
Researchers Reveal Students Who Use AI Models To Write Essays Face Cognitive Challenges
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
A recent MIT study, focused on the cognitive cost of using AI models to write essays, revealed that students who rely heavily on large language models (LLMs) may face harmful consequences and cognitive challenges.
In a rush? Here are the quick facts:
- An MIT study revealed that students who use AI models to write essays face harmful consequences and cognitive challenges.
- The group of participants who used ChatGPT showed weaker neural connectivity and difficulties in remembering their work.
- Experts conclude that AI models can significantly affect students and their learning processes, including what the researchers call a “cognitive cost.”
The study, titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, found that the use of AI models can significantly affect students and their learning processes, including what the researchers call a “cognitive cost.”
The research involved 54 participants and revealed that the group using ChatGPT to write essays showed weaker neural connectivity and had difficulty remembering and quoting their own essay just minutes after finishing the task.
While the research team acknowledged the limitations of their small sample size, they hope the findings will serve as “a preliminary guide to understanding the cognitive and practical impacts of AI on learning environments.”
For the study, the researchers divided the participants into three groups: one that could use LLMs such as ChatGPT, another that could access traditional search engines like Google, and a third that could rely only on its own knowledge, called the Brain-only group.
The participants completed four essay writing and analysis sessions—three with the original group setup, and a final session in which access to tools was changed, requiring the LLM group to write using only their brains.
To measure brain activity, the scientists used electroencephalography (EEG), tracking engagement and cognitive load. The study also included NLP analysis, participant interviews, and essay scoring by both human teachers and an AI tool.
The researchers found a strong correlation between brain connectivity and the use of external tools. The Brain-only group showed the highest levels of neural connectivity, while those who used AI showed the weakest.
Memory retention was also negatively affected. The group that used AI models had more difficulty quoting their own essays and reported the lowest levels of “ownership” over their work.
“As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study,” wrote the researchers. “The LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, and scoring.”