
Image by NASA, from Unsplash
AI Models Like ChatGPT Could Soon Pilot Space Missions
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Researchers were surprised to find that ChatGPT managed a second-place finish in a spacecraft simulation competition, demonstrating the potential of large language models (LLMs) to guide space missions.
In a rush? Here are the quick facts:
- ChatGPT placed second in a spacecraft piloting simulation challenge.
- The AI completed tasks like intercepting and avoiding satellites.
- Researchers used Kerbal Space Program for realistic space simulation.
ChatGPT has proven it can do more than write poems or answer questions; it may also be able to fly a spaceship. Researchers explored the ability of LLMs to control spacecraft in simulated space missions.
In their study, the researchers explained that they tested this through a competition based on the Kerbal Space Program Differential Games Challenge, where ChatGPT caught the scientific world by surprise with its second-place finish in the autonomous spacecraft simulation event.
“You operate as an autonomous agent controlling a pursuit spacecraft,” was the initial prompt researchers gave ChatGPT, as reported by LiveScience.
From there, the AI showed it could make complex decisions, orient the spacecraft, and navigate through missions like intercepting satellites and avoiding detection. The model’s outputs were converted into functional code to control a simulated vehicle in real-time.
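The study does not publish its exact control pipeline, but the core idea of turning an LLM's free-form text reply into spacecraft commands can be sketched roughly. The prompt wording, the `mock_llm` stand-in, and the `throttle`/`heading` command format below are illustrative assumptions, not the researchers' actual interface:

```python
import json
import re

def build_prompt(state):
    """Format the spacecraft state as a plain-text prompt (illustrative format)."""
    return (
        "You operate as an autonomous agent controlling a pursuit spacecraft.\n"
        f"Your position: {state['pursuer_pos']}, velocity: {state['pursuer_vel']}.\n"
        f"Target position: {state['evader_pos']}.\n"
        'Reply with JSON: {"throttle": 0-1, "heading": [x, y, z]}.'
    )

def mock_llm(prompt):
    """Stand-in for a real ChatGPT API call -- returns a canned reply."""
    return '{"throttle": 0.8, "heading": [1.0, 0.0, 0.0]}'

def parse_command(reply):
    """Extract the first JSON object from the model's free-form reply."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        # Safe default if the model "hallucinates" a malformed reply.
        return {"throttle": 0.0, "heading": [0.0, 0.0, 0.0]}
    cmd = json.loads(match.group(0))
    cmd["throttle"] = min(1.0, max(0.0, float(cmd["throttle"])))  # clamp to valid range
    return cmd

state = {"pursuer_pos": [0, 0, 0], "pursuer_vel": [0, 0, 0], "evader_pos": [100, 0, 0]}
command = parse_command(mock_llm(build_prompt(state)))
print(command)  # the parsed command would then drive the simulated vehicle
```

The fallback in `parse_command` reflects why validation matters in such a loop: a malformed model reply must degrade to a safe no-op rather than an arbitrary maneuver.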
Traditional spacecraft navigation systems need constant training and tuning, which makes them impractical for fast-moving, real-time missions. However, the researchers argue that general-purpose LLMs such as ChatGPT operate with pre-trained knowledge bases that allow them to adapt to new situations through specific, well-designed prompts.
The researchers argue that this new approach could change the way we control satellites and deep-space missions, especially in situations where real-time human intervention is impossible. Even with brief testing and minimal mission-specific training, the AI system rivaled traditional models based on differential equations.
Risks remain, of course, such as AI “hallucinations” that could cause dangerous errors. The team noted, “There is no doubt that training a LLM can leverage prior knowledge and improve it for specific scenarios.”
The results will be published in the journal Advances in Space Research, and the team has open-sourced their code and data for further experimentation. An AI piloting our next space mission might not be as far off as it sounds.

Image by Pathum Danthanarayana, from Unsplash
Massive Mobile Ad Fraud Campaign Hidden In Google Play Apps
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Cybersecurity researchers discovered 352 hidden Android apps operating as stealthy ad fraud tools, which produced 1.2 billion fake ad bid requests daily before the operation was shut down.
In a rush? Here are the quick facts:
- IconAds campaign used 352 malicious Android apps.
- Fraud scheme generated 1.2 billion daily ad bid requests.
- Apps hid their icons and ran in the background.
HUMAN’s Satori Threat Intelligence team successfully disrupted the complex ad fraud operation known as IconAds.
The operation involved 352 Android applications, which secretly loaded ads while concealing their icons from user detection. The daily operation of IconAds reached its peak at 1.2 billion ad bid requests, which primarily originated from Brazil, Mexico, and the United States.
The apps used advanced obfuscation tactics to avoid detection. “IconAds’ primary obfuscation technique uses seemingly random English words to hide certain values,” explained Satori researchers.
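The general technique is to replace meaningful configuration names with innocuous vocabulary, so a static scan of the app sees only harmless English words. This toy sketch illustrates the idea; the word list and mapping are invented for demonstration and are not the actual IconAds scheme, which the researchers did not fully publish:

```python
# Invented mapping: innocuous English words standing in for sensitive config keys.
WORD_MAP = {
    "garden": "c2_server_url",
    "window": "ad_unit_id",
    "purple": "hide_icon_flag",
}

def deobfuscate(config):
    """Translate innocuous-looking keys back to their real meaning at runtime."""
    return {WORD_MAP.get(key, key): value for key, value in config.items()}

# What a static scan of the app would see: only harmless-looking words.
obfuscated = {"garden": "https://example-c2.invalid", "purple": "true"}
print(deobfuscate(obfuscated))
```

Because the lookup table only exists in (often encrypted) code, string-based scanners that search for names like "c2" or "server" find nothing suspicious.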
The attackers also embedded harmful code within encrypted libraries while employing distinctive command-and-control (C2) domains for each application to conceal their traffic.
The application “com.works.amazing.colour” changed its icon to a blank white circle and loaded ads even when no app was open. Others impersonated popular apps like Google Play or Google Home, running silently in the background while serving fraudulent ads.
To hide their activities, these apps disabled their visible components after installation and used aliases with no name or icon. In some cases, they included license checks to confirm they were downloaded from the Play Store, refusing to run otherwise. They also used DeepLinking services to decide when to activate the malicious code.
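Taken together, these checks form an activation gate: the malicious payload only fires when the environment looks like a real user's device rather than an analyst's sandbox. A simplified toy model of that decision logic (the exact conditions and their order are assumptions, not the apps' actual code):

```python
def should_activate(installer, license_ok, launched_via_deeplink):
    """Toy model of IconAds-style activation gating (simplified, assumed logic):
    run the ad payload only when the app looks like a legitimate Play Store
    install and was woken via a deep link, to frustrate sandbox analysis."""
    if installer != "com.android.vending":  # not installed from the Play Store
        return False
    if not license_ok:                      # Play licensing check failed
        return False
    return launched_via_deeplink            # wait for a deep-link trigger

print(should_activate("com.android.vending", True, True))  # real device: payload runs
print(should_activate("sideloaded", True, True))           # sandbox: stays dormant
```

This is why sideloading the APK into an analysis environment often shows no malicious behavior at all: the gate never opens.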
The identified apps have been removed from Google Play, and Google Play Protect now shields users against these threats.
According to HUMAN, “Customers partnering with HUMAN for Ad Fraud Defense are and have been protected from the impact of IconAds.”
The attack demonstrates how mobile ad fraud operations are becoming more sophisticated. Experts therefore recommend that advertisers, platforms, and app developers enhance their monitoring systems, improve transparency, and work together to prevent future threats.