Image by Casey Allen, from Unsplash

AI-Enhanced Vultures Could Revolutionize Wildlife Conservation And Disease Detection

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Scientists use vultures and AI to track carcasses, monitor wildlife mortality, detect diseases, and uncover illegal activities across vast landscapes.

In a Rush? Here are the Quick Facts!

  • Vultures’ scavenging behavior is combined with AI to identify carcass locations.
  • Researchers attached bio-loggers to 29 vultures for data collection.
  • AI achieved 92% precision in identifying carcass locations.

Scientists have developed a method for tracking animal carcasses across vast landscapes that pairs AI with vultures as natural detectors. The study was published in the Journal of Applied Ecology.

By combining advanced bio-logging technology with the vultures’ innate scavenging behavior, the team has created a system that can help monitor wildlife mortality, detect disease outbreaks, and even uncover illegal wildlife killings.

The researchers developed the AI algorithm to automatically and accurately classify the behaviors of white-backed vultures using data from animal tags.

As scavengers, vultures are constantly searching for carcasses, and with the addition of a second AI algorithm, researchers can now automatically pinpoint carcasses across large landscapes using data from tagged vultures.

This study focused on African white-backed vultures, known for their ability to locate carcasses from high altitudes.

The research team attached bio-loggers to 29 vultures, both wild and captive, to record their movements and behaviors. The data collected from these birds was then analyzed using machine learning techniques to differentiate between six distinct behaviors, such as feeding or flying.
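
The study's own code is not reproduced here, but the general recipe, supervised classification of behaviors from windowed bio-logger features, can be sketched roughly as follows. The file name, feature columns, and the choice of a random-forest classifier are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: classifying vulture behaviors from bio-logger data.
# The file name, feature names, and model choice are assumptions,
# not the pipeline used in the published study.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical windowed features derived from accelerometer and GPS records,
# each window labelled with one of six behaviors (e.g., "feeding", "flying").
df = pd.read_csv("vulture_windows.csv")  # hypothetical file
feature_cols = ["mean_speed", "altitude_change", "odba", "turn_angle"]  # assumed features
X = df[feature_cols].to_numpy()
y = df["behavior"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

A tree-based classifier is a common default for tabular movement data of this kind because it copes with features on very different scales without heavy preprocessing.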

The researchers then applied a process called “clustering” to the GPS data, grouping locations where the vultures spent extended periods of time.

The clusters were analyzed to determine whether they were associated with animal carcasses. This step was crucial because vultures often gather in groups around carcasses, making it difficult to pinpoint exact locations without technological help.
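
The study's exact clustering procedure is not detailed here, but density-based clustering is a standard way to group GPS fixes where a bird lingered. The sketch below uses DBSCAN with a haversine distance metric; the 200 m radius, the minimum-point threshold, and the sample coordinates are placeholder assumptions rather than the study's parameters.

```python
# Illustrative sketch: grouping GPS fixes where a vulture lingered, using DBSCAN.
# The 200 m radius, min_samples value, and coordinates are placeholder assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_M = 6_371_000

# Hypothetical (latitude, longitude) fixes already flagged as "on the ground".
ground_fixes_deg = np.array([
    [-24.0001, 31.5002],
    [-24.0002, 31.5001],
    [-24.0003, 31.5003],
    [-23.9000, 31.6000],
])

coords_rad = np.radians(ground_fixes_deg)  # haversine metric expects radians
db = DBSCAN(
    eps=200 / EARTH_RADIUS_M,  # ~200 m search radius, expressed in radians
    min_samples=3,             # at least 3 nearby fixes to form a cluster
    metric="haversine",
).fit(coords_rad)

print(db.labels_)  # -1 marks noise; other labels mark candidate carcass clusters
```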

Once the clusters were identified, the researchers trained an AI algorithm to distinguish between areas with carcasses and those without.

The results were impressive: the model identified carcass locations with 92% precision and 89% recall. Field teams then used this data to investigate over 1,900 clusters, confirming the presence of carcasses at 580 of them.
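
For readers unfamiliar with the metrics: precision is the share of predicted carcass clusters that really contained a carcass, and recall is the share of true carcass clusters the model managed to flag. A minimal illustration with made-up counts, not the study's actual confusion matrix:

```python
# Minimal illustration of precision and recall with made-up counts
# (not the study's actual confusion matrix).
true_positives = 92    # clusters predicted as carcasses that really had one
false_positives = 8    # clusters predicted as carcasses that did not
false_negatives = 11   # carcass clusters the model missed

precision = true_positives / (true_positives + false_positives)  # ~0.92
recall = true_positives / (true_positives + false_negatives)     # ~0.89
print(f"precision={precision:.2f}, recall={recall:.2f}")
```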

The success of this approach demonstrates the potential of combining natural animal behavior with AI technology to solve complex environmental challenges.

This method is not only effective for vultures; it can also be adapted to other species, allowing researchers to track a variety of ecological resources, such as water sources or roosting sites.

Furthermore, this system has broader applications in wildlife conservation. By detecting carcasses in the wild, researchers can track disease outbreaks in animals, or identify cases of environmental poisoning, such as cyanobacteria toxins that have killed elephants in Botswana.

The system could also be used to uncover illegal wildlife activities, such as poaching or unauthorized animal disposal.

One of the key advantages of this approach is that it doesn’t rely on large numbers of tagged vultures. The system works even if only one vulture is present at a carcass site, making it cost-effective and easier to implement across vast landscapes.

This flexibility is a significant improvement over previous methods that required multiple tagged vultures to confirm carcass locations.

In addition to carcass detection, the method can be adapted to monitor other wildlife behavior, such as nest identification during breeding seasons.

This versatility shows that the combination of vultures, bio-logging technology, and machine learning could be a powerful tool for understanding animal behavior and improving conservation efforts.

In conclusion, this study showcases how technology can harness an animal’s natural abilities to help monitor and protect wildlife.

With its potential applications for conservation, disease tracking, and illegal activity detection, this research offers a new way forward for wildlife management and environmental monitoring.

Image by Fireblaze64, from DeviantArt

Major Update To Pokémon GO With Large Geospatial Model Integration

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Niantic’s Large Geospatial Model enhances Pokémon GO with a global 3D map, improving AR interactions and enabling persistent location-based experiences.

In a Rush? Here are the Quick Facts!

  • The LGM creates a global 3D map linking data from billions of images.
  • Players will experience more realistic AR with smoother, seamless interactions in diverse locations.
  • LGM allows Pokémon GO to predict and generate AR in unscanned locations.

Niantic, the developer behind the wildly popular augmented reality game Pokémon GO, has recently announced a new update that promises to revolutionize how players interact with the game.

The update centers around the integration of a Large Geospatial Model (LGM), a technology that will significantly enhance the game’s augmented reality (AR) experience, providing players with a more immersive and accurate representation of the real world.

The introduction of the LGM is set to bring major changes to Pokémon GO, starting with how the game maps and interacts with the physical environment.

Traditionally, Pokémon GO relied on a variety of local models created from player-submitted scans of different locations.

These models provided the game with the visual data needed to generate AR content, such as Pokémon appearing at specific locations. However, these models were often limited in their ability to accurately represent the vast and diverse world that players interact with.

The new Large Geospatial Model goes beyond these localized scans by creating a global, 3D digital map that links data from multiple sources to build a much larger, more detailed model of the world.

Niantic’s LGM combines billions of images and data points captured by Pokémon GO players and other sources to create a highly detailed, scalable 3D representation of the environment.

This means that when players open the game, they’ll experience more realistic AR content, with digital elements that interact more naturally with the real world around them.

One of the most significant improvements the LGM brings to the game is the ability to predict and generate AR experiences even in locations that haven’t been directly scanned by players.

In other words, the model will not only know what areas look like from a player’s point of view but will also be able to infer what places might look like from different angles or under different lighting conditions.

This ability to “imagine” missing parts of the environment is a key feature of the LGM and allows for a smoother, more seamless gameplay experience.

The update will also allow Niantic to create more persistent AR experiences, where digital elements remain anchored to specific real-world locations over time.

This has the potential to enhance the game’s social features, allowing players to leave “digital markers” or even have real-world events take place in augmented spaces, fostering deeper interactions between the physical and digital worlds.
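
Niantic has not published how such anchors are represented, but the idea of persistent, location-pinned content can be illustrated with a generic data structure that ties a digital object to geographic coordinates. Everything below, including the field names, is a hypothetical sketch rather than Niantic's implementation or API.

```python
# Hypothetical illustration of a persistent AR anchor record;
# not Niantic's internal representation or API.
from dataclasses import dataclass

@dataclass
class ARAnchor:
    anchor_id: str      # stable identifier so the anchor persists across sessions
    latitude: float     # geographic position the content is pinned to
    longitude: float
    altitude_m: float   # height above sea level, in meters
    heading_deg: float  # orientation of the placed content
    payload: str        # reference to the digital object (e.g., a "digital marker")

# A player could leave a marker at a landmark; a later visitor's client would
# resolve the same anchor against the shared map to place the content again.
plaza_marker = ARAnchor("anchor-0001", 40.7580, -73.9855, 15.0, 90.0, "team-banner")
print(plaza_marker)
```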

By leveraging the power of the Large Geospatial Model, Niantic is positioning Pokémon GO as a pioneer in the future of augmented reality gaming, paving the way for a more integrated and interactive digital world.