Efficient Geospatial Search Using Visual Active Search
Category: Computer Science · Friday, January 26, 2024, 06:49 UTC

Researchers at Washington University in St. Louis have developed a new geospatial search framework that uses aerial imagery and visual reasoning to help combat illegal poaching and human trafficking more efficiently. The framework, called Visual Active Search (VAS), combines computer vision with adaptive learning and outperformed all baselines in the team's experiments. The team plans to extend the framework to applications such as wildlife conservation and search and rescue operations.
Illegal poaching and human trafficking have been widespread problems in recent years, and traditional methods of combating them have proven only partially effective. To address this, researchers at Washington University in St. Louis have developed a new geospatial search framework that combines computer vision with adaptive learning to improve search techniques. The framework, called Visual Active Search (VAS), uses aerial imagery and visual reasoning to guide physical search and locate objects of interest more efficiently.
Led by professors Yevgeniy Vorobeychik and Nathan Jacobs, the team aims to shift computer vision toward real-world applications and impact by integrating human and artificial intelligence: humans perform local searches on the ground, while AI uses aerial geospatial images and the observations gathered so far to guide subsequent searches.
The VAS framework consists of three key components: an image of the entire search area, divided into regions; a local search function that determines whether a specific object is present in a given region; and a fixed search budget that limits how many times the local search function can be executed. The goal is to maximize the number of objects detected within the allocated budget, and the team found that their approach outperforms all baselines in their experiments.
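The three components above suggest a simple budgeted search loop. The sketch below is only an illustration of that setup, not the authors' implementation: the region grid, scoring model, and `local_search` oracle are hypothetical stand-ins.

```python
import random

def run_search(region_scores, local_search, budget):
    """Greedily query the highest-scoring unexplored region until the
    fixed search budget is exhausted; return the regions where the
    local search found the target object."""
    unexplored = set(region_scores)
    found = []
    for _ in range(budget):
        if not unexplored:
            break
        # Query the region the model currently believes is most promising.
        region = max(unexplored, key=region_scores.get)
        unexplored.remove(region)
        if local_search(region):  # the costly physical/ground-truth check
            found.append(region)
    return found

# Toy usage: nine regions on a 3x3 grid, targets hidden in two of them.
targets = {(0, 1), (2, 2)}
scores = {(r, c): random.random() for r in range(3) for c in range(3)}
hits = run_search(scores, lambda reg: reg in targets, budget=9)
```

In practice the budget is far smaller than the number of regions, which is what makes the quality of the region scores matter.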
The findings were presented at the Winter Conference on Applications of Computer Vision and are also available on the arXiv preprint server. First author Anindya Sarkar, a doctoral student in Vorobeychik's lab, presented the work at the conference, explaining that the approach exploits spatial correlation between regions to scale active search up to larger areas.
The VAS method is distinctive in its interactive nature: it continuously learns from the outcomes of prior searches. Similar ideas had been suggested before but proved too costly to scale to large visual spaces. The team's breakthrough came from rethinking the underlying formulation relative to previous techniques, allowing them to make effective use of large amounts of image data.
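One way to picture the adaptive step is that each query's outcome updates the scores of spatially correlated (here, grid-adjacent) regions before the next query is chosen. The update rule and weights below are illustrative assumptions, not the published VAS policy.

```python
def update_scores(scores, queried, outcome, weight=0.5):
    """Nudge the scores of grid neighbours of `queried` up on a hit and
    down on a miss, reflecting spatial correlation between regions.
    Returns a new score dictionary."""
    r, c = queried
    neighbours = {(r + dr, c + dc)
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)} - {queried}
    delta = weight if outcome else -weight
    return {reg: (s + delta if reg in neighbours else s)
            for reg, s in scores.items()}

# Toy usage: a hit at the centre of a 3x3 grid raises its neighbours' scores.
scores = {(r, c): 0.0 for r in range(3) for c in range(3)}
scores = update_scores(scores, queried=(1, 1), outcome=True)
```

Interleaving such updates with the query loop is what makes the search "active": each result reshapes where the remaining budget is spent.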
In the future, the team plans to explore ways to expand their framework for use in various applications, such as wildlife conservation, search and rescue operations, and environmental monitoring. The potential of this method to be specialized for different domains could greatly impact the efficiency and success of geospatial searches in multiple areas.