WSU, Google researchers find AI can streamline wildlife tracking, cutting time from months to days


The implications could be significant for conservation, WSU said. Faster processing means wildlife managers can move more quickly from collection to decision-making.

PULLMAN, Wash. — A new study led by researchers at Washington State University and Google finds artificial intelligence can dramatically speed up the painstaking work of tracking wildlife with remote cameras, cutting analysis time from months or even a year to just days, while producing nearly the same scientific conclusions as humans.

The study, published in the Journal of Applied Ecology, tested whether a fully automated AI system could replace humans in processing hundreds of thousands, or even millions, of camera trap images. The images were collected in Washington state, Montana’s Glacier National Park, and Guatemala’s Maya Biosphere Reserve.

Researchers found that, for most species, models built from AI-identified images closely matched those produced by human experts. Across key measures such as where animals are located and what environmental factors influence them, the results aligned in roughly 85–90% of cases, with limited divergence for rare or difficult-to-identify species, WSU said.

The implications could be significant for conservation, the university said. Faster processing means researchers and wildlife managers can move more quickly from collecting data to making decisions, potentially enabling near real-time monitoring of species such as jaguars, wolves, and grizzly bears.

“We’re not trying to replace people,” said WSU wildlife ecologist Daniel Thornton, the lead author of the study. “The goal is to help researchers get to answers faster so they can make better decisions about managing and conserving wildlife.”

Traditionally, that process has been slow and labor-intensive. Camera traps, which are motion-activated cameras placed in forests and other habitats, can generate enormous datasets. A single project may produce hundreds of thousands or even millions of images that must be reviewed to determine which species appear in each frame.

Even with a team of undergraduate assistants and a graduate student verifying identifications, Thornton said the process typically takes six to seven months, and sometimes up to a year, before analysis can begin.

Early AI tools offered some relief by filtering out blank images, often 60–70% of the total, but still required humans to review tens of thousands of photos containing animals. The new study tested whether that final human step could be eliminated.

Using a general AI model called SpeciesNet, developed by Google, the researchers ran images through a fully automated pipeline with no human review and compared the results to traditional, expert-labeled datasets.

“The key question wasn’t whether the AI got every image right,” said Dan Morris, a senior staff research scientist at Google who helped create SpeciesNet and is a co-author on the study. “It was whether the ecological conclusions you care about would end up being basically the same.”

For most species, they were. Even when the AI made mistakes, such as misidentifying animals or missing detections, the overall models remained robust because occupancy models rely on repeated observations over time, WSU said. In practical terms, the time savings are dramatic. Fully automated processing can now be completed in just a few days, reducing a months-long bottleneck to roughly a week.
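The robustness to occasional AI misses follows from basic probability: because occupancy models pool repeated surveys at each camera site, the chance that an occupied site goes entirely undetected shrinks quickly as visits accumulate. A minimal sketch of that intuition in Python (the per-visit detection probabilities and visit counts below are illustrative assumptions, not figures from the study):

```python
def prob_detected_at_least_once(p_detect: float, n_visits: int) -> float:
    """Probability an occupied site is detected on at least one of
    n_visits independent surveys, given per-visit detection probability p_detect."""
    return 1.0 - (1.0 - p_detect) ** n_visits

# Even if detections are missed on half of all visits where the animal
# is present, ten repeat surveys leave only a ~0.1% chance of never
# recording it at an occupied site.
print(round(prob_detected_at_least_once(0.5, 10), 4))  # 0.999

# A weaker detector (30% per visit) still catches occupied sites
# about 97% of the time over ten surveys.
print(round(prob_detected_at_least_once(0.3, 10), 4))  # 0.9718
```

This is why site-level conclusions can stay stable even when individual image labels are imperfect: errors on single frames are averaged out across the repeated observations the models are built on.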

That efficiency could be transformative, particularly for smaller or underfunded conservation groups. It may also allow researchers to expand monitoring efforts without being limited by data processing capacity. The project also contributed to the broader AI-for-conservation community by making part of its dataset publicly available, helping support tools like SpeciesNet that rely on shared data to improve.

Morris emphasized that the study takes a practical approach. Rather than developing new AI algorithms, the team focused on what current tools can already do.

“We weren’t trying to invent a new model,” he said. “We were asking whether, given where the technology is today, people can rely on it for the kinds of analyses they already do.”

The answer, at least for many common species and standard ecological models, appears to be yes.

There are still limitations, the university added. Human review is needed for many other applications of camera-trap data, and the paper dealt with only a small subset of the species that may be caught on camera. Very rare and easily confused species, for example, remain problematic for AI detection. But the findings suggest that, in some cases, image processing no longer needs to be a major constraint on large-scale camera-trapping studies.

“The big takeaway is that this doesn’t have to be a bottleneck anymore,” Thornton said. “If we can process data faster, we can respond faster, and that’s really what matters for conservation.”
