100% Recognition of Pests Possible Using Artificial Intelligence

By Brent Martin, adjunct senior research fellow, University of Canterbury

The aim of this project was to see whether the latest Machine Learning (Artificial Intelligence) tools could correctly distinguish rats, stoats, possums and other animals in videos collected in the field. The project was given to a group of 28 final-year honours students at the University of Canterbury.

Below is an example of the possum video used for machine learning.

Did it work?

The short answer is that the AI tools correctly identified predators about 95% of the time from individual images and 100% of the time from videos. There are a couple of caveats about these accuracy figures, since the identified pests all came from the same cameras and it is a very small data set. However, as more cameras and data are added to the system, it should keep getting better over time.

Technical explanation

The tool used for this was Theano, an open-source Python library developed specifically for Machine Learning. It is very similar to Google's TensorFlow. Deep learning works by crunching lots of examples rather than being explicitly programmed: the data writes the software rather than people, which makes it very powerful for this type of application.
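To make this concrete, here is a minimal sketch of how a classifier can be set up in Theano. It is a toy single-layer softmax model rather than the deep network the students actually trained, and the variable names, image size and learning rate are illustrative only, but it shows the idea: the maths is described symbolically, tagged examples are fed in, and Theano works out how to adjust the weights.

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic inputs: a batch of flattened image pixels and their integer labels.
x = T.matrix('x')          # shape: (batch, n_pixels)
y = T.ivector('y')         # shape: (batch,)

n_pixels, n_classes = 64 * 48, 4   # e.g. rat, stoat, possum, other
W = theano.shared(np.zeros((n_pixels, n_classes), dtype=theano.config.floatX), name='W')
b = theano.shared(np.zeros(n_classes, dtype=theano.config.floatX), name='b')

# Probability of each class, and the loss we ask Theano to minimise.
p_y = T.nnet.softmax(T.dot(x, W) + b)
loss = T.mean(T.nnet.categorical_crossentropy(p_y, y))

# Theano derives the gradients symbolically -- the "data writes the software" step.
g_W, g_b = T.grad(loss, [W, b])
lr = 0.1
train = theano.function(inputs=[x, y], outputs=loss,
                        updates=[(W, W - lr * g_W), (b, b - lr * g_b)])

predict = theano.function(inputs=[x], outputs=T.argmax(p_y, axis=1))
```

Each call to `train` with a batch of tagged frames and labels nudges the weights a little; once training is finished, only `predict` is needed in the field.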

The learning is computationally intensive – it takes a lot of computing to learn from the tagged videos. Applying the resulting model to a new video to identify what is in it, however, requires very little computing and can run in fractions of a second on a computer as cheap as $30 (e.g. a Raspberry Pi). This means a relatively inexpensive, low-bandwidth device could be used to give dramatically improved pest sampling and reporting.
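As an illustration of how lightweight the recognition step is once training is done (the weights file, labels and single-layer model below are hypothetical), classifying a frame boils down to a couple of small matrix operations, which a Raspberry Pi handles comfortably:

```python
import numpy as np

# Hypothetical file of weights exported after training on a bigger machine.
weights = np.load('pest_classifier.npz')
W, b = weights['W'], weights['b']
LABELS = ['rat', 'stoat', 'possum', 'other']

def classify(frame):
    """Classify one flattened greyscale frame; just two cheap matrix ops."""
    scores = frame @ W + b
    return LABELS[int(np.argmax(scores))]
```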

The other encouraging result from the initial investigation is that when the images were shrunk to as little as 64 × 48 pixels the learning seemed to work just as well, if not better. This is good news for the bandwidth required to collect data. It is also useful because heat cameras are available at this resolution, and they could be even more accurate for pest identification given that no illumination needs to come on to light up the animals.
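Shrinking the frames is straightforward. Here is a small sketch using the Pillow library (the 64 × 48 size matches the resolution mentioned above; the function name is ours):

```python
from PIL import Image
import numpy as np

def to_tiny_greyscale(path, size=(64, 48)):
    """Shrink a frame to 64x48 greyscale pixels and flatten it for the classifier."""
    img = Image.open(path).convert('L').resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0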

Next steps

Our aim is to use this project to automatically tag the videos that our field device uploads to the cloud. Any videos that still need manual tagging will then be added to the learning computation, which will result in better identification for all subsequent videos. The nice thing about this process is that the additional learning does not require more coding, just more processing.
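A rough sketch of that feedback loop is below. The confidence threshold and the `model` and `manual_queue` objects are placeholders for whatever the real pipeline ends up using; the point is that improving the system means adding more tagged examples, not writing new code.

```python
CONFIDENCE_THRESHOLD = 0.9   # assumed cut-off, not a figure from the project

def handle_upload(frames, model, manual_queue):
    """Auto-tag an uploaded video, falling back to manual tagging when unsure."""
    probs = model.predict_proba(frames).mean(axis=0)   # average over all frames
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return int(probs.argmax())        # tag applied automatically
    # Otherwise a person tags the video; the tagged frames are then added to
    # the training set, and the next (purely computational) re-training run
    # improves identification for all subsequent videos.
    manual_queue.append(frames)
    return None
```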

Diagram of a multi-layer feedforward artificial neural network. Credit: Christoph Burgmer.

When the learning is confirmed to be consistently 100% accurate, we will look to run the algorithm on the local device so that just the results can be sent rather than the full videos. This would enable a device to identify all pest types with very high accuracy, in real time.
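For example, instead of uploading the whole video, the device could send a few bytes of JSON describing what it saw. The endpoint URL and field names here are placeholders, not part of the project's actual API:

```python
import json
import time
import urllib.request

API_URL = 'https://example.org/api/sightings'   # placeholder endpoint

def report_sighting(label, confidence, camera_id):
    """Send a small JSON summary instead of megabytes of video."""
    payload = json.dumps({
        'camera': camera_id,
        'label': label,
        'confidence': round(float(confidence), 3),
        'time': int(time.time()),
    }).encode()
    req = urllib.request.Request(API_URL, data=payload,
                                 headers={'Content-Type': 'application/json'})
    urllib.request.urlopen(req)
```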

In addition to identifying animal interactions, the system could also learn to recognise when a trap has captured a pest, so the device becomes not only a pest monitor but can also send an alert when a trap needs resetting.

Different sound and video lures can be rapidly experimented with to see how effective they are at attracting different predators. The automatic identification can then quickly generate data showing which lures work best in different settings.

Making the identification run very fast would be the next goal, so that some type of kill mechanism could be activated in real time (e.g. squirting poison).

This same learning technique can also be applied to automatic bird counts from videos recorded by the same device, so as well as eliminating pests it will be possible to directly measure the impact on birds.

This project is totally open source, so it would be possible to create a user interface that lets anyone upload videos and have them automatically tagged. This is not a priority for us, as we are creating a device that can be fully automated, allowing us to scale the impact.

There’s a lot to do, so let us know if you are able to help.

Note from the Cacophony team: many thanks to Brent and the other folks at University of Canterbury for this remarkable, useful piece of work!
