Tuning the Motion Detector

The thermal video footage from the Cacophonator devices has proven invaluable for machine learning and for studying predator behaviour.  Occasionally, however, we noticed that recordings of some animals were starting later than they should.  In this article we discuss why this was happening and what we have done to improve our animal detection algorithm.

Unlike most off-the-shelf trail cameras, our camera is always on, and we use changes in its thermal image to detect animals rather than a separate motion detector.  Our own comparisons have shown that this means we detect more animals than any other trail camera we have tried.

How did it get there? Recordings shouldn't start with an animal in the middle of the frame.

However, we still felt there were improvements to be made.  For instance, sometimes a recording would start with an animal already in the middle of the picture, without us ever seeing it walk into the frame.  In such cases we knew we had failed to detect the animal earlier.

The first challenge in fixing this problem was getting footage of the cases that were not being recorded properly.  Sure, we had video of the problem cases after the animal was detected, but what we really needed to study were the earlier frames, before the animal was detected.

Luckily, after a bit of reflection, I realised we already had the footage we required.  Since we always record for 10 seconds after we last detect an animal, any animal in the last 10s of any video is an undetected animal.  Going back through the footage from the previous weeks then gave me enough examples to work with.

While reviewing the footage I quickly realised that problems often occurred when animals, particularly curious possums, were barely moving.  A code review showed that when looking for motion we were only analysing the last three frames, or 1/3 of a second of footage.  This is not a very long time, so I wondered whether we could improve our results by considering changes over a longer period, perhaps 1 or 3 seconds.
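To see why a longer window helps, here is a minimal sketch of the idea, with made-up sensor values (the function name, frame rate constant, and toy scene are illustrative, not the project's actual code).  An animal that is barely moving produces almost no heat change between adjacent frames, but the change accumulates over a longer gap:

```python
import numpy as np

FRAME_RATE = 9  # frames per second (assumed; 3 frames = 1/3 s implies 9 fps)

def frame_delta(frames, lookback_seconds):
    """Per-pixel heat change between the newest frame and the frame
    `lookback_seconds` earlier."""
    gap = int(lookback_seconds * FRAME_RATE)
    return frames[-1].astype(int) - frames[-1 - gap].astype(int)

# A toy "barely moving possum": one pixel whose heat changes only slowly.
frames = [np.full((8, 8), 3000) for _ in range(10)]
for i, f in enumerate(frames):
    f[4, 4] = 3000 + 5 * i  # 5 raw units of change per frame

# Adjacent frames differ by only 5 units: invisible to a 30-unit threshold.
short = np.abs(frame_delta(frames, 1 / FRAME_RATE)).max()
# Over a 1-second gap the same movement accumulates to 45 units.
long_ = np.abs(frame_delta(frames, 1.0)).max()
```

With the 30-unit threshold described later in the article, the short comparison misses this animal while the longer one catches it.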

To properly test which settings would be most effective, we wrote some code that lets us replay our existing CPTV video files through the detection algorithm.  This allowed me to experiment with different parameter settings and made it easy to see which were most effective at detecting animals.
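The shape of such a replay harness might look like the sketch below.  The `replay` helper and `SimpleDetector` class are hypothetical stand-ins, not the project's actual API; a real harness would decode frames from CPTV files rather than build them in memory:

```python
import numpy as np

class SimpleDetector:
    """Stand-in detector: trigger when any pixel changes by at least
    `threshold` raw units since the previous frame."""
    def __init__(self, threshold=30):
        self.threshold = threshold
        self.prev = None

    def __call__(self, frame):
        frame = frame.astype(int)
        triggered = (self.prev is not None and
                     np.abs(frame - self.prev).max() >= self.threshold)
        self.prev = frame
        return triggered

def replay(frames, detector):
    """Feed recorded frames through `detector` and return the indices of
    frames that would have triggered motion detection."""
    return [i for i, frame in enumerate(frames) if detector(frame)]

# A flat scene where an animal appears in frame 3.
frames = [np.full((4, 4), 3000) for _ in range(5)]
frames[3][2, 2] = 3100

hits = replay(frames, SimpleDetector())
```

Because the harness is just a loop over stored frames, swapping in different detector parameters and diffing the resulting trigger indices is cheap, which is what made the parameter experiments easy.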

To help with this I also collected a number of test videos that I thought would be tricky for detection.  These included a set of hard-to-detect animals (such as slow possums and fast rats) and a second set of false positives from vegetation being blown about on windy days.  The negative set is important to consider because such videos consume resources while providing no useful information.

With this in place I was able to test our conjecture, and it quickly became obvious that increasing the time period between the frames we compare would improve our animal detection without increasing the number of false positives - a real win!  After analysing the possum videos I settled on a much longer interval of 10s, as it performed much better than 1s or 3s.  (Note that we still detect the animal just as quickly in real time; we simply compare the current frame with one from 10s ago rather than 1/9s ago.)
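Comparing against a frame from 10 seconds ago only requires keeping a bounded history of recent frames.  A minimal sketch, assuming a 9 fps frame rate and using a fixed-size buffer (the names are illustrative, not the project's code):

```python
from collections import deque
import numpy as np

FRAME_RATE = 9           # frames per second (assumed)
LOOKBACK_SECONDS = 10    # compare against the frame from ~10 s ago

# A bounded buffer: once full, its oldest entry is the frame from
# LOOKBACK_SECONDS ago, and old frames fall off automatically.
history = deque(maxlen=FRAME_RATE * LOOKBACK_SECONDS)

def motion_delta(frame):
    """Heat change between `frame` and the oldest buffered frame
    (~10 s earlier once the buffer has filled)."""
    frame = frame.astype(int)
    delta = frame - history[0] if history else None
    history.append(frame)
    return delta

# Fill the buffer with 10 s of a flat scene, then present a warm animal.
for _ in range(FRAME_RATE * LOOKBACK_SECONDS):
    motion_delta(np.full((4, 4), 3000))
animal_frame = np.full((4, 4), 3000)
animal_frame[1, 1] = 3050
delta = motion_delta(animal_frame)
```

Note that the comparison still happens on every incoming frame, which is why detection latency is unchanged: only the reference frame is older.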

Another benefit of comparing frames over a longer period is that the heat changes between them are bigger.  So instead of looking for a difference of at least 30 units of heat, we now look for a difference of 50 units or more.  This change was useful because it reduced the number of false-positive recordings.

I made other small but important changes.  One was to look only for warmer spots.  This matters because once an animal leaves the view, there is still a negative delta where the animal had been.  Looking only for positive deltas works well since all the animals we are looking for are mammals and warmer than their environment.
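The "warmer only" rule amounts to discarding negative heat changes before thresholding.  A small sketch with invented sensor values:

```python
import numpy as np

def warm_delta(current, reference):
    """Heat change from `reference` to `current`, keeping only pixels that
    got warmer; cooling pixels are clipped to zero."""
    delta = current.astype(int) - reference.astype(int)
    return np.clip(delta, 0, None)

scene = np.full((4, 4), 3000)
with_animal = scene.copy()
with_animal[2, 2] = 3060  # a warm animal in an otherwise flat scene

# Animal arriving: the positive delta survives and can trigger recording.
arriving = warm_delta(with_animal, scene).max()
# Animal leaving: the cold patch it leaves behind is clipped away.
leaving = warm_delta(scene, with_animal).max()
```

Without the clipping, the departure frame would look just as "interesting" as the arrival frame and could retrigger recording on an empty scene.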

The other change was to help us detect small, fast-moving animals.  Instead of requiring the same pixel to show a delta for two frames in a row (a guard against random sensor noise), we now only require that each of two consecutive frames has some pixels with a delta.
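The difference between the two rules can be sketched as follows, using the article's 50-unit threshold; the toy delta images are made up for illustration:

```python
import numpy as np

THRESHOLD = 50  # minimum per-pixel heat change, from the article

def triggers(deltas):
    """Relaxed rule: trigger when two consecutive delta images each contain
    at least one above-threshold pixel, anywhere in the frame."""
    hot = [(d >= THRESHOLD).any() for d in deltas]
    return any(a and b for a, b in zip(hot, hot[1:]))

# A fast rat: the hot pixel lands on a different spot in each frame, so the
# old same-pixel rule would never fire, but the relaxed rule does.
d1 = np.zeros((4, 4), int); d1[0, 0] = 60
d2 = np.zeros((4, 4), int); d2[3, 3] = 60
fast_rat = triggers([d1, d2])

# A single-frame sensor blip still gets filtered out, because the
# following frame shows nothing.
noise = np.zeros((4, 4), int); noise[1, 1] = 60
single_blip = triggers([noise, np.zeros((4, 4), int)])
```

The relaxed rule keeps the two-frame confirmation that suppresses one-off noise, while no longer assuming the animal stays on the same pixels between frames.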

Just when I thought I was done, I started looking into why recording kept stopping and starting while filming a hedgehog in a trap.  I replayed this video using the new algorithm to see if it would help, but it didn't.  Then I noticed that the motion detector was only triggering when the hedgehog showed its underbelly.  The trouble was that this hedgehog, like other small animals, didn't quite reach the temperature threshold we had set.  So I lowered that too.
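The effect of that gate can be shown with a tiny sketch.  The raw values and thresholds below are invented purely to illustrate the trade-off; the article does not give the project's real numbers:

```python
import numpy as np

OLD_MIN_TEMP = 3000  # hypothetical old gate
NEW_MIN_TEMP = 2950  # hypothetical lowered gate

def warm_enough(frame, min_temp):
    """True if any pixel in the frame reaches the minimum-temperature gate."""
    return int(frame.max()) >= min_temp

# A hedgehog seen from the back: its insulating spines keep the reading low.
hedgehog = np.full((4, 4), 2900)
hedgehog[2, 2] = 2980  # warmest visible point, still below the old gate

missed_before = not warm_enough(hedgehog, OLD_MIN_TEMP)
caught_now = warm_enough(hedgehog, NEW_MIN_TEMP)
```

The same lowering is what lets warm vegetation through on windy nights, which is the trade-off the next paragraph describes.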

This last change is the trickiest and most controversial.  It undoubtedly gives us some extra interesting videos that we were missing before.  However, it also means that on a windy night we can get a lot of false positives, because the vegetation can reach that temperature too.  So the next challenge is to work out how to reduce the number of false-positive videos.

The good news is that as part of this work we now have a 'library' of tricky CPTV videos to test against, so comparing different algorithms is now much easier.  These CPTV videos also run as module tests for the thermal recorder, so we should spot any inadvertent changes to our motion detection algorithm.  This is important because the problem with poorly performing motion detection is that you can't know what you aren't seeing.
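In spirit, such a module test pins down the expected detection result for a stored clip, so any regression fails loudly.  The sketch below uses in-memory stand-in clips and a simplified detector rather than the project's real test suite or CPTV decoding:

```python
import numpy as np

def detect_any_motion(frames, threshold=50):
    """True if any pixel warms by at least `threshold` raw units between
    consecutive frames (a simplified stand-in for the real detector)."""
    return any(
        (b.astype(int) - a.astype(int)).max() >= threshold
        for a, b in zip(frames, frames[1:])
    )

def test_slow_possum_clip_still_detected():
    # A clip that a past regression once missed should stay detected.
    frames = [np.full((4, 4), 3000) for _ in range(3)]
    frames[2][1, 1] = 3055
    assert detect_any_motion(frames)

def test_windy_vegetation_clip_stays_quiet():
    # A known false-positive clip should stay below the trigger threshold.
    frames = [np.full((4, 4), 3000), np.full((4, 4), 3010)]
    assert not detect_any_motion(frames)
```

Running both cases on every change is what guards against the "you can't know what you aren't seeing" problem: the library of tricky clips makes silent detection losses visible.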
