Progress report – February 2020 – Thermal imaging for human-wildlife conflict

Our alert system combines low-cost thermal sensors, machine learning and wireless transmission into an affordable package that warns communities when elephants are nearby, giving people the chance to act safely before a human-elephant conflict (HEC) incident occurs. We chose thermal sensors because they perform better than optical camera traps at night and in low-light conditions.

Combining low-cost, low-resolution thermal sensors with machine learning for automatic identification is what makes our system unique. Low-resolution sensors balance the resolution needed for reliable detection against an affordable price. Automatic alerts of elephant presence warn residents of potentially dangerous situations, help create reliable incident records, and build an understanding of elephant movement that can inform long-term conflict management strategies.

Project phases

1)    Identify the best, affordable thermal sensor for our application through research and testing at controlled elephant habitats and field sites.

2)    Prove that images taken with an affordable, low-resolution thermal sensor are suitable for machine learning by taking thousands of thermal elephant and human photos, creating a labelled database, and creating a machine learning algorithm.

3)    Design and build the first field-ready prototype of the thermal camera with power efficiency, solar power, automatic identification and alert transmission over cellular and/or long-range radio (LoRa) networks.
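Phase 3 is still ahead of us, but as a rough illustration of what the alert transmission could look like, below is a minimal Python sketch that pushes a detection message out as an SMS through a serial AT-command cellular modem. The serial port, phone number and message text are placeholders rather than our actual design, and a LoRa link would use different hardware and code.

```python
# Hypothetical sketch only: sends an elephant-detection alert as an SMS via a
# serial AT-command cellular modem attached to the camera's Raspberry Pi.
# The port, number and message below are placeholders, not project settings.
import time
import serial

def send_alert_sms(port="/dev/ttyUSB0", number="+441234567890",
                   message="Elephant detected near camera 1"):
    """Send a plain-text SMS using standard GSM AT commands."""
    with serial.Serial(port, baudrate=115200, timeout=5) as modem:
        modem.write(b"AT+CMGF=1\r")                     # switch modem to text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{number}"\r'.encode())   # start a message to the number
        time.sleep(0.5)
        modem.write(message.encode() + b"\x1a")         # Ctrl-Z terminates the SMS
        time.sleep(2)
        return modem.read_all()                         # modem's response, for logging

if __name__ == "__main__":
    print(send_alert_sms())
```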

Phase 1 of our project was completed in 2018 by trialling several different thermal sensors at ZSL Whipsnade Zoo’s elephant habitat and at several field sites in Assam, India. Reports on our progress during phase 1 can be found on WILDLABS by searching for the Arribada Initiative. This initial testing led us to choose the FLIR Lepton series of thermal sensors for our system.

Phase 2 – Building a database of thermal elephants

The success of our thermal elephant alert system rests on the ability of a camera to accurately distinguish an elephant from other subjects that pass by. And the success of machine learning algorithms often rests on the quality of their training datasets. In order to prove a computer could recognize the low-resolution silhouette of an elephant, we needed to create an extensive thermal image library of elephants and other subjects.

We returned to ZSL Whipsnade Zoo’s elephant habitat to build our image library. While we did manage to photograph wild elephants on our FLIR Lepton 2.5 thermal cameras during field testing in Assam, India, the elephant habitat at ZSL provided constant opportunities to photograph elephants in a variety of conditions. The zoo habitat was also a good place to gather images with both the FLIR Lepton 2.5 80×60 resolution sensor and the FLIR Lepton 3.5 160×120 resolution sensor. While the FLIR Lepton 2.5 sensor proved to take recognizable images in the field, we wanted to directly compare the performance of computer recognition between the two sensors, especially with smaller or more distant subjects.

Data collection methodology

Our data collection intern, Sophie Vines, took pictures of elephants from many different angles and distances, in different environmental conditions, and both in groups and standing alone. This created a comprehensive library of images for training the algorithm to recognize elephants in many different scenarios. We primarily used the same Raspberry Pi-based automatic cameras used in India to collect data at the zoo. Because these cameras’ motion detectors only trigger on subjects less than 12 meters away, we used the cameras’ time-lapse mode, programmed to take an image every 5 seconds. This allowed us to capture elephants at any distance from the camera.
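For readers curious about the time-lapse setup, the short sketch below shows one way of capturing a frame from a Lepton 2.5 every 5 seconds on a Raspberry Pi. It assumes the sensor is wired to the Pi’s SPI bus and uses the open-source pylepton library; it illustrates the approach rather than being our exact capture code.

```python
# Minimal time-lapse sketch, assuming a FLIR Lepton 2.5 on the Raspberry Pi's
# SPI bus and the open-source pylepton library; the 5-second interval mirrors
# the zoo deployment described above, and the file naming is ours for illustration.
import time
import numpy as np
import cv2
from pylepton import Lepton

INTERVAL_S = 5  # one frame every 5 seconds

def capture_frame(device="/dev/spidev0.0"):
    """Grab one raw 14-bit thermal frame and scale it to an 8-bit image."""
    with Lepton(device) as lepton:
        raw, _ = lepton.capture()                    # (60, 80, 1) array of raw counts
    cv2.normalize(raw, raw, 0, 65535, cv2.NORM_MINMAX)
    return np.uint8(raw >> 8)                        # keep the most significant byte

if __name__ == "__main__":
    while True:
        frame = capture_frame()
        cv2.imwrite(time.strftime("lepton_%Y%m%d_%H%M%S.png"), frame)
        time.sleep(INTERVAL_S)
```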

This is how Sophie describes her data collection experience:

I set up the automatic cameras on tripods on the edge of the elephants’ outside grass paddock. Because I learned their routine, I would be ready for them at the paddock entrance at 10am when they ambled out for the day and around 4pm when they would troop back inside. Close-up shots were almost guaranteed as they had to walk right past the cameras on their way in or out of the barn. During the day, I could get many shots of the elephants at different distances out in the paddock. There were also spontaneous moments where one or more of the female elephants would come up to the camera, sometimes to get a drink from the nearby water trough or to suss me and my equipment out.

To maximize data collection, cameras were set up in the elephant barn to capture images overnight. With the automatic cameras set to capture images at 5-second intervals, I managed to gather around 5,000 images of elephants within 10m of the camera over 2 weeks. Leaving the camera indoors also meant I could capture images with different background conditions from those taken in the outdoor paddock. The cameras were also left out while zookeepers worked in the elephant barn, to increase the number of human images in the database.

I also recorded different variables that could affect the computer’s ability to accurately recognize a subject and that could be used to understand the limitations of the computer identification. I estimated the distance to human and elephant subjects by measuring the distance to items in the paddock and comparing them to the subject’s position in the image. I also recorded the temperature at the time of image capture, to establish whether the computer had difficulty differentiating subjects from the background when their temperatures were similar. This became difficult when cameras were left out for extended periods during which the temperature fluctuated; at those times, a temperature probe that recorded the maximum, minimum and current temperature was left out to establish a range for the period.

We teamed up with Liverpool John Moores University, whose team is building the computer recognition algorithm. To best train the computer, our intern also annotated the images with boxes outlining the subjects and labels describing what is in each box: human or elephant. We labelled all our images using DeepLabel, a program built by the university team for image annotation.
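The post does not go into DeepLabel’s file format, so purely for illustration the sketch below assumes one JSON annotation per image, with pixel-coordinate bounding boxes and a class name of “elephant” or “human”, and shows how such labels could be gathered up for training.

```python
# Illustrative only: DeepLabel's actual output format is not described here, so
# this loader assumes a hypothetical one-JSON-file-per-image layout with
# pixel-coordinate bounding boxes labelled "elephant" or "human".
import json
from pathlib import Path

CLASSES = {"elephant": 0, "human": 1}

def load_annotations(label_dir):
    """Return (image_name, [(class_id, x_min, y_min, x_max, y_max), ...]) pairs."""
    samples = []
    for label_file in Path(label_dir).glob("*.json"):
        record = json.loads(label_file.read_text())
        boxes = [(CLASSES[obj["label"]], *obj["bbox"]) for obj in record["objects"]]
        samples.append((record["image"], boxes))
    return samples

# Example of a record this loader expects:
# {"image": "lepton_20190612_101503.png",
#  "objects": [{"label": "elephant", "bbox": [12, 20, 64, 58]}]}
```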

Results

  • Number of photos collected: 36,643
  • Number of photos labelled: 28,048
  • Hours spent taking images: ~160 (approx. 10 hours per week over 4 months)
  • Hours spent labelling: ~112 (approx. 7 hours per week over 4 months)

Going forward

We are currently using this database to train a computer to recognize humans and elephants. We have only preliminary results at this time, but we hope to share the results of the full algorithm work once it is finished.
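To make the training step concrete, here is a small PyTorch sketch of the kind of classifier that could be trained on single-channel 160×120 Lepton 3.5 frames. It is not the Liverpool John Moores University team’s model, just a minimal stand-in that shows why the labelled image library matters.

```python
# Not the LJMU team's actual algorithm: a minimal stand-in showing how a small
# CNN could be trained to separate elephants from humans in 160x120 thermal frames.
import torch
import torch.nn as nn

class ThermalClassifier(nn.Module):
    """Tiny CNN mapping a 1x120x160 thermal frame to elephant/human logits."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 15 * 20, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One optimisation step on dummy data, standing in for the labelled database.
model = ThermalClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 1, 120, 160)      # batch of normalised thermal frames
labels = torch.randint(0, 2, (8,))       # 0 = elephant, 1 = human
loss = nn.functional.cross_entropy(model(frames), labels)
loss.backward()
optimiser.step()
```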

Thanks to WWF Netherlands and WILDLABS, who funded this project in 2018 as a winner of the Human-Wildlife Conflict Tech Challenge, and to ZSL, who worked with us in 2019 to use the elephant habitat at Whipsnade Zoo to collect our thermal images.

Post written by Anne Dangerfield, Arribada Initiative project manager, field coordinator and tree climber



7 Comments

  1. Gadhu Sundaram

    Hello,

    Just to introduce myself, my name is Gadhu Sundaram. I live in Edinburgh and work for a UK corporate as a senior technologist. But I am from the Nilgiri Hills in the Western Ghats of India, where I own a piece of land and am trying out rainforest regeneration by growing native species. You can check it out at www.jadeshola.com. I need to protect the saplings at least until they take hold properly, but we are having trouble with elephants. The piece of land is in the hills (at about 2000 metres above sea level). I was looking for affordable technologies that I could try out for early warning, specifically for elephants. We have a normal fence that keeps out the deer and the bison, but as you can imagine nothing can keep out the elephants. I am also willing to do any trials and collect any pictures that might help. I can spend a reasonable amount on my own equipment and also contribute as a citizen researcher. My day job is as a senior technologist in the R&D dept of a large corporate, so I'm quite familiar with a range of tech, but this is purely personal. I was wondering if it could be possible for us to collaborate. I also want to understand whether this is under licensed IP, any commercial considerations, etc.

    • betaua1

      Many thanks for your reply Gadhu. We will contact you directly via email to discuss how we could support you further.

      Software is open source (GPLv3) and hardware is published under the CERN OHL v1.2.

      Kind regards,

      Alasdair

    • Subrat Kar

      Mr Sundaram: We could try a few things to detect elephants on your estate. It is not cheap however – so budgets could be in the order of several thousands of pounds per implementation. Email me at subrat.kar+elephants@gmail.com if you think we could. Subrat Kar

  2. Dilan Perera

    Can the general public get those images?

