Google and CSIRO use AI to help protect the Great Barrier Reef
Tue, 17th May 2022

Google has partnered with CSIRO in Australia to implement AI solutions that help protect the Great Barrier Reef.

The project is part of Google's Digital Future Initiative, an AUD$1 billion investment in Australian infrastructure, research and partnerships.

With the help of the Kaggle data science community, Google has helped create a machine learning solution that analyses underwater images of the coral-eating crown-of-thorns starfish, whose outbreaks are currently devastating the reef.

Rather than relying on a diver towed behind a boat to spot the starfish manually, the AI technology analyses imagery from a live camera feed to detect them more accurately and efficiently. This makes it possible to prevent outbreaks more effectively and to track their status and progress more accurately.

“CSIRO developed an edge ML platform (built on top of the NVIDIA Jetson AGX Xavier) that can analyse underwater image sequences and map out detections in near real-time,” says a blog post from Google product manager Megha Malpani and Google software engineer Ard Oerlemans.
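
The post doesn't include the platform's source, but the basic shape of such an edge pipeline, reading frames from a live feed and running a detector on each, can be sketched in Python; the model path and camera index here are placeholders, not CSIRO's actual code:

```python
import cv2
import tensorflow as tf

# Placeholder model path and camera index; the real platform runs CSIRO's
# trained detector on a Jetson AGX Xavier against a live underwater feed.
detector = tf.saved_model.load("cots_detector_savedmodel")
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    # OpenCV delivers BGR frames; convert to RGB and add a batch dimension.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.expand_dims(tf.convert_to_tensor(rgb, dtype=tf.uint8), 0)
    detections = detector(batch)  # per-frame boxes/scores/classes
    # ...map out detections in near real-time here...
```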

“Our goal was to use the annotated dataset CSIRO had built over multiple field trips to develop the most accurate object detection model within a set of performance constraints, most notably, processing more than 10 frames per second (FPS) on a <30 watt device.”

After hosting a Kaggle competition, Google gathered insights from the open source community to shape its experimentation plan, ultimately running hundreds of experiments on Google TPUs.

The team then used TensorFlow 2's Model Garden library as a foundation, drawing on its scaled YOLOv4 model and the corresponding training pipeline implementation. The resulting model was evaluated for accuracy against CSIRO's annotated field data.
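
The post doesn't reproduce the training code, but Model Garden drives training from an experiment config; a rough sketch of that flow, with the experiment name, data path and step count as loud assumptions, might look like:

```python
import tensorflow as tf
from official.core import exp_factory, task_factory, train_lib  # TF Model Garden

# The experiment name and paths below are assumptions for illustration; the
# exact scaled-YOLOv4 config name depends on the Model Garden release.
exp_config = exp_factory.get_exp_config("scaled_yolo")             # hypothetical name
exp_config.task.train_data.input_path = "gs://bucket/cots-train*"  # hypothetical data
exp_config.trainer.train_steps = 20_000

strategy = tf.distribute.MirroredStrategy()  # the team's runs used Cloud TPUs
with strategy.scope():
    task = task_factory.get_task(exp_config.task)

train_lib.run_experiment(
    distribution_strategy=strategy,
    task=task,
    mode="train_and_eval",
    params=exp_config,
    model_dir="/tmp/cots_model",  # placeholder output directory
)
```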

“In parallel with our modeling workstream, we experimented with batching, XLA, and auto mixed precision (which converts parts of the model to fp16) to try and improve our performance, all of which resulted in increasing our FPS by 3x,” Malpani and Oerlemans remarked.
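
All three of those levers are standard TensorFlow 2 switches. A minimal sketch of how they are typically enabled, not the team's exact configuration:

```python
import tensorflow as tf

# XLA: let the JIT compiler fuse ops into faster kernels.
tf.config.optimizer.set_jit(True)

# Auto mixed precision: a graph rewrite that runs eligible ops in fp16.
tf.config.optimizer.set_experimental_options({"auto_mixed_precision": True})

# Batching: push several frames through the detector per call instead of one.
frames = tf.random.uniform((4, 1080, 1920, 3))  # dummy 4-frame batch
# detections = detector(frames)                 # detector as loaded earlier
```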

“We found, however, that on the Jetson module, using TensorFlow-TensorRT (converting the entire model to fp16) by itself actually resulted in a 4x total speed up, so we used TF-TRT exclusively moving forward.”
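
Converting a whole SavedModel to fp16 with TensorFlow-TensorRT is a short, standard recipe; a sketch with placeholder paths:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Placeholder model directories; TF-TRT rewrites the graph so supported
# segments run as fp16 TensorRT engines on the Jetson's GPU.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="cots_detector_savedmodel",
    precision_mode=trt.TrtPrecisionMode.FP16,
)
converter.convert()
converter.save("cots_detector_trt_fp16")
```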

In practice, the system pairs detection with tracking: once starfish are detected, optical flow is used to predict where they will appear in subsequent frames.

“After the starfish are detected in specific frames, a tracker is applied that links detections over time. This means that every detected starfish will be assigned a unique ID that it keeps as long as it stays visible in the video. We link detections in subsequent frames to each other by first using optical flow to predict where the starfish will be in the next frame, and then matching detections to predictions based on their Intersection over Union (IoU) score.”
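
That predict-then-match loop can be illustrated with a simplified sketch, using Farneback dense optical flow and greedy IoU matching rather than CSIRO's actual tracker:

```python
import cv2

def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def predict_box(prev_gray, gray, box):
    """Shift a tracked box by the mean dense optical flow inside it."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x1, y1, x2, y2 = (int(v) for v in box)
    dx, dy = flow[y1:y2, x1:x2].reshape(-1, 2).mean(axis=0)
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

def link(tracks, detections, threshold=0.3):
    """Greedily match each predicted track box to the new detection with the
    highest IoU; detections left unmatched would start new track IDs."""
    matches, used = {}, set()
    for track_id, predicted in tracks.items():
        candidates = [i for i in range(len(detections)) if i not in used]
        if not candidates:
            break
        best = max(candidates, key=lambda i: iou(predicted, detections[i]))
        if iou(predicted, detections[best]) >= threshold:
            matches[track_id] = best
            used.add(best)
    return matches
```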

“Our current 1080p model using TensorFlow TensorRT runs at 11 FPS on the Jetson AGX Xavier, reaching a sequence-based F2 score of 0.80! We additionally trained a 720p model that runs at 22 FPS on the Jetson module, with a sequence-based F2 score of 0.78,” the pair say.
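
The F2 score is the F-beta measure with beta = 2, which weights recall (missed starfish) more heavily than precision (false alarms); in this project the counts are aggregated over tracked sequences rather than individual frames. From raw counts it is computed as:

```python
def f2_score(tp, fp, fn):
    """F-beta with beta=2: recall is treated as twice as important as
    precision, reflecting that a missed starfish is costlier than a false alarm."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    beta_sq = 4.0  # beta = 2, squared
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```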

Google and CSIRO have also announced they are open-sourcing both object detection models (the 1080p and 720p versions) and have created a Colab notebook to demonstrate the server-side inference workflow.
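
A server-side inference call of the kind the notebook demonstrates would look roughly like this, with placeholder paths rather than the notebook's actual code:

```python
import tensorflow as tf

# Placeholder paths; the published notebook wires up the real open-sourced model.
model = tf.saved_model.load("cots_detector_savedmodel")
image = tf.io.decode_jpeg(tf.io.read_file("reef_frame.jpg"), channels=3)
outputs = model(tf.expand_dims(image, 0))  # batch of one
# Typical detector outputs include per-detection boxes, scores and classes.
```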

They say they plan to keep updating their models and trackers, eventually open-sourcing an entire TFX pipeline and dataset so that conservation organisations and governments around the world can retrain and modify the model with their own datasets.