Reducing Annotation Times: Semantic Segmentation of Coral Reef Survey Images
Authors: Jordan Pierce, Yuri Rzhanov, Kim Lowell, Jennifer Dijkstra
My first publication came from the first two chapters of my Master's thesis, which looked at automatically converting existing Coral Point Count (CPCe) annotations associated with an image into dense labels (i.e., pixel-level labels). CPCe annotations (and others like them) are sparse, meaning fewer than 1% of an image's pixels are labeled. Although this makes annotation convenient, coverage statistics calculated from sparse annotations may be less accurate than if all of the pixels were labeled.
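To make the sparse-versus-dense distinction concrete, here is a small illustrative sketch (not from the paper) comparing class coverage estimated from a handful of CPCe-style annotation points against coverage computed from a fully labeled mask:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dense" ground-truth label map: 100x100 pixels, classes 0 (sand) and 1 (coral).
dense = np.zeros((100, 100), dtype=int)
dense[20:60, 30:80] = 1  # a 40x50 coral patch -> true coral cover = 2000/10000 = 20%

# CPCe-style sparse annotation: label only 25 randomly placed points (<1% of pixels).
rows = rng.integers(0, 100, size=25)
cols = rng.integers(0, 100, size=25)
point_labels = dense[rows, cols]

true_cover = dense.mean()          # coverage computed from every pixel
point_cover = point_labels.mean()  # coverage estimated from just 25 points

print(f"true coral cover:  {true_cover:.1%}")
print(f"point-based cover: {point_cover:.1%}")
```

Because the point-based estimate is computed from a tiny sample, it can drift away from the true per-pixel coverage, which is the motivation for producing dense labels.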
In this article we first created Fast-Multilevel Superpixel Segmentation (Fast-MSS) as a way of converting CPCe annotations into dense labels automatically. We then compared our implementation with the original (shoutout to Inigo Alonso) on the CamVid dataset, demonstrating that ours is both faster and more accurate.
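The core idea can be sketched as follows. This is a simplified, illustrative version rather than the paper's code: real multilevel superpixel segmentation uses actual superpixel algorithms at several scales, whereas here regular grid cells stand in for superpixels. At each level, every segment containing annotation points takes their majority label; pixels left unlabeled at the finest level fall back to coarser levels:

```python
import numpy as np

def grid_segments(shape, cell):
    """Stand-in for a superpixel segmentation: regular grid cells of size `cell`."""
    rows, cols = np.indices(shape)
    return (rows // cell) * (shape[1] // cell + 1) + (cols // cell)

def propagate(seg, points):
    """Give each segment the majority label of the annotation points inside it."""
    out = np.full(seg.shape, -1, dtype=int)  # -1 = unlabeled
    coords = tuple(zip(*[(r, c) for r, c, _ in points]))
    for seg_id in np.unique(seg[coords]):
        labels = [lab for r, c, lab in points if seg[r, c] == seg_id]
        vals, counts = np.unique(labels, return_counts=True)
        out[seg == seg_id] = vals[np.argmax(counts)]
    return out

def fast_mss_sketch(shape, points, cells=(8, 16, 32)):
    """Multilevel propagation: prefer the finest level that labels a pixel,
    falling back to coarser levels for full coverage."""
    dense = np.full(shape, -1, dtype=int)
    for cell in cells:  # fine -> coarse
        level = propagate(grid_segments(shape, cell), points)
        dense = np.where(dense == -1, level, dense)
    return dense

# Toy example: a 64x64 image with four sparse points labeled 0 or 1.
points = [(10, 10, 0), (12, 50, 1), (50, 12, 0), (52, 54, 1)]
dense = fast_mss_sketch((64, 64), points)
print(np.unique(dense))  # every pixel ends up with a class label
```

The multilevel fallback is what turns a few points into a fully dense label map: fine levels keep labels local to the annotated regions, while coarse levels guarantee that no pixel is left unlabeled.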
Next, we demonstrated how ecologists could use Fast-MSS to augment their existing workflow. Using the Moorea Labeled Coral (MLC) dataset as our benchmark, we converted the sparse labels associated with each image into dense labels with Fast-MSS, then used those dense labels as training data for a deep learning semantic segmentation model. With this model, an ecologist could collect additional images from the same or similar habitats the model was trained on and use it to produce dense labels directly, without manually creating CPCe annotations.
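That train-on-dense-labels, predict-on-new-images loop might look like the following sketch. A per-pixel nearest-centroid color classifier stands in for the deep semantic segmentation network (the paper trains a deep model; this toy version only shows the shape of the workflow):

```python
import numpy as np

def fit_centroids(images, dense_labels):
    """'Train' on dense labels: mean color per class (stand-in for a deep model)."""
    classes = np.unique(np.concatenate([d.ravel() for d in dense_labels]))
    cents = {}
    for c in classes:
        pix = np.concatenate([img[lab == c] for img, lab in zip(images, dense_labels)])
        cents[c] = pix.mean(axis=0)
    return cents

def predict_dense(image, cents):
    """Produce a dense label map for a new image, with no manual points needed."""
    classes = sorted(cents)
    dists = np.stack([((image - cents[c]) ** 2).sum(axis=-1) for c in classes], axis=-1)
    return np.array(classes)[dists.argmin(axis=-1)]

# Toy survey image: bright "sand" background, dark "coral" square.
rng = np.random.default_rng(1)
img = rng.normal(0.8, 0.05, (32, 32, 3))
img[8:24, 8:24] = rng.normal(0.2, 0.05, (16, 16, 3))
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1  # dense labels, e.g. produced by Fast-MSS

cents = fit_centroids([img], [labels])

# A newly collected image from a similar habitat: labeled directly by the model.
new_img = rng.normal(0.8, 0.05, (32, 32, 3))
new_img[:16] = rng.normal(0.2, 0.05, (16, 32, 3))
pred = predict_dense(new_img, cents)
print(pred.mean())  # fraction of pixels predicted "coral"
```

The key point is that once the model is fit on Fast-MSS dense labels, new survey images get pixel-level labels in a single forward pass.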
In summary, this work:
- Showed that Fast-MSS is both faster and more accurate than the original implementation;
- Set baseline scores for semantic segmentation on the MLC dataset;
- Demonstrated that a deep learning model trained on dense labels created by Fast-MSS classifies accurately enough to be used in other ecological applications;
- Showed that training a patch-based image classifier (e.g., a convolutional neural network) on the sparse labels first, and then using it to add additional labels to each image, can improve both Fast-MSS' output and the deep learning model's accuracy.
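The last point can be sketched like this. A nearest-centroid classifier on mean patch color is a hypothetical stand-in for the CNN patch classifier; the extra points it labels would then be fed to Fast-MSS alongside the original sparse annotations:

```python
import numpy as np

def patch(img, r, c, size=7):
    """Square patch around a point, clipped at the image border."""
    h = size // 2
    return img[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1]

def fit_patch_classifier(img, points):
    """'Train' on the original sparse points: mean patch color per class
    (a stand-in for the paper's patch-based convolutional network)."""
    cents = {}
    for cls in {lab for _, _, lab in points}:
        feats = [patch(img, r, c).reshape(-1, 3).mean(axis=0)
                 for r, c, lab in points if lab == cls]
        cents[cls] = np.mean(feats, axis=0)
    return cents

def densify_points(img, cents, step=8):
    """Classify a regular grid of new points to augment the sparse labels."""
    extra = []
    for r in range(step // 2, img.shape[0], step):
        for c in range(step // 2, img.shape[1], step):
            feat = patch(img, r, c).reshape(-1, 3).mean(axis=0)
            lab = min(cents, key=lambda k: ((feat - cents[k]) ** 2).sum())
            extra.append((r, c, lab))
    return extra

# Toy image: dark "coral" square on a bright "sand" background, 4 sparse points.
rng = np.random.default_rng(2)
img = rng.normal(0.8, 0.05, (64, 64, 3))
img[16:48, 16:48] = rng.normal(0.2, 0.05, (32, 32, 3))
sparse = [(4, 4, 0), (60, 60, 0), (30, 30, 1), (34, 34, 1)]

cents = fit_patch_classifier(img, sparse)
augmented = sparse + densify_points(img, cents)
print(len(sparse), "->", len(augmented))  # many more points for Fast-MSS to propagate
```

With denser (if noisier) point coverage, each superpixel is more likely to contain at least one labeled point, which is why this step improves both the propagated labels and the downstream segmentation model.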