Ecosystem monitoring at EMSO sites by video imagery

Aim
To establish an operational, integrated service on the iMagine platform for the automatic processing of video imagery collected by cameras at EMSO underwater sites, identifying and further analysing images of interest for ecosystem monitoring.

Development actions during iMagine
Objective and challenge
The use case involves underwater video monitoring at three EMSO sites: EMSO-OBSEA, EMSO-Azores, and EMSO-SmartBay. The aim is to establish an integrated service on the iMagine platform for the automatic processing of video imagery, enabling the identification and analysis of relevant images for ecosystem monitoring.
At the EMSO-OBSEA site, a large volume of image data collected by an underwater camera observing various fish species remains unexploited. Applying AI tools to these images would extract valuable biological content, creating derived datasets from which marine scientists can draw ecological conclusions. Manual analysis of such an extensive dataset is time-consuming, while analysing only a subset would discard important information. Using the iMagine platform, a deep learning service will be trained and deployed to obtain species abundance data from existing and future images. These derived datasets will be crucial for studying species presence/absence over time and for relating changes in abundance to environmental parameters, providing insight into the impact of climate change on the local fish community.
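As an illustration of what such a derived dataset could look like, the sketch below compiles hypothetical per-image species counts into daily abundance and presence/absence time series and joins them with an environmental parameter. All file, column, and species names are illustrative assumptions, not the actual OBSEA data model.

```python
# Minimal sketch: compiling per-image species counts into abundance and
# presence/absence time series. Column names, species, and values are
# illustrative, not part of the actual EMSO-OBSEA data model.
import pandas as pd

# Hypothetical per-image predictions produced by the deep learning service:
# one row per (timestamp, species) with a detection count.
detections = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-05-01 10:00", "2023-05-01 10:00",
        "2023-05-01 10:30", "2023-05-02 10:00",
    ]),
    "species": ["Diplodus vulgaris", "Chromis chromis",
                "Diplodus vulgaris", "Chromis chromis"],
    "count": [3, 12, 1, 8],
})

# Daily abundance per species: sum of detections per day.
abundance = (
    detections
    .set_index("timestamp")
    .groupby("species")
    .resample("1D")["count"]
    .sum()
    .unstack(level="species", fill_value=0)
)

# Presence/absence is just a thresholded view of the same table.
presence = abundance > 0

# Environmental parameters (here a made-up temperature series) can be
# joined on the same daily index to relate abundance to the environment.
temperature = pd.Series([14.2, 14.6], name="temp_C",
                        index=pd.to_datetime(["2023-05-01", "2023-05-02"]))
print(abundance.join(temperature))
```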
For the EMSO-Azores site, imagery collected by the observatory is analysed through the Deep Sea Spy platform with the participation of citizen scientists. However, expert validation of the resulting annotations is currently manual and time-consuming. Expanding the pool of annotated and validated deep-sea images is essential for advancing marine science research. The iMagine AI platform will be used to develop and deploy AI models that automatically annotate and validate images, improving both the efficiency and the accuracy of the process.
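One common way to reduce the expert-validation workload in citizen-science projects is to keep only annotations on which several independent participants agree. The sketch below clusters hypothetical point annotations with DBSCAN to extract such consensus detections; it is an assumed pre-filtering step, not the actual Deep Sea Spy validation logic, and all coordinates and thresholds are made up.

```python
# Sketch of consensus filtering for citizen-science point annotations.
# The eps/min_samples values and the data are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical point annotations (x, y in pixels) from several users
# on the same image, all claiming to mark the same species.
points = np.array([
    [102, 210], [105, 214], [99, 208],   # three users agree on one animal
    [400, 120], [404, 118],              # two users agree on another
    [650, 500],                          # a single, unconfirmed click
])

# Group clicks that fall within ~15 px of each other; require at least
# two independent clicks for a detection to count as consensus.
labels = DBSCAN(eps=15, min_samples=2).fit_predict(points)

for label in set(labels) - {-1}:          # -1 marks unclustered noise
    cluster = points[labels == label]
    centroid = cluster.mean(axis=0)
    print(f"consensus detection at {centroid} from {len(cluster)} users")
```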
At the EMSO-SmartBay site, it is crucial to identify poor-quality video footage both in real time and within the Observatory Archive. Factors such as complete darkness, algal growth, reduced visibility caused by suspended particulate matter, and equipment failure can all impair the utility of the observatory footage. Manual inspection of the video archive is time-consuming, and detecting interesting observations or occurrences, such as "novelty" events, or counting prawn burrows in the field of view, is equally laborious. The iMagine AI platform can aid in developing and deploying a service that quickly detects such issues, efficiently flags and references interesting footage, and detects and enumerates prawns and prawn burrows.
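Some of these failure modes can already be caught with cheap frame statistics before any model is trained. The sketch below flags frames that are too dark (low mean intensity) or too blurred (low variance of the Laplacian, a standard focus measure); the thresholds and file name are illustrative guesses rather than tuned SmartBay values.

```python
# Baseline quality flags for a single video frame: darkness and blur.
# Thresholds are illustrative guesses, not tuned SmartBay values.
import cv2

def frame_quality_flags(frame_bgr, dark_thresh=20.0, blur_thresh=100.0):
    """Return quality flags for one BGR frame read with cv2."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    flags = {}
    # Near-black frames (lights off, camera failure) have low mean intensity.
    flags["too_dark"] = float(gray.mean()) < dark_thresh
    # Blurred or fouled frames have little high-frequency content; the
    # variance of the Laplacian is a standard cheap focus measure.
    flags["blurred"] = cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh
    return flags

cap = cv2.VideoCapture("smartbay_clip.mp4")  # hypothetical file name
ok, frame = cap.read()
if ok:
    print(frame_quality_flags(frame))
cap.release()
```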
By leveraging the capabilities of the iMagine platform and implementing AI models, the project aims to automate and enhance the analysis of underwater video imagery at the EMSO sites, facilitating scientific research and improving ecological understanding.
Development timeline
The development roadmap for EMSO-OBSEA involves creating a workflow that automatically processes underwater images in real time, extracting fish abundance and taxa information. The workflow consists of two steps: segmentation and classification. Segmentation selects the regions of interest where fish specimens are present; classification determines their taxa. The resulting abundance and taxa data will be compiled into time-series datasets for easier analysis by scientists. From month 6 to month 20, state-of-the-art segmentation and classification algorithms will be benchmarked and trained on the available dataset; adjustments to the dataset may be required for optimal model performance. To improve prediction accuracy, dataset shifts caused by ambient variability will be investigated. Once a final model is developed and deployed, legacy data will be ingested and the workflow put into production for real-time analysis of underwater images. The predictions will be used to extract information on the long-term biological rhythms of the fish community.
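The sketch below shows how such a two-step workflow might be wired together with off-the-shelf components. Both models are placeholders (a COCO-pretrained Mask R-CNN as the segmenter and an untrained ResNet-18 as the taxon classifier) and the taxa list is invented; in the real service both would be trained on OBSEA imagery.

```python
# Sketch of the two-step workflow: (1) segment regions containing fish,
# (2) classify each region's taxon. Both models are placeholders here.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models import resnet18
from torchvision.transforms.functional import to_tensor, resized_crop

TAXA = ["Diplodus vulgaris", "Chromis chromis", "other"]  # illustrative

segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = resnet18(num_classes=len(TAXA)).eval()  # untrained placeholder

@torch.no_grad()
def abundance_per_taxon(image, score_thresh=0.5):
    """image: PIL.Image -> dict mapping taxon name to count."""
    x = to_tensor(image)
    det = segmenter([x])[0]                 # boxes, scores, masks
    counts = {t: 0 for t in TAXA}
    for box, score in zip(det["boxes"], det["scores"]):
        if score < score_thresh:
            continue
        # Crop each detected region and resize it for the classifier.
        x1, y1, x2, y2 = box.int().tolist()
        crop = resized_crop(x, y1, x1, y2 - y1, x2 - x1, [224, 224])
        taxon = TAXA[classifier(crop.unsqueeze(0)).argmax().item()]
        counts[taxon] += 1
    return counts
```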
For the EMSO-Azores site, the roadmap focuses on creating a pipeline for the images annotated on deepseaspy.com. This involves developing software to transform the annotations into a format suitable for training AI segmentation models. Existing labelling tools will be tested and utilised, and software tools for analysing and validating the training and test datasets will be developed or adopted. Multiple segmentation models will be trained using augmentation techniques, and motion detection and video segmentation algorithms will be investigated for species identification.
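As a sketch of the annotation-transformation step, the code below rasterises polygon annotations into binary masks suitable for segmentation training. The JSON schema (field names, file name, image size) is an assumption for illustration, not the real Deep Sea Spy export format.

```python
# Sketch of converting exported annotations (assumed here to be JSON
# polygons; field names are guesses, not the real export schema) into
# binary masks usable for training segmentation models.
import json
import numpy as np
from PIL import Image, ImageDraw

def polygons_to_mask(annotation, width, height):
    """Rasterise all polygons of one image annotation into a 0/1 mask."""
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for region in annotation["regions"]:
        # Each region assumed as {"species": ..., "polygon": [[x, y], ...]}
        draw.polygon([tuple(p) for p in region["polygon"]], fill=1)
    return np.array(mask, dtype=np.uint8)

with open("deepseaspy_export.json") as f:   # hypothetical file name
    export = json.load(f)

for ann in export["annotations"]:
    mask = polygons_to_mask(ann, width=1920, height=1080)
    np.save(ann["image_id"] + "_mask.npy", mask)
```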
At the EMSO-SmartBay site, the roadmap begins with integrating the data into the platform and updating the workflows. Video data labelling will be performed to expand the training dataset, and various segmentation, object detection, and classification algorithms will be explored to identify poor-quality video footage. Dataset shift, in this case caused by video image quality deteriorating over time due to factors such as algae and dirt, will also be investigated. Once a suitable AI solution is found, it will be integrated into the SmartBay service, becoming available around months 20-22, after which feedback will be collected and the new functionality put to use.
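One simple way to monitor such drift is to compare the distribution of a frame statistic between a clean reference period and recent footage. The two-sample Kolmogorov-Smirnov test below is an illustrative choice, with random numbers standing in for real per-frame measurements.

```python
# Illustrative dataset-shift check: compare the distribution of a simple
# frame statistic (mean brightness) between a clean reference window and
# the most recent frames. A significant result suggests drift, e.g. from
# gradual biofouling of the lens.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Stand-ins for per-frame mean brightness values; in practice these would
# be computed from archived and live SmartBay frames.
reference = rng.normal(loc=90, scale=10, size=500)   # clean period
recent = rng.normal(loc=70, scale=12, size=500)      # possibly fouled

stat, p_value = ks_2samp(reference, recent)
if p_value < 0.01:
    print(f"distribution shift detected (KS={stat:.2f}); "
          "consider lens cleaning or model recalibration")
```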
Overall, the development roadmaps for the three sites involve training and deploying AI models, implementing data-processing pipelines, and utilising the iMagine platform to automate image analysis, improve data quality, and provide valuable insights for ecosystem monitoring and scientific research.
Involved Partners
- with departments LOV and IMEV, including platform PIQv operating under the EMBRC RI
- expert in AI processing
- expert in AI processing and data management
- co-worker