Automated video identification of marine species (AVIMS) - new application: report
A commissioned report on the development of a web-based computer application for machine learning-based (semi-)automated analysis of underwater video footage obtained during the monitoring of aquatic environments.
Executive summary
In this project we developed a web application entitled Automated Video Identification of Marine Species (AVIMS). The project was funded under Scottish Government Contract CASE/216380. The objective was to develop an automated video analysis capability with a user-friendly graphical interface that could be used by biologists in the Scottish Government's Marine Directorate and by non-specialist staff of stakeholders, who do not have computer science or coding expertise.
The Marine Directorate collects a large amount of underwater video for a number of different purposes. Analysis of these video data is time-consuming, often requires a skilled taxonomist and hence constitutes a significant draw on resources. The high cost of analysing this large volume of data often means that only a subset of the available video is fully analysed. Automated video analysis software would therefore be highly desirable, as it would reduce the cost of the analyses and allow all available data to be analysed. Given the steady and rapid improvements in sensor/camera technology and its decreasing cost, the amount of video data available is expected only to increase, making the current processing bottlenecks even more acute.
To work towards that goal, the Scottish Government funded an earlier piece of work in this area, Automated Identification of Fish and Other Aquatic Life in Underwater Video (Blowers et al. 2020), in which the authors reviewed current image and video analysis methods and how these can be applied to the different types of video footage and data extraction requirements used by the Marine Directorate. The authors also made recommendations for how video analysis could be automated using state-of-the-art, open-source machine learning (deep learning) algorithms. We have followed the recommendations of Blowers et al. (2020) closely, making a number of important refinements. Our machine learning solution has been integrated and deployed as a user-friendly web application, AVIMS.
AVIMS allows users without computer science or coding experience to create, train and run machine learning models with no need to interact with the underlying code. The application supports computer vision models that fall into a common framework: detecting individuals from a predefined set of marine species, tracking detected individuals across consecutive video frames, and counting the distinct individuals seen in the video for each species of interest (detect/track/count).
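This report does not prescribe a particular implementation, but the detect/track/count framework can be illustrated with a short sketch. The code below assumes a detector callable (standing in for a trained object-detection model) and uses a simple greedy intersection-over-union match between consecutive frames to link detections into tracks and count distinct individuals per species; the models and tracking logic actually used in AVIMS may differ.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

@dataclass
class Detection:
    species: str
    box: Box
    score: float

@dataclass
class Track:
    track_id: int
    species: str
    box: Box

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detect_track_count(frames: Iterable,
                       detector: Callable[[object], List[Detection]],
                       iou_threshold: float = 0.3) -> Dict[str, int]:
    """Detect individuals in each frame, link them across consecutive
    frames, and count distinct individuals per species."""
    live_tracks: List[Track] = []   # tracks carried over from the last frame
    counts: Dict[str, int] = {}     # species -> distinct individuals seen
    next_id = 0

    for frame in frames:
        matched: List[Track] = []
        for det in detector(frame):
            # Greedily match the detection to the best-overlapping live
            # track of the same species.
            best, best_iou = None, iou_threshold
            for track in live_tracks:
                if track.species == det.species:
                    overlap = iou(track.box, det.box)
                    if overlap > best_iou:
                        best, best_iou = track, overlap
            if best is None:
                # No sufficient overlap with an existing track: count a
                # new individual and start a new track for it.
                best = Track(next_id, det.species, det.box)
                next_id += 1
                counts[det.species] = counts.get(det.species, 0) + 1
            else:
                live_tracks.remove(best)   # each track matches at most once
                best.box = det.box
            matched.append(best)
        live_tracks = matched              # unmatched tracks are dropped
    return counts
```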
The workflow of our web-based application, which implements the detect/track/count framework above, allows users to: create new survey types; define a set of species of interest for each survey type; upload video and image data for training machine learning models; annotate the uploaded video and image data with objects of interest; create datasets of annotated data for training machine learning models; train machine learning models; upload new videos for analysis by the models created in the previous steps of the workflow; and view or download the analysis results.
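As an illustration of the annotation step, a single frame-level record might carry the survey type, the source video, the frame index and a bounding box for each labelled individual. The structure below is purely illustrative, with hypothetical file and species names; the format AVIMS actually uses to store annotations is not specified in this summary.

```python
# Purely illustrative: one plausible shape for a frame-level annotation
# produced in the annotation step of the workflow.
annotation = {
    "survey_type": "in-river fish counter",
    "video": "counter_2023_06_12.mp4",   # hypothetical file name
    "frame": 1482,
    "objects": [
        {"species": "Atlantic salmon", "box": [312, 140, 468, 221]},
        {"species": "Atlantic salmon", "box": [90, 305, 240, 380]},
    ],
}
```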
The web application uses distributed computing to perform the required tasks. The computationally intensive tasks, which include training machine learning models and analysing new videos, are sent to a separate machine specifically equipped for this type of computation, where they wait their turn in a queue.
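The queueing arrangement described above follows a familiar producer/consumer pattern: the web application submits a job and returns immediately, while a worker on the separate, suitably equipped machine processes jobs in arrival order. The sketch below shows that pattern using Python's standard queue and threading modules; the actual queueing technology used by AVIMS is not named in this summary.

```python
import queue
import threading
import time

jobs = queue.Queue()   # shared queue of pending heavy jobs

def worker():
    """Process queued jobs one at a time, in arrival order."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel value: shut the worker down
            break
        kind, payload = job
        print(f"running {kind} job on {payload}")
        time.sleep(0.1)          # stand-in for model training / video analysis
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The web front end would enqueue jobs like these and return to the user
# immediately, leaving the worker to process them in turn.
jobs.put(("train", "dataset: smolt trawl annotations"))
jobs.put(("analyse", "video: seabed_transect_07.mp4"))
jobs.join()                      # wait until the queue has been drained
jobs.put(None)                   # stop the worker
```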
The web application has been tested by the development team and Marine Directorate scientists on several survey types, including overhead in-river fish counters, salmon smolts entrained by a trawl, and videos of the seabed. The initial machine learning models created in the web application have been shown to perform the required tasks. That said, for the system to achieve the level of accuracy expected of a practical application, the currently small amount of annotated video data will need to be expanded so that the machine learning models can better learn the appearance of the various marine species of interest. This can be done from within the delivered AVIMS application.
Contact
Email: craig.robinson@gov.scot