In recent years, an increasing number of cabled Fixed Underwater Observatories (FUOs) have been deployed, many of them equipped with digital cameras that record high-resolution image time series over extended periods. Extracting quantitative information about resident species from these images is necessary to link the image time series to data from other sensors, but manual analysis constitutes a bottleneck that calls for computational support. Since a priori knowledge about the objects of interest in the images is almost never available, computational methods are required that do not depend on the prior availability of a large training set of annotated images.
In this paper, we propose ALMI, a new strategy for collecting and using training data for machine learning-based observatory image interpretation much more efficiently. The method combines the training efficiency of an active learning procedure with the representational power of deep learning feature representations.
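To illustrate the general principle, the following Python sketch shows a generic uncertainty-based active learning loop over deep feature vectors. It is not the ALMI implementation: the synthetic features, the logistic-regression classifier, and the least-confidence query strategy are illustrative assumptions standing in for the components described in the paper.

```python
# Generic active learning loop: a classifier is retrained while an
# "oracle" (the annotator) labels one queried sample per iteration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for deep feature vectors of unlabeled observatory image patches
# (in practice these would come from a pretrained network).
X = rng.normal(size=(1000, 128))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden ground truth (oracle)

labeled = list(rng.choice(len(X), size=10, replace=False))  # small seed set
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

clf = LogisticRegression(max_iter=1000)
for _ in range(150):  # one annotation query per iteration
    clf.fit(X[labeled], y[labeled])
    # Least-confidence sampling: query the sample the model is least sure about.
    proba = clf.predict_proba(X[unlabeled])
    query = unlabeled[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)    # oracle provides the label for the queried sample
    unlabeled.remove(query)

print("accuracy:", clf.score(X, y))
```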
The method is tested on two highly disparate data sets.
In our experiments, we show that ALMI achieves a classification accuracy of A > 90% with fewer than N = 258 training samples on one data set and A > 80% after N = 150 iterations (i.e., training samples) on the other, outperforming the reference method in both accuracy and the amount of training data required.