Manual annotation is a tedious and time-consuming process that is usually
required to generate training corpora for machine learning.
The distant supervision paradigm aims at automatically generating such corpora
from structured data. The active learning paradigm aims at reducing the effort
needed for manual annotation. We combine active learning and distant supervision
for the use case of relation extraction, increasing the quality of the annotations
in order to limit the amount of automatically generated data that is needed.
The main idea behind using distantly labeled corpora is that they can simplify and
speed up the generation of models, e.g., for extracting relationships between entities
of interest; however, the selection of instances is typically performed randomly.
We propose the use of query-by-committee to select instances instead. This approach
is similar to the active learning paradigm, with the difference that unlabeled
instances are weakly annotated rather than labeled by human experts. Strategies
based on low or high committee confidence are compared to random selection. Experiments on
publicly available data sets for the detection of protein-protein interactions show a
statistically significant improvement in F1 measure when instances with high
committee agreement are added.
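
The following is a minimal sketch of the agreement-based selection idea described above, assuming a scikit-learn-style committee of classifiers; the names (select_high_agreement, committee_agreement, the agreement threshold) are illustrative assumptions and not the authors' implementation.

    # Sketch: keep distantly labeled instances that a classifier committee agrees on.
    # Hypothetical helper names; threshold value is an assumption for illustration.
    from collections import Counter

    def committee_agreement(votes):
        """Return the majority label and the fraction of members voting for it."""
        counts = Counter(votes)
        label, top = counts.most_common(1)[0]
        return label, top / len(votes)

    def select_high_agreement(committee, pool, threshold=0.8):
        """Select (features, weak_label) pairs with high committee agreement."""
        selected = []
        for features, weak_label in pool:
            votes = [clf.predict([features])[0] for clf in committee]
            majority, agreement = committee_agreement(votes)
            # High agreement with the weak (distant) label suggests the
            # automatic annotation is reliable enough to add to the corpus.
            if agreement >= threshold and majority == weak_label:
                selected.append((features, weak_label))
        return selected

A low-agreement strategy, as compared in the experiments, would simply invert the threshold condition; random selection corresponds to sampling the pool without consulting the committee.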