Brain-Computer Interfaces (BCI) are tools that open a new channel of communication between humans and machines.
The majority of human input devices for computers require properly functioning primary senses and motor abilities
such as grasping, moving and visual perception. In the case of severe motor disabilities, like amyotrophic
lateral sclerosis (ALS) or spinal cord injury (SCI), these abilities are impaired or lost entirely,
so conventional input devices become unusable and a BCI can provide an alternative means of communication and control.
The most common method of measuring brain activity for BCI purposes is electroencephalography (EEG),
owing to its relative cost effectiveness and ease of use. Alternative ways of recording brain signals exist,
but they either require invasive procedures, i.e. opening the skull, or depend on very costly and bulky equipment (MEG, fMRI),
which renders them unsuitable for home use.
One of the most popular brain-controlled input methods is the P300-Speller paradigm,
which gives the user control over a virtual keyboard to enter text. The term P300 refers to a specific EEG component
that can be measured whenever a rare, task-relevant stimulus is interspersed with many non-relevant stimuli. The method requires
control over the visual presentation of the stimuli and therefore some sort of computer-controlled display.
The recognition rates for this type of BCI are already quite high at roughly 80-90% accuracy,
but the remaining errors make it unsuitable for critical applications like issuing movement commands to a wheelchair
in a highly populated environment, where a command to stop the wheelchair might be recognized too late.
Furthermore, the standard stimulus matrix offers no way to react to external influences like obstacles or to select physical objects
in a scene, which prevents the user from interacting with a dynamic environment.
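The oddball principle behind the P300 can be illustrated with a short, self-contained simulation. The following Python sketch uses purely synthetic data; the sampling rate, stimulus timing and component amplitude are illustrative assumptions, not values from this work. It injects a positive deflection around 300 ms after each rare target stimulus and shows how averaging the epochs separates targets from non-targets.

# Minimal sketch of the P300 oddball effect on synthetic data; all
# parameters (sampling rate, timing, amplitude) are illustrative
# assumptions, not values measured in this work.
import numpy as np

rng = np.random.default_rng(0)
srate = 250                     # sampling rate in Hz (assumed)
n_stimuli = 300                 # total stimulus presentations
target_prob = 0.15              # rare task-relevant ("oddball") stimuli

# One stimulus per second for simplicity; real spellers flash faster.
onsets = np.arange(n_stimuli) * srate
is_target = rng.random(n_stimuli) < target_prob

# Synthetic EEG: Gaussian noise plus a positive deflection ~300 ms
# after each target stimulus, i.e. a toy P300 component.
eeg = rng.normal(0.0, 1.0, n_stimuli * srate)
t = np.arange(srate) / srate
p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
for onset in onsets[is_target]:
    eeg[onset:onset + srate] += p300

# Cut a 1 s epoch after every stimulus and average per class:
# averaging cancels the noise, so the rare targets show a clear
# peak near 300 ms while the frequent non-targets stay flat.
epochs = np.stack([eeg[o:o + srate] for o in onsets])
target_erp = epochs[is_target].mean(axis=0)
nontarget_erp = epochs[~is_target].mean(axis=0)

peak_ms = 1000 * t[np.argmax(target_erp)]
print(f"target ERP peaks at ~{peak_ms:.0f} ms "
      f"({target_erp.max():.2f} vs {nontarget_erp.max():.2f} for non-targets)")

In an actual speller, it is exactly this class-wise difference in the averaged epochs that a machine learning classifier exploits to infer which stimulus the user attended to.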
This work aims to fuse state-of-the-art BCI techniques into a single system that controls an artificial actuator,
such as a robot arm, and uses it to manipulate the physical environment. To achieve this goal, techniques originating
from different fields of research, including augmented reality, computer vision, psychology, machine learning and data mining,
have to be combined into a robust and intuitive-to-use input device.