Reich, Christian: Learning machine monitoring models from sparse and noisy sensor data annotations. 2020
Contents
Abstract
Zusammenfassung
Acknowledgments
Contents
Notation
1 Introduction
1.1 Goal and Focus of Thesis
1.2 Challenges of Thesis
1.3 Contributions
1.4 Summary of Contributions and Thesis Outline
2 Theoretical Background and Related Work
2.1 Machine Health Monitoring
2.1.1 Tool Condition Monitoring
2.1.2 Imbalance Detection
2.2 Signal Segmentation
2.2.1 Piecewise Linear Approximation
2.2.2 Clustering-based Approaches
2.2.3 Changepoint Approaches
2.3 Modeling Non-Stationary Frequency Components
2.3.1 Parameter Estimation
2.3.2 Frequency Component Tracking
2.4 Anomaly Detection
2.4.1 Shallow Models
2.4.2 Deep Models
2.5 Annotations by Human Users
2.6 Weakly Supervised Learning
2.7 Summary
I Task-Specific Machine Monitoring Features and Models
3 Signal Segmentation
3.1 Motivation
3.2 Methods
3.2.1 Modeling Recurrent Signal Segments with Gaussian Mixture Models
3.2.2 Bayesian Estimation of Recurrent Signal Segments
3.3 Experiments on Signal Segmentation
3.3.1 Data for Signal Segmentation
3.3.2 Signal Segmentation by Clustering-based Methods
3.3.3 Quality and Cost of Signal Segmentation
3.3.4 Signal Segmentation by Bayesian Online Changepoint Detection and Extensions
3.3.5 Selected Predictive Tasks
3.4 Conclusions
3.5 Related Publications
4 Modeling Non-Stationary Discrete Frequency Components
4.1 Motivation
4.2 Methods
4.2.1 Estimation of Signal Model Parameters
4.2.2 Discrete Frequency Component (DFC) Tracking
4.3 Results
4.3.1 Noise Variations for Artificial Data
4.3.2 DFC Tracking and Assignment for Measured Sensor Data
4.4 Conclusions
II Low-Cost Annotation and Robust Detection of Generic Machine Tool Anomalies
5 User Study: Quality of Live Annotations and Influencing Factors
5.1 Motivation
5.2 Measurement Setup
5.3 Description of the Visualization and Labeling Prototype
5.3.1 Design Process of the Labeling Prototype
5.3.2 Functionality of the Labeling Prototype
5.4 Assumptions on Evaluation Measures
5.4.1 Assumptions on Measures for Quality of Label Feedback
5.4.2 Assumptions on Measures for Annotator Motivation
5.5 Experiments
5.5.1 Selection of a Generic Anomaly Detection Algorithm
5.5.2 Evaluation of Label Feedback
5.6 Conclusions
5.7 Related Publications
6 Neural Anomaly Detection
6.1 Motivation
6.2 Methods
6.2.1 Loss Functions
6.2.2 Network Layers
6.2.3 Training and Hyperparameter Optimization
6.2.4 Label Generation via Probabilistic Graphical Models (PGMs)
6.3 Results
6.3.1 Experimental Setup
6.3.2 Anomaly Detection with Unsupervised Models
6.3.3 Utilizing Labels for Anomaly Detection Model Extensions
6.3.4 Anomaly Propositions with Neural Anomaly Detection Models
6.4 Conclusions
7 Summary
7.1 Summary of Contributions
7.2 Conclusions and Outlook
A Appendix for User Study (Chapter 5)
A.1 Original Version of Labeling Prototype Screens
B Appendix for Neural Anomaly Detection (Chapter 6)
B.1 Encoder Networks
B.1.1 Multilayer Perceptron (MLP) Encoder
B.1.2 Fully Convolutional Network (FCN) Encoder
B.1.3 Convolutional Encoder
B.1.4 Temporal Convolutional Network (TCN) Encoder
B.2 Decoder Networks
B.2.1 Multilayer Perceptron (MLP) Decoder
B.2.2 Convolutional Decoder
B.3 Variational Autoencoder (VAE) Projection Network
B.4 Training of Neural Anomaly Detection Models
B.5 Optimization of Hyperparameters
C List of Figures
D List of Tables
Bibliography