Wienke, Johannes: Framework-level resource awareness in robotics and intelligent systems. Improving dependability by exploiting knowledge about system resources. 2018
Abstract
Acknowledgments
Contents
List of figures
List of tables
List of code listings
Research topic
1 Introduction
2 Fundamental concepts and terminology
2.1 Resources and related concepts
2.1.1 Resource categorization schemes
2.1.2 Metrics, KPIs, and performance counters
2.1.3 A conceptual model of system resources
2.2 Dependable computing and FD*
2.2.1 Dependability
2.2.2 Threats to dependability
2.2.3 Means of dependability
2.2.3.1 Unified terminology
2.2.4 Dependability and performance
3 A survey on bugs in robotics systems
3.1 Tool usage
3.2 Bugs and their origins
3.3 Performance bugs
3.4 Bug examples
3.5 Summary
3.6 Threats to validity
4 A concept of resource awareness
4.1 Resource awareness in computing systems
4.1.1 Server infrastructure operation
4.1.2 Cloud computing
4.1.3 Model-based performance prediction
4.2 Resource awareness in robotics
4.2.1 Space robotics
4.2.2 Cloud robotics
4.2.3 Resource-aware algorithms
4.2.4 Resource-aware planning and execution
4.2.5 Infrastructure monitoring of robotics systems
4.2.6 Model-driven approaches
4.3 Summary
Technological foundation
5 Component-based robotics systems
5.1 Component-based software engineering
5.2 CBSE and distributed systems
5.3 CBSE in robotics
5.4 Patterns in component-based robotics systems
5.5 Summary
6 Middleware foundation: RSB
6.1 Architecture
6.1.1 Event model
6.1.2 Naming model
6.1.3 Notification model
6.1.4 Time model
6.1.5 Observation model
6.1.6 Extension points
6.2 Introspection
6.3 Domain data types: RST
6.4 Tool support
6.5 Interoperability with other middlewares
6.6 Applications
6.7 Summary
7 A holistic dataset creation process
7.1 Challenges in creating datasets
7.2 Description of the holistic process
7.3 Realization based on RSB
7.3.1 Data sources
7.3.2 Calibration
7.3.3 Unification
7.3.4 View generation and annotation
7.4 Summary
8 System metric collection
8.1 Available system metric sources
8.2 Resource acquisition tools
8.3 Implementation
8.3.1 Host collection
8.3.2 Processes collection
8.3.3 Subprocess handling
8.3.4 Data representation
8.3.5 System integration
8.4 Summary
Developer perspective
9 Runtime resource introspection
9.1 Available tools
9.2 Resource utilization dashboard implementation
9.2.1 Time series database adapter
9.3 Dashboard design
9.4 Evaluation
9.4.1 Qualitative evidence
9.4.2 Quantitative evaluation
9.4.2.1 Dashboard usage
9.4.2.2 Usefulness for debugging
9.5 Summary
10 Systematic resource utilization testing
10.1 Related work
10.2 Performance testing framework concept
10.3 Realization
10.3.1 Load generation
10.3.1.1 The action tree
10.3.1.2 Parameters
10.3.2 Environment setup
10.3.3 Test execution
10.3.3.1 Orchestration
10.3.3.2 Data acquisition & recording
10.3.4 Test analysis
10.3.4.1 Data preparation
10.3.4.2 Manual inspection
10.3.4.3 Automatic regression detection
10.3.5 Automation
10.4 Evaluation
10.5 Summary
11 Model-based performance testing
11.1 Related work
11.2 Language design
11.2.1 Metamodel
11.2.1.1 Actions
11.2.1.2 Data generation
11.2.1.3 Parameter specification
11.2.2 Editors
11.2.3 Code generation
11.3 Notable language features
11.3.1 Inline data generation
11.3.2 Type safety for embedded custom code
11.3.3 Expressive custom code via embedding
11.4 Evaluation
11.5 Summary
Autonomy perspective
12 A dataset for performance bug research
12.1 Recording method
12.2 Included performance bugs
12.2.1 Algorithms & logic
12.2.2 Resource leaks
12.2.3 Skippable computation
12.2.4 Configuration
12.2.5 Threading
12.2.6 Inter-process communication
12.3 Automatic fault scheduling
12.4 Summary
13 Runtime resource utilization prediction
13.1 Feature generation
13.1.1 Accumulated event window features
13.1.2 Adding previous system metrics
13.1.3 Baseline: system metrics
13.1.4 Preprocessing
13.2 Model learning
13.3 Evaluation
13.3.1 Results on the ToBi dataset
13.3.2 Influences of the component behavior
13.4 Learning from performance tests
13.4.1 Evaluation
13.4.2 Influences of the test structure
13.5 Related work
13.6 Summary
14 Runtime performance degradation detection
14.1 Related approaches
14.2 Residual-based performance degradation detection
14.3 Evaluation
14.3.1 Results on the ToBi dataset
14.3.2 Influence of component behavior
14.4 Summary
Perspectives
15 Conclusion
16 Outlook
Appendix
A Survey: failures in robotics systems
A.1 Introduction
A.2 Monitoring Tools
A.2.1 How often do you use the following kinds of tools to monitor the operation of running systems?
A.2.2 Please name the concrete tools that you use for monitoring running systems.
A.3 Debugging Tools
A.3.1 How often do you use the following tools for debugging?
A.3.2 Please name the concrete tools that you use for debugging.
A.4 General Failure Assessment
A.4.1 Averaging over the systems you have been working with, what do you think is the mean time between failures for these systems?
A.4.2 Please indicate how often the following items were the root cause for system failures that you know about.
A.4.3 Which other classes of root causes for failures did you observe?
A.5 Resource-Related Bugs
A.5.1 How many of the bugs you have observed or know about had an impact on computational resources, e.g. by consuming more or less of these resources than expected?
A.6 Impact on Computational Resources
A.6.1 Please indicate how often the following computational resources were affected by bugs you have observed.
A.6.2 If there are other computational resources that have been affected by bugs, please name these.
A.7 Performance Bugs
A.7.1 Please rate how often the following items were the root causes for performance bugs you have observed.
A.8 Case Studies
A.8.1 Thinking about the systems you have worked with so far, is there a bug that you remember which happened several times or which is representative of a class of comparable bugs?
A.9 Case Study: Representative Bug
A.9.1 How was the representative bug noticed?
A.9.2 What was the root cause for the bug?
A.9.3 Which steps were necessary to analyze and debug the problem?
A.9.4 Which computational resources were affected by the bug?
A.10 Case Studies
A.10.1 Thinking about the systems you have worked with so far, is there a bug that you remember which was particularly interesting for you?
A.11 Case Study: Interesting Bug
A.11.1 How was the interesting bug noticed?
A.11.2 What was the root cause for the bug?
A.11.3 Which steps were necessary to analyze and debug the problem?
A.11.4 Which computational resources were affected by the bug?
A.12 Personal Information
A.12.1 In which context do you develop robotics or intelligent systems?
A.12.2 How many years of experience in robotics and intelligent systems development do you have?
A.12.3 How much of your time do you spend on developing in the following domains?
A.13 Final remarks
B Failure survey results
B.1 Used monitoring tools
B.2 Used debugging tools
B.3 Summarization of free form bug origins
B.4 Summarization of other resources affected by bugs
B.5 Representative bugs
B.5.1 Representative bug 8
B.5.2 Representative bug 10
B.5.3 Representative bug 14
B.5.4 Representative bug 21
B.5.5 Representative bug 26
B.5.6 Representative bug 30
B.5.7 Representative bug 41
B.5.8 Representative bug 42
B.5.9 Representative bug 46
B.5.10 Representative bug 60
B.5.11 Representative bug 69
B.5.12 Representative bug 70
B.5.13 Representative bug 76
B.5.14 Representative bug 81
B.5.15 Representative bug 96
B.5.16 Representative bug 128
B.5.17 Representative bug 135
B.5.18 Representative bug 136
B.5.19 Representative bug 156
B.5.20 Representative bug 190
B.5.21 Representative bug 191
B.6 Interesting bugs
B.6.1 Interesting bug 5
B.6.2 Interesting bug 21
B.6.3 Interesting bug 32
B.6.4 Interesting bug 46
B.6.5 Interesting bug 60
B.6.6 Interesting bug 69
B.6.7 Interesting bug 76
B.6.8 Interesting bug 83
B.6.9 Interesting bug 133
B.6.10 Interesting bug 149
B.6.11 Interesting bug 150
B.6.12 Interesting bug 153
B.6.13 Interesting bug 156
B.6.14 Interesting bug 162
B.7 Collected system metrics
B.7.1 Host system metrics
B.7.1.1 Memory
B.7.1.2 Swap
B.7.1.3 CPU
B.7.1.4 Disk
B.7.1.5 Network
B.7.1.6 Users
B.7.1.7 Processes
B.7.2 Process metrics
B.7.2.1 Source proc/stat
B.7.2.2 Source proc/io
B.7.2.3 Source proc/fd
C Survey: dashboard evaluation
C.1 Introduction
C.2 General
C.2.1 Please rate how often you consult the monitoring dashboard in different situations.
C.2.2 How much insight do you gain into the consumption and availability of computational resources (like CPU, I/O or memory) when using the dashboard?
C.2.3 Do you think you have a better understanding of the use of computational resources in the system as a result of the dashboard?
C.2.4 For the different kinds of computational resources, how much did the dashboard improve your understanding of the consumption of these resources?
C.2.5 Please describe briefly in which situation you find the dashboard most valuable.
C.3 Debugging
C.3.1 How often are issues that you observe in the system visible in the dashboard?
C.3.2 Does the dashboard help to isolate the origin of bugs?
C.3.3 Did you find bugs through the dashboard that you otherwise wouldn't have noticed at all, or only much later?
C.3.4 Please briefly describe the bugs that you have found.
C.4 Tools
C.4.1 Which tools do / did you use apart from the dashboard to understand resource utilization?
C.4.2 Did the dashboard reduce the use of other tools for the purpose of understanding resource utilization?
C.5 End
C.5.1 In case you have further comments or ideas regarding the performance dashboard, please indicate them here.
C.6 Final remarks
D Dashboard survey results
D.1 Found bugs
E ToBi dataset details
E.1 Included components
E.2 Relation of bugs to components
Acronyms
Glossary
Bibliography
Own publications
General
Software packages
Declaration
Colophon