Elbrechter, Christof: Towards Anthropomorphic Robotic Paper Manipulation. 2020
Contents
Introduction
Related Work
Perception
Modeling
Robot Control
Two Exemplary Projects
Contribution of this Thesis
Outline
Rich Research Challenge of Paper Manipulation
Hardware and Software Prerequisites
The Bielefeld Curious Robot Setup
The Image Component Library (ICL) for Visual Perception
Shifting Paper as an Entry Point
Perception, Modeling and Robot Control
Results
Discussion
A New Image Processing Library
Requirements
Ease of Use
Speed
Functional Scope
Alternative Computer Vision Libraries
OpenCV
Intel IPP
Halcon
Matlab Image Processing Toolbox
Less Common Libraries
Point Cloud Library (PCL)
Comparison
The Image Component Library (ICL)
Design Principles
ICL Modules
Documentation
Positioning ICL in the Landscape of Vision Libraries
Important Tools for this Work
Easy to Use Core Functionality
Grabber Framework for Dynamic Image Source Selection
2D and 3D Visualization
Marker Detection Toolbox
Soft- and Rigid-Body Physics Module
Discussion and Next Steps
Picking up Paper
Related Work
Perception using Fiducial Markers
Modeling Paper
Robot Control
Perception
Marker-based Detection of a Deformed Sheet of Paper
3D Key-Point Estimation
Modeling
Prerequisites
The Simple Geometric Mathematical Model
Physics-Based Modeling
Evaluation
Fiducial Marker Detection Accuracy
Qualitative Comparison of Modeling Performance
Quantitative Evaluation of the Mean Modeling Error
Distance Preservation Error
Conclusion
Robot Control
Vision- and Robot System
Picking up Paper With the Robot
Discussion
Visual Detection
Physics-based Modeling
Robot Control
Bending and Folding
Related Work
Visual Detection
Modeling Foldable Objects
Robot Control
Planning Folding Sequences
Perception
Detecting BCH Markers
Paper Layout
Modeling
Simulation of Folds
A Generalized Model Control Law
Evaluation
BCH-Code-based Markers
Detecting and Modeling Paper with Creases
Robot Control
Updated Vision and Robot Setup
Registration of Reference Objects on the Robot Server
Closed Loop Feedback Controllers
Folding Paper With the Robot
Discussion
Visual Detection
Modeling
Robot Control
Advanced Aspects
A Generalized Paper Model
Constraints
Folds
Moving the Model
Kinect-based Paper Detection
A Kinect-based Prototype for Tracking Paper
Strengths, Weaknesses and Heuristic Improvements
Folding the Paper in Half
Discussion
Supplementing Point Clouds with 2D SURF Features
Extending the ICP Pipeline with SURF Feature Detection
Qualitative Evaluation of Human Folding Sequences
Tracking Folding of Common Textured Paper
Discussion
Automatic Fold Detection and Optimization
A Prototype System
Fold Onset Detection
Fold Geometry Estimation
Qualitative Evaluation
Discussion
Robotic Manipulation of Paper from a System Perspective
Bootstrapping a Bottom-Up Approach
An Extendable Set of Basic Action Primitives
Primitive Sequencing, Planning and Learning
Using our Primitives to Manipulate other Deformable Objects
Discussion
Generalization to 1D and 3D Deformable Objects
Conclusion
Summary & Discussion
Outlook & Future Work
Bibliography