Geiping, Jonas: Modern optimization techniques in computer vision: from variational models to machine learning security. 2021
Zusammenfassung (Summary)
Abstract
Acknowledgements
Contents
Introduction
Fast Convex Relaxations using Graph Discretizations
  Introduction
  Related Work
  Graph Discretizations for Convex Relaxations
  Preliminaries
  Graph Discretization
  Numerical Evaluation
  Segmentation
  Stereo Matching
  Conclusions
  Proof of Proposition 1
  Algorithmic Details
  Implementation
  Experimental Setup
  Superpixel-Sublabel Stereo Lifting
  Further plots
Composite Optimization by Nonconvex Majorization-Minimization
  Introduction
  Related Work
  Organization of this work
  The General Principle
  The Algorithm
  Special Cases
  Proximity relative to the inner function
  Algorithm Discussion and Convergence
  Basic Properties
  Descent Properties
  Convergence Properties
  Global Convergence of the inner sequence
  Implementation Details
  Modeling
  Choices for the Bregman Distance
  An example of a non-separable, solvable subproblem
  Inertia
  Experimental results
  Synthetic experiments
  Time-of-Flight Depth Reconstruction
  Conclusions
  Reformulation as continuous Majorizer
  Details regarding Grid Search
Parametric Majorization for Data-Driven Energy Minimization Methods
  Introduction
  Related Work
  Bi-Level Learning
  Majorization of Bi-level Problems
  Single-Level Majorizers
  Intermission: One-Dimensional Example
  Iterative Majorizers
  Examples
  Computed Tomography
  Variational Segmentation
  Analysis Operator Models
  Conclusions
  Convex Analysis in Section 3
  Details for Derivation of (4.11) to (4.12)
  Details for Derivation of (4.14) to (4.15)
  Proof of Proposition 2
  Derivation of the surrogate functions for the example in 4.3.3
  Proof of Proposition 4
  Experimental Setup
  CT - Additional Details
  Segmentation - Additional Details
  Analysis Operators - Additional Details
  Extended Overview of Related Work
  Analysis Operator Learning - Additional Figures
  Derivation from Support Vector Machine Principles
  On Hessian Inversion
  Generalization of the Iterative Surrogate
Inverting Gradients - How easy is it to break privacy in federated learning?
  Introduction
  Related Work
  Theoretical Analysis: Recovering Images from their Gradients
  A Numerical Reconstruction Method
  Single Image Reconstruction from a Single Gradient
  Distributed Learning with Federated Averaging and Multiple Images
  Conclusions
  Broader Impact - Federated Learning does not guarantee privacy
  Variations of the threat model
  Dishonest Architectures
  Dishonest Parameter Vectors
  Experimental Details
  Hyperparameter Settings
  Settings for the experiments in Sec. 5.5
  Settings for the experiments in Sec. 5.6
  Proofs for Sec. 5.2
  Additional Examples
  Additional CIFAR-10 examples
  Visualization of experiments in Sec. 5.5
  More ImageNet examples for Sec. 5.5
  Multi-Image Recovery of Sec. 5.6
  General case of Sec. 5.6
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
  Introduction
  Related Work
  Efficient Poison Brewing
  Threat Model
  Motivation
  The Central Mechanism: Gradient Alignment
  Making attacks that transfer and succeed "in the wild"
  Theoretical Analysis
  Experimental Evaluation
  Evaluations on CIFAR-10
  Poisoning ImageNet models
  Deficiencies of Defense Strategies
  Conclusion
  Remarks
  Experimental Setup
  Hardware
  Models
  Cloud AutoML Setup
  Proof of Proposition 6.1
  Poisoned Datasets
  Visualizations
  Additional Experiments
  Full-scale MetaPoison Comparisons on CIFAR-10
  Deficiencies of Filtering Defenses
  Details: Defense by Differential Privacy
  Details: Gradient Alignment Visualization
  Ablation Studies - Reduced Brewing/Victim Training Data
  Ablation Studies - Method
  Transfer Experiments
  Multi-Target Experiments
Conclusions
References
Index
Lists of Figures and Tables