In this chapter, we describe an effort to reproduce the main results of the paper “Joint, Incremental Disfluency Detection and Utterance Segmentation from Speech” [1], published in the proceedings of the 2017 conference of the European Chapter of the Association for Computational Linguistics (EACL). The paper addresses the joint task of disfluency detection and utterance segmentation and proposes a simple deep learning system that processes dialogue transcriptions and Automatic Speech Recognition (ASR) output. For this purpose, the Dialogue Systems Group (DSG) at Bielefeld University developed a library built on a data model for live ASR data that combines timing and textual information. The paper uses a refined, openly available text corpus to demonstrate that the system can simultaneously detect disfluencies and segment individual utterances, for use in conversational systems and similar speech-related tasks. The code and data for this reproducibility experiment are available at https://gitlab.ub.uni-bielefeld.de/conquaire/deep_disfluency.
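To make the idea of a data model that combines timing and textual information concrete, the sketch below shows one plausible shape for a live ASR word increment. All names and values here are illustrative assumptions for exposition, not the actual API of the deep_disfluency library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WordHypothesis:
    """One incremental ASR word hypothesis, pairing text with timing.

    Hypothetical structure for illustration only; the real library's
    data model may differ.
    """
    word: str
    start_time: float  # seconds from the start of the audio stream
    end_time: float    # seconds from the start of the audio stream

    def duration(self) -> float:
        return self.end_time - self.start_time

# A live transcript can then be represented as a growing list of
# such hypotheses, appended as the recognizer emits new words:
transcript = [
    WordHypothesis("i", 0.00, 0.15),
    WordHypothesis("uh", 0.20, 0.45),   # a filled pause, a typical disfluency
    WordHypothesis("i", 0.55, 0.70),
    WordHypothesis("mean", 0.70, 0.95),
]
```

A joint disfluency/segmentation tagger would consume such increments one at a time, which is what makes the approach usable in live conversational systems rather than only on finished transcripts.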