Rapid Extraction of Respiratory Waveforms from Photoplethysmography: A Deep Encoder Approach

Harry J. Davies, Danilo P. Mandic
Dec 2022
Much of the information of breathing is contained within the photoplethysmography (PPG) signal, through changes in venous blood flow, heart rate and stroke volume. We aim to leverage this fact by employing a novel deep learning framework based on a repurposed convolutional autoencoder. Our model aims to encode all of the relevant respiratory information contained within the photoplethysmography waveform, and to decode it into a waveform that is similar to a gold-standard respiratory reference. The model is evaluated on two photoplethysmography data sets, namely Capnobase and BIDMC. We show that the model is capable of producing respiratory waveforms that approach the gold standard, while in turn producing state-of-the-art respiratory rate estimates. We also show that, when it comes to capturing more advanced respiratory waveform characteristics such as duty cycle, our model is for the most part unsuccessful. A suggested reason for this, in light of a previous study on in-ear PPG, is that the respiratory variations in finger PPG are far weaker than those at other recording locations. Importantly, our model can perform these waveform estimates in a fraction of a millisecond, giving it the capacity to produce over 6 hours of respiratory waveforms in a single second. Moreover, we attempt to interpret the behaviour of the kernel weights within the model, showing that in part our model intuitively selects different breathing frequencies. The model proposed in this work could help to improve the usefulness of consumer PPG-based wearables for medical applications where detailed respiratory information is required.
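To illustrate the encoder-decoder idea at a high level, the sketch below passes a PPG-like window through strided 1-D convolutions (encoder) and upsamples the compressed representation back toward a waveform (decoder). This is an illustration only: the kernel sizes, strides, random weights, and the nearest-neighbour upsampling stand-in for transposed convolution are all assumptions, not the authors' actual architecture.

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Valid 1-D convolution (cross-correlation) with a given stride."""
    k = len(w)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], w)
                     for i in range(out_len)])

def upsample(x, factor):
    """Nearest-neighbour upsampling; a simple stand-in for transposed convolution."""
    return np.repeat(x, factor)

rng = np.random.default_rng(0)
ppg = rng.standard_normal(1024)          # stand-in for one PPG window

# Encoder: two strided convolutions compress the signal
k1 = rng.standard_normal(8)              # hypothetical kernel, not trained weights
k2 = rng.standard_normal(8)
h1 = np.tanh(conv1d(ppg, k1, stride=4))  # 1024 samples -> 255
h2 = np.tanh(conv1d(h1, k2, stride=4))   # 255 samples  -> 62 (latent code)

# Decoder: upsample the latent code back toward the input length,
# where a trained model would reconstruct the respiratory waveform
resp = upsample(upsample(h2, 4), 4)      # 62 samples -> 992

print(ppg.shape, h2.shape, resp.shape)
```

In the trained model, the kernels would be learned so that the latent code retains the respiratory modulations of the PPG and discards the rest; the forward pass being only a few small convolutions is what makes sub-millisecond inference plausible.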