dc.description.abstract |
Most work on leveraging machine learning techniques for COVID-19 detection has focused on chest CT scans or X-ray images. However, this approach requires specialized machinery and does not scale well. The use of audio data for this task is still relatively nascent, leaving much room for exploration. In this paper, we explore using breath and cough audio samples to detect the presence of COVID-19, in an attempt to reduce the close contact required by current techniques. We apply a three-fold approach to the binary classification of whether a person is COVID-positive: traditional machine learning models on handcrafted features, convolutional neural networks on spectrograms, and recurrent neural networks on instantaneous audio features. We describe the preprocessing techniques, the feature extraction pipeline, and the model building, and summarize the performance of each of the three approaches. The traditional machine learning model approaches state-of-the-art performance while using fewer features than similar work in this domain. |
en_US |
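
As a rough illustration of the spectrogram-based branch described in the abstract, the sketch below shows one common way to turn a cough or breath recording into a log-mel spectrogram suitable as 2-D input to a CNN, using the librosa library. The function name and all parameter values (sample rate, mel-band count, trim threshold) are illustrative assumptions, not the settings used in the paper.

    import librosa
    import numpy as np

    def cough_to_logmel(path, sr=22050, n_mels=128):
        """Load an audio clip and return a log-mel spectrogram.
        All parameters here are assumed for illustration, not
        taken from the paper."""
        y, _ = librosa.load(path, sr=sr, mono=True)
        # Trim leading/trailing silence so the model sees the
        # cough or breath event itself rather than dead air.
        y, _ = librosa.effects.trim(y, top_db=25)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        # Convert power to decibels; log-scaled spectrograms are a
        # typical image-like input for CNN audio classifiers.
        return librosa.power_to_db(mel, ref=np.max)

The other two branches would differ only in what is extracted at this stage: handcrafted summary statistics for the traditional models, and frame-level (instantaneous) feature sequences for the recurrent networks.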