With the worldwide spread of the Coronavirus disease 2019 (Covid-19) pandemic, Artificial Intelligence provides additional support for the pre-diagnosis of Covid-19, alongside traditional diagnostic approaches, by using data such as patients' images and sounds. Recognizing Covid-19-positive patients quickly and correctly is key to preventing the spread of the disease. However, existing Covid-19 diagnosis models still face challenges: their network structures are complex, they require additional medical examinations, and they take considerable time to return a diagnosis. In this paper, a diagnostic model is proposed as an early work on Covid-19 diagnosis from sound samples. The features of the sound signals are represented by Mel Frequency Cepstral Coefficients (MFCC), which are fed into an Online Sequential Extreme Learning Machine (OS-ELM) for normal/abnormal detection. The proposed model was trained on data from an open-source database; the experiments show that, using vowel pronunciations, the model achieves an accuracy of 96.4% on average and is about 10 times faster at testing than a Support Vector Machine.
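The OS-ELM classifier named above can be sketched as follows. This is a generic NumPy sketch of the standard OS-ELM procedure (a fixed random hidden layer, a batch least-squares solution on the initial data chunk, then recursive least-squares updates for subsequent chunks), not the authors' exact implementation; the class name, the sigmoid activation, the hidden-layer size, and the small regularization term are illustrative assumptions.

```python
import numpy as np

class OSELM:
    """Minimal sketch of an Online Sequential Extreme Learning Machine."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Hidden-layer weights and biases are random and never trained
        self.W = rng.normal(size=(n_inputs, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.beta = None  # output weights, learned by least squares
        self.P = None     # inverse correlation matrix for recursive updates

    def _hidden(self, X):
        # Sigmoid activation of the random projection
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit_initial(self, X, T):
        # Batch least-squares solution on the first chunk of data
        # (a small ridge term is added for numerical stability)
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def fit_sequential(self, X, T):
        # Recursive least-squares update for a new chunk:
        # the model is refined without revisiting earlier data
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        # Real-valued scores; sign gives the normal/abnormal decision
        return self._hidden(X) @ self.beta
```

In the setting described by the paper, each row of `X` would be an MFCC feature vector extracted from a sound sample, and `T` would hold the normal/abnormal labels (e.g. encoded as +1/-1). The sequential update is what makes OS-ELM fast at both training and testing: only the linear output weights are solved for, so no iterative backpropagation is needed.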