I am not sure how to properly contribute this knowledge to GitHub. I know the FAQs have a section addressing that people would like to see whether DeepSpeech can be used without having to save audio as a .wav file. Well, in a nutshell (and according to client.py), the Model just needs the audio source to be a flattened NumPy array. Another Python package, called SpeechRecognition, has built-in support for creating, in memory, an AudioData object acquired from some audio source (microphone, .wav file, etc.). Anyway, long story short, here is the code that I can run; it allows me to use DeepSpeech without having to create a .wav file.
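The post's original code block did not survive, so what follows is a minimal sketch of the approach it describes: capture an AudioData object with SpeechRecognition, pull out the raw 16-bit PCM bytes, flatten them into an int16 NumPy array, and hand that to the DeepSpeech model. Names like `MODEL_PATH` are placeholders, and the exact `Model`/`stt` signatures vary between DeepSpeech releases, so treat this as an illustration rather than the author's exact code.

```python
import numpy as np


def audio_data_to_numpy(raw_bytes):
    """Convert 16-bit little-endian PCM bytes (as returned by
    speech_recognition.AudioData.get_raw_data) into the flattened
    int16 NumPy array that deepspeech.Model.stt() expects."""
    return np.frombuffer(raw_bytes, dtype=np.int16)


def transcribe_from_mic(model_path):
    # Imported lazily so the conversion helper above works even
    # without these packages installed.
    import deepspeech                      # pip install deepspeech
    import speech_recognition as sr        # pip install SpeechRecognition

    model = deepspeech.Model(model_path)
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:        # any audio source works here
        audio = recognizer.listen(source)  # returns an AudioData object

    # DeepSpeech models expect 16 kHz, 16-bit mono PCM; AudioData can
    # resample on the way out, so no intermediate .wav file is needed.
    raw = audio.get_raw_data(convert_rate=16000, convert_width=2)
    return model.stt(audio_data_to_numpy(raw))
```

The key point from client.py is the last two lines: the model never sees a file, only the flattened int16 array built directly from the in-memory AudioData buffer.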