Hello everyone.
I'm pretty new to the media-streaming world and still learning all the protocols, how things work, etc., so bear with me if my question seems a bit basic.
I'm developing a LiveView system where one of the requirements is to capture a continuous audio stream from the user's microphone in the browser and send that data to my server. On the server, I want to receive that stream and pass it to a speech-to-text model so I have a transcription of what the user is saying in (soft) real time.
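To make the idea concrete, here is roughly what I picture on the server side. Everything here is just my own sketch, not working code: I'm assuming a client-side JS hook (the `Microphone` hook name is made up) that captures the mic, e.g. with an AudioWorklet, and pushes raw 16 kHz mono f32 PCM chunks as base64 `"audio_chunk"` events, plus an `Nx.Serving` named `MyApp.WhisperServing` built with Bumblebee's Whisper support and started in my supervision tree:

```elixir
defmodule MyAppWeb.TranscriptionLive do
  use Phoenix.LiveView

  @impl true
  def mount(_params, _session, socket) do
    {:ok, assign(socket, transcript: "")}
  end

  @impl true
  def handle_event("audio_chunk", %{"data" => base64_chunk}, socket) do
    # Assumes the hook already sends raw f32 PCM at the rate the model expects.
    pcm = Base.decode64!(base64_chunk)
    parent = self()

    # Run speech-to-text off the LiveView process so it stays responsive.
    Task.start(fn ->
      %{chunks: chunks} =
        Nx.Serving.batched_run(MyApp.WhisperServing, Nx.from_binary(pcm, :f32))

      send(parent, {:transcribed, Enum.map_join(chunks, & &1.text)})
    end)

    {:noreply, socket}
  end

  @impl true
  def handle_info({:transcribed, text}, socket) do
    {:noreply, update(socket, :transcript, &(&1 <> text))}
  end

  @impl true
  def render(assigns) do
    ~H"""
    <div id="mic" phx-hook="Microphone"><%= @transcript %></div>
    """
  end
end
```

I'm aware that if I used `MediaRecorder` on the client instead, I'd get compressed WebM/Opus rather than raw PCM and would need a decoding step on the server, which is part of what I'm unsure about.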
I believe Membrane can help me with capturing an audio stream from the user's browser microphone and sending it to the server to consume, but I'm a bit lost on which parts of Membrane would help me with that. I believe I would not use WebRTC for this, since it is, AFAIK, a peer-to-peer protocol, and in this case I just want a one-way stream from client to server.
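If Membrane is the right tool here, I imagine the server-side pipeline would look something like the sketch below (Membrane Core v1 DSL). `MyApp.SocketSource` and `MyApp.SpeechToTextSink` are hypothetical custom elements I'd presumably have to write myself; `Membrane.Opus.Decoder` comes from `membrane_opus_plugin` and assumes the browser is sending Opus packets:

```elixir
defmodule MyApp.TranscriptionPipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, opts) do
    spec =
      # :source would emit the raw chunks arriving from the browser
      # (e.g. over a WebSocket or Phoenix Channel).
      child(:source, %MyApp.SocketSource{socket: opts[:socket]})
      # Decode the compressed audio into raw PCM for the model.
      |> child(:decoder, Membrane.Opus.Decoder)
      # A custom sink that buffers audio and hands it to speech-to-text.
      |> child(:stt, MyApp.SpeechToTextSink)

    {[spec: spec], %{}}
  end
end
```

which I'd start with something like `Membrane.Pipeline.start_link(MyApp.TranscriptionPipeline, socket: socket)`. Is that the right shape, or am I overcomplicating it?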
Do you guys have any suggestions on how I should approach this?