Hi! I'm building an app with a pipeline whose final stage receives incoming MP3 (or Ogg Vorbis or PCM, if that helps) data from Amazon Polly (text-to-speech). As this data arrives, I'd like to broadcast it to ~hundreds of clients. I need as close to real-time as possible, so HLS is not a viable option. It seems an audio track over WebRTC (so Opus, I guess) would be the way to go. I'm not sure how to go about ingesting the MP3 data and streaming it over WebRTC with Membrane. As a stretch goal, I'd like to broadcast "comfort noise" or possibly "room noise" continuously and overlay the incoming MP3 audio on top of it.
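To make the question concrete, here's the rough shape of pipeline I have in mind (a sketch only, not working code: `PollyBroadcast.PollySource` and `SomeWebRTCSink` are placeholders I made up, and I'm guessing at the element/plugin module names from the docs; the part after the Opus encoder is exactly the piece I don't know how to do):

```elixir
defmodule PollyBroadcast.Pipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, _opts) do
    spec =
      # placeholder: some source element that emits MP3 chunks as they arrive from Polly
      child(:source, PollyBroadcast.PollySource)
      # decode MP3 to raw audio
      |> child(:decoder, Membrane.MP3.MAD.Decoder)
      # resample to 48 kHz s16le, which the Opus encoder expects
      |> child(:resampler, %Membrane.FFmpeg.SWResample.Converter{
        output_stream_format: %Membrane.RawAudio{
          sample_format: :s16le,
          sample_rate: 48_000,
          channels: 2
        }
      })
      |> child(:encoder, Membrane.Opus.Encoder)
      # ??? -- how do I hand this track to the WebRTC side (Jellyfish?) and
      # fan it out to hundreds of subscribers?
      |> child(:sink, SomeWebRTCSink)

    {[spec: spec], %{}}
  end
end
```

Is that roughly the right decomposition, and what should the WebRTC end of it look like?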
I'd really appreciate some guidance/suggestions from the <@&1007223478610571344> team about how to solve this problem. FWIW I've gotten the Jellyfish videoroom running (very cool!) and played around with various Membrane examples. I'm a fairly experienced Elixir programmer but have little experience in multimedia and WebRTC.