Hi everyone 🙂 (this message may be redundant with the one I posted on Slack, sorry about that). I'm currently building a POC for live transcription using the Whisper model. I already checked Lawik's example, which uses either the mic or a file as the source. In my case I have an RTMP source with AAC-encoded audio in an FLV container. I've built a pipeline which seems to be "ok":
def handle_init(_ctx, socket: socket) do
  # Converter is Membrane.FFmpeg.SWResample.Converter, RawAudio is Membrane.RawAudio
  structure = [
    child(:source, %Membrane.RTMP.SourceBin{
      socket: socket,
      validator: Membrane.RTMP.DefaultMessageValidator
    })
    |> via_out(:audio)
    # decode the AAC track to raw audio
    |> child(:decoder, Membrane.AAC.FDK.Decoder)
    # resample to the 16 kHz mono f32le format Whisper expects
    |> child(:converter, %Converter{
      output_stream_format: %RawAudio{
        sample_format: :f32le,
        sample_rate: 16_000,
        channels: 1
      }
    })
    # placeholder sink until transcription is hooked up
    |> child(:fake_sink, Membrane.Fake.Sink.Buffers)
  ]

  {[spec: structure, playback: :playing], %{}}
end
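The plan is to eventually swap :fake_sink for a small custom sink that batches the raw f32le samples and feeds them to Whisper. Roughly something like this sketch (Membrane v1.0-style callbacks, on older versions it would be handle_write/4 instead of handle_buffer/4; Transcriber.transcribe/1 is just a placeholder for the actual Whisper call):

defmodule TranscriptionSink do
  use Membrane.Sink

  alias Membrane.RawAudio

  # accept only the raw audio format produced by the converter above
  def_input_pad :input,
    accepted_format: %RawAudio{sample_format: :f32le, sample_rate: 16_000, channels: 1},
    flow_control: :auto

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{acc: <<>>}}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    acc = state.acc <> buffer.payload

    # 16_000 samples/s * 4 bytes per f32 sample = 64_000 bytes per second,
    # so flush roughly every 5 seconds of audio
    if byte_size(acc) >= 5 * 64_000 do
      Transcriber.transcribe(acc)
      {[], %{state | acc: <<>>}}
    else
      {[], %{state | acc: acc}}
    end
  end
end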
When I start it, it initialises without issues, but then nothing happens as far as I can tell (I do see debug logs showing the elements received the play request).
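In case it matters, I start the pipeline from a plain :gen_tcp accept loop, roughly like this (simplified, no error handling; MyPipeline stands for the pipeline module above):

# listen/accept once for the sake of the example
{:ok, listen_socket} =
  :gen_tcp.listen(1935, [:binary, packet: :raw, active: false, reuseaddr: true])

{:ok, socket} = :gen_tcp.accept(listen_socket)

{:ok, _supervisor, pipeline} =
  Membrane.Pipeline.start_link(MyPipeline, socket: socket)

# the accepting process owns the socket, so ownership has to be handed
# over, otherwise the source never receives any data
:ok = :gen_tcp.controlling_process(socket, pipeline)

I also forward the {:socket_control_needed, socket, source} child notification to Membrane.RTMP.SourceBin.pass_control/2 (retrying while the pipeline doesn't own the socket yet), if I'm reading the plugin's example right.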
I would be happy to get advice on this one :D, thanks!