Membrane help

RTSP authentication problem?

Hi, I am playing with an RTSP camera (Tapo C210) on the local network. The path to the stream is rtsp://myadmin:[email protected]:554/stream1, with the credentials obfuscated as myadmin and mypassword. I tested the endpoint with VLC; it works and I can see the stream, so the credentials are good. I wanted to obtain some information about the session from the camera, as per the RTSP documentation:

alias Membrane.RTSP
alias Membrane.RTSP.Response
alias Membrane.RTSP.Request

url = "rtsp://myadmin:[email protected]:554/stream1"
{:ok, session} = RTSP.start_link(url)

However, I received:

iex> {:error, {:invalid_addrtype, "o=- 14665860 31787219 1 IN IP4"}}

So I reproduced the steps that are taken by the library to see where the error could happen:

uri = URI.parse(url)
iex> %URI{
  scheme: "rtsp",
  authority: "myadmin:[email protected]:554",
  userinfo: "myadmin:mypassword",
  host: "",
  port: 554,
  path: "/stream1",
  query: nil,
  fragment: nil
}

So far so good: userinfo is set correctly. I also read a comment in the library saying that the default authentication is "basic".

VLC by default utilises Digest and omits Basic authentication. Am I setting something wrong or missing opts somewhere? Is there any place you could recommend looking at when trying to view all available opts and their values (or examples of them)? Thank you!
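For what it's worth, a hedged way to poke at this: drive the session manually and inspect the raw DESCRIBE response, to see whether the camera answers 401 with a WWW-Authenticate: Digest challenge (which would explain why VLC's Digest works where Basic fails). This assumes membrane_rtsp exposes RTSP.describe/1 and a Response struct with status/headers fields; check against the version in your lock file.

```elixir
alias Membrane.RTSP

url = "rtsp://myadmin:[email protected]:554/stream1"
{:ok, session} = RTSP.start_link(url)

# Look at the raw response instead of letting the library parse the SDP.
case RTSP.describe(session) do
  {:ok, %RTSP.Response{status: 401, headers: headers}} ->
    # A "WWW-Authenticate: Digest ..." header here means Basic was rejected.
    IO.inspect(headers, label: "auth challenge")

  other ->
    IO.inspect(other, label: "describe result")
end
```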

Debugging bundlex/unifex errors

Hello, I've been tinkering with Membrane cross-compiled to Nerves (rpi4).

I had features (e.g. microphone input) working recently, but they seem to have broken with recent upgrades.

Would anyone have any tips on debugging the unifex_create/3 error below?

I suspect it's loading NIFs precompiled for the wrong architecture, but I'm not seeing where I can disable precompiled libraries (or whether this is even the right thing to be investigating...)

(stop) {:membrane_child_crash, :encoder,
  {"Nif fail: Elixir.Membrane.Opus.Encoder.Native.unifex_create/3",
   [
     {:erlang, :nif_error, ["Nif fail: Elixir.Membrane.Opus.Encoder.Native.unifex_create/3"], [error_info: %{module: :erl_erts_errors}]},
     {Membrane.Opus.Encoder.Native.Nif, :unifex_create, 3, [file: ~c"lib/membrane_opus/encoder/native.ex", line: 1]},
     {Membrane.Opus.Encoder, :handle_setup, 2, [file: ~c"lib/membrane_opus/encoder.ex", line: 70]},
     {Membrane.Core.CallbackHandler, :exec_callback, 4, [file: ~c"lib/membrane/core/callback_handler.ex", line: 139]},
     {Membrane.Core.CallbackHandler, :exec_and_handle_callback, 5, [file: ~c"lib/membrane/core/callback_handler.ex", line: 69]},
     {Membrane.Core.Element.LifecycleController, :handle_setup, 1, [file: ~c"lib/membrane/core/element/lifecycle_controller.ex", line: 62]},
     {Membrane.Core.Element, :handle_continue, 2, [file: ~c"lib/membrane/core/element.ex", line: 172]},
     {:gen_server, :try_handle_continue, 3, [file: ~c"gen_server.erl", line: 1085]}
   ]}}
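In case it helps anyone hitting a similar NIF-architecture mismatch: bundlex supports opting out of precompiled OS deps per app via config (option name as described in the bundlex README; verify it against the bundlex version in your lock file).

```elixir
# config/config.exs
import Config

# Force bundlex to build the listed apps' native deps from source instead of
# downloading precompiled binaries (useful when cross-compiling for Nerves).
config :bundlex, :disable_precompiled_os_deps, apps: [:membrane_opus_plugin]
```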

Spinning up a new GenServer for each room

I have been learning from the videoroom demo and I have a few questions.

# meeting.ex
  @impl true
  def init(%{name: name, jellyfish_address: jellyfish_address}) do
    Logger.metadata(room_name: name)

    client = Jellyfish.Client.new(jellyfish_address)

    with {:ok, room, jellyfish_address} <- create_new_room(client, name) do
      peer_timeout = Application.fetch_env!(:videoroom, :peer_join_timeout)

      client = Jellyfish.Client.update_address(client, jellyfish_address)
      Logger.info("Created meeting room id: #{}")

      {:ok,
       %{
         client: client,
         name: name,
         peer_timers: %{},
         peer_timeout: peer_timeout,
         jellyfish_address: jellyfish_address
       }}
    else
      {:error, reason} ->
        Logger.error("Failed to create a meeting, reason: #{inspect(reason)}")
        raise "Failed to create a meeting, reason: #{inspect(reason)}"
    end
  end

I am wondering what the overhead would be of starting a GenServer for each room at large scale, and whether there is a better way. My first impression, which may not be correct or smart, is that we could skip the overhead of GenServers and interface with the jellyfish sdk directly, without a GenServer. All of the nice state like peer_timers could be put into an in-memory key-value store.

Would it work to store a single map in an in-memory key-value store, of the form:

%{
  "jelly_address_1" => %Jellyfish.Client{...},
  "jelly_address_2" => %Jellyfish.Client{...}
}

This way a jellyfish client is created only once for each jellyfish instance, instead of one per room. Or is this a bad idea? Is it wrong / a bottleneck if many rooms use the same jellyfish client?
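A minimal sketch of that idea, assuming the SDK's client struct is plain connection data that can be shared. `Jellyfish.Client.new/1` and its argument shape are assumptions here, and `:persistent_term` merely stands in for whatever KV store you pick:

```elixir
defmodule Videoroom.ClientCache do
  # Returns a cached %Jellyfish.Client{} for the given jellyfish address,
  # creating and storing it on first use.
  def client_for(jellyfish_address) do
    key = {__MODULE__, jellyfish_address}

    case :persistent_term.get(key, nil) do
      nil ->
        # Assumption: the client is created from the address alone.
        client =
        :persistent_term.put(key, client)
        client

      client ->
        client
    end
  end
end
```

If each API call through the client is an independent, stateless HTTP request, many rooms sharing one client struct shouldn't itself be a bottleneck; any contention would come from the underlying HTTP pool, not from the struct.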

Membrane.Realtimer and OPUS stream: dropping packets

Hi. We're trying to stream PCM audio we're getting from Amazon Polly (text-to-speech) over WebRTC in OPUS format and are so very close 😁 , but the problem now is that the packets are being sent too fast and the browser is just discarding the earliest ones 😭 . So if the message is short ("hello"), we hear the whole thing. Above a certain length ("hello, how are you. Is the weather nice?"), it drops the beginning and we hear only "...ther nice?", for example. The payload from Polly has the full audio and as the message length increases, we see in chrome://webrtc-internals/ that packets received increases. When some of the message is dropped, we see that too (see screenshot).

We believe Realtimer is supposed to limit playback speed to realtime but it doesn't seem to have any effect. If anyone could take a look at these code snippets and suggest what we might be doing wrong or missing, I'd appreciate it. The value of state.track.encoding is :OPUS and state.track.clock_rate is 4800.

payloader = Membrane.RTP.PayloadFormat.get(state.track.encoding).payloader

payloader_bin = %Membrane.RTP.PayloaderBin{
  payloader: payloader,
  ssrc: 22,
  payload_type: 96,
  clock_rate: state.track.clock_rate
}

spec = [
  child(:input, %Polly.Source{})
  |> child(:encoder, %Membrane.Opus.Encoder{
    application: :audio,
    input_stream_format: %RawAudio{
      channels: 1,
      sample_format: :s16le,
      sample_rate: 16_000
    }
  })
  |> child(:parser, %Membrane.Opus.Parser{delimitation: :undelimit})
  |> child(:payloader, payloader_bin)
  |> via_in(:input, toilet_capacity: @toilet_capacity)
  |> child(:realtimer, Membrane.Realtimer)
  |> via_in(Pad.ref(:input, {, :high}))
  |> child(:track_sender, %Membrane.RTC.Engine.Endpoint.WebRTC.TrackSender{
    track: state.track,
    variant_bitrates: %{high: 5}
  })
  |> via_out(Pad.ref(:output, {, :high}))
  |> child(:inspector, InterpretoWeb.InspectElement)
  |> bin_output(pad)
]

Custom RTC endpoint PCM -> RTC

Hi, I am trying the solution proposed here (not sure if I am expected to post in the same thread or not).

I have PCM audio that I am trying to route to multiple WebRTC endpoints, and it seems I am not handling the audio very well. For debugging I'm sending the output to a file; while the file gets written, it is unplayable.

The main pipeline receives its input from a pubsub (a temporary easy solution to push audio in until I can make a proper endpoint / element; the PubSubSource was previously used and worked in a pipeline that pushed to HLS):

def_output_pad(:output, availability: :on_request, demand_unit: :buffers, accepted_format: _any)

  @impl true
  def handle_pad_added(Pad.ref(:output, {_track_id, _rid}) = pad, _ctx, state) do
    payloader = RTP.PayloadFormat.get(state.track.encoding).payloader

    payloader_bin = %PayloaderBin{
      payloader: payloader,
      ssrc: 18,
      payload_type: 96,
      clock_rate: state.track.clock_rate
    }

    spec = [
      child(:input, %PubSubSource{})
      |> child(:encoder, %Opus.Encoder{
        application: :audio,
        input_stream_format: %RawAudio{
          channels: 2,
          sample_format: :s16le,
          sample_rate: 16_000
        }
      })
      |> child(:parser, %Opus.Parser{delimitation: :undelimit})
      |> child(:payloader, payloader_bin)
      |> child(:track_sender, %StaticTrackSender{
        track: state.track,
        is_keyframe: true
      })
      |> via_out(:output)
      |> bin_output(pad)
    ]

    {[spec: spec], state}
  end

Thank you for your help

React-Native connection?

I'm struggling to get a react-native client to connect to my Membrane server. I'm just running locally right now. I start my Membrane server with EXTERNAL_IP={my ip} mix phx.server. I'm using the @jellyfish-dev/react-native-membrane-webrtc client in my react-native code.

Then I have the following connection code in my react-native view. I see the console log statements for "init connect" and "attempting connect", but it never connects. I don't see a connection message in my Phoenix server, nor the successful-connection message.

I tried increasing the log verbosity, but didn't get anything out of the logs from react-native. Is there something obviously wrong with my connection string? Is it expecting something different for the server URL?

const startServerConnection = async () => {
    const deviceID = await getUniqueId();
    try {
      console.log('attempting connect');
      // Should make this environment aware at some point
      connect('', 'room:' + deviceID, {
        endpointMetadata: {
          displayName: deviceID,
        },
        socketChannelParams: {
          childrenNames: params.childrenNames,
          talkAbout: params.talkAbout,
        },
      }).then(() => {
        console.log('connected. starting mic');
        startMicrophone({ audioTrackMetadata: { active: true, type: 'audio' } });
      }).catch((e) => {
        console.log('connection error: ' + e);
      });
    } catch (e) {
      console.log('connection error: ' + e);
    }
};

useEffect(() => {
    let isMounted = true;
    const logLevel: LoggingSeverity = LoggingSeverity.Verbose;
    const initConnection = async () => {
      try {
        console.log('init connect');
        await startServerConnection();
        if (isMounted) {
          // ...
        }
      } catch (error) {
        console.log('init connection error: ' + error);
      }
    };
    initConnection();
    return () => { isMounted = false; };
}, []);

Live Streaming on Raspberry Pi Nerves Device

Hello! I'm new to Membrane, but a seasoned Elixir dev. I'm currently working on a project (similar to a 3D printer) where an RPi Nerves device controls an ongoing process. As part of this, there are two USB cameras which plug into the Pi and which I would like to livestream to the user. One good resource I found was this excellent tutorial by pressy4pie, one of the Nerves maintainers (huge thank you).

One major difference from what is documented there, though, is that I don't have a main server running on the internet somewhere. My Nerves project also bundles and starts a Phoenix LiveView UI on device boot. What this means: the user connects directly to https://nerves.local:4000, which is running on the device. The way it currently works, the Phoenix project starts a GenServer which constantly reads the cameras using the OpenCV bindings (the Evision project). Upon receiving a frame, it does some basic image processing, encodes the image as base64 JPG and then pushes it over the websocket to the LiveView client's img tag. It works, but under poor network conditions we are seeing that the LiveView process mailbox gets clogged, resulting in significant lag in the live stream as well as for user input events.

I'd like to see if Membrane is suitable to tackle this problem. Preferably, we could use the hardware h264 accelerator in the Pi for better performance. My requirements are to live stream to the browser (via HLS? WebRTC? idk), and to allow the software running on the device to grab the most recent frame from the camera stream for async image processing.

Some recommendations to get started in the right direction would be appreciated πŸ™‚
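Not an answer to the hardware-encoder question, but the fan-out requirement above (one camera feed consumed by both a streaming branch and an async frame-grabbing branch) maps naturally onto a tee. A rough sketch with placeholder sinks and hypothetical names: `MyApp.LatestFrameSink` does not exist, and the camera/encoder elements depend on your setup (in practice the streaming branch would end in an HLS or WebRTC sink rather than a file).

```elixir
spec = [
  child(:camera, %Membrane.CameraCapture{device: "/dev/video0"})
  |> child(:encoder, %Membrane.H264.FFmpeg.Encoder{preset: :ultrafast})
  |> child(:tee, Membrane.Tee.Parallel),

  # Branch 1: live streaming (stand-in file sink; HLS/WebRTC in practice).
  get_child(:tee)
  |> child(:stream_sink, %Membrane.File.Sink{location: "/data/stream.h264"}),

  # Branch 2: keep only the most recent frame for async image processing
  # (hypothetical custom sink holding the latest buffer in state).
  get_child(:tee)
  |> child(:frame_grabber, MyApp.LatestFrameSink)
]
```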


Dynamically add & remove source elements for custom RTC Engine endpoint

Hi, I'm looking for some help with dynamically adding and removing audio sources in a Membrane Bin, making sure that all audio from previously added sources is interrupted when a new source is added, in the context of an application built with membrane_rtc_engine.

The users of my application can join a WebRTC room along with a chatbot "peer", represented by the Chatbot module below:

While a user is in the room with the chatbot peer, they can submit an event {:user_event, url} that triggers the Chatbot endpoint to fetch audio from url using membrane_hackney_plugin.

When a user submits such an event, I would like to interrupt whatever audio the chatbot is currently playing and replace it with audio from the new url.

Ultimately I'm trying to determine a way to structure this Bin so that I can implement handle_parent_notification/3

defmodule MediaServer.Endpoint.Speech.Chatbot do
  @impl true
  def handle_parent_notification({:user_event, {:play_audio, url}}, _ctx, state) do
    # 1. Interrupt whatever audio is currently playing
    #    - I think I need to remove every element from :audio_source up to
    #      :track_sender with a :remove_child action?
    # 2. Replace it with audio from url
    #    - I think I need to create a new spec for the elements from :audio_source
    #      up to :track_sender, with the new URL, using the :spec action?
  end
end

I've tried a few variations of adding / removing children, but keep running into issues -- either with adding duplicate children, or linking pads multiple times. Any advice or examples of how to do this would be appreciated!
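One pattern that sidesteps the duplicate-children and re-linked-pads problems, sketched under assumptions: give each playback's chain unique names (a ref embedded in the child name), remove the previous chain, and link the new one into a long-lived mixer. The `:remove_children` action name is from recent membrane_core (older releases spell it `:remove_child`), and the source/decoder elements plus the `:mixer` child and `current_playback` state field are placeholders.

```elixir
@impl true
def handle_parent_notification({:user_event, {:play_audio, url}}, _ctx, state) do
  ref = make_ref()

  # 1. Interrupt: drop the previous source chain, if any.
  remove_actions =
    case state.current_playback do
      nil -> []
      old -> [remove_children: [{:audio_source, old}, {:decoder, old}]]
    end

  # 2. Replace: link a freshly named chain into the permanent :mixer child.
  spec =
    child({:audio_source, ref}, %Membrane.Hackney.Source{location: url})
    |> child({:decoder, ref}, Membrane.MP3.MAD.Decoder)
    |> via_in(Pad.ref(:input, ref))
    |> get_child(:mixer)

  {remove_actions ++ [spec: spec], %{state | current_playback: ref}}
end
```

Because every chain gets fresh names and a fresh mixer pad ref, repeated events never collide with children or pads left over from the previous playback.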

RTP demo with RawAudio

Hello friends, I'm trying to get microphone input (via Membrane.PortAudio.Source) packaged into an RTP stream and sent to a server and can't quite seem to get it right.

The excerpt below is based on the demo in membrane-demo/rtp, but with microphone input substituted and newer syntax.

alias Membrane.{RTP, UDP, PortAudio, RawAudio}
links = [
  child(:mic_input, %PortAudio.Source{
    channels: 2,
    sample_rate: 24_000
  })
  |> child(:encoder, %Membrane.Opus.Encoder{
    application: :audio,
    input_stream_format: %RawAudio{
      channels: 2,
      sample_format: :s16le,
      sample_rate: 24_000
    }
  })
  |> via_in(Pad.ref(:input, audio_ssrc), options: [payloader: RTP.Opus.Payloader])
  |> child(:rtp, %RTP.SessionBin{
    secure?: secure?,
    srtp_policies: [
      %ExLibSRTP.Policy{
        ssrc: :any_inbound,
        key: srtp_key
      }
    ]
  })
  |> via_out(Pad.ref(:rtp_output, audio_ssrc), options: [encoding: :OPUS])
  |> child(:audio_realtimer, Membrane.Realtimer)
  |> child(:audio_sink, %UDP.Sink{
    destination_port_no: destination_port,
    destination_address: destination_address
  })
]

This seems to throw an error when generating the headers, because buffer.pts is nil:

[error] <0.815.0>/:rtp/{:stream_send_bin, 1236}/:payloader/:header_generator Error handling action {:split, {:handle_buffer, [[:input, %Membrane.Buffer{payload: <<220, 255, 254>>, pts: nil, dts: nil, metadata: %{}}]]}} returned by callback Membrane.RTP.HeaderGenerator.handle_buffers_batch

[error] GenServer #PID<0.849.0> terminating
** (FunctionClauseError) no function clause matching in Ratio.mult/2
    (ratio 2.4.2) lib/ratio.ex:418: Ratio.mult(nil, 48000)
    (membrane_rtp_plugin 0.23.0) lib/membrane/rtp/header_generator.ex:71: Membrane.RTP.HeaderGenerator.handle_process/4

Does anyone have any tips on what I might be doing wrong?
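In case it's useful for debugging: since the failure is the RTP header generator multiplying a nil pts, one workaround is a tiny filter that stamps buffers whose pts is missing, inserted e.g. right before the via_in into the :rtp child. This is a sketch; the callback and pad-option names follow the membrane_core 0.11-era API visible in the stack trace (current membrane_core uses handle_buffer/4 and slightly different pad definitions).

```elixir
defmodule StampPts do
  use Membrane.Filter

  def_input_pad :input, demand_mode: :auto, accepted_format: _any
  def_output_pad :output, demand_mode: :auto, accepted_format: _any

  @impl true
  def handle_process(:input, buffer, _ctx, state) do
    # Fill in a wall-clock-based pts only when the source didn't provide one.
    buffer = %{buffer | pts: buffer.pts || Membrane.Time.monotonic_time()}
    {[buffer: {:output, buffer}], state}
  end
end
```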

Unable to Compile :membrane_core

Hello, I apologize if this question is trivial, as I am new to the Erlang/Elixir ecosystem. I recently cloned the repo to try out the webrtc_videoroom demo. Everything went fine up until running mix phx.server:

== Compilation error in file lib/membrane/testing/endpoint.ex ==
** (FunctionClauseError) no function clause matching in String.trim/1

    The following arguments were given to String.trim/1:

        # 1

    Attempted function clauses (showing 1 out of 1):

        def trim(string) when is_binary(string)

    (elixir 1.15.0) lib/string.ex:1270: String.trim/1
    lib/membrane/core/child.ex:28: Membrane.Core.Child.generate_moduledoc/2
    expanding macro: Membrane.Element.Base.__before_compile__/1
    lib/membrane/testing/endpoint.ex:1: Membrane.Testing.Endpoint (module)
could not compile dependency :membrane_core, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile membrane_core --force", update it with "mix deps.update membrane_core" or clean it with "mix deps.clean membrane_core"

I am using Elixir v1.15.0, Erlang v26. This compile error was thrown both with the default mix.exs and after updating everything to the latest versions listed on Hex. Thank you for any assistance, and I apologize if this is a beginner Elixir error; it definitely feels like a user error, what with String being the 'problem'.

RTC Engine, HTTP Sources, and pts offsets

Hi folks, I'm building an application with Membrane and MembraneRTCEngine and am having some trouble with garbled & dropped audio resulting from some issues with timestamping audio from an HTTP source.

The application lets a user join a WebRTC audio call with a chatbot "peer".

The user can click a button in their browser which triggers the chatbot to "speak" -- under the hood, this requests audio data via HTTP which is then piped through a chatbot "peer" endpoint.

Currently, I am able to hear the audio received from the HTTP source, but it is garbled, truncated, or otherwise incoherent. My Chatbot endpoint uses Membrane.LiveMixer to mix audio from two sources:

  • An HTTP Source, parsed with Membrane.RawAudioParser to apply timestamps
  • A silence generator, Membrane.SilenceGenerator passed through Membrane.Realtimer

Membrane.RawAudioParser allows you to set a pts_offset value, which Membrane.LiveMixer uses to select the correct audio to mix from each stream.

The examples for Membrane.RawAudioParser and Membrane.LiveMixer use constant offsets (e.g. "If Source B starts 5 seconds after Source A..."). However, I do not know the offset to set in advance -- it should be the difference between the start of the Chatbot endpoint / silence generator, and the start of the stream from the HTTP source.

Do you have any suggestions as to how to correctly set the pts_offset for Membrane.RawAudioParser when I do not know the offset in advance? Is there a way to set it dynamically (e.g. using start_of_stream events?)
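For what it's worth, one way I could imagine computing the offset at spawn time rather than hard-coding it: record when the endpoint (and thus the silence branch) started, and use the elapsed time as the offset when the HTTP branch is linked. This assumes RawAudioParser's `pts_offset`/`overwrite_pts?` options behave as documented; `:mixer`, `:http_source`, and the state fields are placeholders.

```elixir
@impl true
def handle_playing(_ctx, state) do
  # Remember when the endpoint / silence branch started producing audio.
  {[], %{state | started_at: Membrane.Time.monotonic_time()}}
end

@impl true
def handle_parent_notification({:speak, url}, _ctx, state) do
  # Offset = how long the silence branch has already been running.
  pts_offset = Membrane.Time.monotonic_time() - state.started_at

  spec =
    child(:http_source, %Membrane.Hackney.Source{location: url})
    |> child(:raw_parser, %Membrane.RawAudioParser{
      stream_format: state.stream_format,
      overwrite_pts?: true,
      pts_offset: pts_offset
    })
    |> via_in(Pad.ref(:input, make_ref()))
    |> get_child(:mixer)

  {[spec: spec], state}
end
```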

How to pass some client side parameters to an RTMP pipeline

Hey team I have an RTMP pipeline (which I simplified for the purpose of the question):

  def handle_init(_ctx, socket: socket) do
    Logger.info("Starting RTMP pipeline")

    structure = [
      child(:source, %Membrane.RTMP.Source{
        socket: socket,
        validator: Membrane.RTMP.DefaultMessageValidator
      })
      |> child(:demuxer, Membrane.FLV.Demuxer)
      |> via_out(Pad.ref(:audio, 0))
      # we use a fake sink to terminate the pipeline
      |> child(:fake_sink, Membrane.Fake.Sink.Buffers)
    ]

    {[spec: structure, playback: :playing], %{}}
  end

Which is served by a TCP server:

{Membrane.RTMP.Source.TcpServer, rtmp_server_options()},

defp rtmp_server_options do
  %Membrane.RTMP.Source.TcpServer{
    port: rtmp_port(),
    listen_options: [
      packet: :raw,
      active: false,
      ip: rtmp_ip()
    ],
    socket_handler: fn socket ->
      # On new connection a pipeline is started
      {:ok, _supervisor, pipeline} = MyPipeline.Pipeline.start_link(socket: socket)
      {:ok, pipeline}
    end
  }
end

This works great, I can initiate an rtmp session using this kind of url for example: rtmp://localhost:5000

Now I'm trying to use a url like this: rtmp://localhost:5000?rtmp_id=xxxx, or even something like this: rtmp://localhost:5000/xxxx/. Within my pipeline I'd like to get this xxxx and use it for different things (a filename could be a nice use case, for example). So far I haven't managed to find how I can get this xxxx within my pipeline; happy to hear if there is another way to do that 😄 ! Thanks
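On getting at the xxxx: with rtmp://localhost:5000/app/xxxx, the xxxx part travels as the RTMP stream key, which the source's message validator gets to see. A very rough sketch under assumptions: whether Membrane.RTMP.MessageValidator is a behaviour or a protocol, and the exact message/field names, vary across membrane_rtmp_plugin versions, so check the docs for your installed version; the registry process name is hypothetical.

```elixir
defmodule MyPipeline.KeyExtractingValidator do
  # Assumption: validator callbacks receive message structs carrying the
  # stream key from the publish step of the RTMP handshake.
  @behaviour Membrane.RTMP.MessageValidator

  @impl true
  def validate_publish(%{stream_key: stream_key}) do
    # Hand the key ("xxxx") to whoever needs it, e.g. a registry process.
    send(:stream_registry, {:stream_key, self(), stream_key})
    {:ok, "publish allowed"}
  end
end
```

The validator would then replace Membrane.RTMP.DefaultMessageValidator in the `:source` child's options.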

Intermittent Failures with RTMP Sink

We are running into some intermittent failures with the RTMP sink. What we notice is that sometimes a given pipeline will stream all the way through and sometimes the RTMP sink will raise when writing frames and crash the pipeline.

22:01:58.807 [error] <0.5365.0>/:rtmp_sink/ Error handling action {:split, {:handle_write, [[:video, %Membrane.Buffer{payload: <<0, 0, 139, 207, 65, 154, 128, 54, 188, 23, 73, 255, 152, 130, 98, 10, 43, 88, 28, 64, 176, 10, 39, 247, 233, 179, 54, 27, 17, 168, 97, 24, 82, 152, 175, 21, 138, 252, 216, 108, 205, 134, ...>>, pts: 27240000000, dts: 27240000000, metadata: %{h264: %{key_frame?: false}, mp4_payload: %{key_frame?: false}}}]]}} returned by callback Membrane.RTMP.Sink.handle_write_list

22:01:58.808 [error] GenServer #PID<0.5369.0> terminating
** (RuntimeError) writing audio frame failed with reason: "End of file"
    (membrane_rtmp_plugin 0.11.2) lib/membrane_rtmp_plugin/rtmp/sink/sink.ex:269: Membrane.RTMP.Sink.write_frame/3

This one crashed ~20 seconds into a 1-hour stream, so I don't understand how it could have hit end of file. Additionally, the error log shows a video buffer being handled, yet the crash is in the write-audio-frame path.

22:57:02.461 [error] <0.3134.0>/:rtmp_sink/ Error handling action {:split, {:handle_write, [[:video, %Membrane.Buffer{payload: <<0, 0, 0, 7, 65, 154, 0, 122, 0, 63, 204>>, pts: 3626320000000, dts: 3626320000000, metadata: %{h264: %{key_frame?: false}, mp4_payload: %{key_frame?: false}}}]]}} returned by callback Membrane.RTMP.Sink.handle_write_list

22:57:02.461 [error] GenServer #PID<0.3138.0> terminating
** (RuntimeError) writing video frame failed with reason: "Invalid argument"
    (membrane_rtmp_plugin 0.11.2) lib/membrane_rtmp_plugin/rtmp/sink/sink.ex:285: Membrane.RTMP.Sink.write_frame/3

RTSP demo with Tapo C320WS


I'm trying to get my IP camera to work with the rtsp_to_hls demo. I get this:

[debug] Application is starting
[debug] Pipeline start link: module: Membrane.Demo.RtspToHls.Pipeline,
pipeline options: %{output_path: "hls_output", port: 20000, stream_url: "rtsp://tapoadmin:[email protected]:554/stream1"},
process options: []

[debug] Source handle_init options: %{output_path: "hls_output", port: 20000, stream_url: "rtsp://tapoadmin:[email protected]:554/stream1"}
[debug] ConnectionManager: start_link, args: [stream_url: "rtsp://tapoadmin:[email protected]:554/stream1", port: 20000, pipeline: #PID<0.345.0>]
[debug] ConnectionManager: Initializing
[debug] ConnectionManager: Connecting
[debug] [pipeline@<0.345.0>] Changing playback state from stopped to prepared
[debug] [pipeline@<0.345.0>] Playback state changed from stopped to prepared
[debug] [pipeline@<0.345.0>] Changing playback state from prepared to playing
[debug] [pipeline@<0.345.0>] Playback state changed from prepared to playing
[debug] ConnectionManager: Setting up RTSP description
[warning] ConnectionManager: Connection failed: :getting_rtsp_description_failed

Nothing ends up in the hls_output directory. The camera works, both via the Tapo app on my mobile and by answering ping. The User:Password usage I got from:

Any hints on how to debug this would be very nice πŸ™‚

Errors with membrane_hackney_plugin

I am trying to get the membrane_hackney_plugin to send files to a CDN. Trying to get it to work with one file and then set it up for HLS if all goes well. I'm getting an error whenever I run it.

The Code:

defmodule Membrane.Demo.SimplePipeline do
  use Membrane.Pipeline

  alias Membrane.{File, Hackney}

  @impl true
  def handle_init(_context, _) do
    structure = [
      child(:source, %Membrane.File.Source{location: "./sample.mp3"})
      |> child(:sink, %Membrane.Hackney.Sink{
        method: :put,
        location: build_uri("my-bucket", "sample.mp3"),
        headers: [access_key(), {"content-type", "audio/flac"}]
      })
    ]

    {[spec: {structure, []}, playback: :playing], %{}}
  end

  @impl true
  def handle_notification(%Hackney.Sink.Response{} = response, from, _ctx, state) do
    IO.inspect({from, response})
    {[], state}
  end

  @impl true
  def handle_notification(_notification, _from, _ctx, state) do
    {[], state}
  end

  defp access_key do
    {"AccessKey", "foo-bar"}
  end

  defp build_uri(bucket, name) do
    "{bucket}/" <>
      URI.encode_query(upload_type: "media", name: name)
  end
end

and the error

** (MatchError) no match of right hand side value: {:error, :req_not_found}
    (membrane_hackney_plugin 0.9.0) lib/membrane_hackney/sink.ex:93: Membrane.Hackney.Sink.handle_end_of_stream/3
    (membrane_core 0.11.3) lib/membrane/core/callback_handler.ex:138: Membrane.Core.CallbackHandler.exec_callback/4
    (membrane_core 0.11.3) lib/membrane/core/callback_handler.ex:62: Membrane.Core.CallbackHandler.exec_and_handle_callback/5
    (membrane_core 0.11.3) lib/membrane/core/element/event_controller.ex:79: 

Help upgrading to v0.11.0

Can someone help me figure out what I'm doing wrong here? I tried to follow the update to v0.11.0 guides with the simple_pipeline demo (modified to use the Hackney.Sink) and I'm getting errors.

structure = [
  child(:source, %Membrane.File.Source{location: "sample.mp3"}),
  child(:sink, %Membrane.Hackney.Sink{
    method: :put,
    location: build_uri("my_store", "sample.mp3"),
    headers: [access_key(), {"content-type", "audio/flac"}]
  }),
  get_child(:source) |> child(:sink)
]

{[spec: structure, playback: :playing], %{}}

I am getting the error

** (MatchError) no match of right hand side value:
    {:error,
     {%Membrane.ParentError{message: "Invalid children config: {:sink, [], %{get_if_exists: false}}"},
      [
        {Membrane.Core.Parent.ChildEntryParser, :parse_child, 1, [file: 'lib/membrane/core/parent/child_entry_parser.ex', line: 40]},
        {Enum, :"-map/2-lists^map/1-0-", 2, [file: 'lib/enum.ex', line: 1658]},
        {Membrane.Core.Parent.ChildLifeController, :setup_children, 3, [file: 'lib/membrane/core/parent/child_life_controller.ex', line: 189]},
        {Enum, :"-flat_map_reduce/3-fun-1-", 3, [file: 'lib/enum.ex', line: 1288]},
        {Enumerable.List, :reduce, 3, [file: 'lib/enum.ex', line: 4751]},
        {Enum, :flat_map_reduce, 3, [file: 'lib/enum.ex', line: 1287]},
        {Membrane.Core.Parent.ChildLifeController, :handle_spec, 2, [file: 'lib/membrane/core/parent/child_life_controller.ex', line: 124]},
        {Membrane.Core.CallbackHandler, :"-handle_callback_result/5-fun-0-", 5, [file: 'lib/membrane/core/callback_handler.ex', line: 187]}
      ]}}
    lib/membrane_demo/simple_pipeline.ex:80: (file)
    (elixir 1.14.2) lib/kernel/parallel_compiler.ex:346: anonymous fn/5 in Kernel.ParallelCompiler.spawn_workers/7
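For comparison, a hedged guess at the intended v0.11 shape: in the new ChildrenSpec, `child/2` inside a pipe both declares the child and links it, so a separate `get_child(:source) |> child(:sink)` line re-declares `:sink` with no options, which is the `{:sink, [], %{get_if_exists: false}}` the error complains about. Declaring and linking in one chain avoids that:

```elixir
structure = [
  child(:source, %Membrane.File.Source{location: "sample.mp3"})
  |> child(:sink, %Membrane.Hackney.Sink{
    method: :put,
    location: build_uri("my_store", "sample.mp3"),
    headers: [access_key(), {"content-type", "audio/flac"}]
  })
]

{[spec: structure, playback: :playing], %{}}
```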

Unable to establish WebRTC connection with Membrane VideoRoom from different origin client

When attempting to connect to Membrane VideoRoom from a client on a different origin, the WebRTC connection fails to establish. The following error message is displayed in the console:

"ICE failed, your TURN server appears to be broken"

Steps to Reproduce:

  • Open the Membrane VideoRoom application on a server with a TURN server configured
  • Attempt to connect to the application from a client on a different origin
  • Observe that the WebRTC connection fails to establish and the error message "ICE failed, your TURN server appears to be broken" is displayed in the console.

Expected Result: The WebRTC connection should successfully establish, allowing the client to participate in the video conference.

Actual Result: The WebRTC connection fails to establish and the error message "ICE failed, your TURN server appears to be broken" is displayed in the console.

Workarounds: None found.

Additional Information:

  • The TURN server has been configured correctly and is operational.
  • The server's firewall has been configured to allow incoming traffic on the required ports.
  • The WebRTC connection is successfully established when connecting from a client on the same origin.
  • The issue only occurs when attempting to connect to the application from a client on a different origin.
  • I'm developing a frontend-only application to connect to the server using the React application from membrane_videoroom hosted on a different origin. I've used the Corsica plug to allow all origins.

Environment: Membrane VideoRoom version: 2.0.1 Server OS: Ubuntu 20.04 Client OS: Windows 10 Browser: Google Chrome 97.0.4692.99

Attachments: Log files from the server and client are attached for further investigation.

Request: Could you please provide a beginner-friendly explanation of the root cause of the issue and suggest any possible solutions that could help resolve it? Thank you in advance for your help!

Problems specifying toilet capacity of Realtimer

I have a pipeline which fails intermittently on startup due to a toilet overflow of a Realtimer element. The element downstream of the realtimer is an RTMP sink.

In my pipeline I have specified the toilet capacity with via_in into the realtimer, as shown in the docs:

      # shortened for brevity
      |> child(:h264_encoder, %Membrane.H264.FFmpeg.Encoder{preset: :ultrafast})
      |> child(:h264_parser, %Membrane.H264.FFmpeg.Parser{
        framerate: {30, 1},
        alignment: :au,
        attach_nalus?: true,
        skip_until_keyframe?: false
      })
      |> child(:video_payloader, Membrane.MP4.Payloader.H264)
      |> via_in(:input, toilet_capacity: 500)
      |> child(:video_realtimer, Membrane.Realtimer)
      |> via_in(:video)
      |> get_child(:rtmp_sink),
      # shortened for brevity
      |> child(:aac_encoder, Membrane.AAC.FDK.Encoder)
      |> child(:aac_parser, %Membrane.AAC.Parser{
        in_encapsulation: :ADTS,
        out_encapsulation: :ADTS
      })
      |> via_in(:input, toilet_capacity: 500)
      |> child(:audio_realtimer, Membrane.Realtimer)
      |> via_in(:audio)
      |> get_child(:rtmp_sink)

However, when the pipeline crashes, I see the reported toilet size as 200 instead of the specified value:

10:58:09.755 [error] <0.591.0>/:audio_realtimer/ Toilet overflow.

Reached the size of 201, which is above toilet capacity (200)
when storing data from output working in push mode. It means that some element in the pipeline
processes the stream too slow or doesn't process it at all.
To have control over amount of buffers being produced, consider using output in pull mode
(see `Membrane.Pad.mode_t`).
You can also try changing the `toilet_capacity` in `Membrane.ChildrenSpec.via_in/3`.

Any thoughts on why the config isn't bubbling down to the realtimer?
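One hedged observation, not a confirmed diagnosis: each via_in/3 configures the toilet only on that particular link, so the default capacity of 200 can still apply to a different link around the realtimer, for example the one from the realtimer into the RTMP sink. It might be worth ruling out by setting the capacity on both links:

```elixir
# Capacity on the link into the realtimer...
|> via_in(:input, toilet_capacity: 500)
|> child(:audio_realtimer, Membrane.Realtimer)
# ...and on the link from the realtimer into the sink.
|> via_in(:audio, toilet_capacity: 500)
|> get_child(:rtmp_sink)
```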