The problem
My previous post got the video from my smartphone to show up as a
camera device on my desktop, but for a video chat, we probably also
want audio. So, now the question is: how to build GStreamer pipelines
that will allow minimal-webrtc-gstreamer to use virtual microphone
and speaker devices that I can point a voice/video chat application
at, so I can use my smartphone's microphone and speaker for
applications on my desktop.
The solution
The following requires that you are using PulseAudio as your sound
server and have downloaded minimal-webrtc-gstreamer:
pactl load-module module-null-sink sink_name=virtspk \
sink_properties=device.description=Virtual_Speaker
pactl load-module module-null-sink sink_name=virtmic \
sink_properties=device.description=Virtual_Microphone_Sink
pactl load-module module-remap-source \
master=virtmic.monitor source_name=virtmic \
source_properties=device.description=Virtual_Microphone
./minimal-webrtc-host.py\
--url "https://apps.aweirdimagination.net/camera/"\
--receiveAudioTo device=virtmic\
--sendAudio "pulsesrc device=virtspk.monitor"\
--sendVideo false --receiveVideo false
You can reset your PulseAudio configuration by killing PulseAudio:
pulseaudio -k
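Alternatively, you can remove just these virtual devices without
restarting PulseAudio by unloading the modules by name (note that
this unloads every instance of each module, including any you loaded
for other purposes):
pactl unload-module module-remap-source
pactl unload-module module-null-sink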
You can make the PulseAudio settings permanent by following
these instructions to put them in your default.pa
file.
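For this setup, that means adding lines like the following to your
default.pa (the same modules and arguments as the pactl commands
above):
load-module module-null-sink sink_name=virtspk sink_properties=device.description=Virtual_Speaker
load-module module-null-sink sink_name=virtmic sink_properties=device.description=Virtual_Microphone_Sink
load-module module-remap-source master=virtmic.monitor source_name=virtmic source_properties=device.description=Virtual_Microphone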
The details
Unix pipe PulseAudio module
This answer gives directions for setting up a virtual microphone
device that takes its input from a Unix pipe using
module-pipe-source, but no instructions for using it with GStreamer.
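For reference, loading that module looks something like this; the
pipe path, sample rate, and format here are illustrative and have to
match both what PulseAudio expects and what GStreamer writes:
pactl load-module module-pipe-source source_name=virtmic \
file=/tmp/virtmic format=s16le rate=44100 channels=1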
GStreamer supports redirecting to a file using the filesink element.
As usual when outputting to a file, the default is to write as fast
as possible; since a microphone is expected to produce audio in real
time, that behavior has to be overridden using the sync=true
property.
Additionally, the proper capabilities have to be set to match what
PulseAudio expects to be coming in on that pipe... which I don't
think I ever got correct, since I never managed to get it to sound
right with this method. Here's the code I had, which doesn't quite
work:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# These caps need to match module-pipe-source's configuration.
caps = Gst.Caps.from_string("audio/x-raw,format=S16LE,channels=1")
capsfilter = Gst.ElementFactory.make("capsfilter", "afilter")
capsfilter.set_property("caps", caps)
sink = Gst.ElementFactory.make("filesink")
sink.set_property("location", "path/to/virtmic")
# sync=True: write in real time, not as fast as possible.
sink.set_property("sync", True)
Null output PulseAudio module
With further research, I found multiple posts talking about making virtual microphones in order to combine microphone and game/media output into the same stream. This one is the most detailed one I found.
The core concept is PulseAudio's module-null-sink module. In
PulseAudio, a "sink" is usually a speaker or other physical device
that actually plays the sound in the real world. A "null" sink is a
device that can be set as a sound output for a program but discards
the sound instead of sending it to a real device (equivalent to
redirecting a pipe to /dev/null). But it's still useful because every
sink has an associated "monitor" source that plays back all sound
received on the corresponding sink.
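As a quick illustration of the monitor concept, anything played to a
null sink can be recorded back from its monitor source (the sink name
demo and the file names are arbitrary):
pactl load-module module-null-sink sink_name=demo
paplay --device=demo test.wav &
parecord --device=demo.monitor captured.wav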
We use null sinks to define two new devices which correspond to the remote speaker and microphone:
pactl load-module module-null-sink sink_name=virtspk \
sink_properties=device.description=Virtual_Speaker
pactl load-module module-null-sink sink_name=virtmic \
sink_properties=device.description=Virtual_Microphone_Sink
(Note the use of underscores ("_") instead of spaces in the device
descriptions; I couldn't figure out how to properly escape spaces in
those, even though they're clearly allowed, as existing devices have
spaces in their descriptions.)
Remap source
While the null sink automatically includes a "monitor" source, many
programs know to exclude monitors when listing microphones. To work
around that, the module-remap-source
module
lets us clone that source to another one not labeled as being a monitor:
pactl load-module module-remap-source \
master=virtmic.monitor source_name=virtmic \
source_properties=device.description=Virtual_Microphone
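You can confirm the remapped source shows up (and is not marked as a
monitor) by listing sources:
pactl list short sources
The list should now include both virtmic.monitor and the remapped
virtmic.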
Connecting to minimal-webrtc-host.py
We want to pipe the received audio (from the remote microphone) to
the virtual microphone device (virtmic) and send the monitor of the
virtual speaker device (virtspk.monitor) to the remote speaker:
./minimal-webrtc-host.py\
--url "https://apps.aweirdimagination.net/camera/"\
--receiveAudioTo device=virtmic\
--sendAudio "pulsesrc device=virtspk.monitor"\
--sendVideo false --receiveVideo false
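Once the host script is connected, you can test both directions
without involving a chat application (sound file names hypothetical):
audio played to the virtual speaker should come out of the
smartphone, and recording from the virtual microphone should capture
whatever the smartphone's microphone hears.
paplay --device=virtspk test.wav
parecord --device=virtmic from-phone.wav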
Connecting more audio streams with loopback module
The tutorial I linked above discusses using
module-loopback
to pipe audio among various
devices, so, for example, the monitor of your actual speakers could be
sent to the remote microphone and/or speakers.
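For instance, sending the monitor of your desktop speakers to the
remote speaker might look like the following, where the alsa_output...
name is a placeholder for your actual output device (find yours with
pactl list short sinks):
pactl load-module module-loopback \
source=alsa_output.pci-0000_00_1f.3.analog-stereo.monitor \
sink=virtspk latency_msec=20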
pavucontrol makes it easy to select which devices different programs
and modules use, so you don't need to know the PulseAudio device
names, and you can change which device is being used even if the
program doesn't have an interface to do so.