SFU!!! (Geek warning)

No, SFU does not mean what you might think. This is another geeky and technical post detailing how to get Matrix VOIP running as a self-hosted service. It is about the SFU, or Selective Forwarding Unit: the VOIP backend driving your self-hosted MatrixRTC instance. These are the components involved:

  1. A Matrix homeserver where you have control over the .well-known/matrix/client file. Its installation is not described here.
  2. An instance of the element-call widget. I chose not to self-host this for now and am simply using the provided https://call.element.io/. Self-hosting this would be possible, and even necessary if, e.g., your country/ISP blocks requests to call.element.io.
  3. Redis: a message broker, which turned out to be unnecessary.
  4. lk-jwt-service: A small Go binary providing auth tokens.
  5. Livekit: The SFU or Selective Forwarding Unit. It takes a video/audio stream and forwards it selectively to the users who need it. This way, users in a video conference do not need to send media to ALL other participants (which does NOT scale).
  6. A few rules poking holes into your firewall to make livekit work.
  7. Nginx proxying a few routes from livekit and lk-jwt-service to https://livekit.sspaeth.de.
  8. The livekit-provided TURN server. I have enabled it, but have not verified whether it actually works.

Details

My main domain is sspaeth.de, the homeserver lives on matrix.sspaeth.de, and everything livekit-related lives on livekit.sspaeth.de. I install this on a Debian server using a mix of local services and docker images. OK. Let’s start.

1. Homeserver configuration

You need to enable the following in your homeserver config (homeserver.yaml, in Synapse terms):

experimental_features:
  #room previews
  msc3266_enabled: true
# enable delayed events, so we quit calls that are interrupted somehow
max_event_delay_duration: 24h
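
After restarting the homeserver, you can do a quick sanity check; my assumption here is that Synapse advertises both features under unstable_features in the versions endpoint:

# should print the two feature flags if they are enabled
curl -s https://matrix.sspaeth.de/_matrix/client/versions | grep -o 'msc3266[^,]*\|msc4140[^,]*'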

2. Define the Element Call widget

First, when you start a video call, the Element Web or Element X Android/iOS clients (EW/EXA/EXI) look up where to load the Element Call widget from. This is configured via the file served at https://sspaeth.de/.well-known/element/element.json. Just create a small text file containing:

{"call": {"widget_url": "https://call.element.io"}}

Modify this if you self-host Element Call. I was told that the long-term plan is to bundle an element-call widget with the clients, so this configuration might be a temporary thing. Although, nothing is as permanent as a stopgap solution… 😉
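
A quick check that the file is actually reachable and served as-is:

curl -i https://sspaeth.de/.well-known/element/element.json
# expect HTTP 200 and the one-line JSON from above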

3. Telling the participants which SFU to use

Second, we need to tell the clients which SFU to use. This is done by adding

"org.matrix.msc4143.rtc_foci":[
 {"type": "livekit",
  "livekit_service_url": "https://livekit.sspaeth.de"}]

to https://sspaeth.de/.well-known/matrix/client. I believe that currently the starter of a video conference is the one who decides which SFU all participants use.
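
To spare you some JSON puzzling: the complete .well-known/matrix/client might end up looking like this (the m.homeserver block is whatever you already serve there; its base_url below is my assumption for this setup):

{
    "m.homeserver": {
        "base_url": "https://matrix.sspaeth.de"
    },
    "org.matrix.msc4143.rtc_foci": [
        {
            "type": "livekit",
            "livekit_service_url": "https://livekit.sspaeth.de"
        }
    ]
}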

4. Redis

Redis is a message broker; apparently it is only needed for horizontal scaling across multiple livekit instances, so you can skip this step. YAY! For the record, here is what I had set up anyway.

I used the Debian-supplied one by doing sudo apt install redis. You can configure redis to listen on a TCP port or on a unix socket. My /etc/redis/redis.conf contains these lines, so the unix socket is where connections are established:

unixsocket /run/redis/redis-server.sock
unixsocketperm 770

That is all I did about redis; nothing else was needed.
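If you want to verify that the socket works, redis-cli can talk to it directly (because of the 770 permissions above, your user presumably needs to be in the redis group):

redis-cli -s /run/redis/redis-server.sock ping
# expected reply: PONG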

5. lk-jwt-service

lk-jwt-service generates JWT tokens with a given identity for a given room, so that users can use them to authenticate against the LiveKit SFU. I dabbled with creating a docker image for this, but that was too complicated for me. Given that it is a single binary, I compiled it locally and just run it on the Debian server. This is how to do it:

a) Check out the git repo at https://github.com/element-hq/lk-jwt-service; I put it into /opt/lk-jwt-service.
b) With a Go compiler installed, compile it: /usr/lib/go-1.22/bin/go build -o lk-jwt-service (use just “go” instead of the full path if go is in your PATH). If this succeeds, you’ll end up with the binary lk-jwt-service. If you execute it from a shell, it will print LIVEKIT_KEY, LIVEKIT_SECRET and LIVEKIT_URL environment variables must be set and exit; see the shell summary below.
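
The whole thing in shell terms (the checkout path and the go binary location are from my setup; adjust as needed):

cd /opt
git clone https://github.com/element-hq/lk-jwt-service
cd lk-jwt-service
/usr/lib/go-1.22/bin/go build -o lk-jwt-service
# running it without configuration should print the
# "environment variables must be set" message and exit
./lk-jwt-service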

Next, I created a systemd service to start it automatically. Note that the LIVEKIT_SECRET and LIVEKIT_KEY values need to be taken from the livekit configuration (section 6 below).

[Unit]
Description=LiveKit JWT Service
After=network.target

[Service]
Restart=always
User=www-data
Group=www-data
WorkingDirectory=/opt/lk-jwt-service
Environment="LIVEKIT_URL=wss://livekit.sspaeth.de"
Environment="LIVEKIT_SECRET=this_is_a_secret_from_the_livekit.yaml"
Environment="LIVEKIT_KEY=this_is_a_key_from_the_livekit.yaml"
Environment="LK_JWT_PORT=8081"
ExecStart=/opt/lk-jwt-service/lk-jwt-service

[Install]
WantedBy=multi-user.target

P.S. Yes, it would be more elegant to put those environment variables into a separate file instead of hardcoding them in the service file; see the sketch below.
P.P.S. Note that I am running this on port 8081 instead of the default 8080, because 8080 is already in use on this box.
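
For the record, that variant could look like this (the path /etc/default/lk-jwt-service is my arbitrary choice):

# /etc/default/lk-jwt-service
LIVEKIT_URL=wss://livekit.sspaeth.de
LIVEKIT_SECRET=this_is_a_secret_from_the_livekit.yaml
LIVEKIT_KEY=this_is_a_key_from_the_livekit.yaml
LK_JWT_PORT=8081

# and in the [Service] section, replacing the Environment= lines:
EnvironmentFile=/etc/default/lk-jwt-service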

Last but not least, you need to proxy two routes (/sfu/get and /healthz) on https://livekit.sspaeth.de; the nginx rules are in the nginx section below.

If you enable and start lk-jwt-service via systemctl, you should be able to open https://livekit.sspaeth.de/healthz in a web browser and get an empty page (aka HTTP status 200). For testing purposes you can also start the service manually and observe its output by executing (all in one line!):

LIVEKIT_URL=wss://livekit.sspaeth.de LIVEKIT_SECRET=this_is_a_secret_from_the_livekit.yaml LIVEKIT_KEY=this_is_a_key_from_the_livekit.yaml LK_JWT_PORT=8081 /opt/lk-jwt-service/lk-jwt-service
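
The enable-and-start dance, for completeness (assuming the unit file was saved as /etc/systemd/system/lk-jwt-service.service):

sudo systemctl daemon-reload
sudo systemctl enable --now lk-jwt-service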

6. livekit

I installed this one via the precompiled docker image and did not go the “Just execute this shell script as root and trust us”™ route.

I generated an initial configuration file using the livekit/generate image (see 6.1 below) and pruned most of the resulting stuff, as I do not run caddy, for instance, and I do not want these scripts installing things on my main host.

6.1 Generating the initial configuration

docker pull livekit/generate
docker run --rm -it -v$PWD:/output livekit/generate

The above creates a folder named after the domain you provided, containing the following files: caddy.yaml, docker-compose.yaml, livekit.yaml, redis.conf, init_script.sh.

I discarded most of these (but built on the resulting livekit.yaml and docker-compose.yaml). I found it particularly useful that it creates an API key and secret in livekit.yaml.

6.2 Final configuration and setup

This is the docker-compose.yaml file that I ended up using to create my livekit container:

services:
  livekit-docker:
    image: livekit/livekit-server:latest
    command: --config /etc/livekit.yaml
    restart: unless-stopped
    network_mode: "host"
    volumes:
      - ./livekit.yaml:/etc/livekit.yaml
      - /run/redis/redis-server.sock:/run/redis.sock

Running docker-compose up --no-start resulted in this output:

Creating livekitsspaethde_livekit-docker_1 … done

This is my /etc/systemd/system/livekit.service file to get livekit started:

[Unit]
Description=LiveKit Server Container
After=docker.service
Requires=docker.service
After=network.target
Documentation=https://docs.livekit.io

[Service]
LimitNOFILE=500000
Restart=on-failure
WorkingDirectory=/etc/livekit
# start -a attaches STDOUT/STDERR so we get log output and prevents forking
ExecStart=docker start -a livekitsspaethde_livekit-docker_1
ExecStop=docker stop livekitsspaethde_livekit-docker_1

[Install]
WantedBy=multi-user.target

This is my final livekit.yaml config file:

port: 7880
bind_addresses:
    - ""
rtc:
    tcp_port: 7881
    port_range_start: 50000
    port_range_end: 60000
    use_external_ip: true
    enable_loopback_candidate: false
turn:
    enabled: true
    domain: livekit.sspaeth.de
    # without a load balancer this is supposed to be port 443; I am not doing that, as my port 443 is already occupied.
    tls_port: 5349
    udp_port: 3478
    external_tls: true
keys:
    # the key: secret pair below was autogenerated by livekit/generate;
    # it must match LIVEKIT_KEY and LIVEKIT_SECRET in the lk-jwt-service environment
    APIXCVSDFldksef: DLKlkddfgkjldhjkndfjkldfgkkldkdflkfdglk
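
With the config in place, I enable the container via the systemd unit from above and check that livekit answers locally (as far as I know, the root path replies with a plain OK):

sudo systemctl daemon-reload
sudo systemctl enable --now livekit
# watch the container logs during the first start
docker logs -f livekitsspaethde_livekit-docker_1
# livekit should answer locally even before nginx is set up
curl -i http://localhost:7880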

6.3 Firewall rules

I allow inbound 7881/tcp as well as 3478/udp and 50000:60000/udp; these ports never go through the nginx proxy.
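
As a sketch, using ufw (adapt to whatever firewall you run):

sudo ufw allow 7881/tcp          # livekit RTC over TCP
sudo ufw allow 3478/udp          # TURN over UDP
sudo ufw allow 50000:60000/udp   # livekit RTC media port range
# if the built-in TURN-over-TLS is used, 5349/tcp presumably
# needs to be reachable as well; I have not verified this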

7. nginx configuration

I am not sure whether all of those proxy headers actually need to be set, but they don't hurt. These two

proxy_set_header Connection "upgrade";
proxy_set_header Upgrade $http_upgrade;

are actually important though (they enable the upgrade from HTTPS to websocket)!

server {
    access_log /var/log/nginx/livekit.sspaeth.de.log;
    error_log /var/log/nginx/livekit.sspaeth.de.error;
    listen 123.123.123.123:443 ssl;
    listen [dead:beef:dead:beef:dead:beef:dead:beef]:443 ssl;
    ssl_certificate XXXX;
    ssl_certificate_key XXXX;
    server_name livekit.sspaeth.de;

    # This is lk-jwt-service
    location ~ ^(/sfu/get|/healthz) {
        proxy_pass http://[::1]:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    # and this is livekit
    location / {
        proxy_pass http://localhost:7880;
        proxy_set_header Connection "upgrade";
        proxy_set_header Upgrade $http_upgrade;
        #add_header Access-Control-Allow-Origin "*" always;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
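
Once nginx reloads cleanly, both backends can be smoke-tested through the proxy:

# lk-jwt-service via nginx: expect HTTP 200 and an empty body
curl -i https://livekit.sspaeth.de/healthz
# livekit via nginx: expect HTTP 200 and (as far as I know) a plain OK
curl -i https://livekit.sspaeth.de/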