Tag: room

  • MatrixRTC aka Element-call setup (Geek warning)

    This is another geeky and technical post detailing how to get Matrix VOIP running as a self-hosted service. It is about setting up the SFU, or Selective Forwarding Unit, i.e. the VOIP backend driving your self-hosted MatrixRTC instance. Some time after publication of this post, Element improved its own documentation for self-hosting, which you should also check out here. For any MatrixRTC call, these are the components involved:

    1. A matrix homeserver where you have control over the .well-known/matrix/client file. Its installation is not described here.
    2. An instance of the element-call widget (the frontend of element-call). This is about to be bundled with the Element-web and Element X mobile clients.
    3. lk-jwt-service: A small go binary providing auth tokens
    4. Livekit: The SFU or Selective Forwarding Unit. It takes a video/audio stream and multiplexes it to those users who need it. This way, users in a video conference do not need to send media to ALL other participants (which does NOT scale).
    5. A few rules poking holes into your firewall to make livekit work.
    6. Nginx proxying a few routes from livekit and lk-jwt-service to https://livekit.sspaeth.de.
    7. The livekit-provided TURN server. I have enabled it, but not verified that/if it actually works.

    Details

    My main domain is sspaeth.de, the homeserver lives on matrix.sspaeth.de, and everything livekit-related lives on livekit.sspaeth.de. I install this on a Debian server using a mix of local services and docker images. OK. Let’s start.

    1. Homeserver configuration

    You need to enable this in your config:

    experimental_features:
      # room previews
      msc3266_enabled: true
      # MSC4222 is needed for sync v2 state_after; it allows clients to
      # correctly track the state of the room.
      msc4222_enabled: true
    
    # enable delayed events, so we quit calls that are interrupted somehow
    max_event_delay_duration: 24h
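
    After changing homeserver.yaml, the homeserver has to be restarted to pick up these settings. On my Debian box that is just a systemctl call; the unit name below assumes a packaged Synapse install, adapt it to your homeserver:

    sudo systemctl restart matrix-synapse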

    2. Define the Element Call widget (see update at end of section)

    First, when you start a video call, the Element Web or Element X Android/iOS clients (EW/EXA/EXI) look up where to load the Element Call widget from. This is configured at https://sspaeth.de/.well-known/element/element.json. I chose not to self-host this for now and am simply using the provided https://call.element.io/. Self-hosting this would be possible and e.g. necessary if your country/ISP blocks requests to call.element.io. Just create a small text file containing:

    {"call": {"widget_url": "https://call.element.io/room"}}

    Modify this if you self-host Element Call.
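
    To verify that the file is actually served (and not mangled by your webserver), a quick check with any HTTP client should return exactly the JSON above; curl here is just my choice:

    curl -i https://sspaeth.de/.well-known/element/element.json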

    UPDATE MAR 2025: This frontend is about to be bundled with the Element Web and Element X mobile clients (EXA 25.3.3), so you will not need an externally hosted widget frontend with these clients and can ignore this entire section.

    3. Telling the participants which SFU to use

    Second, we need to tell the clients which SFU to use. This is done by adding

    "org.matrix.msc4143.rtc_foci":[
     {"type": "livekit",
      "livekit_service_url": "https://livekit.sspaeth.de"}]

    to https://sspaeth.de/.well-known/matrix/client. Strictly speaking, the client will query the lk-jwt-service at https://livekit.sspaeth.de/sfu/get, which returns the livekit SFU instance together with a JWT SFU access token. The current algorithm is to use the SFU of the oldest participant (the one who started the call).
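
    For illustration, this key simply sits next to the usual entries in that file; a minimal .well-known/matrix/client for my setup could then look roughly like this (the m.homeserver part is of course specific to my domain):

    {
      "m.homeserver": {
        "base_url": "https://matrix.sspaeth.de"
      },
      "org.matrix.msc4143.rtc_foci": [
        {
          "type": "livekit",
          "livekit_service_url": "https://livekit.sspaeth.de"
        }
      ]
    }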

    4. lk-jwt-service

    It generates JWT tokens with a given identity for a given room, so that users can use them to authenticate against the LiveKit SFU. I dabbled with creating a docker image for this, but that was too complicated for me. Given it is a single binary, I compiled it locally and just run it on the Debian server. This is how to do it:

    a) Check out the git repo at https://github.com/element-hq/lk-jwt-service; I put it into /opt/lk-jwt-service.
    b) With a go compiler installed, compile it: /usr/lib/go-1.22/bin/go build -o lk-jwt-service (use just “go” instead of the full path if go is in your PATH). If this succeeds, you’ll end up with the binary lk-jwt-service as a result. If you execute it from a shell, it will print “LIVEKIT_KEY, LIVEKIT_SECRET and LIVEKIT_URL environment variables must be set” and exit.
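
    Summarized as commands, steps a) and b) boil down to roughly this (assuming git is installed and go is in your PATH):

    cd /opt
    git clone https://github.com/element-hq/lk-jwt-service
    cd lk-jwt-service
    go build -o lk-jwt-service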

    Next, I create a systemd service to start it automatically. Note that the values for LIVEKIT_SECRET and LIVEKIT_KEY need to be taken from the livekit configuration (the keys entry in livekit.yaml below).

    [Unit]
    Description=LiveKit JWT Service
    After=network.target
    [Service]
    Restart=always
    User=www-data
    Group=www-data
    WorkingDirectory=/opt/lk-jwt-service
    Environment="LIVEKIT_URL=wss://livekit.sspaeth.de"
    Environment="LIVEKIT_SECRET=this_is_a_secret_from_the_livekit.yaml"
    Environment="LIVEKIT_KEY=this_is_a_key_from_the_livekit.yaml"
    Environment="LIVEKIT_JWT_PORT=8081"
    ExecStart=/opt/lk-jwt-service/lk-jwt-service
    [Install]
    WantedBy=multi-user.target
    

    P.S. Yes, it would be more elegant to put those environment variables into a separate file instead of hardcoding them in the service file.
    P.P.S. Note that I am running this on port 8081 instead of the default 8080, because 8080 is already in use on this box.
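
    If you prefer the more elegant route, a sketch would be to put the variables into a separate file (the path /etc/default/lk-jwt-service is just a convention I picked) and reference it from the unit with EnvironmentFile:

    # /etc/default/lk-jwt-service
    LIVEKIT_URL=wss://livekit.sspaeth.de
    LIVEKIT_SECRET=this_is_a_secret_from_the_livekit.yaml
    LIVEKIT_KEY=this_is_a_key_from_the_livekit.yaml
    LIVEKIT_JWT_PORT=8081

    # in the [Service] section, replace the Environment= lines with:
    EnvironmentFile=/etc/default/lk-jwt-service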

    Last but not least, you need to proxy two routes (/sfu/get and /healthz) on https://livekit.sspaeth.de. The nginx rules are in the nginx section below.

    If you enable and start the lk-jwt-service via systemctl, you should be able to open https://livekit.sspaeth.de/healthz in a web browser and get an empty page back (aka HTTP status 200). For testing purposes you can also start the service manually and observe its output by executing (all in one line!):

    LIVEKIT_URL=wss://livekit.sspaeth.de LIVEKIT_SECRET=this_is_a_secret_from_the_livekit.yaml LIVEKIT_KEY=this_is_a_key_from_the_livekit.yaml LIVEKIT_JWT_PORT=8081 /opt/lk-jwt-service/lk-jwt-service
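
    Concretely, the systemd route looks like this on my box (assuming the unit was saved as /etc/systemd/system/lk-jwt-service.service and nginx already proxies /healthz as described below):

    sudo systemctl daemon-reload
    sudo systemctl enable --now lk-jwt-service
    # expect an HTTP 200 with an empty body
    curl -i https://livekit.sspaeth.de/healthz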

    5. livekit

    I installed this one via the precompiled docker image and did not go down the “Just execute this shell script as root and trust us”™ route.

    I generated an initial configuration file using the livekit/generate image and pruned most of the resulting stuff, as I do not run caddy, for instance, and I do not want these scripts to install packages and stuff on my main host.

    5.1 Generating the initial configuration

    docker pull livekit/generate
    docker run --rm -it -v$PWD:/output livekit/generate

    The above creates a folder with the name of the domain you provided, containing the following files: caddy.yaml, docker-compose.yaml, livekit.yaml, redis.conf, init_script.sh.

    I discarded most of the above (but built on the resulting livekit.yaml and docker-compose.yaml). I found it particularly useful that it creates an API key and secret in livekit.yaml.
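
    If all you want from livekit/generate is the API key and secret, the server image can, as far as I know, also generate a pair on its own (check the livekit docs if this subcommand changed):

    docker run --rm livekit/livekit-server generate-keys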

    5.2 Final configuration and setup

    This is the docker-compose.yaml file that I ended up using to create my livekit container:

    services:
      livekit-docker:
        image: livekit/livekit-server:latest
        command: --config /etc/livekit.yaml
        restart: unless-stopped
        network_mode: "host"
        volumes:
          - ./livekit.yaml:/etc/livekit.yaml
          - /run/redis/redis-server.sock:/run/redis.sock
    

    Running docker-compose up --no-start resulted in this output

    Creating livekitsspaethde_livekit-docker_1 … done

    This is my /etc/systemd/system/livekit.service file to get livekit started:

    [Unit]
    Description=LiveKit Server Container
    After=docker.service
    Requires=docker.service
    After=network.target
    Documentation=https://docs.livekit.io
    
    [Service]
    LimitNOFILE=500000
    Restart=on-failure
    WorkingDirectory=/etc/livekit
    # start -a attaches STDOUT/STDERR so we get log output and prevents forking
    ExecStart=docker start -a livekitsspaethde_livekit-docker_1
    ExecStop=docker stop livekitsspaethde_livekit-docker_1
    
    [Install]
    WantedBy=multi-user.target
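
    With the container created and the unit file in place, the usual systemd dance starts livekit and enables it at boot:

    sudo systemctl daemon-reload
    sudo systemctl enable --now livekit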
    

    This is my final livekit.yaml config file

    port: 7880
    bind_addresses:
        - ""
    rtc:
        tcp_port: 7881
        port_range_start: 50000
        port_range_end: 60000
        use_external_ip: true
        enable_loopback_candidate: false
    turn:
        enabled: true
        domain: livekit.sspaeth.de
        # without a load balancer this is supposed to be port 443; I am not using 443 here, as my port 443 is already occupied.
        tls_port: 5349
        udp_port: 3478
        external_tls: true
    keys:
        # the key: secret pair was autogenerated by livekit/generate; the same
        # values go into the lk-jwt-service environment variables (LIVEKIT_KEY / LIVEKIT_SECRET)
        APIXCVSDFldksef: DLKlkddfgkjldhjkndfjkldfgkkldkdflkfdglk
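
    A quick local sanity check once the container runs: to my knowledge the livekit HTTP port answers with a plain OK on the root path, so this should return HTTP 200 (if not, check the container logs):

    curl -i http://localhost:7880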
    
    

    5.3 Firewall rules

    I allow inbound traffic on 7881/tcp as well as 3478/udp and 50000:60000/udp; these ports never go through the nginx proxy.
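
    On my box that translates into roughly the following; I use ufw here purely as an example, adapt it to nftables/iptables or whatever firewall you run:

    ufw allow 7881/tcp
    ufw allow 3478/udp
    ufw allow 50000:60000/udp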

    6. nginx configuration

    I am not sure if all of those proxy headers actually need to be set, but they don’t hurt. These

    proxy_set_header Connection "upgrade";
    proxy_set_header Upgrade $http_upgrade;

    are actually important though (they enable the protocol upgrade from HTTPS to WebSocket)!

    server {
        access_log /var/log/nginx/livekit.sspaeth.de.log;
        error_log /var/log/nginx/livekit.sspaeth.de.error;
        listen 123.123.123.123:443 ssl;
        listen [dead:beef:dead:beef:dead:beef:dead:beef]:443 ssl;
        ssl_certificate XXXX;
        ssl_certificate_key XXXX;
        server_name livekit.sspaeth.de;
    
        # This is lk-jwt-service
        location ~ ^(/sfu/get|/healthz) {
            proxy_pass http://[::1]:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
        #and this is livekit
        location / {
           proxy_pass http://localhost:7880;
           proxy_set_header Connection "upgrade";
           proxy_set_header Upgrade $http_upgrade;
           #add_header Access-Control-Allow-Origin "*" always;
    
           proxy_set_header Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
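
    After dropping this server block into your nginx configuration, a syntax check and a reload activate it:

    sudo nginx -t
    sudo systemctl reload nginx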

    Bonus section: testing your setup

    https://livekit.io/connection-test can be used to test your livekit setup. You “just” need to catch a room token from the jwt service. I do that by starting a call in a room with the Firefox dev tools network console open, filtering for “/sfu/get” URLs and catching the POST request to https://livekit.sspaeth.de/sfu/get. The response to that is a JSON blob containing the jwt token.

    You can then go to https://livekit.io/connection-test, enter wss://livekit.sspaeth.de as the livekit URL and the long access token as the room token. This will test whether livekit is reachable, all ports are open, TURN is enabled and working, etc.