mirror of https://github.com/edumeet/edumeet.git
synced 2026-01-23 02:34:58 +00:00

Adding central docs folder

This commit is contained in:
parent acb05f2c08
commit 43f8fe3b07

3 changed files with 154 additions and 0 deletions
101	docs/HAproxy.md	Normal file
@@ -0,0 +1,101 @@
# How to deploy a (room-based) load-balanced cluster

This example shows how to set up an HAProxy load balancer in front of several edumeet servers.

## IP and DNS

In this basic example we use the following names and IPs:

### Backend

* `mm1.example.com` <=> `192.0.2.1`
* `mm2.example.com` <=> `192.0.2.2`
* `mm3.example.com` <=> `192.0.2.3`

### Redis

* `redis.example.com` <=> `192.0.2.4`

### Load balancer HAproxy

* `meet.example.com` <=> `192.0.2.5`
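
For lab testing before real DNS records exist, the mapping above can be sketched as an `/etc/hosts` fragment (an illustration only; use proper DNS in production):

``` plaintext
# /etc/hosts (testing only)
192.0.2.1  mm1.example.com
192.0.2.2  mm2.example.com
192.0.2.3  mm3.example.com
192.0.2.4  redis.example.com
192.0.2.5  meet.example.com
```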

## Deploy multiple edumeet servers

This is most easily done using Ansible (see below), but can be done in any way you choose (manually, with Docker, or with Ansible).

Read more here: [mm-ansible](https://github.com/edumeet/edumeet-ansible)
[](https://asciinema.org/a/311365)

## Setup Redis for central HTTP session store

### Use one Redis for all edumeet servers

* Deploy a Redis cluster for all instances.
* In this example we use `192.0.2.4` as the Redis HA cluster IP. How to deploy the cluster itself is out of scope.

OR

* For testing you can use the Redis instance on one of the edumeet servers, e.g. on your first edumeet server.
* Configure Redis (`redis.conf`) to bind not only to your loopback address but also to your global IP address:

``` plaintext
bind 127.0.0.1 192.0.2.1
```

This example uses `192.0.2.1`; change it according to your local installation.
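
To verify that the other edumeet servers can actually reach Redis, a quick check from `mm2` or `mm3` (assuming `redis-cli` is installed there) would be:

``` plaintext
redis-cli -h 192.0.2.1 -p 6379 ping
```

A reachable, unauthenticated Redis answers `PONG`; a timeout usually means the bind address or firewall below is wrong.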

* Change your firewall configuration to allow incoming Redis connections. Example in ferm-style syntax (the exact form depends on your firewall):

``` plaintext
chain INPUT {
    policy DROP;

    saddr mm2.example.com proto tcp dport 6379 ACCEPT;
    saddr mm3.example.com proto tcp dport 6379 ACCEPT;
}
```
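
With plain iptables, equivalent rules would look like this (a sketch; adjust addresses and rule order to your setup):

``` plaintext
iptables -A INPUT -p tcp -s 192.0.2.2 --dport 6379 -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.3 --dport 6379 -j ACCEPT
```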

* **Set a password, or if you don't (like in this basic example), take care to set strict firewall rules**

## Configure edumeet servers

### Server config

mm/configs/server/config.js

``` js
redisOptions  : { host: '192.0.2.4' },
listeningPort : 80,
httpOnly      : true,
trustProxy    : ['192.0.2.5'],
```

## Deploy HA proxy

* Configure a certificate / Let's Encrypt for `meet.example.com`
* In this example we put the complete chain and private key in /root/certificate.pem.
* Install and set up HAProxy:

`apt install haproxy`
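
HAProxy expects the certificate chain and private key concatenated into one file. With Let's Encrypt the combined file can be built roughly like this (the `live/` paths are certbot's defaults and may differ on your system):

``` plaintext
cat /etc/letsencrypt/live/meet.example.com/fullchain.pem \
    /etc/letsencrypt/live/meet.example.com/privkey.pem > /root/certificate.pem
```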

* Add to the /etc/haproxy/haproxy.cfg config:

``` plaintext
backend edumeet
    balance url_param roomId
    hash-type consistent

    server mm1 192.0.2.1:80 check maxconn 2000 verify none
    server mm2 192.0.2.2:80 check maxconn 2000 verify none
    server mm3 192.0.2.3:80 check maxconn 2000 verify none

frontend meet.example.com
    bind 192.0.2.5:80
    bind 192.0.2.5:443 ssl crt /root/certificate.pem
    http-request redirect scheme https unless { ssl_fc }
    http-request set-header X-Forwarded-Proto https
    default_backend edumeet
```
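
Before putting the load balancer into service, the configuration can be validated and applied with the standard commands:

``` plaintext
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy
```

`balance url_param roomId` together with `hash-type consistent` pins every request for the same room to the same backend server, which is what makes room-based balancing work.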

5	docs/README.md	Normal file
@@ -0,0 +1,5 @@
# Documentation / Table Of Contents

## [Documentation of configuration for client/app](/app/README.md)
## [Documentation of configuration for server](/server/README.md)
## [Documentation of development environment with Docker](/compose/README.md)
48
docs/SCALING_AND_HARDWARE.md
Normal file
48
docs/SCALING_AND_HARDWARE.md
Normal file
|
|
@ -0,0 +1,48 @@
|
|||
# Scaling and recommended Hardware

## Recommended hardware for running mm per node

* MM scales by threads, so more cores are better
* 8GB RAM is enough
* Disk space is not so important - 10GB is a good start - but logs can get huge :)
* 1 Gbit/s network adapter, better with 2 or more ports + bonding
* If you have more than one network interface + public IPs, a TURN server can run on the same machine, but we recommend an extra TURN server or a distributed TURN service
* 1 TURN server per 4 Multiparty Meeting servers (the TURN server can have fewer CPUs / cores)

## Network

The bandwidth requirements are quite tunable on both client and server, but server downstream bandwidth to clients will be one of the largest constraints on the system. If you have 1 Gbit/s on your nodes, the number of users should not exceed ~600 per server node, and this can be run without a problem on a modern 8-core server. If you have higher bandwidth per node, the numbers scale up linearly (2 Gbit/s / 16 cores / 1200 users). Note that these are concurrent users, so if you anticipate ~10000 concurrent users, scale according to these numbers. The real number of concurrent users depends on the typical size of rooms (bigger is better), the lastN config (lower is better), the maxIncomingBitrate config (lower is better), and the use of simulcast.

## Example calculation

### 1 Server

* 4 cores - 8 threads:
* 8 threads x 500 = 4000 consumers / server node
* Bandwidth: 2000 audio consumers x 50 Kbps + 2000 video consumers x 400 Kbps (low quality - 320x240 VP8) = 100 Mbps + 800 Mbps = 900 Mbps
* A 1 Gbit/s connection should be enough for low quality
* For higher quality video you need **more** than 1 Gbit/s
* Additional servers scale this to 4000 consumers x number of server nodes
* Insufficient network bandwidth will reduce video quality automatically
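
The arithmetic above can be checked with a small shell sketch (numbers taken from this example; half the consumers are audio, half video):

```shell
# Per-node capacity and bandwidth for the example 8-thread server.
consumers_per_thread=500
threads=8
consumers=$(( threads * consumers_per_thread ))   # 4000 consumers per node

audio_kbps=50    # per audio consumer
video_kbps=400   # per low-quality (320x240 VP8) video consumer
bandwidth_mbps=$(( (consumers / 2 * audio_kbps + consumers / 2 * video_kbps) / 1000 ))

echo "${consumers} consumers, ${bandwidth_mbps} Mbps"   # 4000 consumers, 900 Mbps
```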

### Consumer:

* Def.: streams that are consumed by participants
* The former limitation (500 consumers per room) no longer exists
* Example 1 (referring to the example server above): lastN=1: max 2000 students can consume the audio + video stream from 1 lecturer (2 streams x 2000 students = 4000 consumers)
* Example 2 (referring to the example server above): lastN=5: rooms with 6 users each: 6 users x 5 remote users x 2 consumers = 60 consumers per room; 500 (consumers/thread) / 60 (consumers/room) = around 8 rooms / thread => around 50 concurrent users per thread => 400 concurrent users per server
* Example 3 (referring to the example server above): lastN=5: 1 big room: 4000 [consumers] / 10 [consumers/participant] = 400 [participants]. That's the maximum number of participants per server for lastN=5 in one big room.
* Example 4 (referring to the example server above): lastN=25: 1 big room: 4000 [consumers] / 50 [consumers/participant] = 80 [participants]. That's the maximum number of participants per server for lastN=25 in one single big room.
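
All of these examples follow one formula: consumers per room = room size x lastN x 2 (one audio + one video stream per shown participant). A shell sketch of Example 2:

```shell
# Example 2: lastN=5, rooms of 6 users, on the 4000-consumer example server.
threads=8
consumers_per_thread=500
last_n=5
room_size=6

consumers_per_room=$(( room_size * last_n * 2 ))                  # 60
rooms_per_thread=$(( consumers_per_thread / consumers_per_room )) # 8
users_per_server=$(( threads * rooms_per_thread * room_size ))    # 384, i.e. ~400

echo "${consumers_per_room} consumers/room, ~${users_per_server} users/server"
```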

### Bandwidth:

* Configurable: maxIncomingBitrate per participant in the server config
* Low video bandwidth is around 160 Kbps (240p VP8)
* Typical acceptable good video bandwidth is around 800-1000 Kbps (720p)
* Simulcast / SVC can be activated to provide different clients with different bandwidths

## Scaling

You can set up more than one server with the same configuration and load balance with HAProxy:
https://github.com/havfo/multiparty-meeting/blob/master/HAproxy.md
This will scale linearly.

### Limitations / work in progress / ToDo

You can fine-tune the maximum number of active streams in the same room by setting the lastN parameter in server/config/config.js - this then applies globally to the whole server installation. Clients can override this locally in advanced settings.
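
For example, in server/config/config.js (the setting name is as used above; the value here is only an illustration):

``` js
// Max number of simultaneously active streams shown per room, server-wide.
lastN : 4,
```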

There is heavy development on separating the signal/control part from the media part (branch **[feat-media-node](https://github.com/havfo/multiparty-meeting/tree/feat-media-node)**). When this is ready, you can fire up several media nodes completely separated from signal/control. For multi-tenant use you can install one server node (or more for redundancy) per tenant with separate configurations/domains and share all media nodes across tenants. One room can then spread over several media nodes, so the maximum number of participants is limited only by the size of your infrastructure.

Right now simulcast is supported and we are working on using bandwidth more effectively. Small video windows don't need high-quality video streams, so we should switch to lower-quality streams according to the video container sizes on screen. This could enable a much higher lastN.