Re-add UDPTunnel fallback to WebRTC version

Jonas Herzig 2020-11-25 16:58:07 +01:00
commit 506a799592
9 changed files with 138 additions and 101 deletions

README.md

@@ -1,20 +1,18 @@
-**If you do not have specific requirements, please consider using the `webrtc` version instead: https://github.com/Johni0702/mumble-web/tree/webrtc (note that setup instructions differ significantly).
-It should be near identical in features but less susceptible to performance issues. If you are having trouble with the `webrtc` version, please let us know.**
-PRs, unless webrtc-specific, should still target `master`.
 # mumble-web
 mumble-web is an HTML5 [Mumble] client for use in modern browsers.
-A live demo is running [here](https://voice.johni0702.de/?address=voice.johni0702.de&port=443/demo).
+A live demo is running [here](https://voice.johni0702.de/?address=voice.johni0702.de&port=443/demo) (or [without WebRTC](https://voice.johni0702.de/?address=voice.johni0702.de&port=443/demo&webrtc=false)).
 The Mumble protocol uses TCP for control and UDP for voice.
 Running in a browser, both are unavailable to this client.
-Instead Websockets are used for all communications.
-libopus, libcelt (0.7.1) and libsamplerate, compiled to JS via emscripten, are used for audio decoding.
-Therefore, at the moment only the Opus and CELT Alpha codecs are supported.
+Instead Websockets are used for control and WebRTC is used for voice (using Websockets as fallback if the server does not support WebRTC).
+In WebRTC mode (default) only the Opus codec is supported.
+In fallback mode, when WebRTC is not supported by the server, only the Opus and CELT Alpha codecs are supported.
+This is accomplished with libopus, libcelt (0.7.1) and libsamplerate, compiled to JS via emscripten.
+Performance is expected to be less reliable (especially on low-end devices) than in WebRTC mode and loading time will be significantly increased.
 Quite a few features, most noticeably all
 administrative functionallity, are still missing.
@@ -23,7 +21,7 @@ administrative functionallity, are still missing.
 #### Download
 mumble-web can either be installed directly from npm with `npm install -g mumble-web`
-or from git:
+or from git (recommended because the npm version may be out of date):
 ```
 git clone https://github.com/johni0702/mumble-web
@@ -38,34 +36,14 @@ to e.g. customize the theme before building it.
 Either way you will end up with a `dist` folder that contains the static page.
 #### Setup
-At the time of writing this there seems to be only one Mumble server (which is [grumble](https://github.com/mumble-voip/grumble))
-that natively support Websockets. To use this client with any other standard mumble
-server, websockify must be set up (preferably on the same machine that the
-Mumble server is running on).
-You can install websockify via your package manager `apt install websockify` or
-manually from the [websockify GitHub page]. Note that while some versions might
-function better than others, the python version generally seems to be the best.
-There are two basic ways you can use websockify with mumble-web:
-- Standalone, use websockify for both, websockets and serving static files
-- Proxied, let your favorite web server serve static files and proxy websocket connections to websockify
-##### Standalone
-This is the simplest but at the same time least flexible configuration. Replace `<mumbleserver>` with the URI of your mumble server. If `websockify` is running on the same machine as `mumble-server`, use `localhost`.
-```
-websockify --cert=mycert.crt --key=mykey.key --ssl-only --ssl-target --web=path/to/dist 443 <mumbleserver>:64738
-```
-##### Proxied
-This configuration allows you to run websockify on a machine that already has
-another webserver running. Replace `<mumbleserver>` with the URI of your mumble server. If `websockify` is running on the same machine as `mumble-server`, use `localhost`.
-```
-websockify --ssl-target 64737 <mumbleserver>:64738
-```
-Here are two web server configuration files (one for [NGINX](https://www.nginx.com/) and one for [Caddy server](https://caddyserver.com/)) which will serve the mumble-web interface at `https://voice.example.com` and allow the websocket to connect at `wss://voice.example.com/demo` (similar to the demo server). Replace `<websockify>` with the URI to the machine where `websockify` is running. If `websockify` is running on the same machine as your web server, use `localhost`.
+At the time of writing this there do not seem to be any Mumble servers which natively support Websockets+WebRTC.
+[Grumble](https://github.com/mumble-voip/grumble) natively supports Websockets and can run mumble-web in fallback mode but not (on its own) in WebRTC mode.
+To use this client with any standard mumble server in WebRTC mode, [mumble-web-proxy] must be set up (preferably on the same machine that the Mumble server is running on).
+Additionally you will need some web server to serve static files and terminate the secure websocket connection (mumble-web-proxy only supports insecure ones).
+Here are two web server configuration files (one for [NGINX](https://www.nginx.com/) and one for [Caddy server](https://caddyserver.com/)) which will serve the mumble-web interface at `https://voice.example.com` and allow the websocket to connect at `wss://voice.example.com/demo` (similar to the demo server).
+Replace `<proxybox>` with the host name of the machine where `mumble-web-proxy` is running. If `mumble-web-proxy` is running on the same machine as your web server, use `localhost`.
 * NGINX configuration file
 ```Nginx
@@ -79,7 +57,7 @@ server {
         root /path/to/dist;
     }
     location /demo {
-        proxy_pass http://<websockify>:64737;
+        proxy_pass http://<proxybox>:64737;
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection $connection_upgrade;
@@ -101,12 +79,19 @@ http://voice.example.com {
 https://voice.example.com {
     tls "/etc/letsencrypt/live/voice.example.com/fullchain.pem" "/etc/letsencrypt/live/voice.example.com/privkey.pem"
     root /path/to/dist
-    proxy /demo http://<websockify>:64737 {
+    proxy /demo http://<proxybox>:64737 {
         websocket
     }
 }
 ```
+To run `mumble-web-proxy`, execute the following command. Replace `<mumbleserver>` with the host name of your Mumble server (the one you connect to using the normal Mumble client).
+Note that even if your Mumble server is running on the same machine as your `mumble-web-proxy`, you should use the external name because (by default, for disabling see its README) `mumble-web-proxy` will try to verify the certificate provided by the Mumble server and fail if it does not match the given host name.
+```
+mumble-web-proxy --listen-ws 64737 --server <mumbleserver>:64738
+```
+If your mumble-web-proxy is running behind a NAT or firewall, take note of the respective section in its README.
 Make sure that your Mumble server is running. You may now open `https://voice.example.com` in a web browser. You will be prompted for server details: choose either `address: voice.example.com/demo` with `port: 443` or `address: voice.example.com` with `port: 443/demo`. You may prefill these values by appending `?address=voice.example.com/demo&port=443`. Choose a username, and click `Connect`: you should now be able to talk and use the chat.
 Here is an example of systemd service, put it in `/etc/systemd/system/mumble-web.service` and adapt it to your needs:
@@ -180,6 +165,6 @@ See [here](https://docs.google.com/document/d/1uPF7XWY_dXTKVKV7jZQ2KmsI19wn9-kFR
 ISC
 [Mumble]: https://wiki.mumble.info/wiki/Main_Page
-[websockify GitHub page]: https://github.com/novnc/websockify
+[mumble-web-proxy]: https://github.com/johni0702/mumble-web-proxy
 [MetroMumble]: https://github.com/xPoke/MetroMumble
 [Matrix]: https://matrix.org
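
The README above documents prefilling the connect dialog through query parameters, and the live-demo link now adds `webrtc=false` to force the Websocket fallback. As a quick illustration of how those parameters combine (a hypothetical helper, not part of the repository; only `address`, `port` and `webrtc` are taken from the README and diff above):

```javascript
// Hypothetical helper: assemble a prefilled mumble-web URL such as the ones in the README.
// Passing webrtc: false adds ?webrtc=false, which forces the Websocket (UDPTunnel) fallback.
function buildJoinUrl (base, { address, port, webrtc }) {
  const params = new URLSearchParams({ address, port })
  if (webrtc === false) {
    params.set('webrtc', 'false')
  }
  return `${base}?${params.toString()}`
}

// e.g. buildJoinUrl('https://voice.example.com/', { address: 'voice.example.com/demo', port: '443', webrtc: false })
// -> 'https://voice.example.com/?address=voice.example.com%2Fdemo&port=443&webrtc=false'
```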

app/config.js

@@ -32,6 +32,7 @@ window.mumbleWebConfig = {
     'token': '',
     'username': '',
     'password': '',
+    'webrtc': 'auto', // whether to enable (true), disable (false) or auto-detect ('auto') WebRTC support
     'joinDialog': false, // replace whole dialog with single "Join Conference" button
     'matrix': false, // enable Matrix Widget support (mostly auto-detected; implies 'joinDialog')
     'avatarurl': '', // download and set the user's Mumble avatar to the image at this URL
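
The new `webrtc` setting is three-valued: `true` forces WebRTC, `false` forces the Websocket (UDPTunnel) fallback, and `'auto'` (the default) tries WebRTC first and only falls back if the server rejects it. A condensed sketch of how that maps onto the two flags used by the UI code in the next file (illustrative only; the real logic lives in the query-parameter and error handling shown below):

```javascript
// Sketch (not part of the commit): how the three-valued 'webrtc' setting maps onto
// the two flags used by the UI code below (ui.detectWebRTC and ui.webrtc).
function resolveWebrtcSetting (value) {
  return {
    // Only 'auto' keeps fallback detection on: a 'server_does_not_support_webrtc'
    // error then triggers a reconnect over the Websocket tunnel.
    detectWebRTC: value === 'auto',
    // Anything except an explicit false (or the query-string form 'false') starts in WebRTC mode.
    webrtc: value !== false && value !== 'false'
  }
}
```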

app/index.js

@@ -5,6 +5,7 @@ import ByteBuffer from 'bytebuffer'
 import MumbleClient from 'mumble-client'
 import WorkerBasedMumbleConnector from './worker-client'
 import BufferQueueNode from 'web-audio-buffer-queue'
+import mumbleConnect from 'mumble-client-websocket'
 import audioContext from 'audio-context'
 import ko from 'knockout'
 import _dompurify from 'dompurify'
@@ -118,6 +119,9 @@ function ConnectDialog ()
   self.hide = self.visible.bind(self.visible, false)
   self.connect = function () {
     self.hide()
+    if (ui.detectWebRTC) {
+      ui.webrtc = true
+    }
     ui.connect(self.username(), self.address(), self.port(), self.tokens(), self.password(), self.channelName())
   }
@@ -336,7 +340,10 @@ class GlobalBindings {
   constructor (config) {
     this.config = config
     this.settings = new Settings(config.settings)
-    this.connector = new WorkerBasedMumbleConnector()
+    this.detectWebRTC = true
+    this.webrtc = true
+    this.fallbackConnector = new WorkerBasedMumbleConnector()
+    this.webrtcConnector = { connect: mumbleConnect }
     this.client = null
     this.userContextMenu = new ContextMenu()
     this.channelContextMenu = new ContextMenu()
@@ -449,12 +456,27 @@ class GlobalBindings {
       // Note: This call needs to be delayed until the user has interacted with
       // the page in some way (which at this point they have), see: https://goo.gl/7K7WLu
-      this.connector.setSampleRate(audioContext().sampleRate)
+      let ctx = audioContext()
+      this.fallbackConnector.setSampleRate(ctx.sampleRate)
+      if (!this._delayedMicNode) {
+        this._micNode = ctx.createMediaStreamSource(this._micStream)
+        this._delayNode = ctx.createDelay()
+        this._delayNode.delayTime.value = 0.15
+        this._delayedMicNode = ctx.createMediaStreamDestination()
+      }
       // TODO: token
-      this.connector.connect(`wss://${host}:${port}`, {
+      (this.webrtc ? this.webrtcConnector : this.fallbackConnector).connect(`wss://${host}:${port}`, {
         username: username,
         password: password,
+        webrtc: this.webrtc ? {
+          enabled: true,
+          required: true,
+          mic: this._delayedMicNode.stream,
+          audioContext: ctx
+        } : {
+          enabled: false,
+        },
         tokens: tokens
       }).done(client => {
         log(translate('logentry.connected'))
@@ -535,6 +557,10 @@ class GlobalBindings {
         this.connectErrorDialog.type(err.type)
         this.connectErrorDialog.reason(err.reason)
         this.connectErrorDialog.show()
+      } else if (err === 'server_does_not_support_webrtc' && this.detectWebRTC && this.webrtc) {
+        log(translate('logentry.connection_fallback_mode'))
+        this.webrtc = false
+        this.connect(username, host, port, tokens, password, channelName)
       } else {
        log(translate('logentry.connection_error'), err)
       }
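
This hunk is the core of the re-added fallback: when a connection attempt fails with `server_does_not_support_webrtc` and auto-detection is still on, the client logs the fallback message and reconnects with WebRTC disabled. The same flow, condensed into a standalone sketch (illustrative only; `connectOnce` is a hypothetical stand-in for the connector call above):

```javascript
// Sketch (not part of the commit): the retry flow implemented by the error handler above.
async function connectWithFallback (connectOnce) {
  try {
    // First attempt: WebRTC voice, strictly required on the proxy side.
    return await connectOnce({ webrtc: true })
  } catch (err) {
    if (err === 'server_does_not_support_webrtc') {
      // Server (or proxy) cannot do WebRTC: retry with voice over the Websocket tunnel.
      return connectOnce({ webrtc: false })
    }
    throw err
  }
}
```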
@@ -686,24 +712,32 @@ class GlobalBindings {
         }
       }).on('voice', stream => {
         console.log(`User ${user.username} started takling`)
-        var userNode = new BufferQueueNode({
-          audioContext: audioContext()
-        })
-        userNode.connect(audioContext().destination)
+        let userNode
+        if (!this.webrtc) {
+          userNode = new BufferQueueNode({
+            audioContext: audioContext()
+          })
+          userNode.connect(audioContext().destination)
+        }
+        if (stream.target === 'normal') {
+          ui.talking('on')
+        } else if (stream.target === 'shout') {
+          ui.talking('shout')
+        } else if (stream.target === 'whisper') {
+          ui.talking('whisper')
+        }
         stream.on('data', data => {
-          if (data.target === 'normal') {
-            ui.talking('on')
-          } else if (data.target === 'shout') {
-            ui.talking('shout')
-          } else if (data.target === 'whisper') {
-            ui.talking('whisper')
-          }
-          userNode.write(data.buffer)
+          if (this.webrtc) {
+            // mumble-client is in WebRTC mode, no pcm data should arrive this way
+          } else {
+            userNode.write(data.buffer)
+          }
         }).on('end', () => {
           console.log(`User ${user.username} stopped takling`)
           ui.talking('off')
-          userNode.end()
+          if (!this.webrtc) {
+            userNode.end()
+          }
         })
       })
     }
@@ -825,6 +859,15 @@ class GlobalBindings {
       voiceHandler.setMute(true)
     }
+    this._micNode.disconnect()
+    this._delayNode.disconnect()
+    if (mode === 'vad') {
+      this._micNode.connect(this._delayNode)
+      this._delayNode.connect(this._delayedMicNode)
+    } else {
+      this._micNode.connect(this._delayedMicNode)
+    }
     this.client.setAudioQuality(
       this.settings.audioBitrate,
       this.settings.samplesPerPacket
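
Together with the connect hunk further up, this wires the microphone through a small Web Audio graph whose `MediaStreamDestination` stream is handed to the WebRTC connector as `mic`; in voice-activity-detection mode a 150 ms delay node sits in between. A condensed, illustrative sketch of that routing (the standalone function is not part of the commit; names mirror the diff):

```javascript
// Sketch (not part of the commit): the Web Audio routing built by the hunks above.
function buildMicGraph (ctx, micStream, mode) {
  const micNode = ctx.createMediaStreamSource(micStream)
  const delayNode = ctx.createDelay()
  delayNode.delayTime.value = 0.15 // 150 ms, as in the connect hunk above
  const delayedMicNode = ctx.createMediaStreamDestination()
  if (mode === 'vad') {
    // Voice-activity detection: send the mic through the delay node first.
    micNode.connect(delayNode)
    delayNode.connect(delayedMicNode)
  } else {
    // Other modes: feed the mic into the destination directly.
    micNode.connect(delayedMicNode)
  }
  return delayedMicNode.stream // passed to the WebRTC connector as the `mic` option
}
```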
@@ -1055,6 +1098,12 @@ function initializeUI () {
   if (queryParams.password) {
     ui.connectDialog.password(queryParams.password)
   }
+  if (queryParams.webrtc !== 'auto') {
+    ui.detectWebRTC = false
+    if (queryParams.webrtc == 'false') {
+      ui.webrtc = false
+    }
+  }
   if (queryParams.channelName) {
     ui.connectDialog.channelName(queryParams.channelName)
   }
@@ -1251,23 +1300,26 @@ function translateEverything() {
 async function main() {
   await localizationInitialize(navigator.language);
   translateEverything();
-  initializeUI();
-  initVoice(data => {
-    if (testVoiceHandler) {
-      testVoiceHandler.write(data)
-    }
-    if (!ui.client) {
-      if (voiceHandler) {
-        voiceHandler.end()
-      }
-      voiceHandler = null
-    } else if (voiceHandler) {
-      voiceHandler.write(data)
-    }
-  }, err => {
-    log(translate('logentry.mic_init_error'), err)
-  })
+  try {
+    const userMedia = await initVoice(data => {
+      if (testVoiceHandler) {
+        testVoiceHandler.write(data)
+      }
+      if (!ui.client) {
+        if (voiceHandler) {
+          voiceHandler.end()
+        }
+        voiceHandler = null
+      } else if (voiceHandler) {
+        voiceHandler.write(data)
+      }
+    })
+    ui._micStream = userMedia
+  } catch (err) {
+    window.alert('Failed to initialize user media\nRefresh page to retry.\n' + err)
+    return
+  }
+  initializeUI();
 }
 window.onload = main

app/voice.js

@@ -1,10 +1,10 @@
 import { Writable } from 'stream'
 import MicrophoneStream from 'microphone-stream'
 import audioContext from 'audio-context'
-import getUserMedia from 'getusermedia'
 import keyboardjs from 'keyboardjs'
 import vad from 'voice-activity-detection'
 import DropStream from 'drop-stream'
+import { WorkerBasedMumbleClient } from './worker-client'
 class VoiceHandler extends Writable {
   constructor (client, settings) {
@@ -33,8 +33,12 @@ class VoiceHandler extends Writable {
       return this._outbound
     }
-    // Note: the samplesPerPacket argument is handled in worker.js and not passed on
-    this._outbound = this._client.createVoiceStream(this._settings.samplesPerPacket)
+    if (this._client instanceof WorkerBasedMumbleClient) {
+      // Note: the samplesPerPacket argument is handled in worker.js and not passed on
+      this._outbound = this._client.createVoiceStream(this._settings.samplesPerPacket)
+    } else {
+      this._outbound = this._client.createVoiceStream()
+    }
     this.emit('started_talking')
   }
@@ -160,16 +164,13 @@ export class VADVoiceHandler extends VoiceHandler
 var theUserMedia = null
-export function initVoice (onData, onUserMediaError) {
-  getUserMedia({ audio: true }, (err, userMedia) => {
-    if (err) {
-      onUserMediaError(err)
-    } else {
-      theUserMedia = userMedia
-      var micStream = new MicrophoneStream(userMedia, { objectMode: true, bufferSize: 1024 })
-      micStream.on('data', data => {
-        onData(Buffer.from(data.getChannelData(0).buffer))
-      })
-    }
+export function initVoice (onData) {
+  return window.navigator.mediaDevices.getUserMedia({ audio: true }).then((userMedia) => {
+    theUserMedia = userMedia
+    var micStream = new MicrophoneStream(userMedia, { objectMode: true, bufferSize: 1024 })
+    micStream.on('data', data => {
+      onData(Buffer.from(data.getChannelData(0).buffer))
+    })
+    return userMedia
   })
 }
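
`initVoice` now returns the promise from `navigator.mediaDevices.getUserMedia` instead of taking a separate error callback, so callers receive the raw `MediaStream` (needed as the WebRTC microphone source) and handle failures with `try`/`catch`, as `main()` above now does. A minimal usage sketch (illustrative; `startMicrophone` is a hypothetical wrapper):

```javascript
// Sketch (not part of the commit): consuming the promise-based initVoice API.
async function startMicrophone (onPcmData) {
  try {
    const userMedia = await initVoice(onPcmData) // resolves with the raw MediaStream
    return userMedia                             // e.g. stored as ui._micStream above
  } catch (err) {
    console.error('Cannot initialize user media. Microphone will not work:', err)
    return null
  }
}
```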

app/worker-client.js

@@ -125,7 +125,7 @@ class WorkerBasedMumbleConnector {
   }
 }
-class WorkerBasedMumbleClient extends EventEmitter {
+export class WorkerBasedMumbleClient extends EventEmitter {
   constructor (connector, clientId) {
     super()
     this._connector = connector
@@ -342,11 +342,12 @@ class WorkerBasedMumbleUser extends EventEmitter {
         props
       ]
     } else if (name === 'voice') {
-      let [id] = args
+      let [id, target] = args
       let stream = new PassThrough({
         objectMode: true
       })
       this._connector._voiceStreams[id] = stream
+      stream.target = target
       args = [stream]
     } else if (name === 'remove') {
       delete this._client._users[this._id]

app/worker.js

@@ -164,7 +164,7 @@ import 'subworkers'
       })
     })
-    return [voiceId]
+    return [voiceId, stream.target]
   })
   registerEventProxy(id, user, 'remove')

English localization strings

@@ -79,6 +79,7 @@
     "connecting": "Connecting to server",
     "connected": "Connected!",
     "connection_error": "Connection error:",
+    "connection_fallback_mode": "Server does not support WebRTC, re-trying in fallback mode..",
     "unknown_voice_mode": "Unknown voice mode:",
     "mic_init_error": "Cannot initialize user media. Microphone will not work:"
   },

package-lock.json (generated, 18 lines changed)

@@ -5501,13 +5501,12 @@
       "dev": true
     },
     "mumble-client": {
-      "version": "1.3.0",
-      "resolved": "https://registry.npmjs.org/mumble-client/-/mumble-client-1.3.0.tgz",
-      "integrity": "sha512-4z/Frp+XwTsE0u+7g6BUQbYumV17iEaMBCZ5Oo5lQ5Jjq3sBnZYRH9pXDX1bU4/3HFU99/AVGcScH2R67olPPQ==",
+      "version": "github:johni0702/mumble-client#f73a08bcb223c530326d44484a357380dfe3e6ee",
+      "from": "github:johni0702/mumble-client#f73a08b",
       "dev": true,
       "requires": {
         "drop-stream": "^0.1.1",
-        "mumble-streams": "0.0.4",
+        "mumble-streams": "github:johni0702/mumble-streams#47b84d1",
         "promise": "^7.1.1",
         "reduplexer": "^1.1.0",
         "remove-value": "^1.0.0",
@@ -5565,20 +5564,17 @@
       }
     },
     "mumble-client-websocket": {
-      "version": "1.0.0",
-      "resolved": "https://registry.npmjs.org/mumble-client-websocket/-/mumble-client-websocket-1.0.0.tgz",
-      "integrity": "sha1-QFT8SJgnFYo6bP4iw0oYxRdnoL8=",
+      "version": "github:johni0702/mumble-client-websocket#5b0ed8dc2eaa904d21cd9d11ab7a19558f13701a",
+      "from": "github:johni0702/mumble-client-websocket#5b0ed8d",
       "dev": true,
       "requires": {
-        "mumble-client": "^1.0.0",
         "promise": "^7.1.1",
         "websocket-stream": "^3.2.1"
       }
     },
     "mumble-streams": {
-      "version": "0.0.4",
-      "resolved": "https://registry.npmjs.org/mumble-streams/-/mumble-streams-0.0.4.tgz",
-      "integrity": "sha1-p6H50Rx437bPQcT+2V4YnXhT40g=",
+      "version": "github:johni0702/mumble-streams#47b84d190ada23df1035f02735f70b6731f58fa2",
+      "from": "github:johni0702/mumble-streams#47b84d1",
       "dev": true,
       "requires": {
         "protobufjs": "^5.0.1"

package.json

@@ -42,9 +42,9 @@
     "libsamplerate.js": "^1.0.0",
     "lodash.assign": "^4.2.0",
     "microphone-stream": "^5.1.0",
-    "mumble-client": "^1.3.0",
+    "mumble-client": "github:johni0702/mumble-client#f73a08b",
     "mumble-client-codecs-browser": "^1.2.0",
-    "mumble-client-websocket": "^1.0.0",
+    "mumble-client-websocket": "github:johni0702/mumble-client-websocket#5b0ed8d",
     "node-sass": "^4.14.1",
     "patch-package": "^6.2.1",
     "raw-loader": "^4.0.2",