WebRTC video/audio chat in Laravel + Vue.js

I am making a video + voice chat with WebRTC. The issue I am facing is that my own voice comes back to me through my speakers, and the other person hears his own voice as well. We can both hear each other, but neither of us should hear our own voice. We are using headphones and are away from each other, so this is not an echo issue. If anyone knows a configuration option for this, or any other solution, please let me know.
VueJS:
export default {
  props: ['conversation', 'currentUser', 'threads'],
  data() {
    return {
      data: '',
      conversationId: this.conversation.conversationId,
      channel: this.conversation.channel_name,
      messages: this.conversation.messages,
      withUser: this.conversation.user,
      text: '',
      constraints: {
        audio: true,
        video: false
      }
    }
  }
}
I am using this API:
navigator.mediaDevices.getUserMedia({
  audio: true,
  video: false
})

Make sure the local video element is muted. See e.g. the left video on https://simpl.info/rtcpeerconnection/. If you can hear yourself before you are even connected, that is most likely the issue.
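As a minimal sketch (the element id localVideo is an assumption, not from the original code), local playback can be muted while the track still reaches the peer:
// Preview your own stream without hearing yourself: mute only the local element.
// The audio track itself is unaffected and is still sent to the remote peer.
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(stream => {
    const localVideo = document.getElementById('localVideo'); // hypothetical element
    localVideo.muted = true;      // mutes playback only, not the outgoing track
    localVideo.srcObject = stream;
  })
  .catch(err => console.error('getUserMedia failed:', err));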

Related

Can't make the Google API process WAV audio

I want to transcribe simple audio from a phone call. I'm currently working with the Speech API:
const speech = require('@google-cloud/speech').v1p1beta1;
The information about the audio I'm trying to transcribe:
Codec: PCM MU-LAW (mlaw)
Channels: Stereo
Sample Rate: 8000
Bits per Sample: 16
Duration: 35 seconds
I'm using this configuration for the API:
const requestGoogle = {
  audio: {
    uri: [ my audio location ]
  },
  config: {
    audioChannelCount: 2,
    enableSeparateRecognitionPerChannel: true,
    enableAutomaticPunctuation: true,
    languageCode,
    model: 'default',
    useEnhanced: true,
    interactionType: 'PHONE_CALL',
    encoding: 'MULAW',
    microphoneDistance: 'NEARFIELD',
    recordingDeviceType: 'PHONE_LINE',
  }
};
When sending that request to the API I get a 400 response status with the error message:
{
  "error": "3 INVALID_ARGUMENT: Invalid recognition 'config': bad channel count."
}
If someone could help me with this, it would be awesome. Thanks!
Convert the codec from PCM MU-LAW (mlaw) to plain PCM using a G.711 decoder, and use a single channel (mono) instead of stereo.
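A hedged sketch of the adjusted request, assuming the file is first transcoded to mono 8 kHz LINEAR16 PCM (the URI, bucket, and language code are placeholders, not from the original question):
// Assumes the audio was first decoded to mono PCM, e.g. with:
//   ffmpeg -i call.wav -ac 1 -ar 8000 -acodec pcm_s16le call-mono.wav
const requestGoogle = {
  audio: {
    uri: 'gs://my-bucket/call-mono.wav' // placeholder location
  },
  config: {
    encoding: 'LINEAR16',       // plain PCM after G.711 decoding
    sampleRateHertz: 8000,
    audioChannelCount: 1,       // mono, matching the error's complaint
    enableAutomaticPunctuation: true,
    languageCode: 'en-US',      // placeholder
    model: 'default',
    useEnhanced: true,
  }
};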

WebRTC on AWS EC2 with TURN server on AWS EC2 on 2 different networks, error: ICE failed, add a STUN server

Currently I have a videochat web app written in React using WebRTC, deployed on an AWS EC2 instance. The videochat works with two users on two different computers on the same local or internet network, and we can easily talk and see each other.
However, when I try to videochat with another user who is on a different network, the videochat stops working and I get an error message in my Chrome browser console like this:
Uncaught (in promise) DOMException: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Error processing ICE candidate
and the other user gets:
ICE failed, add a STUN server and see about:webrtc for more details
I believe the issue is with the TURN server; however, I have set up the TURN server using COTURN (https://github.com/coturn/coturn) on an AWS EC2 instance, and it seems to work when I test it with the same credentials on https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/ and check for relay candidates.
I deployed the TURN server using instructions from this stackoverflow post:
How to create stun turn server instance using AWS EC2
I have also allowed inbound port access for UDP and TCP for a large range of ports on AWS security groups.
Some relevant code; this part processes the responses I get back from a WebRTC signalling server:
/**
 * Parse a broadcast message and reply back with
 * the appropriate details
 */
receiveBroadcast(packetObject) {
  try {
    var payload = JSON.parse(packetObject.Payload)
  } catch (err) {
    var payload = packetObject.Payload
  }
  if (payload.Type == 'Ice Offer') {
    // Set remote descriptions and construct an ICE answer
    var icePacket = new this.rtcSessionDescription({
      type: payload.IcePacket.type,
      sdp: payload.IcePacket.sdp,
    })
    this.peerConnection.setRemoteDescription(icePacket, function () {
      this.peerConnection.createAnswer(this.onCreateAnswerSuccess.bind(this), this.onCreateSessionDescriptionError)
    }.bind(this), this.onSetSessionDescriptionError)
  } else if (payload.Type == 'Ice Answer') {
    // Set the remote description
    var icePacket = new this.rtcSessionDescription({
      type: payload.IcePacket.type,
      sdp: payload.IcePacket.sdp,
    })
    this.peerConnection.setRemoteDescription(icePacket, function () {
      this.onSetRemoteSuccess()
    }.bind(this), this.onSetSessionDescriptionError)
  } else if (payload.Type == 'Ice Candidate') {
    console.log('ICE payload :')
    console.log(payload)
    // Add the candidate to the list of ICE candidates
    var candidate = new this.rtcIceCandidate({
      sdpMLineIndex: payload.sdpMLineIndex,
      sdpMid: payload.sdpMid,
      candidate: payload.candidate,
    })
    this.peerConnection.addIceCandidate(candidate)
  }
}
It's mainly the last line (addIceCandidate) that is not working.
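A small sketch that may help surface the underlying failure: addIceCandidate returns a promise, so catching the rejection logs the real error instead of an uncaught DOMException.
// addIceCandidate returns a promise; a .catch() exposes the actual
// failure reason instead of an uncaught DOMException.
this.peerConnection.addIceCandidate(candidate)
  .then(() => console.log('ICE candidate added'))
  .catch(err => console.error('addIceCandidate failed:', err, candidate))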
I set up console.logs to see what the process looks like:
Local stream set
bundle.js:1 Video Set
bundle.js:1 setRemoteDescription complete
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:0 1 UDP 2122252543 xxx.xxx.x.xx 57253 typ host", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:1 1 UDP 2122187007 xx.xxx.x.xx 53622 typ host", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:2 1 TCP 2105524479 xxx.xxx.x.xx 9 typ host tcptype active", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:3 1 TCP 2105458943 xx.xxx.x.xx 9 typ host tcptype active", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "", sdpMid: "0"}
Figured it out; it was a silly mistake. The config JSON I was using for specifying the ICE servers had an extra layer, and WebRTC just couldn't process it. The WebRTC error messages, however, are pretty much unusable and not very informative.
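To illustrate (the exact shape of the broken config is not in the original answer, so this is a hypothetical reconstruction of the "extra layer" mistake):
// Wrong: the ICE server list is wrapped in an extra object, so
// RTCPeerConnection never sees an iceServers property at the top level.
var badConfig = {
  config: {
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
  }
}

// Right: iceServers sits directly on the configuration object.
var goodConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:turn.example.com:3478', // placeholder TURN URI
      username: 'user',                   // placeholder credentials
      credential: 'pass'
    }
  ]
}

var pc = new RTCPeerConnection(goodConfig)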
So for anyone stuck on debugging WebRTC in the future, these are the steps I figured out and the resources I used:
1) Use Chrome.
2) Open chrome://webrtc-internals/ in a new tab.
3) Open your videochat app in another tab and observe what's happening in chrome://webrtc-internals/.
4) Make sure not to close the tab with the videochat app; if you do, chrome://webrtc-internals will refresh.
5) Open a working videochat app like https://morning-escarpment-67980.herokuapp.com/, which is built from this GitHub repo: https://github.com/nguymin4/react-videocall
6) Compare the differences between your app's chrome://webrtc-internals output and the successful app's.
7) Use this resource to help understand error messages and more details: https://blog.codeship.com/webrtc-issues-and-how-to-debug-them/
Hopefully this helps.

Firefox - mediaDevices.getUserMedia throws AbortError

I get the following error in Firefox (no problems in Chrome / Edge / Safari):
MediaStreamError { name: "AbortError", message: "Starting video failed", constraint: "", stack: "" }
The browser console only shows < unavailable > when this error is thrown.
I am using adapter-latest.js from webrtc.github.io, and the code works perfectly well on other pages within my application, but not on one particular page. Is there a way to find out what interferes with getUserMedia? I already tried commenting out all other libraries and includes.
My code is:
var video = document.getElementById('recorder');
video.onloadedmetadata = function(e) {
  $("#takePicture").show();
  if ($("#customerImage").attr("src") == "") {
    $("#recorder").show();
  }
};
navigator.mediaDevices.getUserMedia({ video: true })
  .then(stream => {
    video.srcObject = stream;
  })
  .catch(e => console.log(e));
I was facing this issue because Chrome was also running the same app and using the webcam. The webcam was already in use, and I was trying to access it via Firefox at the same time.
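A defensive sketch, assuming the failure is a busy device (the fallback behavior here is illustrative, not from the original answer):
// If the camera is already in use, Firefox may throw AbortError (or
// NotReadableError); detect it and degrade gracefully instead of crashing.
async function openCamera() {
  try {
    return await navigator.mediaDevices.getUserMedia({ video: true });
  } catch (err) {
    if (err.name === 'AbortError' || err.name === 'NotReadableError') {
      console.warn('Camera appears to be in use elsewhere:', err);
      return null; // caller can fall back to audio-only or show a hint
    }
    throw err; // permission and constraint errors are different problems
  }
}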
Are you really sure that you need adapter.js? In my experience it creates more problems than it solves.
Can you try using constraints?
Example (I removed some things, but it works like a charm here):
var constraints = {
  width: 1280,
  height: 720,
  frameRate: 10, // mobile
  facingMode: {
    exact: "environment"
  } // mobile
};

navigator.mediaDevices.getUserMedia({
  audio: false,
  video: constraints
}).then(handleSuccess).catch(handleError);

function handleSuccess(stream) {
  // Note: video.src = URL.createObjectURL(stream) is deprecated and no
  // longer works in current browsers; assign the stream directly instead.
  video.srcObject = stream;
  video.play();
}

function handleError(error) {
  console.log('navigator.getUserMedia error: ', error);
}

Laravel .mov validation

In a project I want to upload video. In my request I use 'path' => 'mimes:mp4,mov,avi,mpg,mpeg;quicktime|nullable'.
When uploading a .mov video I always get the error "The video path must be a file of type: mp4, mov, avi, mpg, mpeg, quicktime.". The MIME type of the video is video/quicktime.
Uploading .mp4 files works perfectly; I didn't test other video types yet. Does anyone have a solution?
You can manually check for mime-type if the validation is not working for you:
$video = Input::file('path');
$mime = $video->getMimeType();
$accepted_mimes = array("video/x-flv", "video/mp4", "application/x-mpegURL",
                        "video/MP2T", "video/3gpp", "video/quicktime",
                        "video/x-msvideo", "video/x-ms-wmv");
if (in_array($mime, $accepted_mimes)) {
    // valid video format, begin upload
} else {
    // invalid video mime type, return back with errors
    return redirect()->back()->withErrors(['msg' => 'Invalid video']);
}
For a list of all available MIME types, see here.
.mov is just a container, so the MIME type / codec may still be wrong. You should first verify this with a tool like https://mediaarea.net/. As a (less secure) solution to your problem, you could validate only the extension (pathname).
As an example, a .mxf file can contain an MPEG codec, which shows that a container does not map to only one MIME type (and codec) most of the time.
Warning about validating only file extensions: this is very insecure and could lead to all kinds of trouble, like people uploading PHP files or other file types.
$video = request('PostDetailsVideo'); // the name of the posted file
$rules = [
    'PostDetailsVideo' => 'required|mimetypes:video/x-ms-wmv,video/x-msvideo,video/quicktime,video/3gpp,video/MP2T,application/x-mpegURL,video/mp4,video/x-flv|max:32768'
];
$checkIsVideo = Validator::make($request->all(), $rules);
if ($checkIsVideo->fails()) {
    // not a video
    return response()->json([
        'Success' => false,
    ], 200);
} else {
    return response()->json([
        'Success' => true,
    ], 200);
}

WebRTC: convert WebM to MP4 with ffmpeg.js

I am trying to convert WebM files to MP4 with ffmpeg.js.
I am recording a video from a canvas (an overlay with some information) and recording the audio from the video element.
stream = new MediaStream();
var videoElem = document.getElementById('video');
var videoStream = videoElem.captureStream();
stream.addTrack(videoStream.getAudioTracks()[0]);
stream.addTrack(canvas.captureStream().getVideoTracks()[0]);

var options = {mimeType: 'video/webm'};
recordedBlobs = [];
mediaRecorder = new MediaRecorder(stream, options);
mediaRecorder.onstop = handleStop;
mediaRecorder.ondataavailable = handleDataAvailable;
mediaRecorder.start(100); // collect 100ms of data

function handleDataAvailable(event) {
  if (event.data && event.data.size > 0) {
    recordedBlobs.push(event.data);
  }
}

mediaRecorder.stop();
This code works as expected and returns a WebM video:
var blob = new Blob(recordedBlobs, {type: 'video/webm'});
Now I want an MP4 file, and I checked ffmpeg.js from muaz-khan.
The examples only show how to convert to MP4 when you have two separate streams (audio and video), but I have one stream with an additional audio track. Can I convert such a stream to MP4, and how?
As per the provided code sample, your recorder stream has only one audio and one video track.
If your input file has both audio and video, then you need to specify an output codec for each track, as follows:
worker.postMessage({
  type: 'command',
  arguments: [
    '-i', 'audiovideo.webm',
    '-c:v', 'mpeg4',
    '-c:a', 'aac', // or vorbis
    '-b:v', '6400k', // video bitrate
    '-b:a', '4800k', // audio bitrate
    '-strict', 'experimental', 'audiovideo.mp4'
  ],
  files: [
    {
      data: new Uint8Array(fileReaderData),
      name: 'audiovideo.webm'
    }
  ]
});
Transcoding video inside the browser is not recommended, as it consumes a lot of CPU time and memory, and ffmpeg_asm.js is heavy. It may be OK for a POC :)
What is your use case? WebM (VP8/VP9) is widely used these days.
Chrome will support the following MIME types:
"video/webm"
"video/webm;codecs=vp8"
"video/webm;codecs=vp9"
"video/webm;codecs=h264"
"video/x-matroska;codecs=avc1"
So you can get an MP4 recording directly from Chrome's MediaRecorder with the following hack:
var options = {mimeType: 'video/webm;codecs=h264'};
mediaRecorder = new MediaRecorder(stream, options);
// ...
// Before merging blobs, change the output MIME type
var blob = new Blob(recordedBlobs, {type: 'video/mp4'});
// And name your file video.mp4
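A small sketch around that hack (the fallback order is my own assumption): probe MediaRecorder.isTypeSupported first, since not every Chrome build exposes the H.264 variant, and note that relabeling a WebM/Matroska container as video/mp4 is exactly that, a hack, so some players may still refuse the file.
// Probe for an H.264-capable recording format before relying on the hack;
// a plain WebM fallback would still need real transcoding to produce MP4.
var preferred = [
  'video/webm;codecs=h264',
  'video/x-matroska;codecs=avc1',
  'video/webm;codecs=vp9',
  'video/webm'
];
var mimeType = preferred.find(function (t) {
  return MediaRecorder.isTypeSupported(t);
});
var mediaRecorder = new MediaRecorder(stream, { mimeType: mimeType });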