navigator.mediaDevices.getUserMedia... how to access the actual stream? - recording

I am testing a piece of code taken from
https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia.
I am trying to record microphone data, make it accessible to other users, and save it to a database. At the moment I don't even get data from the webcam.
// Prefer camera resolution nearest to 1280x720.
var constraints = { audio: true, video: { width: 1280, height: 720 } };

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(mediaStream) {
    var video = document.querySelector('video');
    video.srcObject = mediaStream;
    video.onloadedmetadata = function(e) {
      video.play();
    };
  })
  .catch(function(err) { console.log(err.name + ": " + err.message); }); // always check for errors at the end
As I understand it, the above code tries to open the user media and, if successful, the stream information from mediaStream is saved to video and the video is played. The problem is that the mediaStream is not given by getUserMedia itself.
To put it clearly: even if getUserMedia is working and permission is granted, where do I get the stream from?
Thanks for any answer.

This is working for me; I can see the video stream in the browser.
https://granite-ambulance.glitch.me
const video = document.querySelector('video');

async function stream() {
  try {
    const mediaStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: { width: 1280, height: 720 } });
    video.srcObject = mediaStream;
  } catch (e) {
    console.error(e);
  }
  video.onloadedmetadata = async function(event) {
    try {
      await video.play();
    } catch (e) {
      console.error(e);
    }
  };
}

stream()
<video></video>
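Since the question also mentions recording the microphone data, here is a minimal sketch, not from the original answer, that feeds the same mediaStream into a MediaRecorder and posts the result; the '/upload' endpoint and field name are placeholders for wherever the recording should be stored.
// Sketch: record the MediaStream obtained above with MediaRecorder.
function recordStream(mediaStream) {
  const chunks = [];
  const recorder = new MediaRecorder(mediaStream);
  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    const body = new FormData();
    body.append('recording', blob, 'recording.webm');
    fetch('/upload', { method: 'POST', body }); // send to your backend / database
  };
  recorder.start();
  // stop after 10 seconds, for example
  setTimeout(() => recorder.stop(), 10000);
}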

Related

How can I add a custom video to a broadcast in OpenTok

I want to add a video while broadcasting.
To do this I am referring to this link:
https://github.com/opentok/opentok-web-samples/tree/main/Publish-Video
After OT.initPublisher I am publishing this publisher into the session with session.publish.
But the video is not showing in the livestream.
Can anybody help me with this?
We can publish a custom audio source and video source from a video element, using the captureStream() / mozCaptureStream() methods, as in the snippet below (wrapped in an async function here; file is a video file chosen from an input file control and openToksession is the connected OpenTok session).
// `file` is a video File chosen from an <input type="file"> control,
// `openToksession` is the connected OpenTok session.
async function publishVideoFile(file, openToksession) {
  const contentVideoElement = document.createElement('VIDEO');
  let screenPublisher = null;
  contentVideoElement.autoplay = true;
  contentVideoElement.controls = true;
  contentVideoElement.classList.add('cameraContainer');
  const url = URL.createObjectURL(file); // choose video file from input file control
  contentVideoElement.src = url;
  try {
    await contentVideoElement.play();
  } catch (error) {
    console.log(error);
    return;
  }
  let mediaStream = null;
  if (contentVideoElement.captureStream) {
    mediaStream = contentVideoElement.captureStream();
  } else if (contentVideoElement.mozCaptureStream) {
    mediaStream = contentVideoElement.mozCaptureStream(); // Firefox
  } else {
    console.error('Stream capture is not supported');
    mediaStream = null;
    return;
  }
  const videoTracks = mediaStream.getVideoTracks();
  const audioTracks = mediaStream.getAudioTracks();
  if (videoTracks.length > 0 && audioTracks.length > 0) {
    screenPublisher = window.OT.initPublisher(
      'content-video-element-id',
      {
        insertMode: 'append',
        videoSource: videoTracks[0],
        audioSource: audioTracks[0],
        fitMode: 'contain', // Using default
        width: '100%',
        height: '100%',
        showControls: false,
        name: `Guest (Video)`,
      },
      (error) => {
        if (error) {
          contentVideoElement.pause();
          console.log(error);
        } else {
          contentVideoElement.play();
          openToksession.publish(screenPublisher, (error) => {
            if (error) {
              console.log(error);
            } else {
              // write code here to run after the video stream is published successfully
            }
          });
        }
      },
    );
    screenPublisher.on({
      streamDestroyed: ({ stream }) => {
        contentVideoElement.pause();
      },
    });
    contentVideoElement.addEventListener(
      'ended',
      () => {
        console.log('Shared video ended');
      },
      false,
    );
  }
}
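A minimal usage sketch, assuming the wrapper function above and a hypothetical file input with id "video-file":
// Hypothetical wiring: publish the chosen file once the OpenTok session is connected.
document.getElementById('video-file').addEventListener('change', (event) => {
  const file = event.target.files[0];
  if (file) {
    publishVideoFile(file, openToksession); // openToksession: your connected OT session
  }
});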
For capturing a MediaStream in ReactJS: click here

How to send a react-native-audio-record recorded audio file to a server?

I need to record audio and upload it to a server; for recording the audio I am using the "react-native-audio-record" React Native package.
When I use file_get_contents($request->file('inputFile')) in Laravel, file_get_contents returns a 500 internal server error every time.
I tried form-data and a blob object.
Here is my React Native code and everything I have used to try to solve this:
onStartRecord = async () => {
  this.setState({ isPlaying: false });
  let dirs = RNFetchBlob.fs.dirs;
  if (Platform.OS === 'android') {
    try {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.WRITE_EXTERNAL_STORAGE,
        {
          title: 'Permissions for write access',
          message: 'Give permission to your storage to write a file',
          buttonPositive: 'ok',
        },
      );
      if (granted === PermissionsAndroid.RESULTS.GRANTED) {
        console.log('You can use the storage');
      } else {
        console.log('permission denied');
        return;
      }
    } catch (err) {
      console.warn(err);
      return;
    }
  }
  if (Platform.OS === 'android') {
    try {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
        {
          title: 'Permissions for write access',
          message: 'Give permission to your storage to write a file',
          buttonPositive: 'ok',
        },
      );
      if (granted === PermissionsAndroid.RESULTS.GRANTED) {
        console.log('You can use the camera');
      } else {
        console.log('permission denied');
        return;
      }
    } catch (err) {
      console.warn(err);
      return;
    }
  }
  const path = Platform.select({
    ios: 'hello.m4a',
    //android: dirs.DocumentDir + '/hello.aac',
    android: 'sdcard/hello.mp3',
  });
  const audioSet: AudioSet = {
    // AudioEncoderAndroid: AudioEncoderAndroidType.AAC,
    // AudioSourceAndroid: AudioSourceAndroidType.MIC,
    // AVEncoderAudioQualityKeyIOS: AVEncoderAudioQualityIOSType.high,
    // AVNumberOfChannelsKeyIOS: 2,
    // AVFormatIDKeyIOS: AVEncodingOption.aac,
  };
  //console.log('audioSet', audioSet);
  const uri = await this.audioRecorderPlayer.startRecorder(path);
  console.log("URI => ", uri);
  // RNFS.readFile(uri, 'base64')
  //   .then(res => {
  //     console.log(res);
  //   });
  // RNFetchBlob.fs.writeFile(path, base64Str, 'base64');
  // RNFetchBlob.android.actionViewIntent(path, 'application/aac');
  this.audioRecorderPlayer.addRecordBackListener((e: any) => {
    //console.log("E ====>>>>>>>>>", e);
    this.setState({
      recordSecs: e.current_position,
      recordTime: this.audioRecorderPlayer.mmssss(
        Math.floor(e.current_position),
      ),
    });
  });
  //alert(`uri: ${uri}`);
  // var body = new FormData();
  // //console.log("BODY", abc);
  // body.append('file', uri);
  //
  // console.log("+++++++=========body=========++++++", body);
  var body = new FormData();
  //console.log("BODY", abc);
  body.append('inputFile', {
    name: 'sound.mp4',
    type: 'audio/mp3',
    uri: uri
  });
  console.log("+++++++=========body=========++++++", body);
  // console.log("BODY", body);
  // RNFS.readFile(uri, "base64").then(data => {
  //   // binary data
  //   console.log("+++++++=========URI=========++++++", data);
  // });
  // const formData = [];
  // formData.push({
  //   name: "sound",
  //   filename: `sound.mp4`,
  //   data: RNFetchBlob.wrap(uri)
  // });
  const blob = await (await fetch(uri)).blob();
  // const file = new File(this.state.recordTime, `me-at-thevoice${1}.mp3`, {
  //   type: blob.type,
  //   lastModified: Date.now()
  // });
  // console.log("Blob data file", file);
  var bodyData = new FormData();
  //console.log("BODY", abc);
  bodyData.append('inputFile', { blob });
  //
  // console.log("RNFetchBlob blob", blob);
  // await new Promise(resolve => {
  //   var reader = new FileReader();
  //   reader.readAsDataURL(blob);
  //   reader.onloadend = () => {
  //     var base64data = reader.result;
  //     console.log("reader", reader);
  //     console.log("base64data =--->>>", base64data);
  //     // let pth = path
  //     // RNFetchBlob.fs.writeFile(pth, reader.result.substr(base64data.indexOf(',') + 1), 'base64').then((res) => {
  //     //   console.log("RNFetchBlob res", res);
  //     //   blob.close()
  //     //   resolve();
  //     // });
  //
  this.props.setLoader(true);
  this.props.uploadAudio(bodyData).then(result => {
    console.log("this.props.audioRecordingResponse |||||=====|||||", this.props.audioRecordingResponse);
    if (this.props.audioRecordingResponse.success) {
      this.handler('success', 'Success', this.props.audioRecordingResponse.message);
      // this.refs["sign"].resetImage();
      // this.setState({
      //   signatures: [],
      //   isDragged: false,
      //   signatureCount: 0
      // })
      //this.props.navigation.navigate('AudioRecording', {templateId: templateId, documentId: documentId});
    } else {
      this.props.setLoader(false);
      this.handler('error', 'Error', this.props.audioRecordingResponse.message);
    }
  })
  // }
  // })
};
Please let me know if anyone has a solution for this.
I am not sure whether this answers your specific case, but this is how I send audio from a React Native app:
import AudioRecord from 'react-native-audio-record';
import * as RNFS from 'react-native-fs';
.....

record = () => {
  if (!this.state.recording) {
    this.setState({ recording: true }, () => {
      AudioRecord.start();
    });
  } else {
    AudioRecord.stop().then(r => {
      this.setState({ recording: false });
      RNFS.readFile(r, 'base64') // r is the path to the .wav file on the phone
        .then((data) => {
          this.context.socket.emit('sendingAudio', {
            sound: data
          });
        });
    });
  }
}
I use sockets for my implementation, but you can use pretty much anything, as all I am sending is a long string. On the server side I then decode the string like so:
import fs from 'fs'; // Node.js file system module

export async function sendingAudio(data) {
  let fileName = `sound.wav`;
  let buff = Buffer.from(data.sound, 'base64');
  await fs.writeFileSync(fileName, buff);
}
So basically I create a wav file on the phone, read it into a base64 encoding, send that to the server, and on the server I decode it from base64 back into a .wav file.
For Laravel, I believe this could help you: Decode base64 audio. Just don't save it as an mp3 but as a wav.
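Since the question uploads over HTTP to Laravel rather than over sockets, here is a minimal sketch of the same idea using fetch; the endpoint URL and the 'sound' field name are placeholders, not from the original answer.
// Sketch: send the base64 string over HTTP instead of a socket.
AudioRecord.stop().then(path => {
  RNFS.readFile(path, 'base64').then(base64Data => {
    fetch('https://example.com/api/upload-audio', { // placeholder URL
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sound: base64Data }),
    })
      .then(response => response.json())
      .then(result => console.log(result))
      .catch(err => console.error(err));
  });
});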

Electron: check if a window is already open and close it before creating a new one

Is it possible in Electron to detect whether a window has already been created, and close it before creating another one?
Here is my sample code:
// video window listener
ipcMain.on("load-video-window", (event, data) => {
  // create the window
  //window.close() if window exists;
  let videoPlayer = new BrowserWindow({
    show: true,
    width: 840,
    height: 622,
    webPreferences: {
      nodeIntegration: true,
      plugins: true,
    },
  });
  if (process.env.WEBPACK_DEV_SERVER_URL) {
    // Load the url of the dev server if in development mode
    videoPlayer.loadURL(
      process.env.WEBPACK_DEV_SERVER_URL + "video_player.html"
    );
    if (!process.env.IS_TEST) videoPlayer.webContents.openDevTools();
  } else {
    videoPlayer.loadURL(`app://./video_player`);
  }
  videoPlayer.on("closed", () => {
    videoPlayer = null;
  });
  // here we can send the data to the new window
  videoPlayer.webContents.on("did-finish-load", () => {
    videoPlayer.webContents.send("data", data);
  });
});
I think this should work:
let playerWindow;

ipcMain.on("load-video-window", (event, data) => {
  if (playerWindow) {
    playerWindow.close();
  }
  playerWindow = new BrowserWindow();
});
Extending Lord Midi's code, we can check that the window is not destroyed and is still focusable. You can do that with the following code:
let playerWindow;

const isPlayerWindowOpened = () => !playerWindow?.isDestroyed() && playerWindow?.isFocusable();

ipcMain.on("load-video-window", (event, data) => {
  if (isPlayerWindowOpened()) {
    playerWindow.close();
  }
  playerWindow = new BrowserWindow();
});
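Putting this together with the original handler, a minimal sketch that reuses the window options and URLs from the question could look like this:
// Sketch: close any existing player window before creating a new one,
// then load the same URLs as in the question.
let videoPlayer = null;

const isVideoPlayerOpen = () => videoPlayer !== null && !videoPlayer.isDestroyed();

ipcMain.on("load-video-window", (event, data) => {
  if (isVideoPlayerOpen()) {
    videoPlayer.close();
  }
  videoPlayer = new BrowserWindow({
    show: true,
    width: 840,
    height: 622,
    webPreferences: { nodeIntegration: true, plugins: true },
  });
  if (process.env.WEBPACK_DEV_SERVER_URL) {
    videoPlayer.loadURL(process.env.WEBPACK_DEV_SERVER_URL + "video_player.html");
  } else {
    videoPlayer.loadURL(`app://./video_player`);
  }
  videoPlayer.on("closed", () => {
    videoPlayer = null; // drop the reference once the window is gone
  });
  videoPlayer.webContents.on("did-finish-load", () => {
    videoPlayer.webContents.send("data", data);
  });
});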

WebRTC: connect peers on call function

I know this probably is a long shot, but I was hoping someone could point me in the right direction here.
I've made a simple peer-to-peer video connection and wrapped it inside a function, so I can call it on a button click, but it's not working.
When it's not wrapped inside the "activateVideoStream" function but just runs on load, it works fine. I have a feeling the issue is around the async function, but I can't wrap my head around it.
Here is the code:
let isAlreadyCalling = false;
const remoteVideo = document.getElementById("remote-video");
const {
  RTCPeerConnection,
  RTCSessionDescription
} = window;
let peerConnection;

function activateVideoStream() {
  const configuration = {
    "iceServers": [{
      "urls": "stun:stun.l.google.com:19302"
    }]
  };
  console.log("Activate Video Stream");
  peerConnection = new RTCPeerConnection(configuration);
  navigator.getUserMedia({
      video: true,
      audio: true
    },
    stream => {
      const localVideo = document.getElementById("local-video");
      if (localVideo) {
        localVideo.srcObject = stream;
      }
      stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));
    },
    error => {
      console.warn(error.message);
    }
  );
  peerConnection.ontrack = function ({
    streams: [stream]
  }) {
    if (remoteVideo) {
      remoteVideo.srcObject = stream;
    }
  };
}

async function callUser(socketId) {
  console.log("Call User");
  remoteVideo.style.display = "block";
  const offer = await peerConnection.createOffer();
  await peerConnection.setLocalDescription(new RTCSessionDescription(offer));
  socket.emit("callUser", {
    offer,
    to: socketId
  });
}

socket.on("callMade", async data => {
  console.log("Call made");
  await peerConnection.setRemoteDescription(
    new RTCSessionDescription(data.offer)
  );
  const answer = await peerConnection.createAnswer();
  await peerConnection.setLocalDescription(new RTCSessionDescription(answer));
  remoteVideo.style.display = "block";
  socket.emit("makeAnswer", {
    answer,
    to: data.socket
  });
});

socket.on("answerMade", async data => {
  console.log("Answer made");
  await peerConnection.setRemoteDescription(
    new RTCSessionDescription(data.answer)
  );
  if (!isAlreadyCalling) {
    callUser(data.socket);
    isAlreadyCalling = true;
  }
});
I've noticed that peerConnection.connectionState inside callUser is set to "new", but without the function wrapper it's set to "complete", so that's probably the issue.
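A sketch of one way to address that timing, not from the original thread: make activateVideoStream async and await the promise-based navigator.mediaDevices.getUserMedia, so the tracks are added to the peer connection before callUser creates an offer.
// Sketch, under the assumption that the offer is currently created before the
// tracks from getUserMedia have been added to the peer connection.
async function activateVideoStream() {
  const configuration = {
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }]
  };
  peerConnection = new RTCPeerConnection(configuration);
  peerConnection.ontrack = ({ streams: [stream] }) => {
    if (remoteVideo) {
      remoteVideo.srcObject = stream;
    }
  };
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const localVideo = document.getElementById("local-video");
  if (localVideo) {
    localVideo.srcObject = stream;
  }
  stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));
}

// On the button click, wait for the stream before starting the call, e.g.:
// button.onclick = async () => { await activateVideoStream(); callUser(socketId); };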

Recording and sending audio in NativeScript / nativescript-audio

I'm working with Angular/NativeScript and the 'nativescript-audio' plugin. I need to record audio with the best quality possible and send it to an API with HttpClient.
There are a few things I need to do, like converting the audio to base64 and recording it with good quality, but I do not have good knowledge of native audio.
Currently, using this plugin, I can record and play the audio in the library itself, but when sent in base64 it arrives unrecognizable at the API.
What I'm doing:
private async _startRecord(args) {
  if (!this._recorder.hasRecordPermission()) await this._askRecordPermission(args);
  try {
    this._isRecording = true;
    await this._recorder.start(this._getRecorderOptions());
  } catch (err) {
    console.log(err);
  }
}

private _getRecorderOptions(): AudioRecorderOptions {
  let audioFileName = 'record';
  let audioFolder = knownFolders.currentApp().getFolder('audio');
  let recordingPath = `${audioFolder.path}/${audioFileName}.${this.getPlatformExtension()}`;
  let androidFormat, androidEncoder;
  if (platform.isAndroid) {
    androidFormat = 4;
    androidEncoder = 2;
  }
  return {
    filename: recordingPath,
    format: androidFormat,
    encoder: androidEncoder,
    metering: true,
    infoCallback: info => { console.log(JSON.stringify(info)); },
    errorCallback: err => this._isRecording = false
  };
}
Then:
let audioFileName = 'record';
let audioFolder = knownFolders.currentApp().getFolder('audio');
let file: File = audioFolder.getFile(`${audioFileName}.${this.getPlatformExtension()}`);
let b = file.readSync();

var javaString = new java.lang.String(b);
var encodedString = android.util.Base64.encodeToString(
  javaString.getBytes(),
  android.util.Base64.DEFAULT
);

this.service.sendFile(encodedString)
  .subscribe(e => {
    console.log(e);
  }, error => {
    console.log('ERROR ////////////////////////////////////////////');
    console.log(error.status);
  });
The service:
sendFile(fileToUpload: any): Observable<any> {
  let url: string = `myapi.com`;
  let body = { "base64audioFile": fileToUpload };
  return this.http.post<any>(url, body, {
    headers: new HttpHeaders({
      // 'Accept': 'application/json',
      // 'Content-Type': 'multipart/form-data',
      // "Content-Type": "multipart/form-data"
    }),
    observe: 'response'
  });
}
I've already tried changing the recording options in several ways, but I do not know which one is right for the best audio quality and which formats and encodings I need:
androidFormat = 3;
androidEncoder = 1;
channels: 2,
sampleRate: 96000,
bitRate: 1536000,
The resulting base64 varies a lot with the type of encoding I use, but so far I have not managed to get anything recognizable, just some hissing and unrecognizable noise.
First of all, you can just pass the bytes directly to the encodeToString(...) method. You don't have to create a string from the bytes; doing so is not valid.
Also, use the NO_WRAP flag instead of DEFAULT.
var encodedString = android.util.Base64.encodeToString(
  b,
  android.util.Base64.NO_WRAP
);
Here is a Playground Sample which I wrote a while ago to test base64 encoding on iOS & Android. You might have to update the file URL; the one in the source gives a 404 (I had just picked a random mp3 file from the internet for testing).
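For context, a sketch of how the fix slots into the question's Android reading code (b is the byte array returned by file.readSync() on Android, and sendFile is the question's own service method):
let file: File = audioFolder.getFile(`${audioFileName}.${this.getPlatformExtension()}`);
let b = file.readSync(); // byte[] on Android

// Encode the raw bytes directly, without wrapping line breaks
let encodedString = android.util.Base64.encodeToString(b, android.util.Base64.NO_WRAP);

this.service.sendFile(encodedString).subscribe(e => console.log(e));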
