AWS Chime SDK JS does not recognize video and audio elements

I am attempting to get the basic tutorial for the AWS Chime SDK to work in our application, and meetingSession.audioVideo.listVideoInputDevices() always returns nothing/null.
I am running this on the latest Chrome; my operating system is a Windows 10 workspace instance. I have headphones plugged in, but that shouldn't make a difference.
My expected result is for it to return at least one device for the video. Here is the output from the logger.
2020-08-26T15:29:19.127Z [INFO] MyLogger - attempting to trigger media device labels since they are hidden
chime-bundle.js:1 2020-08-26T15:29:19.133Z [INFO] MyLogger - unable to get media device labels
chime-bundle.js:1 2020-08-26T15:29:19.134Z [INFO] MyLogger - API/DefaultDeviceController/listVideoInputDevices null -> []
chime-bundle.js:1 Uncaught (in promise) TypeError: Cannot read property 'deviceId' of undefined
Note: the video and audio elements are not hidden.
I have tried the code snippets from various demos, which are all just copies of AWS's walkthrough, so pretty much zero information there. I have researched how audio devices work in HTML5, and after looking through the files provided in the SDK JS, I am even more confused. Can someone point me in the right direction?
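From my research, this is roughly the plain-HTML5 flow the SDK's device controller sits on top of (a minimal sketch, independent of Chime; note that device labels stay empty until getUserMedia permission has been granted):

// Minimal HTML5 device enumeration, independent of the Chime SDK.
// Labels come back empty until the user has granted media permission.
async function checkDevices() {
    // Prompt for camera/microphone access so labels become readable.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    const devices = await navigator.mediaDevices.enumerateDevices();
    devices.forEach(d => console.log(d.kind, d.label, d.deviceId));
    // Release the devices again.
    stream.getTracks().forEach(t => t.stop());
}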
Here is the basic code; you can get it, along with a description, from the link above.
var fetchResult = await window.fetch(
    window.encodeURI("<our endpoint for backend (running c# instead of node)>"),
    {
        method: 'POST'
    }
);
let result = await fetchResult.json();
console.log("Result from Chime API:", result);
const logger = new ConsoleLogger('MyLogger', LogLevel.INFO);
const deviceController = new DefaultDeviceController(logger);
const meetingResponse = result.JoinInfo.Meeting;
const attendeeResponse = result.JoinInfo.Attendee;
const configuration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);
// In the usage examples below, you will use this meetingSession object.
const meetingSession = new DefaultMeetingSession(
    configuration,
    logger,
    deviceController
);
console.log("MEETING SESSION", meetingSession);
//SETUP AUDIO
const audioElement = document.getElementById('notary-audio');
meetingSession.audioVideo.bindAudioElement(audioElement);
const videoElement = document.getElementById('notary-video');
// Make sure you have chosen your camera. In this use case, you will choose the first device.
const videoInputDevices = await meetingSession.audioVideo.listVideoInputDevices();
// The camera LED light will turn on indicating that it is now capturing.
// See the "Device" section for details.
await meetingSession.audioVideo.chooseVideoInputDevice(videoInputDevices[0].deviceId);
// Track the local tile ID so videoTileWasRemoved can compare against it.
let localTileId = null;
const observer = {
    audioVideoDidStart: () => {
        console.log('Started');
    },
    audioVideoDidStop: sessionStatus => {
        // See the "Stopping a session" section for details.
        console.log('Stopped with a session status code: ', sessionStatus.statusCode());
    },
    audioVideoDidStartConnecting: reconnecting => {
        if (reconnecting) {
            // e.g. the WiFi connection is dropped.
            console.log('Attempting to reconnect');
        }
    },
    // videoTileDidUpdate is called whenever a new tile is created or tileState changes.
    videoTileDidUpdate: tileState => {
        // Ignore a tile without an attendee ID and other attendees' tiles.
        if (!tileState.boundAttendeeId || !tileState.localTile) {
            return;
        }
        // videoTileDidUpdate is also invoked when you call startLocalVideoTile or tileState changes.
        console.log(`If you called stopLocalVideoTile, ${tileState.active} is false.`);
        meetingSession.audioVideo.bindVideoElement(tileState.tileId, videoElement);
        localTileId = tileState.tileId;
    },
    videoTileWasRemoved: tileId => {
        if (localTileId === tileId) {
            console.log(`You called removeLocalVideoTile. videoElement can be bound to another tile.`);
            localTileId = null;
        }
    }
};
meetingSession.audioVideo.addObserver(observer);
meetingSession.audioVideo.start();

Related

nativescript-phone prevents Nativescript-contacts from returning

I have an app where I want to select a person from contacts and then send a text to that person. It works as expected for the first user, but after that the app never receives control after the contact is selected. I've isolated the problem to the Nativescript-phone plugin. If you simply call phone.sms() to send a text, and then call contacts.getContact(), the problem occurs. I see this on both Android and iOS.
I've created a sample app that demos the problem at https://github.com/dlcole/contactTester. The sample app is Android only. I've spent a couple days on this and welcome any insights.
Edit 4/21/2020:
I've spent more time on this and can see what's happening. Both plugins have the same event handler and same request codes:
nativescript-phone:
var SEND_SMS = 1001;
activity.onActivityResult = function(requestCode, resultCode, data) {
nativescript-contacts:
var PICK_CONTACT = 1001;
appModule.android.on("activityResult", function(eventData) {
What happens is that after invoking phone.sms, calling contacts.getContact causes control to return to the phone plugin, and NOT the contacts plugin. I tried changing phone's request code to 1002 but had the same results.
So, the next step is to determine how to avoid the collision of the event handlers.
Instead of using the activityResult event, the nativescript-phone plugin overwrites the default activity result callback.
A workaround is to set the callback back to its original value after you are done with nativescript-phone.
exports.sendText = function (args) {
    console.log("entering sendText");
    const activity = appModule.android.foregroundActivity || appModule.android.startActivity;
    // Save the default activity-result callback before nativescript-phone replaces it.
    const onActivityResult = activity.onActivityResult;
    permissions.requestPermissions([android.Manifest.permission.CALL_PHONE],
        "Permission needed to send text")
        .then(() => {
            console.log("permission granted");
            phone.sms()
                .then((result) => {
                    console.log(JSON.stringify(result, null, 4));
                    // Restore the original callback so nativescript-contacts gets its result again.
                    activity.onActivityResult = onActivityResult;
                });
        });
};

Twilio Base64 Media Payload for Google Speech To Text API not Responding

I need to do some real-time transcription of Twilio phone calls using the Google Speech-to-Text API, and I've followed a few demo apps showing how to set this up. My application is in .NET Core 3.1 and I am using webhooks with a Twilio-defined callback method. Upon retrieving the media from Twilio through the callback, it is passed as raw audio encoded in base64, as you can see here.
https://www.twilio.com/docs/voice/twiml/stream
I've referenced this demo on live transcribing as well, and am trying to mimic the case statement in C#. Everything connects correctly, and the media and payload are passed into my app just fine from Twilio.
The audio string is then converted to a byte[] to pass to the Task that transcribes the audio:
byte[] audioBytes = Convert.FromBase64String(info);
I am following the examples based on the Google docs that either stream from a file or from an audio input (such as a microphone). Where my use case differs is that I already have the bytes for each chunk of audio. The examples I referenced can be seen here: Transcribing audio from streaming input.
Below is my implementation of the latter, although using the raw audio bytes. The Task below is hit when the Twilio websocket connection fires the media event; I pass the payload directly into it. From my console logging I get to the "Print Responses hit..." log, but it will NOT get into the while (await responseStream.MoveNextAsync()) block and log the transcript to the console. I do not get any errors back (that break the application). Is this even possible to do? I have also tried loading the bytes into a MemoryStream object and passing them in as the Google doc examples do.
static async Task<object> StreamingRecognizeAsync(byte[] audioBytes)
{
    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();
    // Write the initial request with the config.
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            StreamingConfig = new StreamingRecognitionConfig()
            {
                Config = new RecognitionConfig()
                {
                    Encoding = RecognitionConfig.Types.AudioEncoding.Mulaw,
                    SampleRateHertz = 8000,
                    LanguageCode = "en",
                },
                InterimResults = true,
                SingleUtterance = true
            }
        });
    // Print responses as they arrive.
    Task printResponses = Task.Run(async () =>
    {
        Console.WriteLine("Print Responses hit...");
        var responseStream = streamingCall.GetResponseStream();
        while (await responseStream.MoveNextAsync())
        {
            StreamingRecognizeResponse response = responseStream.Current;
            Console.WriteLine("Response stream moveNextAsync Hit...");
            foreach (StreamingRecognitionResult result in response.Results)
            {
                foreach (SpeechRecognitionAlternative alternative in result.Alternatives)
                {
                    Console.WriteLine("Google transcript " + alternative.Transcript);
                }
            }
        }
    });
    //using (MemoryStream memStream = new MemoryStream(audioBytes))
    //{
    //    var buffer = new byte[32 * 1024];
    //    int bytesRead;
    //    while ((bytesRead = await memStream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    //    {
    //        await streamingCall.WriteAsync(
    //            new StreamingRecognizeRequest()
    //            {
    //                AudioContent = Google.Protobuf.ByteString
    //                    .CopyFrom(buffer, 0, bytesRead),
    //            });
    //    }
    //}
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            AudioContent = Google.Protobuf.ByteString.CopyFrom(audioBytes),
        });
    await streamingCall.WriteCompleteAsync();
    await printResponses;
    return 0;
}
After all this, I discovered that this code works fine; it just needs to be broken up and called from different events in the Twilio stream lifecycle.
The config section needs to be placed in the connected event.
The print-responses task needs to be placed in the media event.
Then WriteCompleteAsync needs to be placed in the stop event, when the websocket is closed by Twilio.
One other important item to consider is the number of requests being sent to Google STT, to make sure too many requests aren't overloading the quota, which seems to be (for now) 300 requests/minute.
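For anyone doing the same restructuring, here is a rough sketch of that event split. It is not the C# above: it assumes a Node websocket server using the ws package and the @google-cloud/speech client, but the lifecycle boundaries (connected / media / stop) are the ones Twilio documents for media streams.

// Sketch only: splitting the streaming-recognize logic across
// Twilio Media Streams lifecycle events.
const WebSocket = require('ws');
const speech = require('@google-cloud/speech');

const client = new speech.SpeechClient();
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
    let recognizeStream = null;

    socket.on('message', (message) => {
        const msg = JSON.parse(message);
        switch (msg.event) {
            case 'connected':
                // "Config section" goes here: open one streaming call per phone call.
                recognizeStream = client
                    .streamingRecognize({
                        config: {
                            encoding: 'MULAW',
                            sampleRateHertz: 8000,
                            languageCode: 'en-US',
                        },
                        interimResults: true,
                    })
                    .on('error', console.error)
                    .on('data', (data) => {
                        // "Print responses" equivalent.
                        const result = data.results[0];
                        if (result && result.alternatives[0]) {
                            console.log('Transcript:', result.alternatives[0].transcript);
                        }
                    });
                break;
            case 'media':
                // Each media event carries one base64-encoded audio chunk.
                recognizeStream.write(Buffer.from(msg.media.payload, 'base64'));
                break;
            case 'stop':
                // Equivalent of WriteCompleteAsync: close the request stream.
                recognizeStream.end();
                break;
        }
    });
});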

Using ZXingScannerPage with XF, my content page has weird behavior

I am making an app in Xamarin Forms which will have a login similar to that of WhatsApp Web: a QR code on screen that is scanned by the phone. In the emulator with Visual Studio 2017 I have no problems, but when I export the app to an APK and install it on a mobile device, the app reads the QR code and returns to the previous login screen without showing any reaction, when it should go to the next screen where I have a dashboard.
What can it be? I attach the code I used.
btnScanQRCode.IsEnabled = false;
var scan = new ZXingScannerPage();
scan.OnScanResult += (result) =>
{
    scan.IsScanning = false;
    Device.BeginInvokeOnMainThread(async () =>
    {
        await Application.Current.MainPage.Navigation.PopAsync();
        var resultado = JsonConvert.DeserializeObject<QrCode>(result.Text);
        JObject qrObject = JObject.Parse(JsonConvert.SerializeObject(resultado));
        JsonSchema schema = JsonSchema.Parse(SettingHelper.SchemaJson);
        bool valid = qrObject.IsValid(schema);
        if (valid == true)
        {
            App.Database.InsertQrCode(resultado);
            QrCode qr = App.Database.GetQrCode();
            await _viewModel.Login();
            await Navigation.PushAsync(new Organization());
        }
        else
        {
            await DisplayAlert("False", JsonConvert.SerializeObject(resultado), "ok");
        }
    });
};
await Application.Current.MainPage.Navigation.PushAsync(scan);
btnScanQRCode.IsEnabled = true;
This was originally a comment, but while writing it I realized it is the answer.
You need to debug your code. Attach a device and deploy the app in the Debug configuration. Step through your code and see where it fails.
It sounds like it's crashing silently, probably on the line where you deserialize result.Text into a QrCode. result.Text is just a string and will never deserialize into an object. You probably need a constructor that takes a string, like QrCode(result.Text).
First scan, then use the result to do other things in your app.
var scanner = new ZXing.Mobile.MobileBarcodeScanner();
var result = await scanner.Scan();
Check for proper camera permissions. I bet your problem is there.

Firefox file downloading process structure figure

For my project, I need to study something like a "Firefox/Gecko file downloading structure overview" (if any exists), or a file-downloading process flow chart for Firefox/Gecko. I couldn't find anything like that on the Internet so far. Is there any info about it? Thanks a lot.
PS: It must include the paths of all file downloading through the Firefox browser, that is, which network connection APIs and file handling APIs are involved, just like the "httpOpenRequest" or "DoFileDownload" APIs (if any).
What would the Firefox downloading process API paths be? Is there any figure or chart?
Please help me...
You are probably going to need to look at the code to get the information you desire. You will need to build the flowchart yourself.
There are a couple of different ways downloading is done in the code.
If you are talking about a Firefox add-on performing a download, then it is probably being done using Downloads.jsm (although there is an older method for doing so). The source code for that JavaScript module is at resource://gre/modules/Downloads.jsm (this URL is only valid in Firefox). There appear to be several files, all located in the jsloader\resource\gre\modules directory within the zip-format file called omni.ja in the root of the Firefox distribution. You can just copy that file, change the name to omni.zip, and access it as a normal .zip file.
If you want to know how Firefox saves a page when the user requests it: it is defined in the context menu, with the oncommand value being gContextMenu.saveLink();. saveLink() is defined in chrome://browser/content/nsContextMenu.js. It does some housekeeping and then calls saveHelper(), which is in the same file.
The saveHelper() code is the following:
// Helper function to wait for appropriate MIME-type headers and
// then prompt the user with a file picker
saveHelper: function(linkURL, linkText, dialogTitle, bypassCache, doc) {
  // canonical def in nsURILoader.h
  const NS_ERROR_SAVE_LINK_AS_TIMEOUT = 0x805d0020;

  // an object to proxy the data through to
  // nsIExternalHelperAppService.doContent, which will wait for the
  // appropriate MIME-type headers and then prompt the user with a
  // file picker
  function saveAsListener() {}
  saveAsListener.prototype = {
    extListener: null,

    onStartRequest: function saveLinkAs_onStartRequest(aRequest, aContext) {
      // if the timer fired, the error status will have been caused by that,
      // and we'll be restarting in onStopRequest, so no reason to notify
      // the user
      if (aRequest.status == NS_ERROR_SAVE_LINK_AS_TIMEOUT)
        return;

      timer.cancel();

      // some other error occurred; notify the user...
      if (!Components.isSuccessCode(aRequest.status)) {
        try {
          const sbs = Cc["@mozilla.org/intl/stringbundle;1"].
                      getService(Ci.nsIStringBundleService);
          const bundle = sbs.createBundle(
            "chrome://mozapps/locale/downloads/downloads.properties");
          const title = bundle.GetStringFromName("downloadErrorAlertTitle");
          const msg = bundle.GetStringFromName("downloadErrorGeneric");
          const promptSvc = Cc["@mozilla.org/embedcomp/prompt-service;1"].
                            getService(Ci.nsIPromptService);
          promptSvc.alert(doc.defaultView, title, msg);
        } catch (ex) {}
        return;
      }

      var extHelperAppSvc =
        Cc["@mozilla.org/uriloader/external-helper-app-service;1"].
        getService(Ci.nsIExternalHelperAppService);
      var channel = aRequest.QueryInterface(Ci.nsIChannel);
      this.extListener =
        extHelperAppSvc.doContent(channel.contentType, aRequest,
                                  doc.defaultView, true);
      this.extListener.onStartRequest(aRequest, aContext);
    },

    onStopRequest: function saveLinkAs_onStopRequest(aRequest, aContext,
                                                     aStatusCode) {
      if (aStatusCode == NS_ERROR_SAVE_LINK_AS_TIMEOUT) {
        // do it the old fashioned way, which will pick the best filename
        // it can without waiting.
        saveURL(linkURL, linkText, dialogTitle, bypassCache, false,
                doc.documentURIObject, doc);
      }
      if (this.extListener)
        this.extListener.onStopRequest(aRequest, aContext, aStatusCode);
    },

    onDataAvailable: function saveLinkAs_onDataAvailable(aRequest, aContext,
                                                         aInputStream,
                                                         aOffset, aCount) {
      this.extListener.onDataAvailable(aRequest, aContext, aInputStream,
                                       aOffset, aCount);
    }
  }

  function callbacks() {}
  callbacks.prototype = {
    getInterface: function sLA_callbacks_getInterface(aIID) {
      if (aIID.equals(Ci.nsIAuthPrompt) || aIID.equals(Ci.nsIAuthPrompt2)) {
        // If the channel demands authentication prompt, we must cancel it
        // because the save-as-timer would expire and cancel the channel
        // before we get credentials from user. Both authentication dialog
        // and save as dialog would appear on the screen as we fall back to
        // the old fashioned way after the timeout.
        timer.cancel();
        channel.cancel(NS_ERROR_SAVE_LINK_AS_TIMEOUT);
      }
      throw Cr.NS_ERROR_NO_INTERFACE;
    }
  }

  // if we don't have the headers after a short time, the user
  // won't have received any feedback from their click. that's bad. so
  // we give up waiting for the filename.
  function timerCallback() {}
  timerCallback.prototype = {
    notify: function sLA_timer_notify(aTimer) {
      channel.cancel(NS_ERROR_SAVE_LINK_AS_TIMEOUT);
      return;
    }
  }

  // set up a channel to do the saving
  var ioService = Cc["@mozilla.org/network/io-service;1"].
                  getService(Ci.nsIIOService);
  var channel = ioService.newChannelFromURI(makeURI(linkURL));
  if (channel instanceof Ci.nsIPrivateBrowsingChannel) {
    let docIsPrivate = PrivateBrowsingUtils.isWindowPrivate(doc.defaultView);
    channel.setPrivate(docIsPrivate);
  }
  channel.notificationCallbacks = new callbacks();

  let flags = Ci.nsIChannel.LOAD_CALL_CONTENT_SNIFFERS;
  if (bypassCache)
    flags |= Ci.nsIRequest.LOAD_BYPASS_CACHE;
  if (channel instanceof Ci.nsICachingChannel)
    flags |= Ci.nsICachingChannel.LOAD_BYPASS_LOCAL_CACHE_IF_BUSY;
  channel.loadFlags |= flags;

  if (channel instanceof Ci.nsIHttpChannel) {
    channel.referrer = doc.documentURIObject;
    if (channel instanceof Ci.nsIHttpChannelInternal)
      channel.forceAllowThirdPartyCookie = true;
  }

  // fallback to the old way if we don't see the headers quickly
  var timeToWait =
    gPrefService.getIntPref("browser.download.saveLinkAsFilenameTimeout");
  var timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
  timer.initWithCallback(new timerCallback(), timeToWait,
                         timer.TYPE_ONE_SHOT);

  // kick off the channel with our proxy object as the listener
  channel.asyncOpen(new saveAsListener(), null);
}

Filtering a loaded kml file in OpenLayers

I'm trying to create an interactive search engine (for finding event tickets), one of whose features is a visual map that shows related venues using OpenLayers. I have a plethora of venues (3000+) in a KML file, and I would like to selectively show a filtered subsection of them. Below is the code I have, but when I try to run it, I get a JavaScript error. Running Firebug and the Chrome developer tools makes me think that the parameters I give are not getting passed, because it says the variables are null. However, I cannot figure out why they are not getting passed. Any insight is greatly appreciated.
var map, drawControls, selectControl, selectedFeature, select;
$('#kml').load('venuesComplete.kml');
kml = $('#kml').html();

function showVenues(state, city, venue) {
    filterStrategy = new OpenLayers.Strategy.Filter({});
    var kmllayer = new OpenLayers.Layer.Vector("KML", {
        strategies: [filterStrategy,
            new OpenLayers.Strategy.Fixed()],
        protocol: new OpenLayers.Protocol.HTTP({
            url: "venuesComplete.kml",
            format: new OpenLayers.Format.KML({
                extractStyles: true,
                extractAttributes: true
            })
        })
    });
    select = new OpenLayers.Control.SelectFeature(kmllayer);
    kmllayer.events.on({
        "featureselected": onFeatureSelect,
        "featureunselected": onFeatureUnselect
    });
    map.addControl(select);
    select.activate();
    filter = new OpenLayers.Filter.Comparison({
        type: OpenLayers.Filter.Comparison.LIKE,
        property: "",
        value: ""
    });
    function clearFilter() {
        filterStrategy.setFilter(null);
    }
    function setFilter(property, value) {
        filter.value = value;
        filter.property = property;
        filterStrategy.setFilter(filter);
    }
    var vector_style = new OpenLayers.Style();
    if (venue != "") {
        setFilter('name', venue);
    } else if (city != "") {
        setFilter('description', city);
    } else if (state != "") {
        setFilter('description', state);
    }
    map.addLayer(kmllayer);
    function onPopupClose(evt) {
        select.unselectAll();
    }
    function onFeatureSelect(event) {
        var feature = event.feature;
        var selectedFeature = feature;
        var popup = new OpenLayers.Popup.FramedCloud("chicken",
            feature.geometry.getBounds().getCenterLonLat(),
            new OpenLayers.Size(100, 100),
            "<h2>" + feature.attributes.name + "</h2>" + feature.attributes.description + '<br>' + feature.attributes,
            null,
            true,
            onPopupClose
        );
        document.getElementById('venueName').value = feature.attributes.name;
        document.getElementById("output").innerHTML = event.feature.id;
        feature.popup = popup;
        map.addPopup(popup);
    }
    function onFeatureUnselect(event) {
        var feature = event.feature;
        if (feature.popup) {
            map.removePopup(feature.popup);
            feature.popup.destroy();
            delete feature.popup;
        }
    }
}

function init() {
    map = new OpenLayers.Map('map');
    var google_map_layer = new OpenLayers.Layer.Google(
        'Google Map Layer',
        { type: google.maps.MapTypeId.HYBRID }
    );
    map.addLayer(google_map_layer);
    state = "";
    state += document.getElementById('stateProvDesc').value;
    city = "";
    city += document.getElementById('cityZip').value;
    venue = "";
    venue += document.getElementById('venueName').value;
    showVenues(state, city, 'Michie Stadium');
    map.addControl(new OpenLayers.Control.LayerSwitcher({}));
    map.zoomToMaxExtent();
}
If I understand correctly, your KML does not load properly. If this is not the case, please disregard my answer.
It is very important to check whether your KML layer was properly loaded. I have a map that loads multiple dynamic (from PHP) KML layers, and it is not uncommon for a large layer to simply not load. When that happens, the operation is aborted but, as far as OpenLayers is concerned, the layer was properly loaded.
So I do two things: I check whether the amount of loaded data meets the expected number of features from my original PHP KML parser (I use a jQuery AJAX call for that) and then, if there is a discrepancy, I try reloading (since this is a loop, I limit it to 5 attempts, so as not to loop infinitely).
Check out some of my code here
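The linked code is not reproduced here, but the idea looks roughly like this (a sketch with made-up names; the expected count would come from the AJAX call mentioned above):

// Sketch of the verify-and-retry idea described above. The names and
// the source of expectedCount are assumptions, not the actual code.
var attempts = 0;
var MAX_ATTEMPTS = 5;

function verifyLayer(layer, expectedCount) {
    layer.events.register('loadend', null, function () {
        if (layer.features.length < expectedCount && attempts < MAX_ATTEMPTS) {
            attempts++;
            // Force the strategy to re-fetch the KML.
            layer.refresh({ force: true });
        }
    });
}

// expectedCount would come from e.g. a jQuery AJAX call to the PHP parser:
// $.getJSON('featureCount.php', function (data) { verifyLayer(kmllayer, data.count); });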
