CAF: Audio track switching issue - Chromecast

We've created a custom CAF receiver. When we try to switch audio tracks, the receiver player works properly only for the first request. If we make further requests, we can see the correct EDIT_TRACK_INFO_REQUEST (with the correct audio trackId) on the receiver side, but the audio doesn't change.
We reproduce the same behavior on web/iOS/Android senders with different assets.
Does anyone have any suggestions?
Thanks in advance.
Additional details:
Here is a Smooth Streaming manifest audio track snippet; note that the Language attribute doesn't follow the RFC specification:
<StreamIndex Name="audio101_spa" Language="spa" Type="audio" Subtype="AACL" QualityLevels="1" Chunks="0" Url=".../QualityLevels({bitrate})/Fragments(audio101_spa={start time})">
<QualityLevel Bitrate="96000" Index="0" FourCC="AACL" SamplingRate="22050" Channels="2" BitsPerSample="16" PacketSize="4" AudioTag="255" CodecPrivateData="1390"/>
<StreamIndex Name="audio102_eng" Language="eng" Type="audio" Subtype="AACL" QualityLevels="1" Chunks="0" Url=".../QualityLevels({bitrate})/Fragments(audio102_eng={start time})">
<QualityLevel Bitrate="96000" Index="0" FourCC="AACL" SamplingRate="22050" Channels="2" BitsPerSample="16" PacketSize="4" AudioTag="255" CodecPrivateData="1390"/>
On the receiver side, on PLAYER_LOAD_COMPLETE, we currently perform custom handling of the tracks:
// custom RFC mapping for the language values that come from the manifest
const tracksLabelsObj = {
  "spa": { name: "Español", lang: "es" },
  "eng": { name: "Inglés", lang: "en" },
  "ita": { name: "Italiano", lang: "it" },
  // ...
};
and we map and relabel the audio and text tracks:
for (let i = 0; i < request.media.tracks.length; i++) {
  const trackLanguage = tracksLabelsObj[request.media.tracks[i].language];
  if ((request.media.tracks[i].type === 'AUDIO' || request.media.tracks[i].type === 'TEXT') && trackLanguage !== undefined) {
    // change the label and language code
    request.media.tracks[i].name = trackLanguage.name;
    request.media.tracks[i].language = trackLanguage.lang;
  }
}
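For context, a minimal sketch of how a remapping loop like this is typically registered in a CAF receiver (this assumes a LOAD message interceptor; our actual wiring around PLAYER_LOAD_COMPLETE differs slightly):
// sketch of assumed wiring, not our exact receiver code: remap manifest
// language codes to RFC codes before playback starts
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, request => {
    (request.media.tracks || []).forEach(track => {
      const mapped = tracksLabelsObj[track.language];
      if ((track.type === 'AUDIO' || track.type === 'TEXT') && mapped) {
        track.name = mapped.name;
        track.language = mapped.lang;
      }
    });
    return request;
  });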
We also found another way to handle this in the documentation, using:
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.EDIT_AUDIO_TRACKS, request => {
    // ...
    if (request.media != null) {
      console.log("CHROMECAST: EDIT AUDIO TRACKS - Changing media tracks");
      for (let i = 0; i < request.media.tracks.length; i++) {
        const trackLanguage = tracksLabelsObj[request.media.tracks[i].language];
        if ((request.media.tracks[i].type === 'AUDIO' || request.media.tracks[i].type === 'TEXT') && trackLanguage !== undefined) {
          // change the language labels
          request.media.tracks[i].name = trackLanguage.name;
          request.media.tracks[i].language = trackLanguage.lang;
        }
      }
    }
    // ...
  });
But we can't get this to work correctly either; any suggestions on this as well?
For internal testing, we also used this clear (unencrypted) stream, which shows the same behavior: http://harmonic.e2e.purpledrm.com.edgesuite.net/Content/SS/VOD/yjO9VXw7-ElephantsDreamH264720p/ElephantsDream.ism/Manifest
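One direction we are considering as a workaround (a minimal sketch based on the AudioTracksManager API, untested with our streams):
// explicitly activate the requested audio track instead of relying only on
// the default EDIT_TRACKS_INFO handling
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.EDIT_TRACKS_INFO, request => {
    const audioTracksManager = playerManager.getAudioTracksManager();
    // activeTrackIds carries the trackIds selected on the sender side
    if (request.activeTrackIds && request.activeTrackIds.length > 0) {
      const requestedAudio = audioTracksManager.getTracks()
        .find(track => request.activeTrackIds.includes(track.trackId));
      if (requestedAudio) {
        audioTracksManager.setActiveById(requestedAudio.trackId);
      }
    }
    return request;
  });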
Many thanks.

Related

Usage of AlternativeLanguageCodes in Google Cloud Speech to Text API v1p1beta1 RPC

I am working with the Google Cloud Speech-to-Text API, RPC v1p1beta1, and its Go client. The API works as expected, but if AlternativeLanguageCodes are set in the RecognitionConfig, it does not answer.
GoogleRecognitionConfig: &speech.StreamingRecognitionConfig{
    SingleUtterance: c.SingleUtterance,
    InterimResults:  false,
    Config: &speech.RecognitionConfig{
        Encoding:        speech.RecognitionConfig_LINEAR16,
        SampleRateHertz: 8000,
        LanguageCode:    lang,
        // AlternativeLanguageCodes: []string{"en-US"},
        SpeechContexts: []*speech.SpeechContext{
            {Phrases: c.Phrases},
        },
    },
},
I am aware it's in beta, but I am wondering if anyone else is having issues as well, or if it's just a bug in my code.
Thanks
I tried this today (C#, 1.0.0-beta02), but I never get results for the alternative language codes, only for the primary language code.
ENGINE = SpeechClient.Create();
ENGINE_CONFIG = new StreamingRecognitionConfig()
{
    Config = new RecognitionConfig()
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        SampleRateHertz = settings.ArchiveSampleRate,
        LanguageCode = firstLanguageCode,
        ProfanityFilter = false,
        MaxAlternatives = Constants.MASTER_SETTINGS.SpeechRecognitionAlternatives,
        SpeechContexts = { new HintsManager(settings).GetHintsBasedOnContext(Contexts) }
    },
    InterimResults = Constants.MASTER_SETTINGS.RecognitionConfigSettings.InterimResultsReturned
};
// NOTE: 10062019 - ADD ALTERNATIVE LANGUAGE CODES HERE
foreach (var alternativeCode in otherAlternativeLanguageCodes)
{
    ENGINE_CONFIG.Config.AlternativeLanguageCodes.Add(alternativeCode);
}
EDIT: After upgrading yesterday to the new beta via NuGet:
Install-Package Google.Cloud.Speech.V1P1Beta1 -Version 1.0.0-beta03
Everything seems to be working OK. The only thing I noticed was that interim results are never returned?

Not getting location in Android 8.0.0 Oreo

I am building an app in React Native and need location access as per the requirements.
I have tried using react-native-fused-location, as below.
FusedLocation.setLocationInterval(20000);
FusedLocation.setFastestLocationInterval(15000);
FusedLocation.setSmallestDisplacement(10);
FusedLocation.setLocationPriority(FusedLocation.Constants.HIGH_ACCURACY);
FusedLocation.startLocationUpdates();
FusedLocation.getFusedLocation().then(location => {
  if (location != null) {
    let initialPosition = JSON.stringify(location);
    this.state.latitude = location.latitude;
    this.state.longitude = location.longitude;
    this.state.timestamp = location.timestamp;
    this.state.initialPosition = initialPosition;
  } else {
    alert("Location unavailable, please try later");
  }
}).catch(error => { // fused location catch
  console.log("location retrieval failed");
});
The only output I receive with the above code on Oreo 8.0.0 is E/request: 100.
I also tried the other way, as below:
navigator.geolocation.watchPosition(
  location => {
    console.log("inside watchPosition location");
    if (location != null) {
      console.log("location is not null");
      let initialPosition = JSON.stringify(location.coords);
      this.state.latitude = location.coords.latitude;
      this.state.longitude = location.coords.longitude;
      this.state.timestamp = location.coords.timestamp;
      this.state.initialPosition = initialPosition;
    } else {
      alert("Location unavailable, please try later");
    }
  },
  error => {
    console.log("calling ShowHideActivityIndicator, getLocationWithNavigate (ios)");
    this.ShowHideActivityIndicator(false);
    alert("location retrieval failed");
    console.log(error);
  },
  { timeout: 20000, enableHighAccuracy: true, distanceFilter: 10 }
);
But I get the same result with both of the above approaches: the location cannot be retrieved, specifically on Android Oreo 8.0.0, and the location callback is never even invoked. On other versions, including Oreo 8.1.0 and lower versions such as Marshmallow and Nougat, it seems to work fine.
However, if I turn on fake GPS on Oreo 8.0.0, it does seem to get a location. I am unable to figure out what I am missing.
The Google documentation mentions:
In an effort to reduce power consumption, Android 8.0 (API level 26) limits how frequently background apps can retrieve the user's current location. Apps can receive location updates only a few times each hour. (Background Location Limits)
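Given those background limits, one thing worth double-checking (a rough, hypothetical sketch; the function name is illustrative) is that the runtime location permission is granted and that the position is requested while the app is in the foreground:
import { PermissionsAndroid, Platform } from 'react-native';

// Hypothetical sketch: confirm ACCESS_FINE_LOCATION is granted at runtime and
// request the position while the app is in the foreground, since Android 8.0
// heavily throttles location updates for backgrounded apps.
async function getForegroundLocation() {
  if (Platform.OS === 'android') {
    const granted = await PermissionsAndroid.request(
      PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION
    );
    if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
      console.log('Location permission denied');
      return;
    }
  }
  navigator.geolocation.getCurrentPosition(
    location => console.log('got location', location.coords),
    error => console.log('location error', error),
    { enableHighAccuracy: true, timeout: 20000, maximumAge: 10000 }
  );
}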

Firefox - mediaDevices.getUserMedia throws AbortError

I get the following error in Firefox (no problems in Chrome / Edge / Safari):
MediaStreamError { name: "AbortError", message: "Starting video failed", constraint: "", stack: "" }
The browser console only shows < unavailable > when this error is thrown.
I am using adapter-latest.js from webrtc.github.io, and the code works perfectly well on other pages within my application, but not on one particular page. Is there a way to find out what interferes with getUserMedia? I already tried commenting out all other libraries and includes.
My code is:
var video = document.getElementById('recorder');
video.onloadedmetadata = function(e) {
  $("#takePicture").show();
  if ($("#customerImage").attr("src") == "") {
    $("#recorder").show();
  }
};
navigator.mediaDevices.getUserMedia({ video: true })
  .then(stream => {
    video.srcObject = stream;
  })
  .catch(e => console.log(e));
I was facing this issue because I had Chrome also running the same app and using the webcam. So basically the webcam was already in use and I was trying to access it via Firefox too.
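If the camera is not simply in use by another browser or tab, a rough diagnostic sketch (not a fix) is to enumerate the devices Firefox can see and log the exact failure that comes back:
// Diagnostic sketch: list the media devices the browser reports, then attempt
// getUserMedia and log the precise error name/message.
navigator.mediaDevices.enumerateDevices()
  .then(devices => {
    devices.forEach(d => console.log(d.kind, d.label || '(label hidden until permission is granted)'));
    return navigator.mediaDevices.getUserMedia({ video: true });
  })
  .then(stream => console.log('camera opened:', stream.getTracks().map(t => t.label)))
  .catch(err => console.log('getUserMedia failed:', err.name, err.message));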
Are you really sure that you need adapter? In my experience it causes more problems than it solves.
Can you try to use constraints?
Example (I removed some things, but it works like a charm here):
const constraints = {
  width: 1280,
  height: 720,
  frameRate: 10, // mobile
  facingMode: {
    exact: "environment"
  } // mobile
};
navigator.mediaDevices.getUserMedia({
  audio: false,
  video: constraints
}).then(handleSuccess).catch(handleError);

function handleSuccess(stream) {
  video.src = URL.createObjectURL(stream);
  video.play();
}

function handleError(error) {
  console.log('navigator.getUserMedia error: ', error);
}
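One caveat about handleSuccess above: current Firefox and Chrome builds no longer accept a MediaStream in URL.createObjectURL, so assigning the stream via srcObject (as the question already does) is the safer pattern; a minimal sketch:
function handleSuccess(stream) {
  // srcObject is the supported way to attach a live MediaStream to a <video>
  video.srcObject = stream;
  video.play();
}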

Limit intended purposes of newly created self signed certificate

I'm creating a self-signed certificate using CertCreateSelfSignCertificate. This works, and I can encrypt/sign/decrypt/verify data with it.
I would like to limit the intended purposes of the certificate, but I always end up with a certificate that has "<All>" intended purposes enabled. This is the code I'm using to prepare the pExtensions parameter for the CertCreateSelfSignCertificate call:
BYTE key_usage_value = CERT_DATA_ENCIPHERMENT_KEY_USAGE |
                       CERT_DIGITAL_SIGNATURE_KEY_USAGE;
CERT_KEY_USAGE_RESTRICTION_INFO key_usage = {
    0, NULL,
    { sizeof(key_usage_value), &key_usage_value }
};
auto key_usage_data = EncodeObject(szOID_KEY_USAGE_RESTRICTION, &key_usage);
CERT_EXTENSION extension[] = {
    { szOID_KEY_USAGE_RESTRICTION, TRUE, {
        key_usage_data.size(), key_usage_data.data()
    } }
};
CERT_EXTENSIONS extensions = {
    elemsof(extension),
    extension
};
EncodeObject simply calls CryptEncodeObject and returns the result as a std::vector.
I have not found much documentation on this so I'm not actually sure this is what I'm supposed to do. Can anyone point out to me what I'm doing wrong?
I guess the Extended Key Usage of your certificate is being built empty, which means that all purposes are allowed. If you want to limit those, you will need to define them, including the specific OID of each one. For instance, a certificate capable only of
Smart Card Logon, Digital Signature and Non-Repudiation
will have the Extended Key Usage field filled with
1.3.6.1.4.1.311.20.2.2
2.5.29.37.3
2.5.29.37
Hope it helps
After looking into szOID_ENHANCED_KEY_USAGE based on srbob's answer, I managed to change the key usage field.
Here is the (simplified) code I'm using to create the extensions on the certificate; again, this is the code that prepares the pExtensions parameter for the CertCreateSelfSignCertificate call:
BYTE key_usage_value = CERT_DATA_ENCIPHERMENT_KEY_USAGE |
                       CERT_DIGITAL_SIGNATURE_KEY_USAGE;
CERT_KEY_USAGE_RESTRICTION_INFO key_usage = {
    0, NULL,
    { sizeof(key_usage_value), &key_usage_value }
};
auto key_usage_data = EncodeObject(szOID_KEY_USAGE_RESTRICTION, &key_usage);
LPSTR enh_usage_value[] = { szOID_KP_DOCUMENT_SIGNING };
CERT_ENHKEY_USAGE enh_usage = {
    elemsof(enh_usage_value),
    enh_usage_value
};
auto enh_usage_data = EncodeObject(szOID_ENHANCED_KEY_USAGE, &enh_usage);
CERT_EXTENSION extension[] = {
    { szOID_KEY_USAGE_RESTRICTION, TRUE, {
        key_usage_data.size(), key_usage_data.data() } },
    { szOID_ENHANCED_KEY_USAGE, TRUE, {
        enh_usage_data.size(), enh_usage_data.data() } },
};
CERT_EXTENSIONS extensions = {
    elemsof(extension),
    extension
};
Note that the code above still adds the szOID_KEY_USAGE_RESTRICTION extension as well.

Keep a Play 2 application private on heroku

I'm using Heroku to host a Play 2 application for testing and playing around. I'd like the application to be "private" at this point, which means that every aspect of the application should only be visible to certain users.
Normally, I would just use an .htaccess file with a single user/password, but that is Apache-specific, of course, and doesn't help me in this case.
The protection doesn't have to be "strong". The main aim is to keep away bots and random visitors.
It would be great if I didn't have to "pollute" the code of my Play application. I'd prefer to have some external mechanism to achieve that. If there is no other way than to do it within Play itself, the solution should be loosely coupled from the rest of my application.
How could I achieve that?
Edit: to emphasize: what I want to achieve won't be part of the final application in production. So it neither has to be super secure nor super engineered.
Adreas' example is correct, but it is for Play 2.1; in Play 2.2 the signature of Filter.apply has changed slightly. This should work better with 2.2:
class BasicAuth extends Filter {
  val username = "stig"
  val password = "secretpassword"

  override def apply(next: RequestHeader => Future[SimpleResult])(request: RequestHeader): Future[SimpleResult] = {
    request.headers.get("Authorization").flatMap { authorization =>
      authorization.split(" ").drop(1).headOption.filter { encoded =>
        new String(org.apache.commons.codec.binary.Base64.decodeBase64(encoded.getBytes)).split(":").toList match {
          case u :: p :: Nil if u == username && password == p => true
          case _ => false
        }
      }.map(_ => next(request))
    }.getOrElse {
      Future.successful(Results.Unauthorized.withHeaders("WWW-Authenticate" -> """Basic realm="MyApp Staging""""))
    }
  }
}
I don't think Heroku offers a solution for this. I ended up implementing a Basic access authentication filter and using it in the Global object. It looks something like this:
class HerokuHttpAuth extends Filter {
  object Conf {
    val isStaging = true // read a config value instead of hard coding
    val user = "theusername"
    val password = "thepassword"
  }

  override def apply(next: RequestHeader => Result)(request: RequestHeader): Result = {
    if (Conf.isStaging) {
      request.headers.get("Authorization").flatMap { authorization =>
        authorization.split(" ").drop(1).headOption.filter { encoded =>
          new String(org.apache.commons.codec.binary.Base64.decodeBase64(encoded.getBytes)).split(":").toList match {
            case u :: p :: Nil if u == Conf.user && Conf.password == p => true
            case _ => false
          }
        }.map(_ => next(request))
      }.getOrElse {
        Results.Unauthorized.withHeaders("WWW-Authenticate" -> """Basic realm="MyApp Staging"""")
      }
    } else {
      next(request)
    }
  }
}
