Looking at this code, I'm wondering what triggers onNotificationGCM. It is triggered when the app is registered, but when does it get triggered again, say, when I want to push a message to the user? I have a chat app, and I'd like to push a message when chats come in. So I understand that I register the device, but then this code needs to run again, I assume, with the new event. I need to understand this partly at the flow level and partly at the code level.
// handle GCM notifications for Android
$window.onNotificationGCM = function (event) {
    switch (event.event) {
        case 'registered':
            if (event.regid.length > 0) {
                // Your GCM push server needs to know the regID before it can push to this device
                // here is where you might want to send it the regID for later use.
                var device_token = event.regid;
                // send device reg id to server
                RequestsService.register(device_token).then(function (response) {
                    alert('registered!');
                });
            }
            break;
        case 'message':
            // if this flag is set, this notification happened while we were in the foreground.
            // you might want to play a sound to get the user's attention, throw up a dialog, etc.
            if (event.foreground) {
                console.log('INLINE NOTIFICATION');
                var my_media = new Media("/android_asset/www/" + event.soundname);
                my_media.play();
            } else {
                if (event.coldstart) {
                    console.log('COLDSTART NOTIFICATION');
                } else {
                    console.log('BACKGROUND NOTIFICATION');
                }
            }
            navigator.notification.alert(event.payload.message);
            console.log('MESSAGE -> MSG: ' + event.payload.message);
            // Only works for GCM
            console.log('MESSAGE -> MSGCNT: ' + event.payload.msgcnt);
            // Only works on Amazon Fire OS
            console.log('MESSAGE -> TIME: ' + event.payload.timeStamp);
            break;
        case 'error':
            console.log('ERROR -> MSG: ' + event.msg);
            break;
        default:
            console.log('EVENT -> Unknown, an event was received and we do not know what it is');
            break;
    }
};
Have a look at this example:
case 'message':
    /*
    if (e.foreground) {
        window.alert("Message received");
    } else {
        if (e.coldstart) {
            window.alert("Coldstart Notification");
        } else {
            window.alert("Background Notification");
        }
    }
    window.alert("Notification message: " + e.payload.message
        + "\n\n Time: " + e.payload.conversation);
    */
    var data = e.payload;
    if (data.conversation) {
        window.history.replaceState(null, '', '#/chats/');
        ProjectName.conversation.load(data.conversation);
    }
    if (data.product) {
        window.history.replaceState(null, '', '#/product/' + data.product);
    }
    break;
case 'error':
    // window.alert("Notification error: " + e.msg);
    break;
default:
    // window.alert("Notification - Unknown event");
    break;
}
The e.payload object contains all the data you send to your application, including message. You can access your other custom variables inside case 'message'.
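To answer the flow question: 'registered' fires once when the app obtains a regid, and 'message' fires each time your server pushes to that stored regid. As a minimal sketch of the server side of a chat push (assuming the legacy GCM HTTP API; the API key, regid, and data keys below are placeholders), it could look like this:

// Minimal Node.js sketch of a server-side push via the legacy GCM HTTP API.
// 'YOUR_API_KEY', 'THE_DEVICE_REGID', and the data keys are placeholders;
// whatever you put under "data" arrives on the device as event.payload.
var https = require('https');

var body = JSON.stringify({
    registration_ids: ['THE_DEVICE_REGID'], // the regid you stored at registration
    data: {
        message: 'New chat message!',
        conversation: '12345',
        msgcnt: '1'
    }
});

var req = https.request({
    hostname: 'android.googleapis.com',
    path: '/gcm/send',
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': 'key=YOUR_API_KEY'
    }
}, function (res) {
    console.log('GCM responded with status ' + res.statusCode);
});
req.end(body);

With a payload like this, the handler above would see event.payload.message and event.payload.conversation when the push arrives.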
Trying to use Alexa Presentation Language features in an AWS-hosted custom Lambda function. The intent handlers are firing, but when I add the Alexa.getSupportedInterfaces call it fails.
The message is "Error handled: Alexa.getSupportedInterfaces is not a function"
// 1. Intent Handlers =============================================
const LaunchRequest_Handler = {
    canHandle(handlerInput) {
        const request = handlerInput.requestEnvelope.request;
        return request.type === 'LaunchRequest';
    },
    handle(handlerInput) {
        let responseBuilder = handlerInput.responseBuilder;
        let speakOutput = 'Welcome to test Bot. ';
        // let skillTitle = capitalize(invocationName);
        // Add APL directive to response
        if (Alexa1.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            // Add the RenderDocument directive to the responseBuilder
            responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                token: Echo_Token,
                document: Customer
            });
            // Tailor the speech for a device with a screen.
            speakOutput += " You should now also see my greeting on the screen.";
        } else {
            // User's device does not support APL, so tailor the speech to this situation
            speakOutput += " This example would be more interesting on a device with a screen, such as an Echo Show or Fire TV.";
        }
        return responseBuilder
            .speak(speakOutput)
            .withShouldEndSession(false)
            .reprompt('try again, ' + speakOutput)
            .withSimpleCard("CustomerSupport!", "CustomerSupport")
            // .reprompt('add a reprompt if you want to keep the session open for the user to respond')
            // .withStandardCard('Welcome!',
            //     'Hello!\nThis is a card for your skill, ' + skillTitle,
            //     welcomeCardImg.smallImageUrl, welcomeCardImg.largeImageUrl)
            .getResponse();
    },
};
Instead of using this condition:
Alexa1.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']
you can use the condition below to check whether the device supports APL:
if (supportsAPL(handlerInput))
Make sure you include the function definitions below in your index file:
function supportsAPL(handlerInput) {
    const supportedInterfaces = handlerInput.requestEnvelope.context.System.device.supportedInterfaces;
    const aplInterface = supportedInterfaces['Alexa.Presentation.APL'];
    return aplInterface != null && aplInterface != undefined;
}

function supportsAPLT(handlerInput) {
    const supportedInterfaces = handlerInput.requestEnvelope.context.System.device.supportedInterfaces;
    const aplInterface = supportedInterfaces['Alexa.Presentation.APLT'];
    return aplInterface != null && aplInterface != undefined;
}
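With those helpers in place, the check in the question's handler could then read like this (a sketch reusing the question's own Echo_Token and Customer variables):

// Sketch: replace the failing Alexa1.getSupportedInterfaces(...) check
// with the supportsAPL() helper defined above.
if (supportsAPL(handlerInput)) {
    responseBuilder.addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        token: Echo_Token,
        document: Customer
    });
    speakOutput += " You should now also see my greeting on the screen.";
} else {
    speakOutput += " This example would be more interesting on a device with a screen.";
}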
Hope that helps as it worked for me.
My custom v3 CAF receiver app successfully plays the first few live & VOD assets. After that, it gets into a state where media commands are being queued because "Load is in progress". It is still (successfully) fetching manifests, but MEDIA_STATUS remains "buffering". The log then shows:
[ 4.537s] [cast.receiver.MediaManager] Load is in progress, media command is being queued.
[ 5.893s] [cast.receiver.MediaManager] Buffering state changed, isPlayerBuffering: true old time: 0 current time: 0
[ 5.897s] [cast.receiver.MediaManager] Sending broadcast status message
CastContext Core event: {"type":"MEDIA_STATUS","mediaStatus":{"mediaSessionId":1,"playbackRate":1,"playerState":"BUFFERING","currentTime":0,"supportedMediaCommands":12303,"volume":{"level":1,"muted":false},"currentItemId":1,"repeatMode":"REPEAT_OFF","liveSeekableRange":{"start":0,"end":20.000999927520752,"isMovingWindow":true,"isLiveDone":false}}}
CastContext MEDIA_STATUS event: {"type":"MEDIA_STATUS","mediaStatus":{"mediaSessionId":1,"playbackRate":1,"playerState":"BUFFERING","currentTime":0,"supportedMediaCommands":12303,"volume":{"level":1,"muted":false},"currentItemId":1,"repeatMode":"REPEAT_OFF","liveSeekableRange":{"start":0,"end":20.000999927520752,"isMovingWindow":true,"isLiveDone":false}}}
Fetch finished loading: GET "(manifest url)".
No errors are shown.
Even after closing and restarting the cast session, the issue remains. The cast device itself has to be rebooted to resolve it. It looks like data is kept between sessions.
It could be important to note that the cast receiver app is not published yet. It is hosted on a local network.
My questions are:
What could be the cause of this stuck behavior?
Is there any session data kept between sessions?
How can I fully reset the cast receiver app without having to restart the cast device?
The receiver app itself is very basic. Other than license wrapping, it resembles the vanilla example app:
// observable and runInAction come from MobX (used for the state below).
import { observable, runInAction } from "mobx";

const { cast } = window;
const TAG = "CastContext";

class CastStore {
    static instance = null;
    error = observable.box();
    framerate = observable.box();

    static getInstance() {
        if (!CastStore.instance) {
            CastStore.instance = new CastStore();
        }
        return CastStore.instance;
    }

    get debugLog() {
        return this.framerate.get();
    }

    get errorLog() {
        return this.error.get();
    }

    init() {
        const context = cast.framework.CastReceiverContext.getInstance();
        const playerManager = context.getPlayerManager();

        playerManager.addEventListener(
            cast.framework.events.category.CORE,
            event => {
                console.log(TAG, "Core event: " + JSON.stringify(event));
            }
        );
        playerManager.addEventListener(
            cast.framework.events.EventType.MEDIA_STATUS,
            event => {
                console.log(TAG, "MEDIA_STATUS event: " + JSON.stringify(event));
            }
        );
        playerManager.addEventListener(
            cast.framework.events.EventType.BITRATE_CHANGED,
            event => {
                console.log(TAG, "BITRATE_CHANGED event: " + JSON.stringify(event));
                runInAction(() => {
                    this.framerate.set(`bitrate: ${event.totalBitrate}`);
                });
            }
        );
        playerManager.addEventListener(
            cast.framework.events.EventType.ERROR,
            event => {
                console.log(TAG, "ERROR event: " + JSON.stringify(event));
                runInAction(() => {
                    this.error.set(`Error detailedErrorCode: ${event.detailedErrorCode}`);
                });
            }
        );

        // intercept the LOAD request to be able to read in a contentId and get data.
        this.loadHandler = new LoadHandler();
        playerManager.setMessageInterceptor(
            cast.framework.messages.MessageType.LOAD,
            loadRequestData => {
                this.framerate.set(null);
                this.error.set(null);
                console.log(TAG, "LOAD message: " + JSON.stringify(loadRequestData));
                if (!loadRequestData.media) {
                    const error = new cast.framework.messages.ErrorData(
                        cast.framework.messages.ErrorType.LOAD_CANCELLED
                    );
                    error.reason = cast.framework.messages.ErrorReason.INVALID_PARAM;
                    return error;
                }
                if (!loadRequestData.media.entity) {
                    // Copy the value from contentId for legacy reasons if needed
                    loadRequestData.media.entity = loadRequestData.media.contentId;
                }
                // notify loadMedia
                this.loadHandler.onLoadMedia(loadRequestData, playerManager);
                return loadRequestData;
            }
        );

        const playbackConfig = new cast.framework.PlaybackConfig();
        // intercept license requests & responses
        playbackConfig.licenseRequestHandler = requestInfo => {
            const challenge = requestInfo.content;
            const { castToken } = this.loadHandler;
            const wrappedRequest = DrmLicenseHelper.wrapLicenseRequest(
                challenge,
                castToken
            );
            requestInfo.content = wrappedRequest;
            return requestInfo;
        };
        playbackConfig.licenseHandler = license => {
            const unwrappedLicense = DrmLicenseHelper.unwrapLicenseResponse(license);
            return unwrappedLicense;
        };
        // Duration of buffered media in seconds to start/resume playback after auto-paused due to buffering; default is 10.
        playbackConfig.autoResumeDuration = 4;
        // Initial bandwidth estimate in bits per second.
        playbackConfig.initialBandwidth = 1200000;

        context.start({
            touchScreenOptimizedApp: true,
            playbackConfig: playbackConfig,
            supportedCommands: cast.framework.messages.Command.ALL_BASIC_MEDIA
        });
    }
}
The LoadHandler optionally adds a proxy (I'm using a cors-anywhere proxy to remove the Origin header) and stores the castToken for license requests:
class LoadHandler {
    CORS_USE_PROXY = true;
    CORS_PROXY = "http://192.168.0.127:8003";
    castToken = null;

    onLoadMedia(loadRequestData, playerManager) {
        if (!loadRequestData) {
            return;
        }
        const { media } = loadRequestData;
        // disable cors for local testing
        if (this.CORS_USE_PROXY) {
            media.contentId = `${this.CORS_PROXY}/${media.contentId}`;
        }
        const { customData } = media;
        if (customData) {
            const { licenseUrl, castToken } = customData;
            // install cast token
            this.castToken = castToken;
            // handle license URL
            if (licenseUrl) {
                const playbackConfig = playerManager.getPlaybackConfig();
                playbackConfig.licenseUrl = licenseUrl;
                const { contentType } = loadRequestData.media;
                // Dash: "application/dash+xml"
                playbackConfig.protectionSystem = cast.framework.ContentProtection.WIDEVINE;
                // disable cors for local testing
                if (this.CORS_USE_PROXY) {
                    playbackConfig.licenseUrl = `${this.CORS_PROXY}/${licenseUrl}`;
                }
            }
        }
    }
}
The DrmLicenseHelper wraps the license request to add the castToken and base64-encodes the whole thing. The license response is base64-decoded and unwrapped later on:
// fromByteArray / toByteArray are presumably from the base64-js package.
import { fromByteArray, toByteArray } from "base64-js";

export default class DrmLicenseHelper {
    static wrapLicenseRequest(challenge, castToken) {
        const wrapped = {};
        wrapped.AuthToken = castToken;
        wrapped.Payload = fromByteArray(new Uint8Array(challenge));
        const wrappedJson = JSON.stringify(wrapped);
        const wrappedLicenseRequest = fromByteArray(
            new TextEncoder().encode(wrappedJson)
        );
        return wrappedLicenseRequest;
    }

    static unwrapLicenseResponse(license) {
        try {
            const responseString = String.fromCharCode.apply(String, license);
            const responseJson = JSON.parse(responseString);
            const rawLicenseBase64 = responseJson.license;
            const decodedLicense = toByteArray(rawLicenseBase64);
            return decodedLicense;
        } catch (e) {
            // Not wrapped JSON; return the raw license untouched.
            return license;
        }
    }
}
The handler for cast.framework.messages.MessageType.LOAD should always return one of:
- the (possibly modified) loadRequestData,
- a promise for the (possibly modified) loadRequestData, or
- null to discard the load request (I'm not 100% sure this works for load requests).
If you do not do this, the load request stays in the queue and any new request is queued after the initial one.
In your handler, you return an ErrorData object when !loadRequestData.media, which will get you into that state. Another possibility is an exception thrown inside the load request handler, which will also get you into that state.
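A minimal sketch of an interceptor that always resolves the request one way or another (names follow the question's code; whether returning null cleanly discards a LOAD is subject to the caveat above):

playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD,
    loadRequestData => {
        try {
            if (!loadRequestData.media) {
                // Discard the request instead of leaving it queued.
                return null;
            }
            this.loadHandler.onLoadMedia(loadRequestData, playerManager);
            return loadRequestData;
        } catch (e) {
            // An uncaught exception here would leave "Load is in progress" stuck.
            console.error(TAG, "LOAD interceptor failed", e);
            return null;
        }
    }
);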
We take a different approach and send everything possible through sendMessage; when we load something, we create a new cast.framework.messages.LoadRequestData() and load it with playerManager.load(loadRequest).
But I suspect you might be testing this on an integrated Chromecast; we see these problems there as well.
I suggest you try one or more of the following:
- Enable gzip compression on all responses.
- Stop playback with playerManager.stop() (maybe in the interceptor?).
- Change how the licenseUrl is set.
Here is how we set the licenseUrl:
playerManager.setMediaPlaybackInfoHandler((loadRequestData, playbackConfig) => {
    playbackConfig.licenseUrl = loadRequestData.customData.licenseUrl;
    return playbackConfig;
});
I am using the Linphone SDK in a Xamarin.Forms project for SIP calling. I am able to make the connection using the following code:
var authInfo = Factory.Instance.CreateAuthInfo(username.Text,
    null, password.Text, null, null, domain.Text);
LinphoneCore.AddAuthInfo(authInfo);

String proxyAddress = "sip:" + username.Text + "#192.168.1.180:5160";
var identity = Factory.Instance.CreateAddress(proxyAddress);
var proxyConfig = LinphoneCore.CreateProxyConfig();
identity.Username = username.Text;
identity.Domain = domain.Text;
identity.Transport = TransportType.Udp;
proxyConfig.Edit();
proxyConfig.IdentityAddress = identity;
proxyConfig.ServerAddr = domain.Text + ":5160;transport=udp";
proxyConfig.Route = domain.Text;
proxyConfig.RegisterEnabled = true;
proxyConfig.Done();
LinphoneCore.AddProxyConfig(proxyConfig);
LinphoneCore.DefaultProxyConfig = proxyConfig;
LinphoneCore.RefreshRegisters();
After a successful connection, I use the following code to place a call:
if (LinphoneCore.CallsNb == 0)
{
    string phoneCall = "sip:" + address.Text + "#192.168.1.180:5160";
    var addr = LinphoneCore.InterpretUrl(phoneCall);
    LinphoneCore.InviteAddress(addr);
}
else
{
    Call call = LinphoneCore.CurrentCall;
    if (call.State == CallState.IncomingReceived)
    {
        LinphoneCore.AcceptCall(call);
    }
    else
    {
        LinphoneCore.TerminateAllCalls();
    }
}
And the listener for the call state changed event is:
private void OnCall(Core lc, Call lcall, CallState state, string message)
{
    call_status.Text = "Call state changed: " + state;
    if (lc.CallsNb > 0)
    {
        if (state == CallState.IncomingReceived)
        {
            call.Text = "Answer Call (" + lcall.RemoteAddressAsString + ")";
        }
        else
        {
            call.Text = "Terminate Call";
        }
        if (lcall.CurrentParams.VideoEnabled)
        {
            video.Text = "Stop Video";
        }
        else
        {
            video.Text = "Start Video";
        }
    }
    else
    {
        call.Text = "Start Call";
        call_stats.Text = "";
    }
}
The call status gives 'Internal Server Error'. I am able to receive calls in my code using Linphone or the X-Lite softphone, but I am not able to place calls. I don't know whether this issue is related to the server or to my code. Please advise.
Internal Server Error (HTTP status code 500) means that an unexpected error occurred on the server, so I would suspect the problem lies there rather than in your app's code.
500 - A generic error message, given when an unexpected condition was encountered and no more specific message is suitable.
It could be that your request doesn't satisfy the expectations of the endpoint you are calling, but even then, the server should respond with a more meaningful error rather than failing with a 500.
Basically, I'm trying to check the status of my WebSocket Server.ws. However, when I query Server.ws.readyState, the only response I ever get is WebSocket.OPEN. How do I check whether a WebSocket is disconnected if it always returns WebSocket.OPEN?
For example, I've tried turning off the WiFi of the device used to test the Flutter app. Normally, after one second, the WebSocket is assumed disconnected and the connection is closed with a WebSocketStatus.GOING_AWAY close code. I assumed this would also change WebSocket.readyState, but that doesn't seem to be the case.
So, how do I properly check the status of my WebSocket?
How I'm currently checking:
/// Connection status
IconButton _status() {
  IconData iconData;
  switch (Server.ws?.readyState) {
    case WebSocket.CONNECTING:
      print("readyState : CONNECTING");
      iconData = Icons.wifi;
      break;
    case WebSocket.OPEN:
      print("readyState : OPEN");
      iconData = Icons.signal_wifi_4_bar;
      break;
    case WebSocket.CLOSING:
      print("readyState : CLOSING");
      iconData = Icons.signal_wifi_4_bar_lock;
      break;
    case WebSocket.CLOSED:
      print("readyState : CLOSED");
      iconData = Icons.warning;
      break;
    default:
      print("readyState : " + Server.ws.readyState.toString());
      break;
  }
  return new IconButton(
    icon: new Icon(iconData),
    tooltip: 'Connection Status', // TODO: Localize
    onPressed: () {
      setState(() {
        Server.ws.close();
      });
    },
  );
}
Additional info about the WebSocket:
/// Should be called when the IP is validated
void startSocket() {
  try {
    WebSocket.connect(Server.qr).then((socket) {
      // Build WebSocket
      Server.ws = socket;
      Server.ws.listen(
        handleData,
        onError: handleError,
        onDone: handleDone,
        cancelOnError: true,
      );
      Server.ws.pingInterval = new Duration(
        seconds: Globals.map["PingInterval"],
      );
      send(
        "CONNECTION",
        {
          "deviceID": Globals.map["UUID"],
        },
      );
    });
  } catch (e) {
    print("Error opening a WebSocket : $e");
  }
}

/// Handles the closing of the connection.
void handleDone() {
  print("WebSocket closed.");
  new Timer(new Duration(seconds: Globals.map["PingInterval"]), startSocket);
}

/// Handles the WebSocket's errors.
void handleError(Error e) {
  print("WebSocket error.");
  print(e);
  Server.ws.close();
}
I've gone ahead and taken a look at the source code for the WebSocket implementation. It appears that when the WebSocket is being closed with the status GOING_AWAY, the internal socket stream is being closed. However, it is possible that this event does not propagate to the transformed stream which handles the readyState of the instance. I would recommend filing a bug report at dartbug.com.
Try setting pingInterval; it checks the connection status at that interval, after which closeCode will update.
I created a simple video calling app using WebRTC and WebSockets. But when I run the code, the following error occurs:
DOMException [InvalidStateError: "setRemoteDescription needs to called before addIceCandidate"
code: 11
I don't know how to resolve this error. Here is my code:
var localVideo;
var remoteVideo;
var peerConnection;
var serverConnection;
var uuid;
var localStream;

var peerConnectionConfig = {
    'iceServers': [
        {'urls': 'stun:stun.services.mozilla.com'},
        {'urls': 'stun:stun.l.google.com:19302'},
    ]
};

function pageReady() {
    uuid = createUUID();
    console.log('Inside Page Ready');
    localVideo = document.getElementById('localVideo');
    remoteVideo = document.getElementById('remoteVideo');
    serverConnection = new WebSocket('wss://' + window.location.hostname + ':8443');
    serverConnection.onmessage = gotMessageFromServer;
    var constraints = {
        video: true,
        audio: true,
    };
    if (navigator.mediaDevices.getUserMedia) {
        navigator.mediaDevices.getUserMedia(constraints)
            .then(getUserMediaSuccess).catch(errorHandler);
    } else {
        alert('Your browser does not support getUserMedia API');
    }
}

function getUserMediaSuccess(stream) {
    localStream = stream;
    localVideo.src = window.URL.createObjectURL(stream);
}

function start(isCaller) {
    console.log('Inside isCaller');
    peerConnection = new RTCPeerConnection(peerConnectionConfig);
    peerConnection.onicecandidate = gotIceCandidate;
    peerConnection.onaddstream = gotRemoteStream;
    peerConnection.addStream(localStream);
    if (isCaller) {
        console.log('Inside Caller to create offer');
        peerConnection.createOffer()
            .then(createdDescription).catch(errorHandler);
    }
}

function gotMessageFromServer(message) {
    console.log('Message from Server');
    if (!peerConnection) {
        console.log('Inside !Peer Conn');
        start(false);
    }
    var signal = JSON.parse(message.data);
    // Ignore messages from ourself
    if (signal.uuid == uuid) return;
    if (signal.sdp) {
        console.log('Inside SDP');
        peerConnection.setRemoteDescription(new RTCSessionDescription(signal.sdp)).then(function() {
            // Only create answers in response to offers
            if (signal.sdp.type == 'offer') {
                console.log('Before Create Answer');
                peerConnection.createAnswer().then(createdDescription)
                    .catch(errorHandler);
            }
        }).catch(errorHandler);
    } else if (signal.ice) {
        console.log('Inside Signal Ice');
        peerConnection.addIceCandidate(new RTCIceCandidate(signal.ice)).catch(errorHandler);
    }
}

function gotIceCandidate(event) {
    console.log('Inside Got Ice Candi');
    if (event.candidate != null) {
        serverConnection.send(JSON.stringify({'ice': event.candidate, 'uuid': uuid}));
    }
}

function createdDescription(description) {
    console.log('got description');
    peerConnection.setLocalDescription(description).then(function() {
        console.log('Inside Setting ');
        serverConnection.send(JSON.stringify({'sdp': peerConnection.localDescription, 'uuid': uuid}));
    }).catch(errorHandler);
}

function gotRemoteStream(event) {
    console.log('got remote stream');
    remoteVideo.src = window.URL.createObjectURL(event.stream);
}

function errorHandler(error) {
    console.log(error);
}

// Taken from http://stackoverflow.com/a/105074/515584
// Strictly speaking, it's not a real UUID, but it gets the job done here
function createUUID() {
    function s4() {
        return Math.floor((1 + Math.random()) * 0x10000).toString(16).substring(1);
    }
    return s4() + s4() + '-' + s4() + '-' + s4() + '-' + s4() + '-' + s4() + s4() + s4();
}
This is my code. I don't know how to order the addIceCandidate and setRemoteDescription calls.
You need to make sure that
peerConnection.addIceCandidate(new RTCIceCandidate(signal.ice))
is called only after the remote description has been set.
You have a situation where you receive an ICE candidate and try to add it to the peerConnection before the peerConnection has finished setting the description.
I had a similar situation, so I created an array for storing candidates that arrived before setting the description completed, plus a variable that tracks whether the description is set. If the description is set, I add candidates to the peerConnection; otherwise I add them to the array. (When you set your variable to true, you can also go through the array and add all stored candidates to the peerConnection.)
The way WebRTC works (as far as I understand) is that the two peers have to agree on how to communicate with each other, in this order: give your peer an offer, get your peer's answer, and select an ICE candidate to communicate on; then, if you want, you send your media streams for a video conversation.
For a good example of how to implement those functions, and in which order, visit https://github.com/alexan1/SignalRTC; the author has a good understanding of how to do this.
You might already have a solution to your problem by now, but I'm replying in case you do not.
EDIT: As I have been told, this solution is an anti-pattern and you should NOT implement it this way. For more info on how I solved it while still keeping a reasonable flow, follow this answer and its comment section: https://stackoverflow.com/a/57257449/779483
TLDR: Instead of calling addIceCandidate as soon as the signaling information arrives, add the candidates to a queue. After calling setRemoteDescription, go through the candidate queue and call addIceCandidate on each one.
--
From this answer I learned that we have to call setRemoteDescription(offer) before we add the ICE candidate data.
So, expanding on @Luxior's answer, I did the following:
When a signaling message with a candidate arrives:
- Check whether the remote description has been set (via a boolean flag, i.e. remoteIsReady).
- If it has, call addIceCandidate.
- If it hasn't, add the candidate to a queue.
After setRemoteDescription is called (on the answer signal, or on the answering client's action):
- Go through the candidate queue and call addIceCandidate on each one.
- Set the boolean flag (remoteIsReady) to true.
- Empty the queue.
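A minimal sketch of that queueing logic, matching the question's signal format (the queue and flag names are illustrative; answer creation is omitted for brevity):

var pendingCandidates = [];
var remoteIsReady = false;

function handleSignal(signal) {
    if (signal.sdp) {
        peerConnection.setRemoteDescription(new RTCSessionDescription(signal.sdp))
            .then(function() {
                remoteIsReady = true;
                // Flush every candidate that arrived before the description was set.
                pendingCandidates.forEach(function(candidate) {
                    peerConnection.addIceCandidate(candidate).catch(errorHandler);
                });
                pendingCandidates = [];
            })
            .catch(errorHandler);
    } else if (signal.ice) {
        var candidate = new RTCIceCandidate(signal.ice);
        if (remoteIsReady) {
            peerConnection.addIceCandidate(candidate).catch(errorHandler);
        } else {
            // Too early: remember the candidate until setRemoteDescription resolves.
            pendingCandidates.push(candidate);
        }
    }
}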