getting incoming imageUri null from a chooser intent - nativescript

I'm trying to access an image shared from WhatsApp. I have already written the chooser intent declaration in the Android manifest. Now, to handle the incoming image, I wrote the code below:
application.android.on(application.AndroidApplication.activityStartedEvent, function (args) {
    console.log("Event: " + args.eventName + ", Activity: " + args.activity);
    // Event: activityStarted, Activity: com.tns.NativeScriptActivity#8a3b9cc
    let intent = args.activity.getIntent();
    let action = intent.getAction();
    let type = intent.getType();
    console.log(intent);
    // Intent { act=android.intent.action.SEND typ=image/* flg=0x1b080001 cmp=org.myapp.new_app/com.tns.NativeScriptActivity clip={image/* U:content://com.whatsapp.fileprovider/external/WhatsApp/.Shared/photo.jpg} (has extras) }
    console.log(action);
    // android.intent.action.SEND
    console.log(type);
    // image/*
    let imageUri = intent.getParcelableExtra(intent.EXTRA_STREAM);
    console.log(imageUri);
    // null
});
I expect the output not to be null.

The log already shows you exactly what you have to read from the intent: the clip data.
let imageUri = intent.getClipData().getItemAt(0).getUri();
// content://com.whatsapp.fileprovider/external/WhatsApp/.Shared/photo.jpg (from the log above)

Related

Play custom sound in Xamarin iOS local notification

I am trying to play a custom mp3 sound file (10-25 seconds long) with a local notification. I have placed the sound file in the iOS project folder (at the same level as Resources) with BundleResource as the build action. I have also placed it in the Resources folder with the same build action. Neither seems to work.
var content = new UNMutableNotificationContent
{
    Title = title,
    //Subtitle = "Notification Subtitle",
    Body = body,
    Badge = 1,
    Sound = UNNotificationSound.GetSound("music.mp3"), // play custom sound
    UserInfo = NSDictionary.FromObjectAndKey(NSObject.FromObject(id), NSObject.FromObject(notificationKey))
};
var trigger = UNTimeIntervalNotificationTrigger.CreateTrigger(10, false);
var requestID = "request_" + id;
var request = UNNotificationRequest.FromIdentifier(requestID, content, trigger);
UNUserNotificationCenter.Current.AddNotificationRequest(request, (err) =>
{
    if (err != null)
    {
        throw new Exception($"Failed to schedule notification: {err}");
    }
});
Any suggestions?
The code works fine; it turned out my phone's sound was muted.
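For anyone hitting the same symptom: before reworking the sound file, it can help to confirm that notification sounds are even enabled for the app. A minimal sketch using the UNUserNotificationCenter settings API (the log message here is just illustrative):
using System;
using UserNotifications;

// The settings callback is asynchronous, so do any follow-up work inside it.
UNUserNotificationCenter.Current.GetNotificationSettings((settings) =>
{
    if (settings.SoundSetting != UNNotificationSetting.Enabled)
    {
        Console.WriteLine("Notification sounds are disabled for this app.");
    }
});
Bear in mind that the hardware ring/silent switch also silences notification sounds regardless of these settings.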

Alexa custom skill in AWS Lambda not recognizing Alexa.getSupportedInterfaces [Error handled: Alexa.getSupportedInterfaces is not a function]

Trying to use Alexa Presentation Language (APL) features in an AWS-hosted custom Lambda function. The intent handlers are firing, but when I add the
Alexa.getSupportedInterfaces call it fails.
The message is "Error handled: Alexa.getSupportedInterfaces is not a function"
// 1. Intent Handlers =============================================
const LaunchRequest_Handler = {
    canHandle(handlerInput) {
        const request = handlerInput.requestEnvelope.request;
        return request.type === 'LaunchRequest';
    },
    handle(handlerInput) {
        let responseBuilder = handlerInput.responseBuilder;
        let speakOutput = 'Welcome to test Bot. ';
        // let skillTitle = capitalize(invocationName);
        // Add APL directive to response
        if (Alexa1.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            // Add the RenderDocument directive to the responseBuilder
            responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                token: Echo_Token,
                document: Customer
            });
            // Tailor the speech for a device with a screen.
            speakOutput += " You should now also see my greeting on the screen."
        } else {
            // User's device does not support APL, so tailor the speech to this situation
            speakOutput += " This example would be more interesting on a device with a screen, such as an Echo Show or Fire TV.";
        }
        return responseBuilder
            .speak(speakOutput)
            .withShouldEndSession(false)
            .reprompt('try again, ' + speakOutput)
            .withSimpleCard("CustomerSupport!", "CustomerSupport)")
            // .reprompt('add a reprompt if you want to keep the session open for the user to respond')
            //.withStandardCard('Welcome!',
            //    'Hello!\nThis is a card for your skill, ' + skillTitle,
            //    welcomeCardImg.smallImageUrl, welcomeCardImg.largeImageUrl)
            .getResponse();
    },
};
The "is not a function" error usually means the object imported as Alexa (or Alexa1 here) is not an ask-sdk-core module that actually exports getSupportedInterfaces, so the import and the installed ask-sdk-core version are worth checking. Alternatively, instead of using this condition:
Alexa1.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']
you can use the condition below to check whether the device supports APL:
if (supportsAPL(handlerInput))
Make sure you include the function definitions below in your index file:
function supportsAPL(handlerInput) {
    const supportedInterfaces = handlerInput.requestEnvelope.context.System.device.supportedInterfaces;
    const aplInterface = supportedInterfaces['Alexa.Presentation.APL'];
    return aplInterface != null && aplInterface != undefined;
}
function supportsAPLT(handlerInput) {
    const supportedInterfaces = handlerInput.requestEnvelope.context.System.device.supportedInterfaces;
    const aplInterface = supportedInterfaces['Alexa.Presentation.APLT'];
    return aplInterface != null && aplInterface != undefined;
}
Hope that helps as it worked for me.

create local notification in xamarin ios with http request

I have a Xamarin Forms app that supports notifications. I have done it on Android with a broadcast receiver; now I have to do notifications on iOS. My service depends on a REST API, so I want the iOS app to run an HTTP request every 60 seconds, get the data, and then show it as a notification. I have searched for many days but I can't find a way to achieve this.
If this is impossible, can I use a NuGet package or something like that in the iOS project only (inside the Xamarin Forms solution), or not?
content = new UNMutableNotificationContent();
content.Title = "Notification Title";
content.Subtitle = "Notification Subtitle";
content.Body = "This is the message body of the notification.";
content.Badge = 1;
content.CategoryIdentifier = "message";

var trigger = UNTimeIntervalNotificationTrigger.CreateTrigger(60, true);
var requestID = "sampleRequest";
var request = UNNotificationRequest.FromIdentifier(requestID, content, trigger);

UNUserNotificationCenter.Current.AddNotificationRequest(request, (err) =>
{
    if (err != null)
    {
        // Do something with error...
    }
});
Here is my code for generating a local notification on iOS
// GetNotificationSettings invokes its callback asynchronously, so check the
// settings and create the notification inside the callback rather than
// testing a flag immediately afterwards.
UNUserNotificationCenter.Current.GetNotificationSettings((settings) =>
{
    var alertsAllowed = settings.AlertSetting == UNNotificationSetting.Enabled;
    if (!alertsAllowed)
    {
        return;
    }

    var content = new UNMutableNotificationContent();
    content.Title = "Incident Recorder";
    content.Subtitle = "Not Synchronised";
    content.Body = "There are one or more new incidents that have not been synchronised to the server.";

    var trigger = UNTimeIntervalNotificationTrigger.CreateTrigger(5, false);
    var requestID = "sampleRequest";
    var request = UNNotificationRequest.FromIdentifier(requestID, content, trigger);

    UNUserNotificationCenter.Current.AddNotificationRequest(request, (err) =>
    {
        if (err != null)
        {
            Console.WriteLine(err.LocalizedFailureReason);
        }
    });
});
The first parameter in CreateTrigger is how long (in seconds) before the notification is generated; I notice you have 60 in yours. Also bear in mind that a notification will not appear by default if your app is in the foreground.
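If the notification must also be visible while the app is in the foreground, one option is to assign a notification center delegate and ask iOS to present the alert anyway. A minimal sketch using the standard UNUserNotificationCenterDelegate API (iOS 10+):
using System;
using UserNotifications;

public class NotificationDelegate : UNUserNotificationCenterDelegate
{
    // Called when a notification fires while the app is foregrounded;
    // request that the banner is still shown and the sound still played.
    public override void WillPresentNotification(UNUserNotificationCenter center,
        UNNotification notification, Action<UNNotificationPresentationOptions> completionHandler)
    {
        completionHandler(UNNotificationPresentationOptions.Alert | UNNotificationPresentationOptions.Sound);
    }
}
Assign it once at startup, e.g. UNUserNotificationCenter.Current.Delegate = new NotificationDelegate(); in FinishedLaunching.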

implementation of touch ID in our app

Can anyone explain how replyHandler and InvokeOnMainThread work in this code? I have copied this code from a sample project, and I need to implement the same thing in my project.
partial void UIButton7_TouchUpInside(UIButton sender)
{
    var context = new LAContext();
    var error = new NSError();
    if (context.CanEvaluatePolicy(LAPolicy.DeviceOwnerAuthenticationWithBiometrics, out error))
    {
        var replyHandler = new LAContextReplyHandler((success, err) =>
        {
            this.InvokeOnMainThread(() =>
            {
                if (success)
                {
                    Console.WriteLine("You Logged in");
                }
                else
                {
                    var errorAlertView = new UIAlertView("Login Error", err.LocalizedDescription, null, "Close");
                    errorAlertView.Show();
                }
            });
        });
        context.EvaluatePolicy(LAPolicy.DeviceOwnerAuthenticationWithBiometrics, "You need to login", replyHandler);
    }
}
The reply handler is basically a callback that manages the feedback when you get the result from Touch ID.
InvokeOnMainThread lets you make a UI change when that result comes back: the callback may run on a background thread, so it forces the code onto the UI thread, where UI changes are allowed.
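One side note: UIAlertView is deprecated on modern iOS. A minimal sketch of the same error alert using UIAlertController instead (assuming, as in the sample, that this code runs inside a UIViewController):
var errorAlert = UIAlertController.Create("Login Error", err.LocalizedDescription, UIAlertControllerStyle.Alert);
errorAlert.AddAction(UIAlertAction.Create("Close", UIAlertActionStyle.Cancel, null));
PresentViewController(errorAlert, true, null);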

how to get inputs from wearable devices

I'm implementing a notification system using the Xamarin platform, which extends to wearable devices. I also want to get the user's input from the wear notification, and I have programmed it so that the user can select text or use voice. I followed this tutorial:
http://developer.android.com/training/wearables/notifications/voice-input.html
my code is:
void SendWearNotification(string message, string from)
{
    var valuesForActivity = new Bundle();
    valuesForActivity.PutString("message", message);
    String groupkey = "group_key_emails";

    var intent = new Intent(this, typeof(MyMainActivity));
    intent.PutExtras(valuesForActivity);
    intent.AddFlags(ActivityFlags.ClearTop);
    var pendingIntent = PendingIntent.GetActivity(this, 0, intent, PendingIntentFlags.OneShot);

    var builder = new NotificationCompat.Builder(this)
        .SetAutoCancel(true)
        .SetContentIntent(pendingIntent)
        .SetContentTitle(from)
        .SetSmallIcon(Resource.Drawable.Iconlogo)
        .SetContentText(message) // message is the one received from the notification
        .SetTicker(from)
        .SetGroup(groupkey) // creates groups
        .SetPriority((int)NotificationPriority.High);

    // for viewing the message on a second page
    var pagestyle = new NotificationCompat.BigTextStyle();
    pagestyle.SetBigContentTitle(from)
        .BigText(messagefromapp); // messagefromapp is the one retrieved from the WCF app

    // second page
    var secondpagenotification = new NotificationCompat.Builder(this)
        .SetStyle(pagestyle)
        .Build();

    // intent for voice input or text selection
    var wear_intent = new Intent(Intent.ActionView);
    var wear_pending_intent = PendingIntent.GetActivity(this, 0, wear_intent, 0);

    // Create the reply action and add the remote input
    setRemoteInput();
    var action = new NotificationCompat.Action.Builder(Resource.Drawable.ic_mes,
        GetString(Resource.String.messages), wear_pending_intent)
        .AddRemoteInput(remoteinput)
        .Build();

    // add it to the notification builder
    Notification notification = builder.Extend(new NotificationCompat.WearableExtender()
        .AddPage(secondpagenotification).AddAction(action)).Build();

    // use different notification ids so that they show as a list
    if (notification_id < 9)
    {
        notification_id += 1;
    }
    else
    {
        notification_id = 0;
    }

    var notificationManager = (NotificationManager)GetSystemService(Context.NotificationService);
    notificationManager.Notify(notification_id + 2, notification);
}
This method is implemented inside the GCMListnerService class.
According to the tutorial linked above, I can retrieve the input the user selected or spoke using the following code:
private void getResponse(Intent intent)
{
    Bundle remoteInput = RemoteInput.GetResultsFromIntent(intent);
    if (remoteInput != null)
    {
        // .Show() is required, otherwise the toast is created but never displayed
        Toast.MakeText(this, remoteInput.GetCharSequence(EXTRA_VOICE_REPLY), ToastLength.Short).Show();
    }
    // return null;
}
My question is: when do I call this method? How do I know whether the user has selected a text and sent it from the wearable device? Is there any event I can use?
I got the solution. The method that reads the remote input ("getResponse" in my case) should be called from the OnCreate method of the activity used when the notification is created. In my case that activity is "MyMainActivity", which I point to when I create the intent for the notification, as you can see in the code. This means the method will be called twice: once when the application starts, and once when the user responds from the wearable. Only in the second case will RemoteInput.GetResultsFromIntent have a value. I hope this helps someone with the same issue.
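A minimal sketch of that wiring (the layout name here is hypothetical; getResponse is the method from the question):
protected override void OnCreate(Bundle savedInstanceState)
{
    base.OnCreate(savedInstanceState);
    SetContentView(Resource.Layout.Main); // hypothetical layout

    // On a normal launch RemoteInput.GetResultsFromIntent(Intent) returns null,
    // so getResponse does nothing; when the activity is launched from the
    // wearable reply, it returns the spoken or selected text.
    getResponse(Intent);
}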
