How can I handle this error?
Please help me out of this situation.
private void previewVideo(){
try{
var path = Android.Net.Uri.Parse(App._file.AbsolutePath);
preview.SetVideoURI (path);
preview.Start ();
}
catch(Exception e){
e.GetBaseException ();
}
}
You're lucky that I was following your previous question. Please try to make your questions as detailed as possible so it's easier for us to analyze and possibly replicate the error.
To be able to set an error listener on the VideoView, the VideoView needs an object that implements the Android.Media.MediaPlayer.IOnErrorListener interface.
You can accomplish that by letting your Activity implement the previously mentioned interface and setting the Activity as the error listener for the VideoView:
public class MainActivity : Activity, Android.Media.MediaPlayer.IOnErrorListener
{
...
protected override void OnCreate(Bundle bundle)
{
...
preview = FindViewById<VideoView> (Resource.Id.SampleVideoView);
preview.SetOnErrorListener(this); // <- Set the error listener
...
}
...
//The implementation of MediaPlayer.IOnErrorListener
public bool OnError(MediaPlayer player, MediaError error, int extra)
{
// Do something here because an error happened
return true; // true = the error was handled here
}
...
}
By doing this, when an error occurs in the VideoView, the VideoView will call the public OnError method.
From the Android docs of OnErrorListener you can see what the OnError method should return:
Returns:
True if the method handled the error, false if it didn't. Returning false, or not having an OnErrorListener at all, will cause the OnCompletionListener to be called.
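For illustration, a handler based on that could look like the sketch below, assuming it lives in the Activity shown above; the log tag, the Toast message, and the choice to always return true are my own assumptions, not part of the original answer:
public bool OnError(MediaPlayer player, MediaError error, int extra)
{
    // Log the error so the failure is visible while debugging
    Android.Util.Log.Error("VideoPreview", $"VideoView error: {error}, extra: {extra}");

    // Show a simple user-facing message instead of the default error dialog
    Toast.MakeText(this, "The video could not be played.", ToastLength.Short).Show();

    // Returning true tells the framework the error was handled here
    return true;
}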
I am creating a custom renderer that needs to display whatever I have rendered in my Vulkan engine. For this I have a VulkanSurfaceView, which inherits from MetalKit.MTKView on iOS, and from Android.Views.SurfaceView and ISurfaceHolderCallback on Android.
For iOS I can simply do this, which will draw a new frame continually, as long as the view is in focus:
public class VulkanSurfaceView : MTKView, IVulkanAppHost
{
...
public override void Draw()
{
Renderer.Tick();
base.Draw();
}
}
However, on Android I have to do the following, where I call Invalidate() from within the OnDraw method, otherwise OnDraw is only called once. I think this code smells a bit, and I am not sure whether this is a good way of doing it. Is my solution okay? If not, does anyone have a better idea?
public class VulkanSurfaceView : SurfaceView, ISurfaceHolderCallback, IVulkanAppHost
{
...
protected override void OnDraw(Canvas? canvas)
{
Renderer.Tick();
base.OnDraw(canvas);
Invalidate();
}
}
Did you try calling setWillNotDraw(false) in your SurfaceCreated method?
Refer to the link.
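In Xamarin.Android terms, that suggestion would look roughly like the sketch below; whether this alone is enough to get OnDraw called is an assumption on my part:
public void SurfaceCreated(ISurfaceHolder holder)
{
    // A SurfaceView skips OnDraw by default; opt back in so Invalidate() triggers it
    SetWillNotDraw(false);
}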
Thank you to @ToolmakerSteve.
I created a Timer in which I call Invalidate() if a new frame has been requested (tracked via a simple bool). For anyone interested, I do it like so:
protected override void OnDraw(Canvas? canvas) // Just to show the updated OnDraw-method
{
Renderer.Tick();
base.OnDraw(canvas);
}
public void SurfaceCreated(ISurfaceHolder holder)
{
TickTimer = new System.Threading.Timer(state =>
{
AndroidApplication.SynchronizationContext.Send(_ => { if (NewFrameRequested) Invalidate(); }, state);
try { TickTimer.Change(0, Timeout.Infinite); } catch (ObjectDisposedException) { }
}, null, 0, Timeout.Infinite);
}
For now it is very simple, but it works and will probably grow. The reason for my initial bad frame rate with this method was a misunderstanding of the Timer's "dueTime" (see the Timer class documentation), which I thought was the frame rate I was after. It is actually the time between frames, which seems obvious now.
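To make the dueTime point concrete, here is a hedged tweak of the re-arm call from the timer above; the 16 ms value is just an illustrative target of roughly 60 fps, not something from the original post:
// dueTime (16 ms here) is the delay until the next tick, i.e. the time between frames, not a frame rate
try { TickTimer.Change(16, Timeout.Infinite); } catch (ObjectDisposedException) { }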
As @Bhargavi also kindly mentioned, you need to call setWillNotDraw(false) if OnDraw is not being called when invalidating the view.
I am using Android Studio 3
I am following this article to learn how to use Google Recaptcha in Android Studio.
Installed the package using this: implementation 'com.google.android.gms:play-services-safetynet:12.0.1'
API keys are also registered.
I saw there is an onClick event handler, but where does it mention rendering the reCAPTCHA?
Update 1
When I wrote the button click code as mentioned in the link, I got a compilation error: inconvertible types; cannot cast anonymous android.view.View.OnClickListener to java.util.concurrent.Executor
Code, as asked for in the comments:
btn_Login.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(final View view) {
SafetyNet.getClient(this).verifyWithRecaptcha("")
.addOnSuccessListener((Executor) this,
new OnSuccessListener<SafetyNetApi.RecaptchaTokenResponse>() {
@Override
public void onSuccess(SafetyNetApi.RecaptchaTokenResponse response) {
// Indicates communication with reCAPTCHA service was
// successful.
String userResponseToken = response.getTokenResult();
if (!userResponseToken.isEmpty()) {
// Validate the user response token using the
// reCAPTCHA siteverify API.
}
}
})
.addOnFailureListener((Executor) this, new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
if (e instanceof ApiException) {
// An error occurred when communicating with the
// reCAPTCHA service. Refer to the status code to
// handle the error appropriately.
ApiException apiException = (ApiException) e;
int statusCode = apiException.getStatusCode();
} else {
}
}
});
}
});
I used the code below and everything works fine now.
Make sure to implement Executor in the activity.
btn_Login.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(final View view) {
SafetyNet.getClient(MyActivity.this).verifyWithRecaptcha("")
.addOnSuccessListener((Activity) MyActivity.this,
new OnSuccessListener<SafetyNetApi.RecaptchaTokenResponse>() {
@Override
public void onSuccess(SafetyNetApi.RecaptchaTokenResponse response) {
// Indicates communication with reCAPTCHA service was
// successful.
String userResponseToken = response.getTokenResult();
if (!userResponseToken.isEmpty()) {
// Validate the user response token using the
// reCAPTCHA siteverify API.
}
}
})
.addOnFailureListener((Activity) MyActivity.this, new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
if (e instanceof ApiException) {
// An error occurred when communicating with the
// reCAPTCHA service. Refer to the status code to
// handle the error appropriately.
ApiException apiException = (ApiException) e;
int statusCode = apiException.getStatusCode();
} else {
}
}
});
}
});
According to the article, in your button click handler you must call the method SafetyNet.getClient(this).verifyWithRecaptcha(...) to show the reCAPTCHA and handle success or error. By passing this, you give the SDK a handle to your current view, which should be shown again after the reCAPTCHA is solved. Most probably the rendering is done by the SDK itself, given that it is part of Google Play services. And most probably it will be full screen, in a separate top-level view, blocking access to your app until the riddle is solved.
You should try to implement it in your app as described in the article and see how it goes. Then you can ask a more specific question.
EDIT: You combined two techniques in your code: copy-pasting the code from Google and wrapping it in an anonymous class. So the problem you asked about in the comment is that (Executor) this in line 5 now refers not to your Activity (as it did in the original tutorial) but to the instance of the anonymous interface implementation new View.OnClickListener() that you created. You can refer to this answer to see how it can be implemented without interfering with the already complex reCAPTCHA code.
I need to subscribe to an event to handle incoming phone calls. Since CTCallCenter is deprecated as of iOS 11.0, we have to use CXCallObserver instead. I successfully implemented a solution with CTCallCenter, but I am not able to subscribe to an event for CXCallObserver. Does anyone have a working solution for CXCallObserver?
Here is my code to subscribe to the event for CTCallCenter:
_callCenter = new CTCallCenter();
_callCenter.CallEventHandler += CallEvent;
private void CallEvent(CTCall call)
{
CoreFoundation.DispatchQueue.MainQueue.DispatchSync(() =>
{
if(call.CallState.Equals(call.StateIncoming))
//Do something
});
}
Implement the delegate for CXCallObserver:
public class MyCXCallObserverDelegate : CXCallObserverDelegate
{
public override void CallChanged(CXCallObserver callObserver, CXCall call)
{
Console.WriteLine(call);
}
}
Then in your code, create an instance of CXCallObserver (maintain a strong reference to it) and then assign the delegate:
cXCallObserver = new CXCallObserver();
cXCallObserver.SetDelegate(new MyCXCallObserverDelegate(), null);
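If you need to tell an incoming ringing call apart from other state changes, the CXCall flags can be inspected inside CallChanged. Interpreting this particular flag combination as "incoming and ringing" is my assumption, so verify it against your own scenarios:
public override void CallChanged(CXCallObserver callObserver, CXCall call)
{
    // Not outgoing, not yet connected, not ended: most likely an incoming call that is still ringing
    if (!call.Outgoing && !call.HasConnected && !call.HasEnded)
    {
        Console.WriteLine($"Incoming call ringing: {call.Uuid}");
    }
    else if (call.HasEnded)
    {
        Console.WriteLine($"Call ended: {call.Uuid}");
    }
}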
I'm trying to implement Google sign-in using this component for Xamarin.iOS: Google Sign-in for iOS
It works great on the simulator, but when it comes to an actual device it crashes as soon as I tap the sign-in button. (iOS 10.2; the simulator also runs the same OS version.)
I have a custom button which calls the SignInUser method on SignIn.SharedInstance.
It crashes with the error below (only when the app is deployed to a device):
Objective-C exception thrown. Name: NSInvalidArgumentException Reason: uiDelegate must either be a |UIViewController| or implement the |signIn:presentViewController:| and |signIn:dismissViewController:| methods from |GIDSignInUIDelegate|.
I'm calling the function below to initialize Google Sign-In in the FinishedLaunching method of AppDelegate.cs:
public void Configure()
{
NSError configureError;
Context.SharedInstance.Configure(out configureError);
if (configureError != null)
{
// If something went wrong, assign the clientID manually
Console.WriteLine("Error configuring the Google context: {0}", configureError);
SignIn.SharedInstance.ClientID = googleClientId;
}
SignIn.SharedInstance.Delegate = this;
SignIn.SharedInstance.UIDelegate = new GoogleSignInUIDelegate();
}
Here's my implementation of SignInUIDelegate:
class GoogleSignInUIDelegate : SignInUIDelegate
{
public override void WillDispatch(SignIn signIn, NSError error)
{
}
public override void PresentViewController(SignIn signIn, UIViewController viewController)
{
UIApplication.SharedApplication.KeyWindow.RootViewController.PresentViewController(viewController, true, null);
}
public override void DismissViewController(SignIn signIn, UIViewController viewController)
{
UIApplication.SharedApplication.KeyWindow.RootViewController.DismissViewController(true, null);
}
}
So the simulator seems to know the methods are implemented, but the device does not. Any idea what I am doing wrong here?
After some debugging I found where the actual issue was.
Somehow, the UIDelegate I assigned during initialization was lost by the time I called my login method, so I moved the line below from my initialization step to the login method:
SignIn.SharedInstance.UIDelegate = new GoogleSignInUIDelegate();
Here's what my login method looks like now:
public void Login()
{
SignIn.SharedInstance.UIDelegate = new GoogleSignInUIDelegate(); //moved this here from Configure
SignIn.SharedInstance.SignInUser();
}
This took care of the issue for me, but I am still not sure why this is only an issue on the device and not the simulator. Any ideas?
Add a PreserveAttribute to your GoogleSignInUIDelegate class to prevent the linker from removing members whose use cannot be determined via static analysis.
Add the following class to your project:
public sealed class PreserveAttribute : System.Attribute {
public bool AllMembers;
public bool Conditional;
}
Apply the class attribute:
[Preserve (AllMembers = true)]
class GoogleSignInUIDelegate : SignInUIDelegate
{
~~~~
}
Re: https://developer.xamarin.com/guides/ios/advanced_topics/linker/
Setting PresentingViewController helped me to resolve the issue.
SignIn.SharedInstance.PresentingViewController = this;
I found this fix here:
https://github.com/googlesamples/google-signin-unity/issues/169#issuecomment-791305225
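For context: in newer versions of the Google Sign-In binding the UIDelegate property was replaced by PresentingViewController, so the Login() method from earlier in this thread would look roughly like the sketch below, assuming it lives in a UIViewController:
public void Login()
{
    // PresentingViewController replaces the old UIDelegate; 'this' must be a UIViewController
    SignIn.SharedInstance.PresentingViewController = this;
    SignIn.SharedInstance.SignInUser();
}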
I'm writing a Chromecast receiver to play different kinds of content (including embedded Flash videos). I'd like to use my own JS library to create the player canvas, not rely on the HTML video element.
I'm currently blocked because I can't get a media item to load using custom behaviour:
Receiver:
Nothing fancy in the HTML: I just load my library into the #mediaWrapper div, then create a MediaManager from it.
var node = $( "#mediaWrapper" )[0];
var phiEngine = new phi.media.Player( node );
window.mediaManager = new cast.receiver.MediaManager( phiEngine );
window.castReceiverManager = cast.receiver.CastReceiverManager.getInstance();
/* Override Load method */
window.mediaManager['origOnLoad'] = window.mediaManager.onLoad;
window.mediaManager.onLoad = function (event) {
console.log('### Application Load ', event);
/* Custom code (load lib, set metadata, create canvas ...) */
window.mediaManager.sendLoadComplete(); // Doesn't seem to do anything
// window.mediaManager['origOnLoad'](event);
// -> Fails 'Load metadata error' since url is not a video stream
// -> ex: youtube url
}
/* Will never be called */
window.mediaManager['origOnMetadataLoaded'] = window.mediaManager.onMetadataLoaded;
window.mediaManager.onMetadataLoaded = function (event) {
...
}
Sender:
I use my own Android application to cast to the device. I can't use the Companion library because this will be a Titanium module.
private void createMediaPlayer() {
// Create a Remote Media Player
mRemoteMediaPlayer = new RemoteMediaPlayer();
mRemoteMediaPlayer.setOnStatusUpdatedListener(
new RemoteMediaPlayer.OnStatusUpdatedListener() {
@Override
public void onStatusUpdated() {
Log.e(TAG, "onStatusUpdated");
}
});
mRemoteMediaPlayer.setOnMetadataUpdatedListener(
new RemoteMediaPlayer.OnMetadataUpdatedListener() {
@Override
public void onMetadataUpdated() {
Log.e(TAG, "onMetadataUpdated");
}
});
try {
Cast.CastApi.setMessageReceivedCallbacks(mApiClient,
mRemoteMediaPlayer.getNamespace(), mRemoteMediaPlayer);
} catch (IOException e) {
Log.e(TAG, "Exception while creating media channel", e);
}
mRemoteMediaPlayer
.requestStatus(mApiClient)
.setResultCallback(
new ResultCallback<RemoteMediaPlayer.MediaChannelResult>() {
@Override
public void onResult(MediaChannelResult result) {
Log.e(TAG, "Request status : ", result.toString());
if (!result.getStatus().isSuccess()) {
Log.e(TAG, "Failed to request status.");
}
}
});
}
private void loadMedia( MediaInfo mediaInfo, Boolean autoplay ) {
try {
mRemoteMediaPlayer.load(mApiClient, mediaInfo, autoplay)
.setResultCallback(new ResultCallback<RemoteMediaPlayer.MediaChannelResult>() {
@Override
public void onResult(MediaChannelResult result) {
Log.e(TAG, "loadMedia ResultCallback reached");
if (result.getStatus().isSuccess()) {
Log.e(TAG, "Media loaded successfully");
} else {
Log.e(TAG, "Error loading Media : " + result.getStatus().getStatusCode() );
}
}
});
} catch (Exception e) {
Log.e(TAG, "Problem opening media during loading", e);
}
}
Expected behaviour:
I basically call createMediaPlayer() once first, then call loadMedia(...). The first call to loadMedia shows nothing in the log: neither success nor failure. Subsequent calls fail with errorCode 4.
I do get the load event on the receiver side. But back on the sender side, I can't manage to end the load phase and get a media session created.
I was expecting sendLoadComplete() to do that, but I might be wrong. How can I get the media status to update and the loadMedia ResultCallback to be reached?
My goal is to use RemoteMediaPlayer.play(), pause(), ... but for now I am stuck with 'No current media session' because the media isn't loaded yet.
Also, I'd really like to be able to log any message the sender receives, before it is processed. Is that possible?
Hope I did not forget any information.
Thanks for your help!
Edit: I solved this by using a custom message channel, since it seems that I can't use RemoteMediaPlayer the way I want to.
I believe the error code 4 you are receiving is bogus; see https://plus.google.com/u/0/+JimRenkel2014/posts/aY5RP7X3QhA . As noted there, I created a Chromecast issue for this (https://code.google.com/p/google-cast-sdk/issues/detail?id=305&thanks=305&ts=1403833532). Additional support for this issue will help it get fixed faster! :-)
.sendLoadComplete(true)
Passing the boolean value true helped me receive the loaded event on the sender. It might help you as well.