Pebble AppMessage crash when sending a string containing accents from the phone to the watch

I'm building an API-based application. I use the Pebble.sendAppMessage function to send data from the API to the watch. Each time I try to send data containing accented characters and log it, the console prints only the following empty message:
[xx:xx:xx] javascript>
and no further interaction between the phone and the watch works.
I first tried encoding each API value as UTF-8, but the crash still occurs.
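To check what is actually inside the string before it reaches sendAppMessage, I log its character codes with a small helper (just a diagnostic sketch; str stands for whichever API value I'm about to send):
// Diagnostic only: print each character code of the value about to be sent,
// to confirm the accented characters survive up to Pebble.sendAppMessage.
function logCharCodes(str) {
  var codes = [];
  for (var i = 0; i < str.length; i++) {
    codes.push(str.charCodeAt(i));
  }
  console.log("sending '" + str + "' -> [" + codes.join(", ") + "]");
}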
Can we send strings containing accents using the Pebble.sendAppMessage function? What exactly did I miss?
C code:
#include <pebble.h>
static Window *window;
static TextLayer *text_layer;
static void message_provider(DictionaryIterator* iterator, void* context) {
Tuple* tuple = dict_find(iterator, 0);
text_layer_set_text(text_layer, tuple->value->cstring);
}
static void window_load(Window *window) {
Layer *window_layer = window_get_root_layer(window);
GRect bounds = layer_get_bounds(window_layer);
text_layer = text_layer_create((GRect) { .origin = { 0, 72 }, .size = { bounds.size.w, 20 } });
text_layer_set_text(text_layer, "Loading...");
text_layer_set_text_alignment(text_layer, GTextAlignmentCenter);
layer_add_child(window_layer, text_layer_get_layer(text_layer));
}
static void window_unload(Window *window) {
text_layer_destroy(text_layer);
}
static void init(void) {
window = window_create();
window_set_window_handlers(window, (WindowHandlers) {
.load = window_load,
.unload = window_unload,
});
const bool animated = true;
window_stack_push(window, animated);
app_message_register_inbox_received(message_provider);
app_message_open(app_message_inbox_size_maximum(), app_message_outbox_size_maximum());
}
static void deinit(void) {
window_destroy(window);
}
int main(void) {
init();
app_event_loop();
deinit();
}
JS code:
Pebble.addEventListener("ready", function() {
Pebble.sendAppMessage({"dummy" : "Gérard example is always the best."}, function() {
console.log("message sent successfully !");
}, function() {
console.log("Cannot send message with accent.");
});
});
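For reference, the "dummy" key on the JS side maps to the numeric key 0 that the C code reads with dict_find; I'm assuming the mapping is declared in the appKeys section of appinfo.json, roughly like this:
"appKeys": {
  "dummy": 0
}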

Related

Xamarin Forms Cross and Camera control

For my study project, I need to build an application that has a CameraView or a CameraPage with a special design. However, I'm not able to figure out how to do it.
I found a lot of information, to be honest, but it is either obsolete or incomplete, so I would like to take stock of it in this thread!
How to implement a Camera?
Well, two solutions can be considered based on what I read.
Camera Page
Let's say it's the first "official" solution. It's proposed by Xamarin itself, with the Customizing a ContentPage tutorial/documentation, which explains how to implement the camera service with a cross-platform solution.
I then tried the UWP solution:
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
x:Class="CameraPreviewProject.Sources.Pages.CameraPage">
<ContentPage.Content>
<AbsoluteLayout>
<Button Text="Click me !" AbsoluteLayout.LayoutBounds="0.5, 0.5, 0.1, 0.1" AbsoluteLayout.LayoutFlags="All" />
</AbsoluteLayout>
</ContentPage.Content>
</ContentPage>
Finally, the C# side gives us this:
public partial class CameraPage : ContentPage
{
public CameraPage()
{
InitializeComponent();
}
}
Then we create a renderer on the UWP side:
using CameraPreviewProject.Sources.Pages;
using CameraPreviewProject.UWP.Sources.PageRenderers;
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Windows.ApplicationModel;
using Windows.Devices.Enumeration;
using Windows.Devices.Sensors;
using Windows.Foundation;
using Windows.Graphics.Display;
using Windows.Graphics.Imaging;
using Windows.Media;
using Windows.Media.Capture;
using Windows.Media.MediaProperties;
using Windows.Storage;
using Windows.Storage.FileProperties;
using Windows.Storage.Streams;
using Windows.System.Display;
using Windows.UI.Core;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;
using Xamarin.Forms.Platform.UWP;
[assembly: ExportRenderer(typeof(CameraPage), typeof(CameraPageRenderer))]
namespace CameraPreviewProject.UWP.Sources.PageRenderers
{
public class CameraPageRenderer : PageRenderer
{
private readonly DisplayInformation displayInformation = DisplayInformation.GetForCurrentView();
private readonly SimpleOrientationSensor orientationSensor = SimpleOrientationSensor.GetDefault();
private readonly DisplayRequest displayRequest = new DisplayRequest();
private SimpleOrientation deviceOrientation = SimpleOrientation.NotRotated;
private DisplayOrientations displayOrientation = DisplayOrientations.Portrait;
// Rotation metadata to apply to preview stream (https://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh868174.aspx)
private static readonly Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1"); // (MF_MT_VIDEO_ROTATION)
private StorageFolder captureFolder = null;
private readonly SystemMediaTransportControls systemMediaControls = SystemMediaTransportControls.GetForCurrentView();
private MediaCapture mediaCapture;
private CaptureElement captureElement;
private bool isInitialized;
private bool isPreviewing;
private bool externalCamera;
private bool mirroringPreview;
private Page page;
private AppBarButton takePhotoButton;
private Application app;
protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Page> e)
{
base.OnElementChanged(e);
if (e.OldElement != null || Element == null)
{
return;
}
try
{
app = Application.Current;
app.Suspending += OnAppSuspending;
app.Resuming += OnAppResuming;
SetupUserInterface();
SetupCamera();
this.Children.Add(page);
}
catch (Exception ex)
{
Debug.WriteLine(#" ERROR: ", ex.Message);
}
}
protected override Size ArrangeOverride(Size finalSize)
{
page.Arrange(new Rect(0, 0, finalSize.Width, finalSize.Height));
return finalSize;
}
private void SetupUserInterface()
{
takePhotoButton = new AppBarButton
{
VerticalAlignment = VerticalAlignment.Center,
HorizontalAlignment = HorizontalAlignment.Center,
Icon = new SymbolIcon(Symbol.Camera)
};
var commandBar = new CommandBar();
commandBar.PrimaryCommands.Add(takePhotoButton);
captureElement = new CaptureElement();
captureElement.Stretch = Stretch.UniformToFill;
var stackPanel = new StackPanel();
stackPanel.Children.Add(captureElement);
page = new Page();
page.BottomAppBar = commandBar;
page.Content = stackPanel;
page.Unloaded += OnPageUnloaded;
}
private async void SetupCamera()
{
await SetupUIAsync();
await InitializeCameraAsync();
}
#region Event Handlers
private async void OnSystemMediaControlsPropertyChanged(SystemMediaTransportControls sender, SystemMediaTransportControlsPropertyChangedEventArgs args)
{
await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
{
// Only handle event if the page is being displayed
if (args.Property == SystemMediaTransportControlsProperty.SoundLevel && page.Frame.CurrentSourcePageType == typeof(MainPage))
{
// Check if the app is being muted. If so, it's being minimized
// Otherwise if it is not initialized, it's being brought into focus
if (sender.SoundLevel == SoundLevel.Muted)
{
await CleanupCameraAsync();
}
else if (!isInitialized)
{
await InitializeCameraAsync();
}
}
});
}
private void OnOrientationSensorOrientationChanged(SimpleOrientationSensor sender, SimpleOrientationSensorOrientationChangedEventArgs args)
{
// Only update orientation if the device is not parallel to the ground
if (args.Orientation != SimpleOrientation.Faceup && args.Orientation != SimpleOrientation.Facedown)
{
deviceOrientation = args.Orientation;
}
}
private async void OnDisplayInformationOrientationChanged(DisplayInformation sender, object args)
{
displayOrientation = sender.CurrentOrientation;
if (isPreviewing)
{
await SetPreviewRotationAsync();
}
}
private async void OnTakePhotoButtonClicked(object sender, RoutedEventArgs e)
{
await TakePhotoAsync();
}
/*async void OnHardwareCameraButtonPressed(object sender, CameraEventArgs e)
{
await TakePhotoAsync();
}*/
#endregion Event Handlers
#region Media Capture
private async Task InitializeCameraAsync()
{
if (mediaCapture == null)
{
var devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
var cameraDevice = devices.FirstOrDefault(c => c.EnclosureLocation != null && c.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back);
// Get any camera if there isn't one on the back panel
cameraDevice = cameraDevice ?? devices.FirstOrDefault();
if (cameraDevice == null)
{
Debug.WriteLine("No camera found");
return;
}
mediaCapture = new MediaCapture();
try
{
await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
{
VideoDeviceId = cameraDevice.Id,
AudioDeviceId = string.Empty,
StreamingCaptureMode = StreamingCaptureMode.Video,
PhotoCaptureSource = PhotoCaptureSource.Photo
});
isInitialized = true;
}
catch (UnauthorizedAccessException)
{
Debug.WriteLine("Camera access denied");
}
catch (Exception ex)
{
Debug.WriteLine("Exception initializing MediaCapture - {0}: {1}", cameraDevice.Id, ex.ToString());
}
if (isInitialized)
{
if (cameraDevice.EnclosureLocation == null || cameraDevice.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Unknown)
{
externalCamera = true;
}
else
{
// Camera is on device
externalCamera = false;
// Mirror preview if camera is on front panel
mirroringPreview = (cameraDevice.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front);
}
await StartPreviewAsync();
}
}
}
private async Task StartPreviewAsync()
{
// Prevent the device from sleeping while the preview is running
displayRequest.RequestActive();
// Setup preview source in UI and mirror if required
captureElement.Source = mediaCapture;
captureElement.FlowDirection = mirroringPreview ? FlowDirection.RightToLeft : FlowDirection.LeftToRight;
// Start preview
await mediaCapture.StartPreviewAsync();
isPreviewing = true;
if (isPreviewing)
{
await SetPreviewRotationAsync();
}
}
private async Task StopPreviewAsync()
{
isPreviewing = false;
await mediaCapture.StopPreviewAsync();
// Use dispatcher because sometimes this method is called from non-UI threads
await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
// UI cleanup
captureElement.Source = null;
// Allow device screen to sleep now preview is stopped
displayRequest.RequestRelease();
});
}
private async Task SetPreviewRotationAsync()
{
// Only update the orientation if the camera is mounted on the device
if (externalCamera)
{
return;
}
// Derive the preview rotation
int rotation = ConvertDisplayOrientationToDegrees(displayOrientation);
// Invert if mirroring
if (mirroringPreview)
{
rotation = (360 - rotation) % 360;
}
// Add rotation metadata to preview stream
var props = mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview);
props.Properties.Add(RotationKey, rotation);
await mediaCapture.SetEncodingPropertiesAsync(MediaStreamType.VideoPreview, props, null);
}
private async Task TakePhotoAsync()
{
var stream = new InMemoryRandomAccessStream();
await mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), stream);
try
{
var file = await captureFolder.CreateFileAsync("photo.jpg", CreationCollisionOption.GenerateUniqueName);
var orientation = ConvertOrientationToPhotoOrientation(GetCameraOrientation());
await ReencodeAndSavePhotoAsync(stream, file, orientation);
}
catch (Exception ex)
{
Debug.WriteLine("Exception when taking photo: " + ex.ToString());
}
}
private async Task CleanupCameraAsync()
{
if (isInitialized)
{
if (isPreviewing)
{
await StopPreviewAsync();
}
isInitialized = false;
}
if (mediaCapture != null)
{
mediaCapture.Dispose();
mediaCapture = null;
}
}
#endregion Media Capture
#region Helpers
private async Task SetupUIAsync()
{
// Lock page to landscape to prevent the capture element from rotating
DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;
/*// Hide status bar
if (ApiInformation.IsTypePresent("Windows.UI.ViewManagement.StatusBar"))
{
await Windows.UI.ViewManagement.StatusBar.GetForCurrentView().HideAsync();
}*/
displayOrientation = displayInformation.CurrentOrientation;
if (orientationSensor != null)
{
deviceOrientation = orientationSensor.GetCurrentOrientation();
}
RegisterEventHandlers();
var picturesLibrary = await StorageLibrary.GetLibraryAsync(KnownLibraryId.Pictures);
// Fallback to local app storage if no pictures library
captureFolder = picturesLibrary.SaveFolder ?? ApplicationData.Current.LocalFolder;
}
private async Task CleanupUIAsync()
{
UnregisterEventHandlers();
/*if (ApiInformation.IsTypePresent("Windows.UI.ViewManagement.StatusBar"))
{
await Windows.UI.ViewManagement.StatusBar.GetForCurrentView().ShowAsync();
}*/
// Revert orientation preferences
DisplayInformation.AutoRotationPreferences = DisplayOrientations.None;
}
private void RegisterEventHandlers()
{
/*if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
HardwareButtons.CameraPressed += OnHardwareCameraButtonPressed;
}*/
if (orientationSensor != null)
{
orientationSensor.OrientationChanged += OnOrientationSensorOrientationChanged;
}
displayInformation.OrientationChanged += OnDisplayInformationOrientationChanged;
systemMediaControls.PropertyChanged += OnSystemMediaControlsPropertyChanged;
takePhotoButton.Click += OnTakePhotoButtonClicked;
}
private void UnregisterEventHandlers()
{
/*if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
HardwareButtons.CameraPressed -= OnHardwareCameraButtonPressed;
}*/
if (orientationSensor != null)
{
orientationSensor.OrientationChanged -= OnOrientationSensorOrientationChanged;
}
displayInformation.OrientationChanged -= OnDisplayInformationOrientationChanged;
systemMediaControls.PropertyChanged -= OnSystemMediaControlsPropertyChanged;
takePhotoButton.Click -= OnTakePhotoButtonClicked;
}
private static async Task ReencodeAndSavePhotoAsync(IRandomAccessStream stream, StorageFile file, PhotoOrientation orientation)
{
using (var inputStream = stream)
{
var decoder = await BitmapDecoder.CreateAsync(inputStream);
using (var outputStream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
var encoder = await BitmapEncoder.CreateForTranscodingAsync(outputStream, decoder);
var properties = new BitmapPropertySet
{
{
"System.Photo.Orientation", new BitmapTypedValue(orientation, Windows.Foundation.PropertyType.UInt16)
}
};
await encoder.BitmapProperties.SetPropertiesAsync(properties);
await encoder.FlushAsync();
}
}
}
#endregion Helpers
#region Rotation
private SimpleOrientation GetCameraOrientation()
{
if (externalCamera)
{
// Cameras that aren't attached to the device do not rotate along with it
return SimpleOrientation.NotRotated;
}
var result = deviceOrientation;
// On portrait-first devices, the camera sensor is mounted at a 90 degree offset to the native orientation
if (displayInformation.NativeOrientation == DisplayOrientations.Portrait)
{
switch (result)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
result = SimpleOrientation.NotRotated;
break;
case SimpleOrientation.Rotated180DegreesCounterclockwise:
result = SimpleOrientation.Rotated90DegreesCounterclockwise;
break;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
result = SimpleOrientation.Rotated180DegreesCounterclockwise;
break;
case SimpleOrientation.NotRotated:
result = SimpleOrientation.Rotated270DegreesCounterclockwise;
break;
}
}
// If the preview is mirrored for a front-facing camera, invert the rotation
if (mirroringPreview)
{
// Rotating 0 and 180 degrees is the same clockwise and anti-clockwise
switch (result)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
return SimpleOrientation.Rotated270DegreesCounterclockwise;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
return SimpleOrientation.Rotated90DegreesCounterclockwise;
}
}
return result;
}
private static int ConvertDeviceOrientationToDegrees(SimpleOrientation orientation)
{
switch (orientation)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
return 90;
case SimpleOrientation.Rotated180DegreesCounterclockwise:
return 180;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
return 270;
case SimpleOrientation.NotRotated:
default:
return 0;
}
}
private static int ConvertDisplayOrientationToDegrees(DisplayOrientations orientation)
{
switch (orientation)
{
case DisplayOrientations.Portrait:
return 90;
case DisplayOrientations.LandscapeFlipped:
return 180;
case DisplayOrientations.PortraitFlipped:
return 270;
case DisplayOrientations.Landscape:
default:
return 0;
}
}
private static PhotoOrientation ConvertOrientationToPhotoOrientation(SimpleOrientation orientation)
{
switch (orientation)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
return PhotoOrientation.Rotate90;
case SimpleOrientation.Rotated180DegreesCounterclockwise:
return PhotoOrientation.Rotate180;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
return PhotoOrientation.Rotate270;
case SimpleOrientation.NotRotated:
default:
return PhotoOrientation.Normal;
}
}
#endregion Rotation
#region Lifecycle
private async void OnAppSuspending(object sender, SuspendingEventArgs e)
{
var deferral = e.SuspendingOperation.GetDeferral();
await CleanupCameraAsync();
await CleanupUIAsync();
deferral.Complete();
}
private async void OnAppResuming(object sender, object o)
{
await SetupUIAsync();
await InitializeCameraAsync();
}
private async void OnPageUnloaded(object sender, RoutedEventArgs e)
{
await CleanupCameraAsync();
await CleanupUIAsync();
}
#endregion Lifecycle
}
}
The idea is logical enough: you have a basic page whose renderer previews the camera in the background, at least that is how I understood it. However, it only gives you a white screen and throws an exception… (x86)
Exception initializing MediaCapture - \\?\USB#VID_045E&PID_0779&MI_00#6&2E9BBB25&0&0000#{e5323777-f976-4f5b-9b55-b94699c46e44}\GLOBAL: System.Runtime.InteropServices.COMException (0xC00DABE6): The current capture source does not have an independent photo stream.
The current capture source does not have an independent photo stream.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
at CameraPreviewProject.UWP.Sources.PageRenderers.CameraPageRenderer.<InitializeCameraAsync>d__25.MoveNext()
Then I click the button in the bottom soft menu and get:
Exception thrown: 'System.Runtime.InteropServices.COMException' in System.Private.CoreLib.ni.dll
WinRT information: This object needs to be initialized before the requested operation can be carried out.
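Reading the 0xC00DABE6 message, it sounds like this webcam simply has no independent photo stream, so one thing I could try (an untested guess on my part) is not forcing PhotoCaptureSource.Photo when initializing MediaCapture:
// Sketch only: let MediaCapture take photos from the video/preview stream instead of
// requiring a dedicated photo stream, which this webcam apparently does not expose.
await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
{
    VideoDeviceId = cameraDevice.Id,
    AudioDeviceId = string.Empty,
    StreamingCaptureMode = StreamingCaptureMode.Video,
    PhotoCaptureSource = PhotoCaptureSource.VideoPreview // or PhotoCaptureSource.Auto
});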
I'm a Xamarin fan, but not for this part. This link about MediaCapture may be interesting, though!
CameraView
To be honest, it would be so much easier to have a control you can drop in like a button!
<Camera/>
Well, let’s have a look at it! I found a couple of solutions:
Moment - MVVM logic; it seems to work only with Android & iOS.
XLabs Camera - unable to try it, since I can't start VS2017 from the .sln; also, I couldn't test the UWP side because it uses MVVM logic.
Xam.Plugin.Media - this solution works on UWP! But it launches a new activity/instance/page with a native design, so it isn't the solution I'm looking for (see the sketch below).
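For context, here is roughly how I use Xam.Plugin.Media (a sketch from memory, so the exact option names may differ), which shows why it opens a separate native capture page instead of embedding a preview:
using System.Threading.Tasks;
using Plugin.Media;
using Plugin.Media.Abstractions;

// Sketch: Xam.Plugin.Media launches the platform's own camera UI as a separate
// page/activity and hands back the captured file; there is no embeddable preview view.
async Task TakePhotoWithPluginAsync()
{
    await CrossMedia.Current.Initialize();
    if (CrossMedia.Current.IsCameraAvailable && CrossMedia.Current.IsTakePhotoSupported)
    {
        var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
        {
            Directory = "Sample",
            Name = "photo.jpg"
        });
    }
}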
So, my question is: “Could someone create an element, public class Camera, that can be used and declared like a simple Xamarin.Forms button?”
I also saw two other projects about it: one I can't remember, but the second one is Barcode Scanning, and I'm not able to understand or reuse its code to implement what I would like…
It seems so simple and yet so hard to get. Why? In the end, we're talking about a view/image that displays a stream from a camera. A camera is just a service with methods such as TakePictureAsync(), Rotate(), Switch(ViewSide vs), and so on, right?
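In other words, what I imagine (purely hypothetical names on my part, nothing that exists in Xamarin.Forms today) is a small service interface plus a view along these lines:
using System.Threading.Tasks;

// Hypothetical sketch of the cross-platform camera element I have in mind;
// none of these types exist, they only illustrate the desired API surface.
public enum ViewSide { Front, Back }

public interface ICameraService
{
    Task<byte[]> TakePictureAsync();  // grab a photo from the live stream
    void Rotate();                    // rotate the preview
    void Switch(ViewSide side);       // switch between front and back cameras
}

// A view that shows the live preview and could be declared in XAML
// just like a Button: <controls:Camera />
public class Camera : Xamarin.Forms.View
{
    public ICameraService Service { get; set; }
}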
So, I looked into getting a frame view or displaying the camera stream in an image or a view. I started from these links:
UWP get live webcam video stream by David Pine
UWP stream Webcam over socket to mediaElement - I just made some changes because the subject is a bit different, but I couldn't make it work.
To be honest, I don't know what to try now… I'm lost because I have tried some Xamarin.Forms solutions as well as some pure UWP solutions, and… nothing. Maybe my point of view is wrong, maybe my idea is beside the point, maybe I should try another approach; I don't know at all.
I was also thinking about creating a class with an interface that I would implement in each platform's renderer, but still nothing…
Do you have any idea or any approach to suggest, please?
Note: I have cross-posted this to the Xamarin forums.

Vaadin - run client side javascript after image fully loaded

I need to print a picture on the client side. I used this as a template. My PrintUI looks like this:
@Override
protected void init(VaadinRequest request) {
Item item = ..get item ..
StreamResource imageStream = ... build image dynamically ...
Image image = new Image(item.getName(), imageStream);
image.setWidth("100%");
setContent(image);
setWidth("100%");
// Print automatically when the window opens
JavaScript.getCurrent().execute("setTimeout(function() {print(); self.close();}, 0);");
}
This works so far in IE, but in Chrome it opens the print preview showing an empty page. The problem is that the image is loaded in a way that Chrome does not wait for, so it starts the print preview immediately.
To verify this, I tried setting a 5-second timeout:
JavaScript.getCurrent().execute("setTimeout(function() {print(); self.close();}, 5000);");
Then it works in both IE and Chrome, but it's of course an ugly hack, and if the connection takes longer than 5 seconds it will fail again.
In pure JS it would work like this, but I'm not sure how to reference the element from Vaadin in client-side JS. Any ideas?
You can use AbstractJavaScriptExtension.
Example extension class:
@JavaScript({ "vaadin://scripts/connector/wait_for_image_load_connector.js" })
public class WaitForImageLoadExtension extends AbstractJavaScriptExtension {
private List<ImageLoadedListener> imageLoadedListeners = new ArrayList<>();
public interface ImageLoadedListener {
void onImageLoaded();
}
public void extend(Image image) {
super.extend(image);
addFunction("onImageLoaded", new JavaScriptFunction() {
@Override
public void call(JsonArray arguments) {
for (ImageLoadedListener imageLoadedListener : imageLoadedListeners) {
if (imageLoadedListener != null) {
imageLoadedListener.onImageLoaded();
}
}
}
});
}
public void addImageLoadedListener(ImageLoadedListener listener) {
imageLoadedListeners.add(listener);
}
}
and a JavaScript connector (placed in wait_for_image_load_connector.js) with the waiting method you have linked:
window.your_package_WaitForImageLoadExtension = function() {
var connectorId = this.getParentId();
var img = this.getElement(connectorId);
if (img.complete) {
this.onImageLoaded();
} else {
img.addEventListener('load', this.onImageLoaded)
img.addEventListener('error', function() {
alert('error');
})
}
}
Then you can do something like that:
Image image = new Image(item.getName(), imageStream);
WaitForImageLoadExtension ext = new WaitForImageLoadExtension();
ext.extend(image);
ext.addImageLoadedListener(new ImageLoadedListener() {
@Override
public void onImageLoaded() {
JavaScript.eval("print()");
}
});
In your case, since calling print() is the only thing you want to do after the image is loaded, you can also do it without a server-side listener by just calling it in the connector:
if (img.complete) {
print();
} else {
img.addEventListener('load', print)
img.addEventListener('error', function() {
alert('error');
})
}
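One detail to keep in mind with AbstractJavaScriptExtension: the connector function must be named after the fully qualified class name of the extension, with the dots replaced by underscores, so the your_package prefix above has to match the real package. For example, assuming the class lives in com.example.print:
// If the extension class is com.example.print.WaitForImageLoadExtension,
// the connector initializer in wait_for_image_load_connector.js must be named:
window.com_example_print_WaitForImageLoadExtension = function() {
    // ...same body as the connector shown above...
};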

Xamarin - Make rotations with CoreGraphics.CGAffineTransform

I want to make a UIImageView rotate while content is downloading, and I can't seem to make it rotate.
I have this event:
public override void ViewDidAppear(bool animated)
{
NoConnectionView.Hidden = CheckConnectivityStatus();
Task.Run(async () => await StartSpinningAnimation(true));
}
Which then fires this method:
protected async Task StartSpinningAnimation(bool IsSpinning)
{
do
{
UpdateActiveImage.Transform = CoreGraphics.CGAffineTransform.MakeRotation((float)Math.PI / 4);
}
while (IsSpinning);
return;
}
The page will eventually change after the files are downloaded, so I just want it to spin until then. It doesn't animate at all. Does anyone have any ideas?
Using C# and Xamarin.iOS to develop for iOS is a little different from the native language: whenever you want to touch a UI element from a background thread (for example, to run an animation), you must use:
InvokeOnMainThread(delegate {
//Do something related UI stuff
});
If you add a try/catch around it, you will get an exception saying that you are calling a method that can only be invoked on the UI thread.
And by the way, you cannot just use a do/while loop to drive an animation. I wrote a sample for you; take a look:
public partial class ViewController : UIViewController
{
bool needAnimate = true;
int count = 0;
UIView animationView = new UIView();
public ViewController (IntPtr handle) : base (handle)
{
}
public override void ViewDidLoad ()
{
base.ViewDidLoad ();
// Perform any additional setup after loading the view, typically from a nib.
animationView.Frame = new CoreGraphics.CGRect (50, 50, 100, 100);
animationView.BackgroundColor = UIColor.Red;
this.Add (animationView);
this.View.AddGestureRecognizer (new UITapGestureRecognizer (() => {
Task.Run(async () => await StartAnimation());
}));
}
private async Task StartAnimation()
{
do
{
Console.WriteLine ("count = " + count++);
InvokeOnMainThread(delegate {
UIView.Animate(0.25,delegate {
animationView.Transform = CoreGraphics.CGAffineTransform.MakeRotation((float)Math.PI / 4 * (count/4 + 1));
});
});
System.Threading.Thread.Sleep(250);
}
while (needAnimate);
return;
}
public override void DidReceiveMemoryWarning ()
{
base.DidReceiveMemoryWarning ();
// Release any cached data, images, etc that aren't in use.
}
}
The animation is not smooth; you will need to optimize it yourself.
If you still have questions, just leave them here and I will check back later.
Hope it can help you and welcome to Xamarin.
I would use UIView.AnimateNotify to perform the animation with CoreGraphics and avoid spin loops entirely for something like this.
Example that spins an image 180 degrees and back repeating forever until stopped:
bool _animateStopping = false;
protected void SpinningAnimation(bool animate)
{
if (_animateStopping) return;
_animateStopping = !animate;
if (animate)
{
UIView.AnimateNotify(1,
() =>
{
image.Transform = CGAffineTransform.MakeRotation((nfloat)(Math.PI)); // in one second, spin 180 degrees
},
(bool finished) =>
{
UIView.AnimateNotify(1,
() =>
{
image.Transform = CGAffineTransform.MakeRotation(0); // in one second, spin it back
},
(bool finished2) =>
{
SpinningAnimation(true);
});
});
}
else {
UIView.AnimateNotify(1,
() =>
{
image.Alpha = 0; // fade away in one second
},
(bool finished) =>
{
image.RemoveFromSuperview();
_animateStopping = false;
}
);}
}
Usage:
image = new UIImageView(new CGRect(100, 100, 100, 100));
image.Image = UIImage.FromFile("wag.png");
Add(image);
SpinningAnimation(true);
await Task.Delay(5000); // simulate performing downloads; when done, stop the animation
SpinningAnimation(false);

Custom Event Handling in GStreamer 1.0

I am having a hard time wrapping my head around GStreamer event sending and handling. I understand the process, but cannot achieve my desired outcome. I am developing a series of GStreamer plugins in tandem with a GStreamer main application. I have three plugins: my_src, which inherits from GstPushSrc; my_transform, which inherits from GstBaseTransform; and my_sink, which inherits from GstBaseSink, plus a main application, my_app.
I am trying to send a custom event from my_app to all elements in the pipeline telling them to reconfigure the processing parameters. This is different from GST_EVENT_RECONFIGURE because it does not involve any renegotiation of caps. I am sending the event from my_app with the following:
// my_app.c
GstStructure *reconfigureStructure = gst_structure_new("reconfigure", NULL);
GstEvent *reconfigureEvent = gst_event_new_custom(GST_EVENT_CUSTOM_DOWNSTREAM,
reconfigureStructure);
gst_element_send_event(pipeline, reconfigureEvent);
I have overridden the GstBaseSrc event() method as follows:
// my_src.c
static gboolean my_src_event(GstBaseSrc *bs, GstEvent *event);
static void
my_src_class_init(MySrcClass *msc)
{
GstBaseSrcClass *bsc = GST_BASE_SRC_CLASS(msc);
bsc->event = my_src_event;
}
static gboolean
my_src_event(GstBaseSrc *bs, GstEvent *event)
{
switch (GST_EVENT_TYPE(event)) {
case GST_EVENT_CUSTOM_DOWNSTREAM: {
const GstStructure *structure = gst_event_get_structure(event);
if (gst_structure_has_name(structure, "reconfigure")) {
g_print("MY SRC RECONFIGURE\n");
// do reconfigure things
}
break;
}
}
return GST_BASE_SRC_CLASS(parent_class)->event(bs, event);
}
Similarly, I have overridden the GstBaseSink event handler as follows:
// my_sink.c
static gboolean my_sink_event(GstBaseSink *bs, GstEvent *event);
static void
my_sink_class_init(MySinkClass *msc)
{
GstBaseSinkClass *bsc = GST_BASE_SINK_CLASS(msc);
bsc->event = my_sink_event;
}
static gboolean
my_sink_event(GstBaseSink *bs, GstEvent *event)
{
switch (GST_EVENT_TYPE(event)) {
case GST_EVENT_CUSTOM_DOWNSTREAM: {
const GstStructure *structure = gst_event_get_structure(event);
if (gst_structure_has_name(structure, "reconfigure")) {
g_print("MY SINK RECONFIGURE\n");
// do reconfigure things
}
break;
}
}
return GST_BASE_SINK_CLASS(parent_class)->event(bs, event);
}
Lastly, I have overridden the GstBaseTransform sink_event() method as follows:
// my_transform.c
static gboolean my_transform_sink_event(GstBaseTransform *bt, GstEvent *event);
static void
my_transform_class_init(MyTransformClass *mtc)
{
GstBaseTransformClass *btc = GST_BASE_TRANSFORM_CLASS(mtc);
btc->sink_event = my_transform_sink_event;
}
static gboolean
my_transform_sink_event(GstBaseTransform *bt, GstEvent *event)
{
switch (GST_EVENT_TYPE(event)) {
case GST_EVENT_CUSTOM_DOWNSTREAM: {
const GstStructure *structure = gst_event_get_structure(event);
if (gst_structure_has_name(structure, "reconfigure")) {
g_print("MY TRANSFORM RECONFIGURE\n");
// do reconfigure things
}
break;
}
}
return GST_BASE_TRANSFORM_CLASS(parent_class)->sink_event(bt, event);
}
When I run my_app I would expect the output to be:
MY SRC RECONFIGURE
MY TRANSFORM RECONFIGURE
MY SINK RECONFIGURE
However, I am only getting:
MY SINK RECONFIGURE
Any ideas what I am doing wrong here?
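One extra check I could run (just a sketch on my side, assuming the source instance is named "mysrc") is to push the same event straight into the source element instead of the pipeline, to see whether it then travels downstream on its own:
// Diagnostic sketch: send the custom event directly to the source element
// (the element name "mysrc" is only an assumption for this example).
GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "mysrc");
if (src) {
    GstStructure *s = gst_structure_new("reconfigure", NULL);
    gst_element_send_event(src, gst_event_new_custom(GST_EVENT_CUSTOM_DOWNSTREAM, s));
    gst_object_unref(src);
}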

How can I use Vala delegates in a GTK3 button callback?

I'm trying to understand Vala delegates with GTK3.
I tested a plain callback and a lambda with no problem.
Now I want to test a delegate callback; here is my code:
using Gtk;
delegate void typeDelegate(Button button);
int main (string[] args) {
Gtk.init (ref args);
typeDelegate cb = cbLabelf;
var window = new Window ();
window.title = "First GTK+ Program";
window.border_width = 10;
window.window_position = WindowPosition.CENTER;
window.set_default_size (350, 70);
window.destroy.connect (Gtk.main_quit);
var button = new Button.with_label ("Click me!");
//button.clicked.connect (cb);
//button.clicked+= cb;
button.clicked.connect+=cb;
window.add (button);
window.show_all ();
Gtk.main ();
return 0;
}
void cbLabelf(Button button)
{
button.label = "tank yu";
}
I also read the generated C code (for the lambda case) to understand what happens.
Here is the compile error:
GTKsampleDelegate.vala:20.5-20.30: error: Arithmetic operation not supported for types `Gtk.Button.clicked.connect' and `typeDelegate'
button.clicked.connect+=cb;
Well,
it seems that you want the implicit parameter holding the instance that emitted the signal. I find it strange that Vala doesn't let you pass a delegate variable to obtain it, but you can use one of the forms below: connect the handler with no delegate variable (A), or bypass the error with a closure (B).
public class FooSignalClass : Object {
/* Gtk Button.clicked signal has the void f(void) signature */
public signal void on_foo ();
public void foo() {
on_foo();
}
}
public delegate void FooSignalFunc (FooSignalClass fooer);
void on_foo_handler (FooSignalClass fooer) {
long fooer_memory_address = (long)fooer;
GLib.message(#"fooer exists? $(fooer!=null).");
GLib.message(#"address=$fooer_memory_address.");
}
int main () {
var foo_signal = new FooSignalClass();
long fooer_memory_address = (long)foo_signal;
GLib.message(#"foo_signal address=$fooer_memory_address.");
/* Option A: Connect directly without the delegate variable */
foo_signal.on_foo.connect(on_foo_handler);
/* Option B: you can't use a delegate directly, so bypass it with a closure */
FooSignalFunc func = on_foo_handler;
foo_signal.on_foo.connect((instance) => {
func(instance);
});
foo_signal.foo();
return 0;
}
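Applied to your original GTK snippet, the two options would look roughly like this (untested sketch):
// Option A: connect the handler function directly, no delegate variable
button.clicked.connect (cbLabelf);

// Option B: keep the delegate variable and call it from a closure
typeDelegate cb = cbLabelf;
button.clicked.connect ((b) => {
    cb (b);
});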
