[I found this question asked on SO about a year ago, but it went unanswered, so I am asking again (I wasn't sure whether best practice was to create a new question or "bump" the existing one).]
I have a Xamarin Forms application which is receiving a stream of JPEGs over HTTP and I want to keep updating a single Image placeholder with their contents. The frame rate at which I receive the images is very slow by design (maybe 5 fps at most) as this is for a time-lapse photography project. I want to be able to show the stills as an animation (think: taking a photo of a plant once per hour and then "playing" all of the stills to create a lively animation of said plant).
The problem I am having is with "flicker" while swapping one image out for the next. I've tried a variety of approaches, including having two Images (only one visible at a time) and changing the visibility only once I'm done loading the latest image (I think this is a naive form of double-buffering?). I've also looked into Motion JPEG, HTTP keep-alive, and even using a WebView (I have a valid Motion JPEG endpoint that I can read from). Nothing I've found so far has helped reduce the flicker, so the "animation" of the stills remains very "jerky," for lack of a better way to put it, with a momentary blank (white) pane between stills. Frankly speaking, it looks like crap.
Here is the gist of the code which fetches the next "frame" and updates the "player" image:
var url = $"{API_BASE_URL}/{SET_ID}/{_filenames[_currentFilenameIndex]}";
var imageBytes = await Task.Run(() => _httpClient.GetByteArrayAsync(url));
var imageBytesStream = new MemoryStream(imageBytes);
var timeComponents = _filenames[_currentFilenameIndex].Split('T')[1].Split('.')[0].Split('-');
var timeText = $"{timeComponents[0]}:{timeComponents[1]}:{timeComponents[2]}";
Device.BeginInvokeOnMainThread(
() =>
{
TimePlaceholder.Text = $"{timeText} (Image {_currentFilenameIndex + 1}/{_filenames.Count})";
ImagePlaceholder.Source = ImageSource.FromStream(() => imageBytesStream);
});
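For reference, here is a minimal sketch of the two-Image double-buffering approach mentioned above (ImageA and ImageB are hypothetical names for two overlapping Image controls; this illustrates the idea I tried, not a working fix):

// Load the next frame into the hidden Image first, then swap visibility,
// so the visible control is never cleared while a new source is loading.
Device.BeginInvokeOnMainThread(() =>
{
    var hidden = ImageA.IsVisible ? ImageB : ImageA;
    var visible = ImageA.IsVisible ? ImageA : ImageB;

    // Return a fresh stream each time: the factory passed to FromStream
    // can be invoked more than once by the renderer.
    hidden.Source = ImageSource.FromStream(() => new MemoryStream(imageBytes));
    hidden.IsVisible = true;
    visible.IsVisible = false;
});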
Any suggestions would be greatly appreciated.
I am trying to draw a network graph.
For example, as shown in the demo, we can use the dashboard to get the chart, but there is no "save" button on the right side of the toolbar, as there usually is.
cux_df = cuxfilter.DataFrame.load_graph((nodes, edges))
chart0 = cuxfilter.charts.datashader.graph(node_pixel_shade_type='linear', unselected_alpha=0.2)
d = cux_df.dashboard([chart0], layout=cuxfilter.layouts.double_feature)
chart0.view()
Since we are working with large data, we would like to take advantage of cuxfilter's fast rendering; with holoviews, for example, the computation takes too long. A screen capture is possible, but is there any way to save the resulting figure itself?
Drawing with datashader alone took too long, but I was able to create a view screen with cuxfilter.
The only way right now is to use the dashboard preview() function, which screen-captures the dashboard in its initial state and saves it as a png file. The way to do that is as follows:
cux_df = cuxfilter.DataFrame.load_graph((nodes, edges))
chart0 = cuxfilter.charts.datashader.graph(node_pixel_shade_type='linear', unselected_alpha=0.2)
d = cux_df.dashboard([chart0], layout=cuxfilter.layouts.double_feature)
await d.preview()
This only works in a jupyter lab/notebook environment, though, and is restricted to capturing the initial state.
Based on your suggestion, it was as easy as adding an extra tool to the chart using bokeh, so we ended up adding it as a new feature for all the bokeh- and datashader-based charts; the progress can be tracked here. To try it out once the changes are merged, you would have to install the cuxfilter nightly version (23.02). Once the changes are merged, this is how the toolbar will look:
I have this project:
my codepen
I want to be able to move forward when the user walks, so it feels like they are walking through the floor plan in VR just as they are in real life.
My goal is to get the user's geolocation, show them the room matching their location, and have them walk around the room while viewing the AR on their phone, where they would see paintings on the walls.
My challenges are:
walk in real life and move in VR (right now I have it auto-walking forward in the meantime)
// Reports a constant forward velocity while isMoving is true, so the
// camera rig auto-walks (a stand-in until real walking is captured).
AFRAME.registerComponent("automove-controls", {
  init: function () {
    this.speed = 0.1;
    this.isMoving = true;
    this.velocityDelta = new THREE.Vector3();
  },
  isVelocityActive: function () {
    return this.isMoving;
  },
  getVelocityDelta: function () {
    // Negative z is "forward" in A-Frame/three.js camera space.
    this.velocityDelta.z = this.isMoving ? -this.speed : 0;
    return this.velocityDelta.clone();
  }
});
capture the user's geolocation, so that the moment they open the site they are placed on the floor plan relative to their real-world location
This is my first attempt, so any feedback would be appreciated.
As far as I know, argon.js is more about geopositioning than spatial/marker-based augmented reality.
Moreover, it's somewhat worrying that their aframe repo has not been touched for a while.
Argon seems like a library for creating scenes at certain points around the user; even their examples are based on positioning things around you. The reason is that GPS and phone accelerometers are far too inaccurate to provide useful data for spatial positioning. That's why the VIVE needs two base stations, and other devices need at least a camera/IR sensor, to track the HMD.
Positioning the person at a specific point depending on where they are in a room is quite a difficult task. You would need a point of reference and position the user accordingly; doing that from geolocation alone seems impossible, since the user can be anywhere in the world.
I would try to do this using jerome-etienne's marker-based AR.js. The markers would be the points of reference you need, and although image processing sounds like a demanding task, AR.js is surprisingly stable with multiple markers, which helps in creating complex scenes.
Markers seem like a good idea, as they can help you with the positioning; moreover, simple scenes have no problem achieving 60+ fps, making the experience quite comfortable.
I would start there, since AR.js seems to be updated frequently.
I have a problem displaying many mini versions of my InkCanvas. In my app it is possible to write or draw on an InkCanvas, and I want to show all of the created InkCanvases in a GridView.
The problem is that I cannot create enough of these mini versions in the GridView.
I tested it with 36 mini versions: after I show one and navigate back, the app crashes every time while rendering the same mini InkCanvas, with the error "Insufficient Memory". So I searched and found this post:
Insufficient memory to continue the execution of the program when trying to initialize array of InkCanvas in UWP
I checked the memory usage:
// AppMemoryUsageLevel reports the current usage bucket (Low/Medium/High/OverLimit);
// AppMemoryUsageLimit is the maximum amount of memory the app may use.
var AppUsageLevel = MemoryManager.AppMemoryUsageLevel;
var AppMemoryLimit = MemoryManager.AppMemoryUsageLimit;
and there is plenty of free memory. (Is this a bug?)
So I tried to render an image from my grid with an InkCanvas, but the strokes were not rendered and all the pictures were empty. (Can I save memory this way?)
So now my question is:
Can someone tell me how to solve this problem? And what is the best way?
Thank you very much in advance!
Agredo
If you want to preview your drawings, a better way is to render them to bitmaps and show those bitmaps in the grid instead of many instances of a complex control like InkCanvas.
Here is some code, from another SO answer, that renders ink to a bitmap:
// Render the InkCanvas strokes into an offscreen Win2D bitmap.
CanvasDevice device = CanvasDevice.GetSharedDevice();
CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, (int)inkCanvas.ActualWidth, (int)inkCanvas.ActualHeight, 96);
using (var ds = renderTarget.CreateDrawingSession())
{
    ds.Clear(Colors.White);
    ds.DrawInk(inkCanvas.InkPresenter.StrokeContainer.GetStrokes());
}

// Save the bitmap to a file as a JPEG (quality 1.0).
using (var fileStream = await file.OpenAsync(FileAccessMode.ReadWrite))
    await renderTarget.SaveAsync(fileStream, CanvasBitmapFileFormat.Jpeg, 1f);
You also need to add the Win2D.uwp NuGet package to your project.
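If you want the previews in memory rather than saved to disk, one option (a sketch only, reusing the renderTarget from above) is to save into an InMemoryRandomAccessStream and wrap it in a BitmapImage for the GridView's item template:

// Save the rendered ink into an in-memory stream instead of a file,
// then expose it as a lightweight BitmapImage the GridView can display.
var previewStream = new InMemoryRandomAccessStream();
await renderTarget.SaveAsync(previewStream, CanvasBitmapFileFormat.Png);
previewStream.Seek(0);

var preview = new BitmapImage();
await preview.SetSourceAsync(previewStream);
// Assign 'preview' to an Image in the GridView's DataTemplate.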
The issue I am facing is that the OS takes time to generate the thumbnail; if I try to access it too early, an error is thrown. Is there any workaround for this? I can't use Task.Delay, as the timing could differ between phones, and I want to show the thumbnail instantaneously.
You can't really speed up processes that take a while to complete. You're at the mercy of the OS to provide you with the thumbnail when it can, but do make sure that you kick off the request as soon as you can.
Make sure all your processes are asynchronous, and the UI will remain responsive during this call. While it processes, you should be showing some sort of an activity indicator to the user, possibly in the form of a TextBlock with the word "Loading..." near a ProgressRing that has its IsActive property set to true.
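As a rough sketch of that pattern (LoadingRing and ThumbImage are hypothetical control names, and comp is a MediaComposition built as in the answer below):

// Show the indicator immediately, kick off the request right away,
// and only hide the indicator once the thumbnail is actually ready.
LoadingRing.IsActive = true;
try
{
    var thumbStream = await comp.GetThumbnailAsync(
        TimeSpan.Zero, 320, 240, VideoFramePrecision.NearestKeyFrame);

    var bmp = new BitmapImage();
    await bmp.SetSourceAsync(thumbStream);
    ThumbImage.Source = bmp;
}
finally
{
    LoadingRing.IsActive = false;
}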
Hope it will help someone. The code below will generate a thumbnail image for a video file:
var recordedFile = ...; // the recorded video's StorageFile (obtained elsewhere)
var clip = await MediaClip.CreateFromFileAsync(recordedFile);
var comp = new MediaComposition();
comp.Clips.Add(clip);
// 320x240 thumbnail at the start of the video, snapped to the nearest key frame.
var thumbstream = await comp.GetThumbnailAsync(TimeSpan.Zero, 320, 240, VideoFramePrecision.NearestKeyFrame);
See this link for more info about the MediaComposition class.
I'm building a very simple video player in Silverlight (using Expression Blend 4), and I want to have an Image that comes from the same image file as another Image. (The image files are just in the root of my project.) When someone selects my first video thumbnail in one area, the detail view for the selected video should also include this thumbnail.
I'd love to just do this (in the C# code-behind):
videoDetailImage.Source = _currentThumb.Source;
But it seems to have no effect. I've played around with:
videoDetailImage.Source = new BitmapImage(new Uri("t01-artifact.jpg", UriKind.Relative));
And that doesn't work either. I've even tried grabbing images from the web and using UriKind.Absolute; still nothing shows up.
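One thing that may be worth checking (a guess, not a confirmed fix): in Silverlight, how a relative image URI resolves depends on the file's Build Action. MyVideoPlayer below is a placeholder for the actual assembly name:

// If the JPG's Build Action is Resource, the URI must be component-qualified:
videoDetailImage.Source = new BitmapImage(
    new Uri("/MyVideoPlayer;component/t01-artifact.jpg", UriKind.Relative));
// If its Build Action is Content (copied into the XAP), the plain relative
// URI form tried above should resolve as-is.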
Update: I actually gave up on this and did a very lazy thing of just putting all the images I needed into the app and toggling their Visible / Collapsed states. It's a piece of junk, but all that mattered was that it work.