I was looking for expressions for zoom in/out and pan.
Basically the use case is this: consider a rectangle of 1280x720, and I need to zoom in on it until it reaches 640x480. The zoom time is configurable, say x seconds. The output of the expression should be all the intermediate rectangles (format = x,y,w,h) down to 640x480, at 30 fps. That means if the zoom time is 5 seconds, I should get 150 output rectangles, well spaced and smooth (at 30 fps, total rectangles = 30 x 5).
After that, I'll crop them, rescale them all to a constant resolution, and finally feed them to the encoder.
The same requirement applies to zoom out and pan-and-scan.
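A minimal sketch of those intermediate rectangles with simple linear interpolation (Python, purely illustrative; the centered 640x480 target at x=320, y=120 is an assumption, and you could swap in an eased curve for smoother motion):

```python
def zoom_rects(src, dst, seconds, fps=30):
    """Linearly interpolate crop rectangles (x, y, w, h) from src to dst."""
    n = int(seconds * fps)
    return [
        tuple(round(a + (b - a) * i / n) for a, b in zip(src, dst))
        for i in range(1, n + 1)
    ]

# Zoom a 1280x720 frame in to a centered 640x480 crop over 5 seconds at 30 fps.
rects = zoom_rects((0, 0, 1280, 720), (320, 120, 640, 480), 5)
# 150 rectangles; the last one is exactly (320, 120, 640, 480).
```

Each rectangle can then be cropped and rescaled to the constant output resolution before encoding, as described above.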
Thanks.
If you are using a mobile development platform (Xcode, Android SDK), then gestures are built-in functions of the OS and are configurable through drag and drop.
If you're on a web development platform, I recommend JavaScript libraries such as Hammer.js or jQuery UI Touch Punch. You can find links to them on this question.
If you give more information on your platform, I'd be happy to give you more specific examples!
Just a straightforward question. I'm trying to make the best possible choice here, and there is too much information for a "semi-beginner" like me.
Well, at this point I'm trying with screen size values for my layout (activity_main.xml (normal, large, small)) and with different densities (xhdpi, xxhdpi, mdpi) and, if I can say so myself, it is a mess. Do I have to create every possible option to support all screen sizes and densities? Or am I doing something really wrong here? What is the best approach for this?
My layouts are now named like activity_main(normal_land_xxhdpi), and I have serious doubts about it.
I'm using the latest version of Android Studio, of course. My app is a single activity with buttons, TextViews, and other views. It doesn't have any fragments or intents whatsoever, and for that reason I think this should be an easy task, but it isn't for me.
Hope you guys can help. I don't think I need to put any code here, but if needed, I can add it.
If you want to make a responsive UI for every device, you need to learn about some things first:
-Difference between PX, DP:
https://developer.android.com/training/multiscreen/screendensities
Here you can see that dp is a standard unit which Android uses to calculate how many pixels something, let's say a line, should span to keep its proportions across screens of different sizes and densities.
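The dp-to-px relationship from that page boils down to px = dp * (dpi / 160), where 160 dpi is the baseline (mdpi) density. A quick illustrative sketch (Python, not Android API):

```python
def dp_to_px(dp, dpi):
    """Convert density-independent pixels to physical pixels.
    160 dpi is Android's baseline (mdpi) density."""
    return round(dp * dpi / 160)

# The same 48dp button covers more physical pixels on denser screens:
# mdpi (160 dpi) -> 48 px, xhdpi (320 dpi) -> 96 px, xxhdpi (480 dpi) -> 144 px
```

This is why a view defined in dp looks the same physical size on screens of different densities.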
-Resolution, Density and Ratio:
The resolution is how many pixels a screen has across its height and width. These pixels can be smaller or bigger, so for instance, if you have a screen A of 10x10 px whose pixels are two times smaller than those of another screen B that is also 10x10 px, then A is two times smaller than B even though both are 10x10 px.
That is why density exists: it is how many pixels your screen has for every inch, so you can measure the quality of a screen, where more pixels per inch (ppi) is better.
Ratio tells you how width relates to height in pixels. For example, the ratio of a 1000 x 2000 px screen is 1:2, and a full HD screen of 1920 x 1080 is 16:9 (16 pixels of width for every 9 pixels of height). A 1:1 ratio is a square screen.
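The ratio is just both dimensions divided by their greatest common divisor; a small sketch in Python:

```python
from math import gcd

def aspect_ratio(w, h):
    """Reduce a WxH pixel resolution to its simplest width:height ratio."""
    g = gcd(w, h)
    return (w // g, h // g)

# 1920x1080 reduces to (16, 9); 1000x2000 reduces to (1, 2)
```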
-Standard device's resolutions
You can find the most common measurements on...
https://material.io/resources/devices/
When making a UI, you use dp measurements. You will realize that even when the resolutions of two screens are different, their dp sizes can be the same because they have different densities.
Now, the right way is to use ConstraintLayout with dp measurements to place your views on screen; with correct constraints, the content will adapt to other screen sizes.
Anyway, you will need to make additional XML files for some cases:
-Different orientation
-Different ratio
-Different DP resolution (not px)
For every activity, you need to provide a portrait and a landscape design. If another device has a different ratio, you may need to adjust the height or width because the proportions of the screens aren't the same. Finally, even if the ratio is the same, the dp resolution could be different: maybe you designed an activity for 640x360dp and another device is 853x480dp, which means you will have more vertical space.
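To see where those dp resolutions come from, you can convert a physical resolution back to dp with the inverse of the density formula (Python sketch; the two example devices are assumptions for illustration):

```python
def px_to_dp(px, dpi):
    """Convert physical pixels to density-independent pixels (baseline 160 dpi)."""
    return round(px * 160 / dpi)

# A 1920x1080 px phone at 480 dpi (xxhdpi) is a 640x360dp screen,
# while a 1280x720 px phone at 240 dpi (hdpi) works out to 853x480dp.
```

So two phones with very different pixel counts can still need different layouts only because their dp resolutions differ.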
You can read more here:
https://developer.android.com/training/multiscreen/screensizes
And learn how to use ConstraintLayout correctly:
https://developer.android.com/training/constraint-layout?hl=es-419
Note:
It may seem like a lot of work for every activity, but you make the first design once, and then you just copy it to another XML file with some qualifiers and change the dp values to adjust the views as you want (without starting from scratch), which is much faster.
WP 7/8 + XNA games are set to 30 FPS by default. I would like to know, if I set it to 60 FPS, whether there are any disadvantages, performance issues, bugs, or anything else, because I always want to play and develop games at 60 fps.
I just want to add a few points to Strife's otherwise excellent answer.
Physics simulations are generally more stable at higher FPS. Even if your game runs at 30 FPS, you can run your physics simulation at 60 FPS by doing two 1/60th of a second updates each frame. This gives you a better simulation, but at a high CPU cost. (Although, for physics, a fixed frame-rate is more important than a high frame rate.)
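The "two 1/60th of a second updates each frame" idea is the classic fixed-timestep accumulator pattern. A minimal language-agnostic sketch (Python here, not XNA API; `physics_step` is a trivial stand-in integrator):

```python
PHYSICS_DT = 1.0 / 60.0  # fixed 60 Hz physics step

def physics_step(state, dt):
    # Trivial integrator stand-in: state is (position, velocity).
    pos, vel = state
    return (pos + vel * dt, vel)

def game_frame(frame_dt, state, accumulator):
    """Run as many fixed physics steps as fit into this frame's elapsed time."""
    accumulator += frame_dt
    while accumulator >= PHYSICS_DT:
        state = physics_step(state, PHYSICS_DT)
        accumulator -= PHYSICS_DT
    return state, accumulator

# A 30 fps frame (dt = 2 * PHYSICS_DT) runs exactly two 1/60 s physics steps.
state, acc = game_frame(2 * PHYSICS_DT, (0.0, 1.0), 0.0)
```

The accumulator keeps the simulation step size fixed regardless of the render frame rate, which is exactly why the physics stays stable.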
If you poll input 30 times per second, instead of 60, you will miss extremely fast changes in input state, losing some input fidelity.
Similarly, your frame rate affects the input-output latency. At a lower frame-rate it will take longer for input change to be reflected on-screen, making your game feel less responsive. This can also be an issue for audio feedback - particularly in musical applications.
Those last two points are only really important if you have a very twitchy game (Super Hexagon is a fantastic example). Although I must admit I don't know how fast the touch input on WP7/8 is actually refreshed - it's difficult information to find.
The Windows Phone 7 SDK sets XNA to 30 FPS because the screen on Windows Phone 7 devices has a 30 Hz refresh rate. This means the screen refreshes 30 times a second. If you are drawing 30 times a second and the screen refreshes 30 times a second, you're at the optimal rate of smoothness for that device.
The reason most people aim for 60 (or on my gaming PC, 120) is that most monitors have a 60 Hz refresh rate (some are now 120 Hz). If your FPS is HIGHER than your refresh rate, you won't see any benefit, except possibly an effect known as "screen tearing", which happens when you render more frames in a second than your screen refreshes.
In other words, imagine you draw to the screen two times and then your screen refreshes once: why did you bother drawing the second time? You waste battery life, CPU usage, and GPU usage when you render faster than the refresh rate of a device. So my advice, if you're sticking with XNA, is to stick with 30 FPS, because the older devices won't get any benefit from having more frames rendered, and if anything you'll get graphical anomalies like screen tearing.
If you plan to target higher-end (and newer) Windows Phone 8 devices, drop XNA, go the Direct3D route, and use Microsoft's DirectX Tool Kit, because it includes XNA's "graphics functions", like SpriteBatch, but in C++ instead of C#.
I hope this helps.
I want to display the Kinect color frame in WPF at full screen, but when I try it, I get only very low-quality video frames.
How can I do this? Any ideas?
The Kinect camera doesn't have great resolutions. Only 640x480 and 1280x960 are supported. Forcing these images to take up the entire screen, especially if you're using a high definition monitor (1920x1080, for example), will cause the image to be stretched, which generally looks awful. It's the same problem you run into if you try to make any image larger; each pixel in the original image has to fill up more pixels in the expanded image, causing the image to look blocky.
Really, the only thing to minimize this is to make sure you're using the Kinect's maximum color stream resolution. You can do that by specifying a ColorImageFormat when you enable the ColorStream. Note that this resolution has a significantly lower number of frames per second than the 640x480 stream (12 FPS vs 30 FPS). However, it should look better in a fullscreen mode than the alternative.
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution1280x960Fps12);
I'm new to OpenGL development for macOS.
My game uses a 1024x768 resolution. In fullscreen mode on widescreen monitors, the game looks stretched, which is not good.
Is there any function in OpenGL to get the pixels-per-inch value? If I find it, I can decide whether to add bars to the sides of the screen.
OpenGL is a graphics library, which means it is not meant to perform such tasks; it is only for rendering something onto the screen, and it is quite low level. You could use the Cocoa API NSScreen to get the correct information about the connected screens of your Mac.
My game uses a 1024x768 resolution.
That's the wrong approach. Never hardcode any resolutions. If you want to make a fullscreen game, use the fullscreen resolution. If you want to adjust the rendering resolution, switch the screen resolution and let the display do the proper scaling. By using the resolutions offered to you by the display and OS you'll always get proper aspect ratio.
Note that it still may be necessary to take the pixel aspect ratio into account. However, neither switching the display resolution nor determining the pixel aspect ratio is part of OpenGL. Those are facilities provided by the OS.
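Once the OS tells you the real display size, deciding whether and where to add bars is simple arithmetic on the two aspect ratios. An illustrative sketch (Python; the viewport convention is an assumption, e.g. something you would pass to glViewport):

```python
def letterbox(win_w, win_h, content_aspect):
    """Return an (x, y, w, h) viewport that fits content_aspect
    inside the window, adding bars on the sides or top/bottom."""
    win_aspect = win_w / win_h
    if win_aspect > content_aspect:   # window wider -> pillarbox (side bars)
        h = win_h
        w = round(h * content_aspect)
    else:                             # window taller -> letterbox (top/bottom bars)
        w = win_w
        h = round(w / content_aspect)
    return ((win_w - w) // 2, (win_h - h) // 2, w, h)

# 1024x768 (4:3) content on a 1920x1080 display gets 240 px bars on each side.
viewport = letterbox(1920, 1080, 1024 / 768)
```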
I need to render a QuickTime movie into a GWorld that may not be the movie's natural size, on Windows. When the movie is scaled up, say by a factor of 1.5, I see some jaggies. Whereas if I open the same movie in QuickTime Player (version 7.6.6 on Windows Vista) and stretch the window, I see jaggies while stretching, but when I release the mouse, the image smooths out. It must be using some smarter scaling algorithm or antialiasing. What do I need to do to render at a bigger size besides SetMovieGWorld and SetMovieBox?
Here's a little of the smooth version, and its slightly jaggy counterpart:
[two screenshots; source: frameforge3d.com]
(Although this shows text, it's not a text track, just part of the image.)
I tried calling SetMoviePlayHints with various flags such as hintsHighQuality, with no effect.
Here's the big picture, in case you might have a suggestion for a whole different approach. The movie is side-by-side stereo. The left and right halves of the image need to be composited (in one of several ways) and then drawn to the screen. For instance, a movie I'm testing with has a natural size of 2560 x 720 but is displayed at 1280 x 720. The way I'm currently doing it is rendering to a GWorld, and then, in a SetMovieDrawingCompleteProc callback, uploading the left and right halves to OpenGL textures with glTexSubImage2D and rendering to the screen using a GLSL shader.
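For reference, the left/right split itself is straightforward once a decoded frame is available as rows of pixels; a sketch of the idea (Python, independent of the QuickTime/OpenGL specifics above):

```python
def split_side_by_side(frame):
    """Split a side-by-side stereo frame (a list of pixel rows)
    into its left and right halves."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

# A toy 4-pixel-wide "frame" with two rows:
left, right = split_side_by_side([[1, 2, 3, 4], [5, 6, 7, 8]])
```

In the real pipeline this corresponds to uploading the two halves of the 2560 x 720 GWorld as separate texture regions.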
I tried using an Apple Developer Technical Support incident to get help with this, but their reply was basically "we don't do Windows".
Can you use DirectX, and more specifically DirectShow, to display your movie instead of using Apple's SDK? I know that DirectX can play QuickTime movies.
If you can use DirectShow, then you can search for or create your own video transform filters to do antialiasing and smoothing.