Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results.
Closed 9 years ago.
I am new to OpenGL ES and have been doing iOS development. I now want to animate a 3D character. I have some idea that animation involves frames, but I'm not sure how to proceed. Can anyone give me some sort of demo so I can work it out?
This is a great tutorial for animating graphics & making games.
http://www.lynda.com/tutorials/Building-and-Monetizing-Game-Apps-for-iOS/82407-2.html
There are different techniques for character animation, but skeletal animation is usually the best fit for characters. Using this technique requires some work:
Load animation frames
Interpolate animations
Create animation matrices
This is not going to be easy. For facial animation in particular, you will need techniques such as morph targets in addition to skeletal animation (skeletal animation works for faces too, but it is hard to use there).
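The steps above can be sketched independently of any particular engine. Below is a minimal, illustrative Python sketch (the function names are hypothetical; a real engine would run this per bone and then build the skinning matrices from the results) of interpolating a pose between two keyframes:

```python
import numpy as np

def lerp(a, b, t):
    """Linear interpolation between two keyframe values."""
    return a + (b - a) * t

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:          # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
        q = lerp(q0, q1, t)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def sample_pose(keyframes, time):
    """keyframes: time-sorted list of (time, translation, rotation_quat).
    Returns the interpolated (translation, rotation) at `time`,
    clamping outside the keyframe range."""
    if time <= keyframes[0][0]:
        _, p, r = keyframes[0]
        return np.asarray(p, float), np.asarray(r, float)
    for (t0, p0, r0), (t1, p1, r1) in zip(keyframes, keyframes[1:]):
        if t0 <= time <= t1:
            u = (time - t0) / (t1 - t0)
            return (lerp(np.asarray(p0, float), np.asarray(p1, float), u),
                    slerp(r0, r1, u))
    _, p, r = keyframes[-1]   # past the last keyframe: clamp
    return np.asarray(p, float), np.asarray(r, float)
```

Rotations are interpolated with slerp rather than plain lerp because linearly blended quaternions drift off the unit sphere and distort the rotation speed.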
Side-note:
Animations are CPU-intensive and should be used sparingly when creating apps for iOS.
Related
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have recently taken up film photography. Part of the workflow is to scan the images using a flatbed scanner. Unfortunately this process is very slow. Using scanning software (SilverFast), you make a prescan, zoom in to make a more detailed prescan, click and drag a rectangle around each frame, do this for all 12 frames, and then set the software to do the full-resolution scans.
I want to automate this process. Rather than laying out where each frame is, I want to scan the whole film strip and then use ML.Net to find each frame (the X,Y coordinates of its top-left corner), which I will then pass to ImageMagick to extract the actual image.
I want to use ML.Net because I am a .Net developer and may have the opportunity to use this experience later. So although examples using OpenCV would be welcome, ML.Net would be preferable.
I am a bit of a noob when it comes to ML. My first thought is to train a neural net, inputting the scanned image and outputting the X and Y values. However, that seems naive (the image is hundreds of MB in size), and I imagine there are better tools than a raw neural net.
My searching on 'ML object recognition' didn't seem to help, as the examples I found were about finding the dog or person in an image, not a 'frame' (which could itself contain a dog or a person).
Even a pointer in the right direction, of the correct name for this problem would be a great help.
So, what types of tools/functions should I be using to try to solve this kind of problem with ML.Net?
This is not so much a machine learning problem as an image processing problem; I would think ML.Net is overkill here.
What you probably want is an image processing library, using some form of edge detection or "region of interest" detection.
For example, look at this question:
Detect display corners with Emgu
Maybe I misunderstand what you want to do and you would actually benefit from machine learning; in that case you should probably preprocess your images with an image processing library before feeding them to your model.
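As a concrete, non-ML illustration of the idea: frames on a film strip are separated by gaps that scan darker (or lighter) than the frames themselves, so a simple column-brightness profile often locates them. A hypothetical sketch with NumPy, assuming a grayscale scan in [0, 1] with dark inter-frame gaps (the function name and thresholds are mine, not from any library):

```python
import numpy as np

def find_frames(strip, gap_threshold=0.2, min_frame_width=50):
    """Locate frame boundaries along the long axis of a scanned film strip.

    strip: 2-D grayscale array (rows x cols), values in [0, 1].
    Returns a list of (start_col, end_col) pairs, one per frame.
    Assumes inter-frame gaps are darker than the frames themselves.
    """
    profile = strip.mean(axis=0)           # mean brightness per column
    is_frame = profile > gap_threshold     # True inside a frame
    frames, start = [], None
    for x, inside in enumerate(is_frame):
        if inside and start is None:
            start = x                      # frame begins
        elif not inside and start is not None:
            if x - start >= min_frame_width:
                frames.append((start, x))  # frame ends
            start = None
    if start is not None and len(is_frame) - start >= min_frame_width:
        frames.append((start, len(is_frame)))
    return frames
```

The returned column ranges could then be handed to ImageMagick's crop. A real scan would need smoothing and tuned thresholds, but the principle holds.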
Hope it helps.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 6 years ago.
I'm trying to add a few animations to my game. I searched a bit for animation software, but everything I found was too complicated for me. Is there any simple animation software I can use with Unity, or should I just stick to the default Unity animation tools?
If you want to roll it as you said, you can accomplish this in several ways without exiting Unity.
For instance, here are a few:
Using an Animator component on the cube and applying a premade Animation (made within Unity via the Animation window; just animate the transform's rotation properties)
Using Physics to apply a constant rotation force (torque)
Using scripting to modify the transform's rotation properties on each frame update (C# or JavaScript)
Probably more exist but those are the simplest and easiest.
If I were to choose one, Animator + Animations would be my choice. Also keep in mind that the Animator component performs much better than any other solution when used on several instances in the scene (lots of cubes).
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am starting development on an HTML5 game using the canvas. Things are going alright so far, but I am a little puzzled about a few things, mostly to do with the performance of such a game. Rather than running into problems deep into the development process, I would be grateful to find out now.
I have previously worked with APIs such as OpenGL and learned how important it is to render things in an efficient order; this can yield orders-of-magnitude better performance compared to drawing things in a random order, switching back and forth between textures/shaders, etc. Is this something I should keep in mind with a canvas game, or will things automatically be queued up and rendered in an efficient order?
It is going to be a 2D game, but with quite a few objects on screen, most of them dynamically desaturated and brightness-adjusted (filters). Is performance going to be a serious problem?
What are the alternatives? Is there any JavaScript game engine that can help performance? Am I going to get a performance boost by switching to WebGL even though the standard canvas is hardware-accelerated?
Yes, you'll get a big performance boost with WebGL.
Consider using the excellent 2D rendering system called Pixi.
It renders sprites to WebGL with a fallback to Canvas.
Or, you can make your own low-level WebGL sprite rendering system using game-shell and gl-modules. gl-now is a good entry point into these modules. You can use them to build your own game engine.
Phaser is a complete HTML5 game engine that currently has a lot of traction, and it uses Pixi under the hood for rendering. A better place than Stack Overflow to look for help with all these issues is http://www.html5gamedevs.com.
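On the render-order point raised in the question: the usual trick, in WebGL as in OpenGL, is to batch draws by texture so that expensive state switches happen once per texture rather than once per sprite (this is essentially what Pixi's sprite batcher does internally). A language-agnostic sketch of the idea, written in Python purely for brevity with hypothetical names:

```python
from collections import defaultdict

def batch_by_texture(sprites):
    """Group draw calls by texture so each texture is bound once per frame.

    sprites: iterable of (texture_id, x, y) draw requests in submission order.
    Returns an ordered list of (texture_id, [(x, y), ...]) batches.
    """
    groups = defaultdict(list)
    for tex, x, y in sprites:
        groups[tex].append((x, y))
    return list(groups.items())

# naive submission order alternates textures: grass, rock, grass, rock
# -> 4 texture binds; after batching -> only 2 binds
calls = [("grass", 0, 0), ("rock", 10, 0), ("grass", 20, 0), ("rock", 30, 0)]
```

Note that batching reorders draws, so it is only safe within a single layer; in practice you would sort by (layer, texture) to preserve visual stacking.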
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I have questions about both SIFT and pHash.
First of all, I'm using SIFT to identify similar images in a real-time service.
The inputs are pictures taken with a phone camera, so a small amount of rotation and some blur can be present.
Then I found pHash and tested it on its demo page, but the result made me sigh.
This is the result of that test:
In this test, the two images are aligned on the x-axis, so they have no rotation. But in the right image the logo was removed and the person was moved to the left. To my eye, this is 'very similar', and SIFT catches it completely.
Now, here are my questions:
Is pHash faster than SIFT?
Is pHash's accuracy reliable?
SIFT's output is too big to use in a real-time service, so I have to use hashing, such as LSH (locality-sensitive hashing), to shrink it. Is there any other approach I should try?
OK, I've figured it out.
pHash cannot recognize a rotated or heavily shifted image as the same image.
In terms of data size, pHash is dramatically better to use: it is very small, one hash per image. SIFT, however, needs 128 bytes per feature point, and there are many feature points in one image.
Ultimately, SIFT identifies similar images better than pHash, but it needs much more space.
I haven't benchmarked speed yet, but I think pHash is faster than SIFT, because SIFT has to operate on many features per image.
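To make the size difference concrete, here is a sketch of aHash, a simpler cousin of pHash (the real pHash takes a DCT of the downscaled image rather than thresholding at the mean); everything below is illustrative, not the pHash library's actual API:

```python
import numpy as np

def average_hash(img, size=8):
    """Minimal aHash: downscale to size x size, threshold at the mean.
    img: 2-D grayscale array. Returns a 64-element boolean fingerprint,
    i.e. a 64-bit hash for the whole image."""
    h, w = img.shape
    # crude box downscale: crop to a multiple of `size`, then block-average
    small = img[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits; a small distance means similar images."""
    return int(np.count_nonzero(h1 != h2))
```

So one image costs 8 bytes, versus 128 bytes for every one of the hundreds of SIFT keypoints; that is the trade-off: pHash is compact and fast to compare but fragile to rotation and large shifts, while SIFT is robust but heavy.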
If you have other answers to the questions above, please tell me.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I am trying to import R plots into Inkscape for editing (I tried both SVG and PDF). Once I do so, Inkscape becomes very slow. Is there any solution to this problem? I tried to simplify the paths via Path > Simplify, but there was no significant improvement.
Thanks
Unfortunately, probably not. My guess is that you are trying to edit R plots with many data points, which then translates to many SVG elements in Inkscape.
Inkscape handles relatively small to moderately complex graphics well, but it consumes large amounts of RAM once many SVG elements are rendered on the canvas, even if each element is small.
You can try a few things: (a) open Inkscape on a more powerful computer with lots of RAM; (b) see if you can create an R plot that conveys the same meaning but has fewer data points/elements; (c) dig around the web for an unofficial 64-bit build of Inkscape. Alternatively, if editing the plots is a one-off, consider trialing 64-bit Adobe Illustrator and creating the SVG plots that way.
Sorry I could not be of more help.