Motion capture with a Kinect v1 in Processing

Hello there, I was wondering if anyone could help me with something.
I have recently been given a task by my teachers at college, and I hope to achieve it through motion capture.
The other lecturers teach sound art and film art, so I plan to create a program that tracks the participant's movements and displays them on screen in either set or random colours.
I would also like to address the sound part of this project through the participant's movements, either by changing the pitch of a sound through movement or by changing the speed of the sound through movement.
I have managed to get an Xbox 360 Kinect (model 1414) to work in Processing and have played around with the motion tracking, but I can't seem to figure out how to attach an ellipse to the hands. I hope someone can help me and that it doesn't seem too much of a hellish task.
If you can help, here is my email address: (alicebmcgettigan#gmail.com)
(If this is impossible I would understand, as I tend to make life difficult for myself, haha.)

You will need a middleware library that can provide skeleton tracking data from depth data.
One option on Windows is the Kinect for Windows Processing library which uses the Kinect SDK.
There is another library called SimpleOpenNI which works on multiple operating systems.
The official version is no longer updated for Processing 3 (it does work with Processing 2.2.1, though). Fortunately, you can find an updated fork of the SimpleOpenNI library on GitHub.
To manually install the library:
1. Select the version of the library for your version of Processing (e.g. for Processing 3.5.3 go to SimpleOpenni Processing_3.5.3). It should be one of 3.5.3, 3.5.2, 3.4, 3.3.7, 3.3.6 or 2.2.1 (otherwise you may need to install one of these Processing versions).
2. Click Clone or download > Download ZIP (on the top right side of the repo).
3. Unzip the contents and, within the unzipped folder, select the SimpleOpenNI folder that contains a folder named library.
4. Move this nested SimpleOpenNI folder (the one containing the library folder) to Documents/Processing/libraries.
5. Restart Processing (if it was already running).
6. Go to Processing > Examples > Contributed Libraries > SimpleOpenNI > OpenNI and start playing with the examples.
Other notes:
To track a user, start with the User and User3d examples.
Notice that context.getCoM() returns the centre of mass (a single point), while context.getJointPositionSkeleton() can get you the position of a hand in 3D.
You can use context.convertRealWorldToProjective() to convert a 3D position to a projected 2D position on screen.
Once the skeleton tracking is locked onto a person you can get the joint position for each hand, but it's worth noting there is separate hand-tracker functionality: check out the Hands / Hands3d examples. Depending on how you want to track participants, what the environment is and what the motions are, choose the option that works best (see the sketch after these notes for attaching an ellipse to each hand).
Speaking of the environment, bear in mind the Xbox 360 Kinect is susceptible to infrared light interference (for example bright incandescent lights, direct sunlight, etc.): this will deteriorate the depth map quality, which in turn affects skeleton tracking. You will want as much control over lighting as possible, ideally with consistent, controlled lighting conditions.
Test! Test! Test! :) Think through the interaction and the environment (sketching on paper first can be useful), and for each assumption run a basic test to prove whether or not it works. Use iterations to learn how to change either the environment or the interaction to make it work.
Check out the RecorderPlay example: it records a .oni file which contains both RGB and depth data. This is super useful because it allows you to record on site in areas where you might have limited time or access, and it saves you from having to go back and forth between your computer and standing in front of the Kinect. Once you initialise SimpleOpenNI with the path to the .oni file (e.g. context = new SimpleOpenNI(this, recordPath);) you can run the skeleton tracking and everything else on the recording.
If you want to see more about Kinect and Processing, check out Daniel Shiffman's Getting Started with Kinect and Processing page.
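To tie those notes together, here is a minimal, untested sketch of how you might attach an ellipse to each hand, along the lines of the bundled User example. It assumes the SimpleOpenNI 1.96 API (enableUser() without arguments and the onNewUser(SimpleOpenNI, int) callback); other versions name these slightly differently, so treat it as a starting point rather than a drop-in solution:

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  // to play back a recorded .oni file instead of the live camera:
  // context = new SimpleOpenNI(this, "path/to/recording.oni");
  if (!context.isInit()) {
    println("Kinect not connected?");
    exit();
    return;
  }
  context.enableDepth();
  context.enableUser();  // enables skeleton tracking
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  int[] userList = context.getUsers();
  for (int i = 0; i < userList.length; i++) {
    if (context.isTrackingSkeleton(userList[i])) {
      drawHand(userList[i], SimpleOpenNI.SKEL_LEFT_HAND);
      drawHand(userList[i], SimpleOpenNI.SKEL_RIGHT_HAND);
    }
  }
}

void drawHand(int userId, int jointId) {
  PVector world = new PVector();   // 3D joint position in millimetres
  PVector screen = new PVector();  // projected 2D position in pixels
  float confidence = context.getJointPositionSkeleton(userId, jointId, world);
  context.convertRealWorldToProjective(world, screen);
  if (confidence > 0.5) {
    noStroke();
    fill(random(255), random(255), random(255));  // random colour, as in the brief
    ellipse(screen.x, screen.y, 30, 30);
  }
}

// called by SimpleOpenNI when a new user is detected
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}

From there, a hand's y position (or how fast it moves between frames) can be mapped to the pitch or playback speed of a sound using a library such as Minim or the Processing Sound library.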
Have fun!

Related

How can I create a controlled stand-alone 3D viewer?

I'm trying to create a custom 3D viewer that can show a rigged model created with Blender. In particular, the model will be a mechanical one, so I need to move it with an IK solver: I'll move/rotate a single bone (which will be the motor axis) and all the other components must follow it.
I've already done the Blender part (see link below), but now I'm stuck on how to make a stand-alone viewer (ideally a single executable file) that can communicate with another application/process (maybe with a TCP socket?) and move the 3D model based on the information sent by it. I'm working on Win10.
Does anyone have any ideas?
(Sorry for my bad English, I hope the explanation was thorough.)
https://youtu.be/5rR7BKrGzFg

How can I programmatically interact with a video game GUI

Before I get shot down on this one, I realize that the 'how' answer for this question might be slightly debatable; however, I'm more interested in the 'what'.
In a nutshell, I want to know which methods I can use to interact with a PC video game interface. I want to create a program that can extract data from a video game's market interface.
My initial thought was that I would need to programmatically take screenshots and then use some optical character recognition (OCR) software to extract the text, then run whatever operation on the extracted text to derive my insights.
Then I was thinking it might just be easier to have a bunch of mini screenshots that I use to find matches on certain sections of the screen. When a match is found, I would then know what the text on the screen is without having to actually 'extract' it.
For those out there who have done this, can you point me in one direction or the other? Perhaps there is a method that I am completely unaware of.
If this question is not suitable for this forum, it would be much appreciated if you could direct me elsewhere.
Edit: I should probably add that I'm not looking to spend a fortune on this project... so any free software would be best. Perhaps that's a tall order.
I'm starting to think Sikuli is the direction I'm going to go: open-source image recognition software that integrates with Python, Ruby, Java, JDBC, JavaScript and more.
-- Expanding on the question --
There are basically 3 categories of tools:
Recorder: while you manually work through your workflow, a recorder tracks your mouse and keyboard actions. After stopping the recording, you can play it back (auto-run your workflow). The recordings can usually be edited and augmented with additional features.
GUI-aware: the tool allows you to programmatically operate on GUI elements such as buttons. This is based on knowledge of the internal structures and names of the GUI elements and their features. Some of these tools also have a recording feature.
Visual: the tool "sees" images (usually rectangular pixel areas) on the screen and allows you to act on these images using mouse and keyboard simulation. There might be a recorder feature in such a tool as well.
SikuliX belongs to the 3rd category and currently does not have a recorder feature.
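To make the third category concrete, here is a tiny hypothetical sketch using the SikuliX Java API (the image file name and search text are placeholders you would capture or choose yourself; the equivalent exists()/click()/type() calls are also available in the IDE's Python scripting layer):

import org.sikuli.script.Screen;
import org.sikuli.script.FindFailed;

public class MarketReader {
    public static void main(String[] args) throws FindFailed {
        Screen screen = new Screen();  // the primary monitor

        // "market_button.png" is a placeholder reference screenshot of the button to match
        if (screen.exists("market_button.png") != null) {
            screen.click("market_button.png");  // simulated mouse click on the matched region
            screen.type("iron ore");            // simulated keyboard input into the focused field
        }
    }
}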
In games with moddable UIs, like many MMOs, you could create a mod that streams data through a series of black and white squares that could be read with optical sensors. From there, a microcontroller could deliver the data back to the PC via USB or wifi.
My approach, as a noob: first determine whether OCR is 100% needed; I think this plays a role in speed.
If possible:
-run the game in a window (allows for easier troubleshooting)
-check whether there is a high-contrast option for the game; it will help Sikuli find things
Then you plan out your scenarios:
You have to create different functions for different situations. A lot of gaming is "do you see this?" then "do this" until that is gone.
Start with small parts you want to automate, then build on them, making sure your parts can scale in case small changes need to happen (they will). For instance, say you want to open the menu if you see an object, let's say a tree.
Assume you have some sort of walking algorithm.
setROI(region1)      # focus the search on the region where the tree should appear
if exists(tRee):     # tRee is a reference image (screenshot) of the tree
    click(loCation)  # or hit the shortcut key that opens the menu
    click(iTem)      # if the item moves in the menu you may need to scroll to find it first,
                     # or change the ROI and check whether Sikuli can tell your item apart from one you don't want to click
You would get that to loop into other actions and proceed. Good luck.

Auto-cropping image with detection of crop-lines

I am working on a project, an Android app that uses the camera to capture a photo of a ticket and does OCR recognition on only a part of it. I have no previous experience in image processing, but I know it must be done in some clever way, because Android applications have small RAM limits.
I don't have enough reputation points to post images, so I give URLs instead.
Below, I attach the image before any processing:
My aim is to automatically detect these lines of (---) and crop the image so that the final result looks like this one:
What's more, it's important to stay open source and do it without sending the photo to some external image-processing service.
You can try using the Hough Transform to find the lines. OpenCV has an implementation that is open source and works on Android.
HoughLinesP is a very efficient version of the Hough Transform for finding line segments.
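As a rough sketch of that idea using the OpenCV Java bindings (the file name is a placeholder, and on Android you would typically initialise OpenCV through the Android SDK's loader rather than System.loadLibrary), detect edges first and then let HoughLinesP bridge the gaps between the dashes via its maxLineGap parameter:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class CropLineFinder {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);  // load the native OpenCV library (desktop)

        // "ticket.jpg" is a placeholder path for the photographed ticket
        Mat gray = Imgcodecs.imread("ticket.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // edge detection first: HoughLinesP works on a binary edge image
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150);

        // probabilistic Hough transform: each row of 'lines' is (x1, y1, x2, y2);
        // a generous maxLineGap joins the individual dashes into one segment
        Mat lines = new Mat();
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 100, 10);

        for (int i = 0; i < lines.rows(); i++) {
            double[] l = lines.get(i, 0);
            System.out.printf("line from (%.0f, %.0f) to (%.0f, %.0f)%n", l[0], l[1], l[2], l[3]);
        }
        // near-horizontal segments (small |y2 - y1|) are candidates for the dashed crop
        // marks; their y coordinates give the rows to crop between
    }
}

The threshold, minLineLength and maxLineGap values here are guesses; they would need tuning against real ticket photos.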
Olena is definitely the way to go! It's a generic image-processing library, but the interesting part is a module called Scribo.
Scribo will do document analysis on the picture to extract text and/or image regions, and optionally send text regions to Tesseract for recognition.
Whether it is feasible on Android is something I couldn't tell; I've tried it on OS X and Linux systems and it shows great potential.

Using images in a MATLAB GUI

I'm working on a small image processing project in MATLAB. I have worked with MATLAB before, but never created a GUI. The GUI I want to create could be pretty advanced, so I need some hints on how to get started.
The purpose of the GUI would be to load an image and show it to the user. The user then has to click on two points in the image, whose coordinates are stored (in pixels) in a variable. If possible, a colored dot is shown where the user has clicked. After the user has finished with the current image, he can load the next one.
I have some experience with Java, and I think this wouldn't be too hard in Swing. But MATLAB doesn't seem intended for creating such an advanced GUI. However, the whole project so far is in MATLAB, so it would be nice if I could manage to do it there. Any help? Hints? Things I should look at?
Thanks a lot.
This is not a very complex task to be done in MATLAB.
For simple instructions about adding a picture to a GUI, take a look at this post:
http://blogs.mathworks.com/pick/2007/10/16/matlab-basics-setting-a-background-image-for-a-gui/
For instructions on various interactions between GUI axes and the mouse pointer, check this video (keep in mind that your picture in the GUI lies within normal MATLAB axes):
http://blogs.mathworks.com/pick/2008/05/27/advanced-matlab-capture-mouse-movement/
In general, Doug's tutorial videos are great for MATLAB beginners, and I'd advise you to take a look at more of them.

Qt, CEGUI or wxWidgets for a text game GUI?

I tried to sign up, but I was unable to; perhaps it's a problem on my side. Hopefully I'll get an answer as an anonymous user.
I apologize for the grammar/syntax, but English isn't my native language.
Recently I lost my job, so I have enough spare time to try something fun. I decided to create a simple text RPG game for me and some friends. It will be very close to board games like Talisman, Dungeon Run and HeroQuest, using dice and a simple attribute/skill system. So no 3D graphics. The only 2D element, if I decide to include it, will be a map that will allow the hero to move between locations. Currently I'm using Windows XP SP3; for the game I use wxDev-C++, and although cross-platform would be cool, I don't really care.
I have some experience in C++ (currently using wxDev-C++), but I'm far from being called an expert or even a great programmer. I was about to start writing parts of the code, but I decided to check first whether creating a GUI for the game is possible. In some forums, many suggested I use Qt, CEGUI or wxWidgets, but most examples I saw are grey boxes that are indifferent at best, when I want something that fits better in a fantasy setting. I don't claim I would do better, but I want a GUI that is more fantasy-related.
What I want from the GUI:
1. A "cool" Gui with decent graphics. I could even create an image to serve as a mask in Photoshop, but the GUI builder will have to support imported images.
2. A relatively large textbox in the middle (with a scrollbar) that will display die rolls, damage and options.
3. The ability to display values dynamically (like the change in health after each action, without requiring a manual refresh).
4. Display an icon or a small image of the character in the area where I display stats/abilities.
5. Open new windows created with the same GUI builder to allocate points, buy/sell things and open a map.
About the map in the game: I decided to create a map in Photoshop. When the hero decides to move to another location, a new window will open showing the map. I thought of two possible ways to move between locations: 1) create hotspots on the image and select one by clicking on the name of the location (I dare not think about the complexity of this, so we move to idea #2), and 2) have the image as a background to a grid with vertical and horizontal coordinates. When the hero selects a new area to visit, he clicks on the area, but what he really does is click on the grid, which returns the two values (x, y) of the location and informs the game about the area the hero wants to visit.
Yeah, yeah, I know it's too much, so what I'm most interested in are points 1-3. I know that even if they are possible, it will probably take forever, but as I said I have spare time and I like learning new things. I apologize for the size of the post, but I decided to include as much info as possible so you know what I want.
If any of you has used Qt, CEGUI or wxWidgets, could you tell me which covers most of my criteria? I saw some great stuff built with CEGUI, but I don't know if it is too hard to learn.
Thanks in advance.
I know my answer comes pretty late, as I only started using Stack Overflow fairly recently, but maybe this response will help somebody.
Regarding point 1: CEGUI fully supports skinning widgets using XML. Our CEED editor (WYSIWYG) fully supports layout editing, but the skinning editor (LNF editor) is not finished as of now (11.11.2014); the development version supports exchanging images and changing sizes and proportions, but more advanced adjustments have to be done in XML.
CEGUI has an imageset editor, fully supported by the CEED editor. Creating imagesets (sets of named sub-images, with position and dimensions inside a big texture atlas) is supported there. Additionally, there is a way to create imagesets from just a bunch of jpg/png/... files using a tool. You would have to ask for specifics in the forum, though, because it is not integrated into CEED yet.
So basically, with CEGUI you are free to make whatever fantasy GUI you want. Skinning simple elements like buttons and progress bars isn't much work in XML anyway. Without the finished editor, some of the more advanced widgets are more work to skin, but many skins have already been created this way and some of them are even publicly available in the forum and in the CEGUI stock files.
Regarding point 2: StaticText widgets support what you want; you can even use images in the text or change fonts and colours if you want. Scrollbars are supported too.
Regarding point 3: I am not sure what you mean by this; you would have to specify.
Regarding point 4: a simple "Generic/Image" widget is available in CEGUI for this purpose. You can use pre-created images or even RTT (render-to-texture) textures.
Regarding point 5: you can create and destroy windows in CEGUI without issues.
Regarding the map: I'm not sure what you mean, but getting the position of a click relative to an image (representing the map) is possible in CEGUI.
CEGUI is not particularly hard to learn. There are always the forums and the chat if you have questions. For an open-source project it is quite well documented, so if you read the API documentation and look at the supplied samples in the sample browser, you should already get quite far. For everything else there is the forum (search), the IRC chat and a community wiki (mind the targeted versions of an article there, though).
For a project like yours, CEGUI seems perfectly suited (this is what it was created for in the first place). Qt is not really optimal for games for numerous reasons; wxWidgets I have never used.

Resources