I want a way to track a user looking at a screen over time.
For example, during normal use, which exact seconds of the day was the user looking at the screen?
I'm wondering what innovative ideas or pre-existing software would allow me to do this.
In more detail, the way I see it, there would be some tolerance levels, e.g. distance from screen and angle of head to screen, within which the user is considered "engaged" with the monitor. If the camera on, say, a MacBook Pro were used to track this, the program would record in a text file or key-value store a timestamp and a boolean value for each second it is turned on.
Does anyone have any experience with this sort of thing?
You can find a good starting point here: http://code.google.com/p/ehci/
It's open-source software based on OpenCV that tracks the head and detects its orientation.
There are face trackers already implemented (and already trained), for example in OpenCV. I suggest you first start with just tracking faces. Once you have a robust face tracker, you can generate output telling how long a face has been looking at the screen, and so on.
Later you can add improvements. Once you detect a face, you can try to recognize people by analyzing the face pixels.
Another line is to recognize parts of the face, like the mouth, eyes, nose, eyebrows...
If you can track the face and its parts, you can try to recognize facial expression patterns, like happiness, sadness, etc.
Face.com has a solution to recognize faces. So just grab the camera input and send it to their servers, I guess?
I built a face detection system to do something like this once using OpenCV; you can see the result here.
The method I used was two separate applications of Haar cascade detection with the standard built-in OpenCV classifiers. I used the classifier called haarcascade_frontalface_default.xml to see if the user is watching the screen and haarcascade_profileface.xml to see if the user is looking away. The following code should get you started using OpenCV and C++.
CvHaarClassifierCascade *cascade_face;
CvMemStorage *storage_face;
CvHaarClassifierCascade *cascade_profile;
CvMemStorage *storage_profile;

//load the classifiers and allocate detection storage
storage_profile = cvCreateMemStorage( 0 );
cascade_profile = ( CvHaarClassifierCascade* )cvLoad( "haarcascade_profileface.xml", 0, 0, 0 );
storage_face = cvCreateMemStorage( 0 );
cascade_face = ( CvHaarClassifierCascade* )cvLoad( "haarcascade_frontalface_default.xml", 0, 0, 0 );

//detect profile faces (user looking away) in the current frame 'img'
CvSeq *profile = cvHaarDetectObjects( img, cascade_profile, storage_profile, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING, cvSize( 20, 20 ) );
for( int i = 0 ; i < ( profile ? profile->total : 0 ) ; i++ ) {
    CvRect *r = ( CvRect* )cvGetSeqElem( profile, i );
    //draw rectangle here, or do other stuff
}

//detect frontal faces (user watching the screen)
CvSeq *faces = cvHaarDetectObjects( img, cascade_face, storage_face, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING, cvSize( 20, 20 ) );
for( int i = 0 ; i < ( faces ? faces->total : 0 ) ; i++ ) {
    CvRect *r = ( CvRect* )cvGetSeqElem( faces, i );
    //draw rectangle here, or do other stuff
}
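To get from per-frame detections to the per-second log described in the question, you could poll the detector once a second and append a timestamp/boolean pair to a file. Here is a minimal sketch in Java; isLookingAtScreen() is a hypothetical stand-in for whichever detector you end up using:

import java.io.FileWriter;
import java.io.IOException;
import java.time.Instant;

public class EngagementLogger {

    // Hypothetical stand-in: wrap your detector (OpenCV, ehci, ...) so it
    // returns true when a frontal face within your tolerances is found.
    static boolean isLookingAtScreen() {
        return true; // stub
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        try (FileWriter log = new FileWriter("engagement.log", true)) {
            while (true) {
                // one "timestamp,boolean" line per second, as the question describes
                log.write(Instant.now().getEpochSecond() + "," + isLookingAtScreen() + "\n");
                log.flush();
                Thread.sleep(1000);
            }
        }
    }
}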
I've got a question about libGDX collision detection. Because it's a rather specific question, I have not found any good solution on the internet yet.
So, I already created "humans" that consist of different body parts, each with rectangle-shaped collision detection.
Now I want to implement weapons and skills, which for example look like this:
Skill example image
Problem
Working with rectangles for collision detection would be really frustrating for players when there are skills like this: they would dodge a skill successfully, but the collision detector would still damage them.
Approach 1:
Before I started working with libGDX, I created an Android game with a custom engine and similar skills. There I solved the problem in the following way:
Detect rectangle collision
Calculate overlapping rectangle section
Check every single pixel of the overlapping part of the skill for transparency
If there is any non-transparent pixel found -> Collision
That's a rather heavy approach, but since only the overlapping pixels are checked and the rest of the game is really light, it works completely fine.
At the moment my skill images are loaded as "TextureRegion", where it is not possible to access single pixels.
I have found out that libGDX has a Pixmap class, which would allow such pixel checks. The problem is: loading them additionally as Pixmaps would 1. be even heavier and 2. defeat the whole purpose of the Texture system.
An alternative could be to load all skills as Pixmap only. What do you think: Would this be a good way? Is it possible to draw many Pixmaps on the screen without any issues and lag?
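To make approach 1 concrete, the check I have in mind would look roughly like this in libGDX terms (a sketch, assuming I keep a Pixmap per skill and the sprite is drawn unscaled, so one world unit maps to one pixel):

import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.math.Intersector;
import com.badlogic.gdx.math.Rectangle;

public class PixelCollision {

    //returns true if any non-transparent skill pixel lies in the overlap
    public static boolean skillHits(Rectangle skillBounds, Rectangle bodyPart, Pixmap skillPixmap) {
        Rectangle overlap = new Rectangle();
        if (!Intersector.intersectRectangles(skillBounds, bodyPart, overlap)) {
            return false; //bounding boxes don't even touch
        }
        //walk only the overlapping pixels, in the skill image's local coordinates
        //(note: Pixmap's y-axis runs top-down, so flip y if your world runs bottom-up)
        int startX = (int) (overlap.x - skillBounds.x);
        int startY = (int) (overlap.y - skillBounds.y);
        for (int x = startX; x < startX + (int) overlap.width; x++) {
            for (int y = startY; y < startY + (int) overlap.height; y++) {
                //getPixel returns RGBA8888, so the lowest byte is alpha
                if ((skillPixmap.getPixel(x, y) & 0xff) != 0) {
                    return true;
                }
            }
        }
        return false;
    }
}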
Approach 2:
Another way would be to create polygons with the shape of the skills and use them for the collision detection.
a)
But how would I define a polygon shape for every single skill (there are over 150 of them)? Well, after browsing a while, I found this useful tool: http://www.aurelienribon.com/blog/projects/physics-body-editor/
It allows you to create polygon shapes by hand and then save them as JSON files, readable by the libGDX application. Now here come the difficulties:
The Physics Body Editor is connected to Box2D (which I am not using). I would either have to add the whole Box2D physics engine (which I do not need at all) just because of one tiny collision detection, or I would have to write a custom BodyEditorLoader, which would be a tough, complicated and time-intensive task.
Some images of the same skill sprite have a big difference in their shapes (like the second skill sprite example). When working with the BodyEditor tool, I would have to not only define the shape of every single skill, but I would have to define the shape of several images (up to 12) of every single skill. That would be extremely time-intensive and a huge mess when implementing these dozens of polygon shapes
b)
If there is any smooth way to automatically generate Polygons out of images, that could be the solution. I could simply connect every sprite section to a generated polygon and check for collisions that way. There are a few problems though:
Is there any smooth tool which can generate polygon shapes out of an image (without needing too much time to do so)?
I don't think that a tool like this (if one exists) can directly work with Textures. It would probably need Pixmaps. The Pixmaps would not need to stay loaded after the polygon creation, though. Still an extremely heavy task!
My current thoughts
I'm stuck at this point because there are several possible approaches but all of them have their difficulties. Before I choose one path and continue coding, it would be great if you could leave some of your ideas and knowledge.
There might be helpful classes and code included in libGDX that solve my problems within seconds - as I am really new at libGDX I just don't know a lot about it yet.
Currently I think I would go with approach 1: Work with pixel detection. That way I made exact collision detections possible in my previous Android game.
What do you think?
Greetings
Felix
I, personally, feel that pixel-to-pixel collision would be overkill on performance and would still produce some instances where I would feel cheated (I got hit by the handle of the axe?).
If it were me, I would add a "Hitbox" to each skill. StreetFighter is a popular game which uses this technique. (newer versions are in 3D, but hitbox collision is still 2D) Hitboxes can change frame-by-frame along with the animation.
Empty spot here to add example images - google "Streetfighter hitbox" in the meantime
For your axe, there could be a defined rectangle hitbox along the edge of one or both ends - or even over the entire metal head of the axe.
This keeps it fairly simple, without having to mess with exact polygons, but also isn't overly performance heavy like having every single pixel being its own hitbox.
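A sketch of how that bookkeeping could look (the per-frame map and class are my own illustration, not a libGDX or StreetFighter API; only Rectangle is real libGDX):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import com.badlogic.gdx.math.Rectangle;

public class FrameHitboxes {

    //maps an animation frame index to the hitboxes active on that frame
    private final Map<Integer, List<Rectangle>> hitboxesByFrame;

    public FrameHitboxes(Map<Integer, List<Rectangle>> hitboxesByFrame) {
        this.hitboxesByFrame = hitboxesByFrame;
    }

    //tests only the current frame's hitboxes against a target rectangle
    public boolean hits(int frameIndex, Rectangle target) {
        List<Rectangle> boxes = hitboxesByFrame.getOrDefault(frameIndex, Collections.<Rectangle>emptyList());
        for (Rectangle box : boxes) {
            if (box.overlaps(target)) {
                return true;
            }
        }
        return false;
    }
}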
I've used that exact body editor you referenced, and it has the ability to generate polygons and/or circles for you. I also made a loader for the generated JSON with the Jackson library. This may not be the answer for you, since you'd have to implement Box2D, but here's how I did it anyway.
/**
 * Adds all the fixtures defined in jsonPath with the name 'lookupName', and
 * attaches them to the 'body' with the properties defined in 'fixtureDef'.
 * Then converts to the proper scale with 'width'.
 *
 * @param body the body to attach fixtures to
 * @param fixtureDef the fixture's properties
 * @param jsonPath the path to the collision shapes definition file
 * @param lookupName the name to find in the jsonPath JSON file
 * @param width the width of the sprite, used to scale fixtures and find the origin
 * @param height the height of the sprite, used to find the origin
 */
public void addFixtures(Body body, FixtureDef fixtureDef, String jsonPath, String lookupName, float width, float height) {
    // 'json' is a Jackson ObjectMapper held by this class
    JsonNode collisionShapes = null;
    try {
        collisionShapes = json.readTree(Gdx.files.internal(jsonPath).readString());
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    if (collisionShapes == null) {
        return; // file missing or malformed, nothing to attach
    }
    for (JsonNode node : collisionShapes.findPath("rigidBodies")) {
        if (node.path("name").asText().equals(lookupName)) {
            Array<PolygonShape> polyShapes = new Array<PolygonShape>();
            Array<CircleShape> circleShapes = new Array<CircleShape>();
            // vertices are stored normalized by the editor, so scale by the sprite
            // width and shift so the body origin sits at the sprite's center
            for (JsonNode polygon : node.findPath("polygons")) {
                Array<Vector2> vertices = new Array<Vector2>(Vector2.class);
                for (JsonNode vector : polygon) {
                    vertices.add(new Vector2(
                            (float) vector.path("x").asDouble() * width,
                            (float) vector.path("y").asDouble() * width)
                            .sub(width / 2, height / 2));
                }
                polyShapes.add(new PolygonShape());
                polyShapes.peek().set(vertices.toArray());
            }
            for (final JsonNode circle : node.findPath("circles")) {
                circleShapes.add(new CircleShape());
                circleShapes.peek().setPosition(new Vector2(
                        (float) circle.path("cx").asDouble() * width,
                        (float) circle.path("cy").asDouble() * width)
                        .sub(width / 2, height / 2));
                circleShapes.peek().setRadius((float) circle.path("r").asDouble() * width);
            }
            // attach every generated shape to the body with the given fixture properties
            for (PolygonShape shape : polyShapes) {
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
            for (CircleShape shape : circleShapes) {
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
        }
    }
}
And I would call it like this:
physics.addFixtures(body, fixtureDef, "ship/collision_shapes.json", shipType, width, height);
Then for collision detection:
public ContactListener shipsExplode() {
    ContactListener listener = new ContactListener() {
        @Override
        public void beginContact(Contact contact) {
            Body bodyA = contact.getFixtureA().getBody();
            Body bodyB = contact.getFixtureB().getBody();
            for (Ship ship : ships) {
                if (ship.body == bodyA || ship.body == bodyB) {
                    ship.setExplode();
                }
            }
        }

        // ContactListener is an interface, so the remaining methods
        // must be implemented even if they do nothing
        @Override
        public void endContact(Contact contact) {}

        @Override
        public void preSolve(Contact contact, Manifold oldManifold) {}

        @Override
        public void postSolve(Contact contact, ContactImpulse impulse) {}
    };
    return listener;
}
then you would add the listener to the world:
world.setContactListener(physics.shipsExplode());
My sprites' width and height were small, since you're dealing in meters, not pixels, once you start using Box2D. One sprite was 0.8f high and 1.2f wide, for example. If you made the sprite's width and height in pixels, the physics engine would hit the speed limits that are built in: http://www.iforce2d.net/b2dtut/gotchas
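One way to keep that conversion in a single place is a small helper like this (a sketch; the 100f scale is an arbitrary choice for your game, not a Box2D constant):

public class Units {

    //arbitrary scale: 100 screen pixels represent one Box2D meter
    public static final float PIXELS_PER_METER = 100f;

    public static float toMeters(float pixels) {
        return pixels / PIXELS_PER_METER;
    }

    public static float toPixels(float meters) {
        return meters * PIXELS_PER_METER;
    }
}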
Don't know if this still matters to you guys, but I built a small Python script that returns the pixel positions of the points on the edges of an image. There is room to improve the script, but for me, for now, it's OK...
from PIL import Image, ImageFilter

filename = "dship1"
image = Image.open(filename + ".png")

# run an edge-detection filter so only the outline pixels remain
image = image.filter(ImageFilter.FIND_EDGES)
image.save(filename + "_edge.png")

cols = image.width
points = []
w = 1
h = 1
i = 0
# walk the pixels row by row, keeping every non-transparent one
# (assumes an RGBA image, so pixel[3] is the alpha channel)
for pixel in list(image.getdata()):
    if pixel[3] > 0:
        points.append((w, h))
    if i == cols:
        w = 0
        i = 0
        h += 1
    w += 1
    i += 1

# text mode ("w"), since we are writing a string, not bytes
with open(filename + "_points.txt", "w") as nf:
    nf.write(',\n'.join('%s, %s' % x for x in points))
In case of updates you can find them here: export positions
I'm trying to make a small project using Three.js and the physics plugin Physijs; just a little dice roller. My approach is to use setGravity to move the dice around. The issue I'm running into is that once the dice come to rest, they no longer respond to gravity. Has anyone run into this before?
What's happening:
Ammo.js, on which Physijs is based, puts resting or very slowly moving objects into a sleep state to save performance. So when you change the world's gravity, the sleeping objects don't care, because Physijs doesn't tell them gravity has changed.
You have the ability to modify the sleeping thresholds, set activation states, or just quickly activate the rigid bodies before changing gravity.
Please note this code applies to native Ammo.js; I am not sure how to do this when using Physijs, but you get the idea.
Solution 1: Loop over your bodies and activate them, then change gravity:
// dice is an array with your rigid bodies
for ( var i = 0; i < dice.length; i ++ ) {
    // hey wake up
    dice[ i ].activate();
}
physicsWorld.setGravity( new Ammo.btVector3( 0, -9.81, 0 ) );
Solution 2: Thou shalt get no sleep; do this after creating your dice:
var DISABLE_DEACTIVATION = 4;
for ( var i = 0; i < dice.length; i ++ ) {
    // no sleep for you... ever
    dice[ i ].setActivationState( DISABLE_DEACTIVATION );
}
I have found a tutorial on parallax scrolling in SpriteKit using Objective-C, though I have been trying to port it to Swift without much success; very little, in fact.
Parallax Scrolling
Does anyone have any other tutorials or methods of doing parallax scrolling in Swift?
This is a SUPER simple way of starting a parallax background. WITH SKACTIONS! I am hoping it helps you understand the basics before moving to a harder but more effective way of coding this.
So I'll start with the code that gets a background moving, and then you can try duplicating it for the foreground or objects you want to put in your scene.
//declare ground picture. If you're putting this image over the top of another image, use a png file.
var groundImage = SKTexture(imageNamed: "background.jpg")

//make the SKActions that will move the image across the screen. This one goes from right to left.
var moveBackground = SKAction.moveByX(-groundImage.size().width, y: 0, duration: NSTimeInterval(0.01 * groundImage.size().width))

//This resets the image to begin again on the right side.
var resetBackGround = SKAction.moveByX(groundImage.size().width, y: 0, duration: 0.0)

//this makes the image move forever, with the actions in the correct sequence.
var moveBackgoundForever = SKAction.repeatActionForever(SKAction.sequence([moveBackground, resetBackGround]))

//then run a for loop to make the images line up end to end.
for var i:CGFloat = 0; i < 2 + self.frame.size.width / (groundImage.size().width); ++i {
    var sprite = SKSpriteNode(texture: groundImage)
    sprite.position = CGPointMake(i * sprite.size.width, sprite.size.height / 2)
    sprite.runAction(moveBackgoundForever)
    self.addChild(sprite)
}

//once this is done, repeat for a foreground or other items, but run them at a different speed.

/*Make sure your pictures line up visually end to end. Just duplicating this code will NOT work, as you will see, but it is a starting point. Hint: if you're using items like simple obstructions, then using actions to spawn a function that creates the obstruction may be a good way to go too. If there are more than two separate parallax objects, then using an array for those objects would help performance. There are many ways to handle this, so my point is simple: if you can't port it from Objective-C, then rethink it in Swift. Good luck!*/
I am creating several THREE.Lines using THREE.BufferGeometry. Initially my app had them all starting at the origin and things worked as expected. Now, I would like to be able to start (and end) them at any point.
This fiddle (http://jsfiddle.net/9nVqU/) illustrates (I hope) how changing one end of the line away from the origin causes unexpected results.
I wondered if it was because any given line follows on from the previous one - switching the start/end order didn't change anything though so if that were true, I'd expect it to break.
Maybe I have the arrays set up incorrectly or the attributes that tell THREE.js how to interpret it - I think I need 2 * 3 verts for each line but changes I made to buffer_geometry.attributes = { seemed to make things worse.
FWIW, the actual effect I'm trying to achieve is to selectively turn on and off the lines based on user input. I can do that already by changing the end position but then I lose that value and I don't want to store it elsewhere. I thought that I could move the start point to the end point to switch it off and then move the start point to the origin again to re-enable it. If there is a way to enable/disable lines individually with BufferGeometry, then that would clearly be better.
First of all, you would have to do this:
var line = new THREE.Line( buffer_geometry, material );
line.type = THREE.LinePieces;
Second, this is not supported in r.58, but it should be.
As a work-around, you can hack WebGLRenderer.renderBufferDirect() like so:
// render lines
setLineWidth( material.linewidth );
var position = geometryAttributes[ "position" ];
primitives = ( object.type === THREE.LineStrip ) ? _gl.LINE_STRIP : _gl.LINES;
_gl.drawArrays( primitives, 0, position.numItems / 3 );
_this.info.render.calls ++;
_this.info.render.points += position.numItems;
three.js r.58
I extracted country outline data from somewhere and successfully managed to convert it into an array of lat-lng coordinates that I can feed to the Google Maps API to draw a polyline or polygon.
The problem is that there are about 1200+ points in that shape. It renders perfectly in Google Maps, but I need to reduce the number of points from 1200 to fewer than 100. I don't need a very smooth outline; I just need to throw away the points that I can live without. Any algorithm or online tool that can help me reduce the number of points is needed.
Found this simple JavaScript by Bill Chadwick: a Douglas-Peucker line simplification routine. Just feed the LatLngs into an array and pass it as the source argument of the function; it will output an array with fewer points for the polygon.
var ArrayforPolygontoUse = GDouglasPeucker(theArrayofLatLng, 2000);
var polygon = new google.maps.Polygon({
    paths: ArrayforPolygontoUse,
    geodesic: true,
    strokeColor: "#0000FF",
    strokeOpacity: 0.8,
    strokeWeight: 2,
    fillColor: "#0000FF",
    fillOpacity: 0.4,
    editable: true
});
theArrayofLatLng is an array of LatLngs that you collected using the Google Maps API.
The 2000 value is the kink, in metres. My assumption is that the higher the value, the more points will be deleted in the output.
For real beginners:
Make sure you declare the js file on your html page before using it. :)
<script type="text/javascript" src="js/GDouglasPeucker.js"></script>
I think MapShaper can do this online.
Otherwise, implement some simplification algorithm yourself.
If you can install PostGIS (which I think is easy, as they provide an installer), you can import the data and execute SnapToGrid() or ST_Simplify(), for which I cannot find an equivalent in MySQL. If you decide to go with PostGIS, which I recommend because it will help you down the road, I can provide you with the details.
Now, for an easy custom solution: you can reduce the size by cutting or rounding some of the last digits of the coordinates and then merging identical coordinates, which actually amounts to a simple SnapToGrid().
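A minimal sketch of that rounding-and-merging idea (plain Java for illustration; the point format and decimal count are assumptions you would tune):

import java.util.ArrayList;
import java.util.List;

public class SnapToGrid {

    //rounds each {lat, lng} pair to a fixed number of decimals and drops
    //consecutive duplicates - a poor man's SnapToGrid()
    //(3 decimals is roughly a 100 m grid)
    static List<double[]> simplify(List<double[]> points, int decimals) {
        double factor = Math.pow(10, decimals);
        List<double[]> result = new ArrayList<double[]>();
        double prevLat = Double.NaN, prevLng = Double.NaN;
        for (double[] p : points) {
            double lat = Math.round(p[0] * factor) / factor;
            double lng = Math.round(p[1] * factor) / factor;
            if (lat != prevLat || lng != prevLng) { //merge runs of identical points
                result.add(new double[] { lat, lng });
                prevLat = lat;
                prevLng = lng;
            }
        }
        return result;
    }
}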
Hope it helps
I was looking for exactly the same thing and found Simplify.js. It does exactly what you want and is incredibly easy to use. You simply pass in your coordinates and it will remove all excess points.
simplify(points, tolerance, highQuality)
The points argument should contain an array of your coordinates formatted as {x: 123, y: 123}. (Afterwards you can convert it back to the format you wish.)
The tolerance should be the precision in decimal degrees. E.g. 0.0001 for 11 meters. Increasing this number will reduce the output size.
Set highQuality to true for better results if you don't mind waiting a few milliseconds longer.
Most likely what you want is to divide the points into two halves; you may want to try my JavaScript function:
function shortenAndShow ( polyline, color ) {
    var dist = 0, copyPoints = Array ( );
    for ( var n = 0, end = polyline.getVertexCount ( ) - 1; n < end; n++ ) {
        dist += polyline.getVertex ( n ).distanceFrom ( polyline.getVertex ( n + 1 ) );
        copyPoints.push ( polyline.getVertex ( n ) );
    }
    var lastPoint = copyPoints [ copyPoints.length - 1 ];
    var newLine = new GPolyline ( copyPoints, color, 2, 1 );
    gmap2.addOverlay ( newLine );
}
I agree with Unreason's answer. The website supports GeoJSON; I used it on my website and it cut down my GeoJSON. But I think you also need this world country GeoJSON.