I want to use the Cairo graphics library to copy the content of one X11 window to another X11 window. I create two surfaces using cairo_xlib_surface_create(). Now I want to copy a region from the source surface (position xs, ys, size ws, hs) to a given position on the destination surface (position xd, yd), where it should also become visible.
How would I accomplish this? Would I go through a Cairo image surface, as in this example: https://stackoverflow.com/a/18290221/3852630? That example copies from a source X11 surface to an image surface; how can I copy back from the image surface to the destination X11 surface? And how do I handle the regions described above?
Or is the way to go through cairo_surface_map_to_image(), where I would map the X11 surfaces to image surfaces? But how do I proceed from there, and how do I transfer data between the image surfaces?
Or am I abusing Cairo, and should I rather do it directly via X11, as in this answer: https://stackoverflow.com/a/4965236/3852630?
Thanks a lot for your help!
Cairo does not care what kind of surfaces you have. The following function should copy a rectangular area between two cairo_surface_ts. Variable names match those in your question.
// Untested, treat this as pseudo-code
void copy_some_area(cairo_surface_t *source, int xs, int ys, int ws, int hs,
                    cairo_surface_t *target, int xd, int yd)
{
    cairo_t *cr = cairo_create(target);

    /* Place the source so that its pixel (xs, ys) lands on (xd, yd). */
    cairo_set_source_surface(cr, source, xd - xs, yd - ys);

    /* "Just copy" the source, without alpha blending. */
    cairo_set_operator(cr, CAIRO_OPERATOR_SOURCE);

    /* Restrict the copy to the destination rectangle and fill it. */
    cairo_rectangle(cr, xd, yd, ws, hs);
    cairo_fill(cr);

    cairo_destroy(cr);
}
This function:
- creates a Cairo context,
- uses the source surface as the source for the context, with the offset between the two surfaces set as wanted (cairo_set_source_surface takes the coordinates at which the top-left corner of the source surface should appear),
- tells Cairo to "just copy", without any alpha blending or such,
- adds the rectangle that should be filled,
- fills the rectangle,
- cleans up by destroying the context again.
"where it should also become visible."
Uhm, perhaps you also want a call to cairo_surface_flush(target); to make really, really, really sure that cairo actually did the drawing and did not just remember it for later.
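The offset passed to cairo_set_source_surface can be sanity-checked with plain arithmetic: placing the source at (xd - xs, yd - ys) maps source pixel (xs, ys) exactly onto destination pixel (xd, yd). A tiny Python sketch of that mapping (the function name is mine, purely for illustration):

```python
def source_to_dest(px, py, xs, ys, xd, yd):
    """Where a source pixel lands after the surface is placed at (xd - xs, yd - ys)."""
    ox, oy = xd - xs, yd - ys   # the offset given to cairo_set_source_surface
    return px + ox, py + oy

# The corner (xs, ys) of the copied region lands exactly on (xd, yd) ...
assert source_to_dest(10, 20, 10, 20, 100, 50) == (100, 50)
# ... and a point (ws, hs) further lands on (xd + ws, yd + hs).
assert source_to_dest(10 + 30, 20 + 40, 10, 20, 100, 50) == (130, 90)
```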
I'm working on making my application DPI-aware using this MSDN guide, where the technique for scaling uses the X and Y logical pixels from a device context.
int _dpiX = 96, _dpiY = 96;

HDC hdc = GetDC(NULL);
if (hdc)
{
    _dpiX = GetDeviceCaps(hdc, LOGPIXELSX);
    _dpiY = GetDeviceCaps(hdc, LOGPIXELSY);
    ReleaseDC(NULL, hdc);
}
Then you can scale X and Y coordinates using
int ScaleX(int x) { return MulDiv(x, _dpiX, 96); }
int ScaleY(int y) { return MulDiv(y, _dpiY, 96); }
Is there ever a situation where GetDeviceCaps(hdc, LOGPIXELSX) and GetDeviceCaps(hdc, LOGPIXELSY) would return different numbers for a monitor? The only device I'm really concerned about is a monitor, so do I need separate ScaleX(int x) and ScaleY(int y) functions? Could I use just one Scale(int px) function? Would there be a downside to doing this?
Thanks in advance for the help.
It is theoretically possible, but I don't know of any recent monitor that uses non-square pixels. There are so many advantages to square pixels, and so much existing software assumes square pixels, that it seems unlikely for a mainstream monitor to come out with a non-square pixel mode.
In many cases, if you did have a monitor with non-square pixels, you probably could apply a transform to make it appear as though it has square pixels (e.g., by setting the mapping mode).
That said, it is common for printers to have non-square device units. Many have a much higher resolution in one dimension than in the other. Some drivers expose this resolution to the caller; others make the device appear to have square pixels. If you ever want to reuse your code for printing, I'd advise you not to conflate your horizontal and vertical scaling.
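As a sketch of the scaling arithmetic itself (assuming MulDiv's round-to-nearest behavior; pure Python, no Win32 needed):

```python
def scale(value, dpi, base_dpi=96):
    # Emulates MulDiv(value, dpi, base_dpi): multiply first, then divide,
    # rounding the result to the nearest integer so precision isn't lost.
    return (value * dpi + base_dpi // 2) // base_dpi

# At 120 DPI (125% scaling) a 100 px distance becomes 125 px:
print(scale(100, 120))  # 125
# At 144 DPI (150%) it becomes 150 px:
print(scale(100, 144))  # 150
```

If LOGPIXELSX and LOGPIXELSY are equal, one such function suffices; keeping two costs nothing and keeps the printer case open.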
Hardware pixels of LCD panels are always square. On a CRT you can have rectangular pixels, e.g. when using a 320x200 or 320x400 resolution on a 4:3 monitor (these resolutions were actually used). On an LCD you can get rectangular pixels by running the monitor at a non-native resolution - a widescreen resolution on a 5:4 monitor, or vice versa.
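The pixel aspect ratio in those CRT modes is easy to compute: it's the screen's physical aspect ratio divided by the mode's pixel-count ratio. A quick Python check (helper name is mine):

```python
def pixel_aspect(screen_w_over_h, res_x, res_y):
    """Width/height of a single pixel; 1.0 means square pixels."""
    return screen_w_over_h / (res_x / res_y)

# 320x200 on a 4:3 monitor: pixels are taller than wide (ratio 5/6)
print(round(pixel_aspect(4/3, 320, 200), 4))  # 0.8333
# 640x480 on a 4:3 monitor: square pixels
print(pixel_aspect(4/3, 640, 480))  # 1.0
```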
How can I transform the coordinates in a window so that (0,0) is the bottom left instead of the top left?
I have tried various solutions with
SetMapMode(hdc, MM_TEXT);
SetViewportExtEx(hdc, 0, -clientrect.bottom, NULL);
SetViewportOrgEx(hdc, 0, -clientrect.bottom, NULL);
SetWindowOrgEx(hdc, 0, -clientrect.bottom, NULL);
SetWindowExtEx(hdc, 0, -clientrect.bottom, NULL);
I have even tried googling for a solution, but to no avail, so I turn to you, the more experienced people on the internet.
The idea is that I'm creating a custom control for linear interpolation. I could reverse the coordinate system by putting (0,0) in the top right corner, but I want to do it properly. At the moment the linear interpolation comes out mirrored when I draw it, because I cannot get the coordinates to be bottom left.
I'm using the Win32 API, and I suspect I can skip posting the code, as the screen coordinate system is almost identical on all systems - by that I mean (0,0) is "always" top left on the screen if you are keeping to a standard 2D window and frames.
I really don't want a whole code sample (to spare you the typing), just some direction, as it seems I cannot grasp the simple concept of flipping the coordinates in the Win32 API.
Thanks, and a merry Christmas!
EDIT!
I would like to add my own answer to this question, as I used simple math to reverse the view, so to speak.
If, for example, I have the value pair (x, y) = (150, 57) and another pair (100, 75), then I use the formula height + (-1 * y), and voila, I get a proper Cartesian coordinate field :) Of course, in this example height is an undefined variable, but in my application it is 200 px.
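That formula is just height - y; a tiny Python sketch of it (for pixel-exact work on discrete rows you may want height - 1 - y instead):

```python
def flip_y(y, height):
    # Map a top-left-origin y coordinate to a bottom-left origin.
    return height - y

# With a 200 px tall control, the sample points from the question:
print(flip_y(57, 200))   # 143
print(flip_y(75, 200))   # 125
# The transform is its own inverse:
assert flip_y(flip_y(57, 200), 200) == 57
```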
According to the documentation for SetViewportOrgEx, you generally want to use it or SetWindowOrgEx, but not both. That said, you probably want the viewport origin to be (0, clientrect.bottom), not -clientrect.bottom.
Setting transforms with GDI always made me crazy. I think you're better off using GDI+. With it, you can create a matrix that describes a translation of (0, clientRect.bottom), and a scaling of (1.0, -1.0). Then you can call SetWorldTransform.
See the example at Using Coordinate Spaces and Transformations. For general information about transforms: Coordinate Spaces and Transformations.
Additional information:
I've not tried this with direct Windows API calls, but if I do the following in C# using the Graphics class (which is a wrapper around GDI+), it works:
Graphics g = GetGraphics(); // gets a canvas to draw on
g.TranslateTransform(0, clientRect.Bottom);
g.ScaleTransform(1.0f, -1.0f);
That puts the origin at the bottom left, with x increasing to the right and y increasing as you go up. If you use SetWorldTransform as I suggested, the above will work for you.
If you have to use GDI, then you'll want to use SetViewportOrgEx(0, clientRect.bottom), and then set the scaling. I don't remember how to do scaling with the old GDI functions.
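The combined flip-then-translate transform can be checked with plain arithmetic, independent of GDI/GDI+. A Python sketch (names are mine) of what the matrix described above does to a point:

```python
def to_bottom_left(x, y, client_height):
    # Scale by (1, -1), then translate by (0, client_height):
    # the same matrix as ScaleTransform(1, -1) + TranslateTransform(0, height).
    return (1 * x + 0 * y + 0,
            0 * x + (-1) * y + client_height)

# The top-left origin maps to the bottom-left corner:
print(to_bottom_left(0, 0, 480))    # (0, 480)
# A point 20 px below the top ends up 20 px below the top, measured upward:
print(to_bottom_left(10, 20, 480))  # (10, 460)
```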
Note also that the documentation for SetViewportExtEx says:
When the following mapping modes are set, calls to the SetWindowExtEx and SetViewportExtEx functions are ignored:
MM_HIENGLISH
MM_HIMETRIC
MM_LOENGLISH
MM_LOMETRIC
MM_TEXT
MM_TWIPS
I'm making a 2D sidescrolling space shooter-type game, where I need a background that can be scrolled infinitely (it is tiled or wrapped repeatedly). I'd also like to implement parallax scrolling, so perhaps have one lowest background nebula texture that barely moves, a higher one containing far-away stars that moves a little more, and the highest background containing close stars that moves a lot.
I see from google that I'd have each layer move 50% less than the layer above it, but how do I implement this in libgdx? I have a Camera that can be zoomed in and out, and in the physical 800x480 screen could show anything from 128x128 pixels (a ship) to a huge area of space featuring the textures wrapped multiple times on their edges.
How do I continuously wrap a smaller texture (say 512x512) as if it were infinitely tiled (for when the camera is zoomed right out), and then how do I layer multiple textures like these, keep them together in a suitable structure (is there one in the libgdx API?), and move them as the player's coordinates change? I've looked at the javadocs and the examples but can't find anything like this problem; apologies if it's obvious!
Hey, I am also making a parallax background and trying to get it to scroll.
There is a ParallaxTest.java in the repository, it can be found here.
This file is a standalone class, so you will need to incorporate it into your game however you want, and you will need to change the control input, since it is hooked up to use the touch screen/mouse.
This worked for me. As for the repeated background, I haven't gotten that far yet, but I think you just need some basic logic: when you are one screen away from the end, change the position of the first few screens to line up at the end.
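The wrap-around bookkeeping described above boils down to modular arithmetic. A Python sketch (the factor values and names are mine, just for illustration) of where each parallax layer's texture should start drawing:

```python
def layer_draw_x(camera_x, parallax_factor, texture_width):
    """X at which to draw the first copy of a layer's texture.

    Each layer scrolls at its own fraction of the camera speed; the
    modulo makes the texture repeat forever without position drift.
    """
    scrolled = camera_x * parallax_factor
    return -(scrolled % texture_width)

# A 512 px texture, camera at x = 1030, foreground layer (factor 1.0):
print(layer_draw_x(1030, 1.0, 512))   # -6.0
# The far background (factor 0.25) has barely moved:
print(layer_draw_x(1030, 0.25, 512))  # -257.5
```

Drawing the texture at that x and again at x + texture_width always covers the screen.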
I don't have much more to say about parallax scrolling than PFG already did. There is indeed an example in the repository under the test folder, and several explanations around the web. I liked this one.
The matter of the background is really easy to solve. This and other related problems can be approached using modular arithmetic. I won't go into the details, because once shown it is very easy to understand.
Imagine that you want to show a compass on your screen. You have a 1024x16 texture representing the cardinal points; basically, all you have is a strip. Leaving aside considerations about the real orientation and such, you have to render it.
Your viewport is 300x400, for example, and you want 200 px of the texture on screen (to make it more interesting). You can render it perfectly with a single region until you reach position (1024-200) = 824. Past this position there is clearly no more texture, but since it is a compass, it obviously has to wrap around once you reach the end. So this is the answer: another texture region will do the trick. The range 825-1023 has to be represented by a second region. The second region will have a size of (1024-pos) for every pos > 824 && pos < 1024.
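The two-region split just described can be sketched independently of libgdx (pure Python; strip width 1024 and a 200 px window, as in the example):

```python
def compass_regions(pos, strip_w=1024, view_w=200):
    """Return the (x, width) texture regions needed at position pos."""
    if pos <= strip_w - view_w:
        return [(pos, view_w)]                  # one region is enough
    first = strip_w - pos                       # what's left of the strip
    return [(pos, first), (0, view_w - first)]  # wrap back to the start

print(compass_regions(100))  # [(100, 200)]
print(compass_regions(900))  # [(900, 124), (0, 76)]
```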
This code is intended to work as a real compass example. It is very dirty, since it works with relative positions all the time, due to the conversion between the range (0-3.6) and (0-1024).
spriteBatch.begin();

if (compassorientation < 0)
    compassorientation = (float) (3.6 - compassorientation % 3.6);
else
    compassorientation = (float) (compassorientation % 3.6);

if (compassorientation < ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 200, 16);
    spriteBatch.draw(compass1, 0,
        (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2),
        Gdx.graphics.getWidth(), 32 * (float) 1.2);
}
else if (compassorientation > ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0,
        1024 - (int) (compassorientation / 3.6 * 1024), 16);
    spriteBatch.draw(compass1, 0,
        (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2),
        compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), 32 * (float) 1.2);

    compass2.setRegion(0, 0, 200 - compass1.getRegionWidth(), 16);
    spriteBatch.draw(compass2,
        compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(),
        (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2),
        Gdx.graphics.getWidth() - (compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth()),
        32 * (float) 1.2);
}

spriteBatch.end();
You can use the setWrap function as below:
Texture texture = new Texture(Gdx.files.internal("images/background.png"));
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
It will draw the background repeatedly! Hope this helps!
Go to where you initialize the Texture for the object, and beneath that add:

YourTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);

where YourTexture is the texture that you want to parallax scroll.

In your render method, add this code:

batch.draw(YourTexture, 0, 0, 0, srcy,
           Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
srcy += 10;

This will not compile until you declare a variable called srcy; it is nothing too fancy:

int srcy;
I want to write a paint program in the style of MS Paint.
For painting things on screen when the user moves the mouse, I have to wait for mouse move events and draw on the screen whenever I receive one. Apparently, mouse move events are not sent very often, so I have to interpolate the mouse movement by drawing a line between the current mouse position and the previous one. In pseudocode, this looks something like this:
var positionOld = null

def handleMouseMove(positionNew):
    if mouse.button.down:
        if positionOld == null:
            positionOld = positionNew
        screen.draw.line(positionOld, positionNew)
        positionOld = positionNew
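The pseudocode above, as runnable Python; screen.draw.line is replaced by appending segments to a list so the logic can be exercised without a GUI (all names are hypothetical stand-ins):

```python
class PaintState:
    def __init__(self):
        self.position_old = None
        self.segments = []       # stand-in for screen.draw.line
        self.button_down = False

    def handle_mouse_move(self, position_new):
        if self.button_down:
            if self.position_old is None:
                self.position_old = position_new
            self.segments.append((self.position_old, position_new))
            self.position_old = position_new

state = PaintState()
state.button_down = True
for p in [(0, 0), (3, 4), (10, 4)]:
    state.handle_mouse_move(p)
print(state.segments)
# [((0, 0), (0, 0)), ((0, 0), (3, 4)), ((3, 4), (10, 4))]
```

Note the first segment is degenerate (a point), exactly as in the pseudocode: the first event only establishes the previous position.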
Now my question: interpolating with straight line segments looks too jagged for my taste, can you recommend a better interpolation method? What method do GIMP or Adobe Photoshop implement?
Alternatively, is there a way to increase the frequency of the mouse move events that I receive? The GUI framework I'm using is wxWidgets.
(Programming language: Haskell, but that's irrelevant here)
EDIT: Clarification: I want something that looks smoother than straight line segments, see the picture (original size):
EDIT2: The code I'm using looks like this:
-- create bitmap and derive drawing context
im     <- imageCreateSized (sy 800 600)
bitmap <- bitmapCreateFromImage im (-1)   -- wxBitmap
dc     <- memoryDCCreate                  -- wxMemoryDC
memoryDCSelectObject dc bitmap
...

-- handle mouse move
onMouse ... sw (MouseLeftDrag posNew _) = do
    ...
    line dc posOld posNew [ color    := white
                          , penJoin  := JoinRound
                          , penWidth := 2 ]
    repaint sw   -- a wxScrolledWindow

-- handle paint event
onPaint ... = do
    ...
    -- draw bitmap on the wxScrolledWindow
    drawBitmap dc_sw bitmap pointZero False []
which might make a difference. Maybe my choice of wx classes is why I'm getting a rather low frequency of mouse move events.
Live demos
version 1 - more smooth, but more changing while you draw: http://jsfiddle.net/Ub7RV/1/
version 2 - less smooth but more stable: http://jsfiddle.net/Ub7RV/2/
The way to go is spline interpolation of the points: store the coordinates of the points and then compute a spline through them.
I took the solution demonstrated here and modified it. They computed the spline after you stop drawing; I modified the code so that it draws immediately. You might notice, though, that the spline changes while you are drawing. For a real application, you will probably need two canvases: one with the old drawing, and another with just the current stroke, which will change constantly until your mouse stops.
Version 1 uses spline simplification - it deletes points that are close to the line - which results in smoother splines but produces a less "stable" result. Version 2 uses all points on the line and produces a much more stable (and computationally less expensive) solution.
You can make them really smooth using splines:
http://freespace.virgin.net/hugo.elias/graphics/x_bezier.htm
But you'll have to delay the drawing of each line segment until one frame later, so that you have the start and end points, plus the next and previous points available for the calculation.
As I see it, the problem of the jagged edge of a freehand curve when the mouse is moved very fast is not solved! In my opinion there are two ways to work around it. One is the polling frequency of the mousemove event in the system, i.e. using a different mouse driver or similar. The second way is math: using some kind of algorithm to accurately bend the straight line between two points when the mouse event is polled. For a clear view, compare how a freehand line is drawn in Photoshop versus MS Paint. Thanks, folks ;)
I think you need to look into the Device Context documentation for wxWidgets.
I have some code that draws like this:
// screenArea is a wxStaticBitmap
int startx, starty;

void OnMouseDown(wxMouseEvent& event)
{
    screenArea->CaptureMouse();
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}

void OnMouseMove(wxMouseEvent& event)
{
    if (event.Dragging() && event.LeftIsDown())
    {
        wxClientDC dc(screenArea);
        dc.SetPen(*wxBLACK_PEN);
        dc.DrawLine(startx, starty, event.GetX(), event.GetY());
    }
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}
I know it's C++ but you said the language was irrelevant, so I hope it helps anyway.
This lets me do this:
which seems significantly smoother than your example.
Interpolating mouse movements with line segments is fine; GIMP does it that way too, as the following screenshot from a very fast mouse movement shows:
So, smoothness comes from a high frequency of mouse move events. wxWidgets can do that, as the example code for a related question demonstrates.
The problem is in your code, Heinrich: drawing into a large bitmap first and then copying the whole bitmap to the screen is not cheap! To estimate how efficient you need to be, compare your problem to video games: a smooth rate of 30 mouse move events per second corresponds to 30 fps. Copying a double buffer is no problem for modern machines, but wxHaskell is likely not optimized for video games, so it's not surprising that you experience some jitter.
The solution is to draw only as much as necessary, i.e. just the lines, directly on the screen, for example as shown in the link above.
I agree with harviz - the problem isn't solved. It should be solved at the operating system level, by recording mouse movements in a priority thread, but no operating system I know of does that. However, the app developer can also work around this operating system limitation by interpolating better than linearly.
Since mouse movement events don't always come fast enough, linear interpolation isn't always enough.
I experimented a little bit with the spline idea brought up by Rocketmagnet.
Instead of putting a line between two points A and D, look at the point P preceding A and use a cubic spline with the following control points B = A + v' and C = D - w', where
v = A - P,
w = D - A,
w' = w / 4 and
v' = v * |w| / |v| / 4.
This means we arrive at the end point at the same angle the line interpolation would, but leave the starting point at the same angle the previous segment came in, making the joint smooth. We use the length of the segment for both control point distances to make the size of the bend fit its proportion.
The following picture shows the result with very few data points (indicated in grey).
The sequence starts at the top left and ends in the middle.
There is still some unevenness here, which might be alleviated by using both the previous and the next point to adjust both angles, but that would also mean drawing one point fewer than one has got. I find this result satisfactory already, so I didn't try.
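The control-point construction above can be written out directly. A Python sketch (the vector helpers are mine) computing B and C and evaluating the resulting cubic Bézier:

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def mul(a, s): return (a[0] * s, a[1] * s)
def norm(a):   return math.hypot(a[0], a[1])

def control_points(P, A, D):
    """B = A + v', C = D - w' with v' = v * |w|/|v| / 4 and w' = w / 4."""
    v = sub(A, P)
    w = sub(D, A)
    w1 = mul(w, 0.25)
    v1 = mul(v, norm(w) / norm(v) / 4)
    return add(A, v1), sub(D, w1)

def bezier(A, B, C, D, t):
    u = 1 - t
    return (u**3 * A[0] + 3*u**2*t * B[0] + 3*u*t**2 * C[0] + t**3 * D[0],
            u**3 * A[1] + 3*u**2*t * B[1] + 3*u*t**2 * C[1] + t**3 * D[1])

P, A, D = (0, 0), (10, 0), (20, 10)
B, C = control_points(P, A, D)
# The curve still starts at A and ends at D:
assert bezier(A, B, C, D, 0.0) == A
assert bezier(A, B, C, D, 1.0) == D
# B lies on the line P->A extended, so the stroke leaves A along the old direction:
print(B[1])  # 0.0
```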
I'm working on a Haskell program for playing spatial games: I have a graph of a bunch of "individuals" playing the Prisoner's Dilemma, but only with their immediate neighbors, and copying the strategies of the people who do best.
I've reached a point where I need to draw an image of the world, and this is where I've hit problems. Two of the possible geometries are easy: if people have four or eight neighbors each, then I represent each one as a filled square (with color corresponding to strategy) and tile the plane with these. However, I also have a situation where people have six neighbors (hexagons) or three neighbors (triangles).
My question, then, is: what's a good Haskell library for creating images and drawing shapes on them? I'd prefer that it create PNGs, but I'm not incredibly picky. I was originally using Graphics.GD, but it only exports bindings to functions for drawing points, lines, arcs, ellipses, and non-rotated rectangles, which is not sufficient for my purposes (unless I want to draw hexagons pixel by pixel*). I looked into using foreign import, but it's proving a bit of a hassle (partly because the polygon-drawing function requires an array of gdPoint structs), and given that my requirements may grow, it would be nice to use an in-Haskell solution and not have to muck about with the FFI (though if push comes to shove, I'm willing to do that). Any suggestions?
* That is also an option, actually; any tips on how to do that would also be appreciated, though I think a library would be easier.
EDIT: Thank you all so much for your suggestions. Unfortunately, I wasn't able to get all of gtk2hs's required libraries to build, which ruled out a lot of solutions. For a variety of reasons, after I tried all your answers, failed to install a number of libraries and found that others could not do what I wanted, I ended up deciding to just export more of an FFI for libgd and used that instead.
Diagrams looks way cool, but if you want to avoid committing to it and stay super lightweight, you could generate SVG directly. Stealing from Conrad Barski at http://www.lisperati.com/haskell/:
type Point = (Float,Float)
type Color = (Int,Int,Int)
type Polygon = [Point]
writePoint :: Point -> String
writePoint (x,y) = (show x)++","++(show y)++" "
writePolygon :: (Color,Polygon) -> String
writePolygon ((r,g,b),p) = "<polygon points=\""++(concatMap writePoint p)++"\" style=\"fill:#cccccc;stroke:rgb("++(show r)++","++(show g)++","++(show b)++");stroke-width:2\"/>"
writePolygons :: [(Color,Polygon)] -> String
writePolygons p = "<svg xmlns=\"http://www.w3.org/2000/svg\">"++(concatMap writePolygon p)++"</svg>"
colorize :: Color -> [Polygon] -> [(Color,Polygon)]
colorize = zip.repeat
rainbow@[red,green,blue,yellow,purple,teal] = map colorize [(255,0,0),(0,255,0),(0,0,255),(255,255,0),(255,0,255),(0,255,255)]
t0 = writeFile "tut0.svg" $ writePolygons (blue [[(100,100),(200,100),(200,200),(100,200)],[(200,200),(300,200),(300,300),(200,300)]])
hexagon c r = translateTo c basicHexagon where
    basicHexagon = top ++ (negate r, 0) : bottom
    top    = [(r, 0), (r * cos 1, r * sin 1), (negate (r * cos 1), r * sin 1)]
    bottom = map (\(x, y) -> (x, negate y)) (reverse top)

translateTo (x, y) poly = map f poly where f (a, b) = (a + x, b + y)
t1 = writeFile "t1.svg" $ writePolygons (blue [hexagon (100,100) 50] )
hexField r n m = let
        mkHex n = hexagon (1.5 * n * (r * 2), r * 2) r
        row n   = map mkHex [1..n]
        aRow    = row n
    in concat [ map (offset (r * x)) aRow | x <- [1..m] ]

offset r polys = map (oh r) polys where
    oh r pt@(x, y) = (x + 1.5 * r, y + r * sin 1)
t2 = writeFile "t2.svg" $ writePolygons (blue $ hexField 50 4 5 )
Run t2 and load the file into Firefox or some other program that supports SVG.
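A note on the vertex math: the Haskell code uses cos 1 and sin 1 (radians), which are close to but not exactly cos 60° = 0.5 and sin 60° ≈ 0.866, so its hexagons are slightly irregular. Exact regular-hexagon vertices can be generated from angles in 60° steps; a Python sketch:

```python
import math

def hexagon(cx, cy, r):
    """Vertices of a regular hexagon of circumradius r centred at (cx, cy)."""
    return [(cx + r * math.cos(math.radians(60 * k)),
             cy + r * math.sin(math.radians(60 * k)))
            for k in range(6)]

verts = hexagon(0, 0, 50)
print(len(verts))   # 6
print(verts[0])     # (50.0, 0.0)
# Opposite vertices are symmetric through the centre:
assert all(abs(verts[k][0] + verts[k + 3][0]) < 1e-9 for k in range(3))
```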
t2.svg exported as PNG: http://img30.imageshack.us/img30/2245/93715707.png
I've used HOpenGL before, but the problem is that (as far as I can tell) it can't render to a file, but only to the screen; the same (again, as far as I can tell) seems to be true of SDL and Wx. I will look into Cairo, though.
For some reason I cannot reply to that post, so I have to quote it. You're incorrect about GL and SDL: you can make an off-screen surface/buffer, or render to texture. Those libraries don't need such a function (and it wouldn't make much sense either), because you can do it yourself quite easily by accessing the pixels in the buffer and writing them out yourself; even with the screen buffers you can access the pixel data.
Just the other day I showed somebody how to do this with the Haskell SDL bindings:
http://hpaste.org/fastcgi/hpaste.fcgi/view?id=25047
Use a library that can write out .PNG files; it will most likely take a raw pointer to a pixel buffer, which you can get from SDL/GL, or you can copy the data into whatever format the library needs.
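The "raw pixel buffer → PNG" step needs surprisingly little machinery; as an illustration of the idea (in Python rather than Haskell, using only the stdlib's zlib and struct), here is a minimal sketch that encodes an RGB buffer as a valid PNG:

```python
import struct, zlib

def write_png(buf, width, height):
    """Encode a raw RGB buffer (rows of width*3 bytes, top to bottom) as PNG bytes."""
    def chunk(tag, data):
        return (struct.pack(">I", len(data)) + tag + data +
                struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

    # Each scanline is prefixed with filter type 0 (no filtering).
    raw = b"".join(b"\x00" + buf[y * width * 3:(y + 1) * width * 3]
                   for y in range(height))
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)  # 8-bit RGB
    return (b"\x89PNG\r\n\x1a\n" +
            chunk(b"IHDR", ihdr) +
            chunk(b"IDAT", zlib.compress(raw)) +
            chunk(b"IEND", b""))

# A 2x2 image: red, green / blue, white
pixels = bytes([255, 0, 0, 0, 255, 0, 0, 0, 255, 255, 255, 255])
png = write_png(pixels, 2, 2)
print(png[:8])  # b'\x89PNG\r\n\x1a\n'
```

A dedicated library (DevIL, Cairo, etc.) is still the saner choice for anything beyond a quick dump, but this shows the pixel buffer really is all you need.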
I just found a Haskell binding for the DevIL library, which can output .PNG files. Check out the function called writeImageFromPtr.
Cairo is a good bet if you want to generate PNGs. Wumpus also looks promising, though I have never used it. If you just need to see it on the screen, graphics-drawingcombinators is an easy interface to OpenGL that will do what you need in a few lines (see example.hs in the distribution).
Check out Diagrams:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/diagrams
The examples are quite nice.
what's a good Haskell library for creating images and drawing shapes on them?
You have quite a few options, to my mind the following have all been used for games:
haskell-opengl
haskell-sdl
haskell-wx
haskell-gtk-cairo
Those are the most common, and you might just choose based on features/familiarity.