Screen capture in Haskell? - windows

Is it possible to capture the screen (or a window) using Haskell in a Windows environment? (i.e., taking a screenshot every few minutes or so). If so, how would one go about doing this (again, in Haskell, for a Windows environment)?
More info:
I'm a beginner to Haskell. A friend wants to cut development costs by having me whip together some programs for his accounting firm, but he insists that I use Haskell. He wants a tool that will allow him to monitor the desktops of different Windows XP workstations. It would likely have to be a client/server type application. He only needs to monitor desktop activity, which is why he doesn't want any of the expensive management software that is already on the market. I have sifted through lots of documentation and only got as far as finding wxHaskell, but I couldn't find much on capturing the screen, especially for Windows environments.

The approach Tikhon mentioned is correct. Just to add some code to the answer he gave:
import Graphics.Win32.Window
import Graphics.Win32.GDI.Bitmap
import Graphics.Win32.GDI.HDC
import Graphics.Win32.GDI.Graphics2D
main = do
  desktop <- getDesktopWindow                       -- Grab the HWND of the desktop; GetDC 0, GetDC NULL etc. all work too
  hdc <- getWindowDC (Just desktop)                 -- Get the DC handle of the desktop
  (x, y, r, b) <- getWindowRect desktop             -- Size of the desktop as (left, top, right, bottom), so we know how big the destination bitmap must be
  newDC <- createCompatibleDC (Just hdc)            -- Create a new DC to hold the copied image; it must be compatible with the source DC
  let width = r - x                                 -- Calculate the width
  let height = b - y                                -- Calculate the height
  newBmp <- createCompatibleBitmap hdc width height -- Create a new bitmap compatible with the source DC
  selBmp <- selectBitmap newDC newBmp               -- Select the bitmap into the DC; drawing on the DC now draws on the bitmap as well
  bitBlt newDC 0 0 width height hdc 0 0 sRCCOPY     -- Use SRCCOPY to copy the desktop DC into the new DC
  createBMPFile "Foo.bmp" newBmp newDC              -- Write the new bitmap out to Foo.bmp
  putStrLn "Bitmap image copied"                    -- Debug message
  deleteBitmap selBmp                               -- Clean up the selected bitmap
  deleteBitmap newBmp                               -- Clean up the new bitmap
  deleteDC newDC                                    -- Clean up the DC we created
This was just quickly put together, but it saves a screenshot of the desktop to a file named Foo.bmp.
P.S. To whoever wrote the Win32 library, nicely done :)

You can also do it in a cross-platform way with GTK.
That would not be much different from doing it with C: Taking a screenshot with C/GTK.
{-# LANGUAGE OverloadedStrings #-}
import Graphics.UI.Gtk
import System.Environment
import Data.Text as T
main :: IO ()
main = do
  [fileName] <- getArgs
  _ <- initGUI
  Just screen <- screenGetDefault
  window <- screenGetRootWindow screen
  size <- drawableGetSize window
  origin <- drawWindowGetOrigin window
  Just pxbuf <-
    pixbufGetFromDrawable
      window
      ((uncurry . uncurry Rectangle) origin size)
  pixbufSave pxbuf fileName "png" ([] :: [(T.Text, T.Text)])

You should be able to do this with the Win32 API. Based on What is the best way to take screenshots of a Window with C++ in Windows?, you need to get the context of the window and then copy the image from it using GetWindowDC and BitBlt respectively.
Looking around the Haskell Win32 API documentation, there is a getWindowDC function in Graphics.Win32.Window. This returns an IO HDC. There is a bitBlt function in Graphics.Win32.GDI.Graphics2D. This function takes an HDC along with a bunch of INTs which presumably correspond to the arguments it takes in C++.
Unfortunately, I don't have a Windows machine handy, so I can't write the actual code. You'll have to figure out how to use the Win32 API functions yourself, which might be a bit of a bother.
When you do, it would be great if you factored it into a library and put it up on Hackage--Windows does not usually get much love in Haskell land (as I myself show :P), so I'm sure other Windows programmers would be grateful for an easy way to take screenshots.

WinAPI: How do I draw a rectangle to a specific window handle?

I'm using a wrapper library in GMS2 (GameMaker) that was made back in the GM6 days. Someone was able to wrap the majority of the Win32 API for use in GM6-8. There is only one odd instance where the WinAPI system seems to mess up when drawing the controls to the main application window.
The desired goal is to draw an image to a child window and draw a grid defining its splitting according to user input (e.g. 16x16), and have the user select squares via mouse click + dragging over the boxes.
Unfortunately I have little to no experience with the Win32 API, so I'm a bit lost as to where to start.
Looking over the documentation, it looks like he kept the majority of the DLL's script names mimicking the format used when calling from C++ or C (just my assumption).
From his documentation he has things like a "Drawing System", which contains things like "Move Item", "Add Line", "Add Graphic Buffer" etc., and then other graphics buffer functions. But then there are the "Draw" functions, which have things like "Draw Fill Rect", "DrawSelectObj" etc. He doesn't really provide examples, so I'm unsure how to use these things together to get my desired result. What is the difference between a drawing system and a draw function? Do I have to use them in conjunction, along with the graphics buffer?
Can someone point me in the right direction as to the necessary steps to get it done? An example without code, just the equivalent functions, will suffice; I just need to know which functions to use and how to later bind it to the child window.
An example from his demo looks something like this:
GbGradient2 = API_GB_Create (105,105); //Graphics Buffer
DcGradient2 = API_GB_GetDC (GbGradient2);
API_Draw_Gradient (DcGradient2,0,0,105,105,0,c_yellow,c_lime);
BrGradient2 = API_Draw_CreatePatternBrush (API_GB_GetBitmap (GbGradient2));
API_Draw_Gradient (DcGradient2,0,0,105,105,0,c_red,65535);
BrGradient3 = API_Draw_CreatePatternBrush (API_GB_GetBitmap (GbGradient2));
hRectangle = API_DS_AddRectangle (2,5,5,105,105); // Adds a rectangle(Drawing System)
hEllipse = API_DS_AddEllipse (2,5,5,105,105);
hNoPen = API_Draw_CreatePen (PS_NULL,0,0);
API_DS_SetItemBrush (hRectangle,BrGradient2); // Sets the brush
API_DS_SetItemBrush (hEllipse,BrGradient3);
API_DS_SetItemPen (hRectangle,hNoPen); // Sets the pen
API_DS_SetItemPen (hEllipse,hNoPen);
API_Draw_Gradient (GbGradient2,0,0,16,16,0,c_yellow,c_lime);
Looking at it a little more, it looks like the draw functions are linked to GDI somehow.
Since GMS2 is a cross-platform tool, its Windows-only functionality has been removed.
You can make a nice GUI for that purpose using GMS2 objects; as you have a little experience with the Win32 API, this will be easier than that big stuffy coding.
Here are some tips:
creating a window object with a rectangle sprite
creating UI objects in the create event of the above object
adding some code to the global mouse event

AccessibleObjectFromPoint and per-monitor DPI

I'm using accessibility with the AccessibleObjectFromPoint function, and I'd like it to work correctly on a per-monitor DPI environment. Unfortunately, I can't get it to work. I tried many things, and the situation for now is:
My app is marked as per-monitor-DPI-aware in the manifest. (True/PM)
I use GetCursorPos and then AccessibleObjectFromPoint.
How can the problem be reproduced:
Have two monitors, one with 100% DPI, the other with 125%.
Run Chrome on the 125% monitor.
Use AccessibleObjectFromPoint on one of the tab names, it won't work.
It works with some apps (DPI-aware, it seems, like explorer), but doesn't work with others. I tried several relevant functions, such as GetPhysicalCursorPos and PhysicalToLogicalPointForPerMonitorDPI, but nothing works.
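In essence, my code boils down to the standard MSAA calls, something like this (a simplified sketch; error handling omitted and COM assumed to be initialized already):
#include <windows.h>
#include <oleacc.h>
#pragma comment(lib, "oleacc.lib")

// Simplified sketch of the calls described above.
void InspectUnderCursor()
{
    POINT pt;
    GetCursorPos(&pt);                                  // cursor position in my process's coordinate space

    IAccessible* pAcc = NULL;
    VARIANT varChild;
    if (SUCCEEDED(AccessibleObjectFromPoint(pt, &pAcc, &varChild)) && pAcc != NULL)
    {
        BSTR name = NULL;
        pAcc->get_accName(varChild, &name);             // e.g. a tab name in Chrome
        // ... use the element ...
        if (name != NULL) SysFreeString(name);
        pAcc->Release();
    }
}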
It's worth noting that Microsoft's inspect.exe works as expected.
I’ve been struggling with this exact same problem for several weeks and can now tell you my findings. Unfortunately I can’t give you more than a hint of code, because the project I am working on is proprietary.
The issue started at Windows 8.1. The problem did not exist on Windows 7 or Vista, because AccessibleObjectFromPoint always used raw physical coordinates, as documented here: https://msdn.microsoft.com/en-us/library/windows/desktop/dd317984(v=vs.85).aspx .
“Microsoft Active Accessibility does not use logical coordinates. The following methods and functions either return physical coordinates or take them as parameters.” This has not been true since Windows 8.1.
AccessibleObjectFromPoint now uses a flawed calculation that cannot always find the correct window for reasons similar to my question here: High DPI scaling, mouse hooks and WindowFromPoint .
My findings lead me to one conclusion: The API is broken. This does not mean it is not possible though.
Possible solutions I have partially tested that seem to work follow.
Prerequisites are that you
1/. Make your process per monitor DPI aware, NOT USING THE MANIFEST (more on that later).
2/. Determine the hWnd of the window you want to query (WindowFromPoint() variants)
3/. Determine the monitor DPI of the queried hWnd (one way to do this is sketched just after this list)
4/. Determine the DPI of your process
5/. Determine the DPI of the queried hWnd
6/. Determine the monitor origin and offset for the queried hWnd (MonitorFromWindow() and GetMonitorInfo() )
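For steps 3 and 6, one possible way looks like the helper below. This is only a sketch: the helper name is made up, and GetDpiForMonitor is the Windows 8.1+ shell-scaling API, which is my own choice of call rather than something mandated here.
#include <windows.h>
#include <shellscalingapi.h>                  // GetDpiForMonitor (Windows 8.1+)
#pragma comment(lib, "Shcore.lib")

// Hypothetical helper: effective DPI of the monitor hosting hWnd (96 = 100%).
UINT MonitorDpiForWindow(HWND hWnd)
{
    HMONITOR mon = MonitorFromWindow(hWnd, MONITOR_DEFAULTTONEAREST);
    UINT dpiX = 96, dpiY = 96;
    if (FAILED(GetDpiForMonitor(mon, MDT_EFFECTIVE_DPI, &dpiX, &dpiY)))
        dpiX = 96;                            // fall back to 100% if the query fails
    return dpiX;
}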
What comes next depends on your platform.
Windows 10.0.14393+
Write a function that finds the IAccessible (AccessibleObjectFromWindow() ) from the top level window, and then recursively call IAccessible::accHitTest until you reach the bottom-most IAccessible and perhaps ChildID data. Return that as if you would call AccessibleObjectFromPoint.
To call it successfully, you will need to scale the (x,y) co-ordinates into the scale system of the queried hWnd, using the DPIs and co-ordinates fetched in the list above. Watch out for systems where monitors are not the same size or if monitors are partially offset, or above and below.
And now for the important part for 10.0.14393 – Set your thread to the same DPI_AWARENESS_CONTEXT of the hWnd you are querying. Now call your new function. Now revert your thread to monitor DPI aware, and voila, it works, even if the window is not maximised. This is why you must not use the manifest.
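Purely to illustrate the shape of that recursion (this is a sketch, not the proprietary code): pRoot would come from AccessibleObjectFromWindow on the top-level window (OBJID_CLIENT is my assumption), and the point must already be scaled into the queried window's coordinate system as described above.
#include <windows.h>
#include <oleacc.h>
#pragma comment(lib, "oleacc.lib")

// Descend from a top-level IAccessible with accHitTest until nothing deeper
// is returned. The caller must Release *ppAcc when done.
HRESULT DeepestAccessibleFromPoint(IAccessible* pRoot, long x, long y,
                                   IAccessible** ppAcc, VARIANT* pvarChild)
{
    IAccessible* pCur = pRoot;
    pCur->AddRef();

    for (;;)
    {
        VARIANT varHit;
        VariantInit(&varHit);
        if (FAILED(pCur->accHitTest(x, y, &varHit)) || varHit.vt == VT_EMPTY)
            break;                                    // point is not inside this object

        if (varHit.vt == VT_DISPATCH && varHit.pdispVal != NULL)
        {                                             // full child object: descend into it
            IAccessible* pChild = NULL;
            varHit.pdispVal->QueryInterface(IID_IAccessible, (void**)&pChild);
            VariantClear(&varHit);
            if (pChild == NULL)
                break;
            pCur->Release();
            pCur = pChild;
            continue;
        }

        *ppAcc = pCur;                                // VT_I4: CHILDID_SELF or a simple element id
        *pvarChild = varHit;
        return S_OK;
    }

    *ppAcc = pCur;                                    // deepest full object found
    VariantInit(pvarChild);
    pvarChild->vt = VT_I4;
    pvarChild->lVal = CHILDID_SELF;
    return S_OK;
}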
If you are on Windows 8.1 to 10.0.10586 you have a tougher task.
Instead of calling accHitTest as above, you have to recursively call AccessibleChildren and iterate over the children, calling IAccessible::accLocation on each one to determine whether your test point lies within it. This is tricky and starts to get really messy when you get to, e.g., combo boxes in products like Office, which is only system DPI aware.
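Again, only as a sketch of that recursion: this version ignores simple VT_I4 child elements and takes the first hit, which real code cannot afford to do.
#include <windows.h>
#include <oleacc.h>
#include <vector>
#pragma comment(lib, "oleacc.lib")

// Return (with a reference the caller must Release) the deepest full
// IAccessible whose accLocation rectangle contains the screen point (x, y).
IAccessible* FindByLocation(IAccessible* pAcc, long x, long y)
{
    long count = 0, fetched = 0;
    std::vector<VARIANT> kids;

    if (SUCCEEDED(pAcc->get_accChildCount(&count)) && count > 0)
    {
        kids.resize(count);
        if (FAILED(AccessibleChildren(pAcc, 0, count, kids.data(), &fetched)))
            fetched = 0;
    }

    for (long i = 0; i < fetched; ++i)
    {
        if (kids[i].vt != VT_DISPATCH || kids[i].pdispVal == NULL)
            continue;                                             // skip simple elements in this sketch

        IAccessible* pChild = NULL;
        kids[i].pdispVal->QueryInterface(IID_IAccessible, (void**)&pChild);
        if (pChild == NULL)
            continue;

        long l = 0, t = 0, w = 0, h = 0;
        VARIANT self;
        VariantInit(&self);
        self.vt = VT_I4;
        self.lVal = CHILDID_SELF;

        if (SUCCEEDED(pChild->accLocation(&l, &t, &w, &h, self)) &&
            x >= l && x < l + w && y >= t && y < t + h)
        {
            IAccessible* pDeepest = FindByLocation(pChild, x, y); // descend into the matching child
            pChild->Release();
            for (long j = 0; j < fetched; ++j) VariantClear(&kids[j]);
            return pDeepest;                                      // first hit wins in this sketch
        }
        pChild->Release();
    }

    for (long j = 0; j < fetched; ++j) VariantClear(&kids[j]);
    pAcc->AddRef();                                               // no child hit: this object itself
    return pAcc;
}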
That’s all I can give you for now.
To do it successfully across multiple Windows versions (mine has to work from Vista to Windows-current), the only really safe bet is to write a wrapper DLL in C++ that can determine at runtime which OS it is on and change code path accordingly. The reason you want to do it in C++ is to avoid passing IAccessible objects across the .NET/unmanaged marshalling boundary. You can call IUnknown::Release on objects you don’t need to return on the unmanaged side. You can do it all in .NET, but it will be slow.
P.S. Also watch out for Chrome returning infinite trees where parents are children of their parents; some sanity checks are required. Also, Chrome does not return accRole correctly, and will give you HTML tags instead of VT_I4.
Good luck
A fairly workable solution is as follows, in your recursive IAccessible function:
Use GetWindowRect to capture the physical right edge of the main window.
Use accChild.accLocation in a loop to capture the left and width of each object.
Add this simple test:
If l > rct2r.Right And l > arrIACC.x2 Then
arrIACC.x2 = l + w
End If
If the DPI is 100%, no object lies further out than the physical right edge.
If the DPI is > 100%, the close button is offset by some x pixels.
Use the difference to rescale all the width values you use:
arrIACC.w1 = CInt(((-rct2r.Left + arrIACC.w1) / arrIACC.x2) * rct2r.Right)
This solution is from an Excel plugin I have developed; I was testing the width of the quick access toolbar (QAT), and my result was within ±5 pixels regardless of DPI.

How to see image data using OpenCV in Visual Studio?

I wrote an OpenCV project in VS2010 and the results were not what I expected, so I ran the debugger to see where the problem is. When I wanted to see the data inside the loaded image, I didn't know how to do it. So if I want to see the data inside my images, what should I do?
It is pretty simple in MATLAB to look at the different channels of an image, i.e.
a = imread('test.jpg');
p1 = a(:,:,1)
p2 = a(:,:,2)
.
.
In OpenCV I wrote the same thing, but I don't know how to see all the elements at once, like in MATLAB.
Mat a = imread("test.jpg");
vector<Mat> planes;
split(a, planes);
Mat T1 = planes[0];
// How can I see the data inside T1 when debugging the code?
I think this is what you are looking for - it's a great Visual Studio add-on
https://bitbucket.org/sergiu/opencv-visualizers
Just download the installer, make sure VS is closed, run it, re-open VS and voila! Now, when you point to an OpenCV data structure, all kinds of nice info are shown.
Limitations: I saw some problems with multichannel images (it only shows the first channel) and it also has trouble displaying large matrices. If you want to see raw data in a big matrix, you can use the good old VS trick with debug variables: stop at a breakpoint, go to the Watch tab, and write there
((float*)myMat.data),10
Where float is the matrix type, myMat is your matrix, and 10 is the number of values you want to print. It will display the first 10 values at the memory location of myMat.data. If you do not correctly choose the data type, you'll see garbage. In my example, myMat is of type cv::Mat.
And never forget the power of visualizers:
imshow("Image", myMat);
That works whenever your data fits into an image. You can also use the contrib module's colormaps to enhance the visualization.
I can't actually believe that nobody suggested Image Watch yet. It's the most amazing add-in ever. It shows you a view with all your Mat variables (images (gray and color), matrices) while debugging, there's useful stuff like zooming or contrast-stretching and you can even apply more complex functions directly in the plugin in real-time. It makes debugging of any kind of image operations a breeze and it's immensely helpful if you do calculations and linear algebra stuff with your cv::Mat matrices.
I recommend to use a NativeViewer extension. It actually displays the content of an image in a preview window, not only the properly formatted info.
If you don't want to use a plug-in or extension to Visual Studio, you can access the elements one by one in the debugging watch tab by typing this:
T1.data[T1.step.buf[0]*i + T1.step.buf[1]*j];
where i is the row you want to look at and j is the column.
After installing Image Watch, you can use this expression in the Watch window:
(imagesLoc._Myfirst)[0]
where the index selects which image in the vector to view.
You can use the Immediate window and an extension method like this one:
/// <summary>
/// Displays image
/// </summary>
public static void Display(this Mat m, Rect rect = default, string windowName = "")
{
    if (string.IsNullOrEmpty(windowName))
    {
        windowName = m.ToString();
    }
    var img = rect == default ? m : m.Crop(rect);
    double coef = Math.Min(1600d / img.Width, 800d / img.Height);
    Cv2.ImShow(windowName, img.Resize(new Size(coef * img.Width, (coef * img.Height) > 1 ? coef * img.Height : 1)));
    Cv2.WaitKey();
}
Then you stop at a breakpoint and call yourImage.Display() in the immediate window.
If you can use CLion, you can utilize the OpenCV Image Viewer plugin, which displays matrices while debugging with a single click.
https://plugins.jetbrains.com/plugin/14371-opencv-image-viewer
Disclaimer: I'm an author of this plugin

Capturing screenshot of windowed OpenGL window in AutoIt?

For my graphics class, I need to match an OpenGL sample output in a pixel-perfect way.
I figured it would be cool if I could spawn the sample, send it some input, then take a screenshot of the exact OpenGL area, do the same for mine, and then just compare those screenshots. I also figured something like AutoIT would be the easiest way to do something like this.
I know that I can use the screencapture function, but I'm unsure of how to get the exact coordinates and size of the OpenGL area of the window (not the title bar/surrounding window stuff).
If anybody could help me out that would be awesome.
Or if anybody can think of an easier solution than AutoIt, and can point me in the right direction, that'd be great too.
EDIT: I also don't have access to the source code of the sample output program.
AutoIt is a pretty good tool for this job. I think you already found _ScreenCapture in the help file; it has parameters for the X left, Y top, X right and Y bottom coordinates. However, the _ScreenCapture function stores to a file. I've made a library where you can capture part of the screen, or a window, and save this to memory. Then you can get the pixel colors from the memory and compare them to your existing pixels. You can find it here: http://www.autoitscript.com/forum/topic/63318-get-or-read-pixel-from-memory-udf-pixelgetcolor-au3/
The part of a window which does not include the title bar and the borders is called the 'client area'. You can get the width and the height of the client area with WinGetClientSize. Alternatively, you can use ControlGetPos on the OpenGL control to get its X and Y relative to the window, and the width and height of the OpenGL control. Combined with WinGetPos, you should be able to calculate the values you need for _ScreenCapture. You should be able to find a good approach using the "AutoIt Window Info" tool.
Finally, a simple and short solution which gives you little control, but might be just what you need, is the PixelChecksum function. Once you have the coordinates of the OpenGL part, you can use PixelChecksum and get a value corresponding to the pixels of the screen (a checksum of the pixels). You can then compare this value to a pre-recorded value to tell whether the pixels on the screen are exactly the same. Check the Autoit help file of PixelChecksum for an example.
If you want to capture data from an OpenGL buffer, here is how you can do it with legacy OpenGL (version 1.2 or so)
glFinish();
glReadBuffer(GL_FRONT);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
glReadPixels(ox, oy, w, h, mode, GL_UNSIGNED_BYTE, img);
where:
ox and oy are origin x and y
w and h are width and height
mode is the pixel format, like GL_BGR if you want to save to a BMP
img is a pointer to the buffer the image will be copied into
You can search for glReadPixels on the internet and see the reference for more information.

Any quick and easy way to capture a part of the screen? GetPixel was slow and GetDIBits seemed a bit complicated as a start

I was trying some code to capture part of the screen using GetPixel on Windows, with the device context being NULL (to capture the screen instead of a window), but it was really slow. It seems that GetDIBits() can be fast, but it looks a bit complicated... I wonder if there is a library that can put the whole region into an array, where pixel[x][y] returns the 24-bit color code of the pixel?
Or does such library exist on the Mac? Or if Ruby or Python already has such a library that can do that?
I've never done this but I'd try to:
Create a memory Device Context (DC) using CreateCompatibleDC, passing it the Device Context of the desktop (GetDC(NULL)).
Create a Bitmap (using CreateCompatibleBitmap) the same size as the region you're capturing.
Select the bitmap into the DC you created (using SelectObject).
Do a BitBlt from the desktop DC to the DC you created (using the SRCCOPY flag).
Working with Device Contexts can cause GDI leaks if you do things in the wrong order, so make sure that you read the documentation on all the GDI functions you use (e.g. SelectObject, GetDC, etc.). A rough sketch of these steps is below.
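Something along these lines (only a sketch; the helper name and the 32-bit format are my own choices, and the GetDIBits call at the end is one way, mentioned in your question, to get the captured pixels into a plain array):
#include <windows.h>
#include <vector>

// Capture a w x h region of the screen starting at (x, y) into a
// bottom-up 32-bit 0x00RRGGBB pixel buffer. Returns an empty vector on failure.
std::vector<unsigned int> CaptureRegion(int x, int y, int w, int h)
{
    std::vector<unsigned int> pixels;

    HDC screenDC = GetDC(NULL);                         // DC of the whole desktop
    HDC memDC    = CreateCompatibleDC(screenDC);        // memory DC to copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);            // drawing on memDC now draws on the bitmap

    BOOL ok = BitBlt(memDC, 0, 0, w, h, screenDC, x, y, SRCCOPY);
    SelectObject(memDC, old);                           // deselect before GetDIBits, as the docs require

    if (ok)
    {
        BITMAPINFO bi = {};
        bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biWidth       = w;
        bi.bmiHeader.biHeight      = h;                 // positive height = bottom-up rows
        bi.bmiHeader.biPlanes      = 1;
        bi.bmiHeader.biBitCount    = 32;
        bi.bmiHeader.biCompression = BI_RGB;

        pixels.resize((size_t)w * h);
        if (!GetDIBits(memDC, bmp, 0, h, pixels.data(), &bi, DIB_RGB_COLORS))
            pixels.clear();
    }

    // Clean up in the reverse order of creation to avoid GDI leaks.
    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return pixels;
}
// A pixel at column px, row py (counted from the top) is then
// pixels[(h - 1 - py) * w + px], stored as 0x00RRGGBB.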
