Duplicate Conky on every connected extended screen? (clone)

I have a very straightforward question to ask:
Is it possible to clone / duplicate Conky windows on every connected X screen?

Related

I am having trouble using constraints to make my screen (labels, text) expand when the screen expands.

For several weeks I have been trying to solve this problem, which I believe (or hope) has an easy solution. The goal is for my schedule (in the picture) to expand to the exact size of the screen and adapt whether it is viewed on an iPhone or an iPad.

Changing how Windows displays using the Win API?

While I have some experience with the WinAPI, I do not have a great deal, so I have a question for people who do. My question concerns what the limit of our power is: can we change how Windows fundamentally displays?
For example, can I make Windows render a screen larger than the display and pan across it, somewhat like workspaces but without the separation? Can I apply distortion to the top and bottom of the screen? If distortion is not possible, can I have an application mirror what Windows is displaying with very little delay?
The biggest question is the first one, because if I can make Windows render virtual workspaces and pan seamlessly between them, then I figure a separate application could handle the distortion on a mirrored image of the desktop. Again, apologies for the vague questions, but I really want to know whether this is possible, at least in theory, before I dive deep into learning more of the API. If the WinAPI does not allow it, is there another way to do this kind of thing in Windows?
EDIT: Some clarification. What I want to do is basically extend the desktop to a very large size (not sure of the exact size yet), both vertically and horizontally, and section the large desktop into workspaces of a specific size that can be transitioned across seamlessly, with windows movable between them. It would switch workspaces based on a head-tracking device and/or mouse movement. Note that when I say workspaces, this could also be achieved by zooming in and then panning the zoomed view. I also need to be able to distort the screen, such as curving the edges, and to render the screen twice. That is the bare minimum of what I want to do.
Yes, you can. The most feasible way I can think of is a virtual graphics driver (like what Windows Remote Desktop does: it creates a virtual graphics card and a virtual display). Sadly, you will lose the ability to run some programs that need advanced graphics APIs (such as 3D games or 3D modelling tools).
Here are some examples:
http://virtualmonitor.github.io/
https://superuser.com/questions/62051/is-there-a-way-to-fake-a-dual-second-monitor
And remember that Windows has a limit on display resolution (both per display and for all displays together). I don't remember the exact number, but it should be less than 32768×32768.

Warp the Windows desktop

I have read some threads and I know it is easy to warp a video. Now I want to warp the Windows desktop. The steps are:
1. Capture the desktop screen
2. Use the captured screen as a texture on a Bézier surface mesh
The problem is that after the first frame the captured desktop is no longer normal, because it already contains my own distorted output, so the distortion feeds back on itself in an endless loop. Can anyone give me some suggestions? Thanks in advance!
You start with your original desktop capture, D. You distort it, creating D1. Now, if I understand your question, you are saying something like, "D1 is not the original image, so I can't use it as a texture for my next frame."
So, don't use D1. Use the original desktop capture as your source for every frame.

Set cursor position in Mac OS

I want to write a little VNC-like program that moves the Mac OS cursor to a position (x, y) received through a protocol that gets its data from a Bonjour service. The problem is that I don't know how to move the cursor!
I'm working with Cocoa.
You can be forgiven for not looking in Quartz Display Services for this one. The function you're after is CGWarpMouseCursorPosition.
Since the documentation doesn't say, you'll have to experiment to determine which co-ordinate system it uses—i.e., where the origin is and which way positive y goes.

How to display a Windows/X Window System window on a texture in 3D space?

I'm wondering how I can capture a window and display it on a texture in 3D space, and how I can redirect mouse and keyboard input from the 3D application to the application running in the background. I mean a full 3D desktop.
This can be a bit complex, and a "full" answer might not be suitable for this forum. Here's an idea/outline, though:
One way of doing it is through VNC. Run a separate, invisible "virtual" desktop in a VNC server, then start the desired apps with it as their display. Your 3D rendering program on the "real" desktop can then connect to the VNC server, get access to its desktop in bitmap format, and blast that onto textured polygons. Piping in input events is very doable, too.
I've actually done this, or at least half of it (the display). Here is a very old screenshot of what I managed to do, back then:
(source: sourceforge.net)
The black sky and blue/purple-ish "ground" are rendered by the 3D program on the real desktop, while the slanted quad shows a window in the "virtual" VNC desktop.
Fun!
A key part of the solution is the OpenGL/GLX extension GLX_EXT_texture_from_pixmap which bridges the gap between the X11 and OpenGL worlds.
As to the rest... Compiz and Compiz Fusion already implement 3D desktops. Give 'em a try; if you've got some specific ideas about how things should work, the sources are freely available (and they also support plugins).
