I own a vServer and want to run a Skype bot on it. Obviously Skype can't start without a display. Is there maybe a command-line option for Skype to disable the GUI and only use the Desktop API? Or do I have to simulate an X11 display, and if that's the case, how could I do that?
It is easy to set up a virtual X11 display ('server') with Xvfb, like so:
Xvfb :1 -screen 0 1280x1024x24 &
sleep 3
skype -display :1 &
You can even use VNC to watch what's happening on the virtual framebuffer.
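For example, a minimal sketch (assuming the x11vnc package is installed; the options are just illustrative):
x11vnc -display :1 -bg -nopw -listen localhost
Then point any VNC viewer at localhost:5900 to see the virtual screen.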
Note that the virtual server is very simple and does not support hardware acceleration, so some programs may complain about missing extensions, etc. But most regular desktop applications should be okay.
I understand that desktop/GUI apps are not supported in Windows containers: they do run, but there's no built-in way to interact with them. I had the following idea: maybe I could use the Desktop Sharing API (https://learn.microsoft.com/en-us/windows/win32/api/_rdp/) for this purpose. The plan is to run a desktop app, then run a sharing program that uses the Desktop Sharing API, and connect to it from the host using a Desktop Sharing API viewing program.
I had to do a quick recap of window stations and desktops, and I noticed that when starting the container with cmd in interactive mode, I'm logged on as ContainerAdministrator with a service logon (logon type 5). I tried running some WinAPI functions that deal with desktops and window stations and got access-denied errors, so I switched to running cmd as SYSTEM.
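For example, one way to get a SYSTEM prompt (assuming the Sysinternals PsExec tool has been copied into the container):
psexec -accepteula -s cmd.exe
The -s switch starts the new cmd.exe as SYSTEM, with its console I/O attached to the current one.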
The window station of the cmd process (and its child processes) is not the interactive WinSta0 but some other service window station, which makes sense since I'm logged on as a service. Since I can't use that window station, I used a little program I wrote to launch notepad on the Default desktop of WinSta0. Afterwards I ran another program that enumerates the windows on WinSta0\Default; the notepad window does get enumerated and I can read its title, so it's running somewhere.
So now I tried running the Desktop Sharing API program (also on WinSta0\Default). It runs and I can connect from the host, but I only get a black screen with nothing on it. I also tried running a program that takes a screenshot of the windows, but I get an empty bitmap.
So I thought maybe the Default desktop is not the active (input) desktop, and OpenInputDesktop confirmed it: the input desktop was the Winlogon desktop. I then used SwitchDesktop to switch to the Default desktop (and called OpenInputDesktop again to verify that the switch actually took effect).
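For anyone who wants to reproduce this check, here is a minimal PowerShell P/Invoke sketch of the sequence described above (the user32 functions are the ones named in the question; treat the rest as illustration rather than my exact program):
Add-Type @'
using System;
using System.Runtime.InteropServices;
public static class Desk {
    [DllImport("user32.dll", SetLastError = true)]
    public static extern IntPtr OpenInputDesktop(uint flags, bool inherit, uint access);
    [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern IntPtr OpenDesktop(string name, uint flags, bool inherit, uint access);
    [DllImport("user32.dll", SetLastError = true)]
    public static extern bool SwitchDesktop(IntPtr hDesktop);
    [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool GetUserObjectInformation(IntPtr h, int index, byte[] info, int len, out int needed);
}
'@
$DESKTOP_SWITCHDESKTOP = 0x0100
function Get-DesktopName($h) {
    # UOI_NAME (2) returns the object's name as a Unicode string
    $buf = New-Object byte[] 512; $n = 0
    [void][Desk]::GetUserObjectInformation($h, 2, $buf, $buf.Length, [ref]$n)
    [System.Text.Encoding]::Unicode.GetString($buf, 0, $n).TrimEnd("`0")
}
$hIn = [Desk]::OpenInputDesktop(0, $false, $DESKTOP_SWITCHDESKTOP)
"input desktop before: $(Get-DesktopName $hIn)"   # Winlogon in my case
$hDef = [Desk]::OpenDesktop("Default", 0, $false, $DESKTOP_SWITCHDESKTOP)
[void][Desk]::SwitchDesktop($hDef)
$hIn2 = [Desk]::OpenInputDesktop(0, $false, $DESKTOP_SWITCHDESKTOP)
"input desktop after: $(Get-DesktopName $hIn2)"   # should now say Default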
Unfortunately, this didn't change anything; I still get an empty screen and empty bitmaps.
I know that containers are built for microservices and are not supposed to run GUI apps and so on, but still: is there a way to make this work? Or any ideas of what else I can check? Alternatively, if you know that it can't work, I would also be happy to hear a good technical explanation of why it doesn't.
I have built a console core image for the Raspberry Pi 3, and I am able to boot the Pi successfully from an SD card.
I have created an Electron app which launches fine on a remote display over ssh.
However, when I launch the application on the monitor connected via HDMI cable, it gives the following error:
Can not open Display :0.0
I have seen many people ask about the opposite problem (not being able to launch on a remote display), which works fine in my case.
Can anyone help with this?
With a console core image, you don't have an X server running to display on.
You can upgrade your system to include the X installation -- search for 'install pixel desktop' -- or you could just refresh the card with the lite or full image.
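On a Raspbian-based card, for instance, that would be something like the following (package name from the PIXEL-era releases; check the current docs):
sudo apt-get update
sudo apt-get install raspberrypi-ui-mods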
If you set the Pi to auto-login to the pi account, you can start your application on login by adding it to the bottom of .xinitrc. You could also start it from a remote ssh session, displaying on the local display, by setting DISPLAY=:0 in the environment before you start it. You'll need to explore the world of X Windows authentication to make this work; see the man page for the xhost command, for instance.
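For example, a rough sketch (the hostname, user, and path are placeholders):
# on the Pi's local X session, loosen access control for the local pi user (testing only):
xhost +si:localuser:pi
# then, from the remote machine:
ssh pi@raspberrypi.local 'DISPLAY=:0 nohup /path/to/my-app >/dev/null 2>&1 &'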
(This is an expansion of LetoThe2nd's comment, which probably should have been an answer instead.)
A console core image means that there is no X server running, and hence no display :0. Try getting started with core-image-x11, or whatever image suits the RasPi.
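For a Yocto/OpenEmbedded build, which "console core image" suggests, that would be roughly:
bitbake core-image-x11
followed by writing the newly built image to the SD card as before.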
I am trying to launch an app (created by myself and registered for a specific URI) from another app, using
Windows.System.Launcher.LaunchUriAsync(uri, options);
and I was able to do it.
I want to set the options in such a way that my calling app remains on top. I read that this can be done on Windows desktop by setting LauncherOptions.DesiredRemainingView, which is not supported on Windows Phone. Is there any other way to achieve the same?
No, there is no way to do that on Windows Phone. There is only one active app at a time, and it is the one in the foreground.
There are background agents for things like VoIP and music, but that won't do what you want.
I have an app installed on the Android Wear emulator that I can run directly from Start -> MyApp. However, when I try to start it with a voice command, i.e. "Start MyApp", it keeps waiting for something but never completes. What could be the reason for this?
The current emulator has a limitation: it does not support voice actions entered via the keyboard, even though the text appears on the display. You will need to start the application by clicking on the display, then the red G, and then going to the start menu and picking the app from there. You can also quickly start the application using something like this from your development machine:
adb shell am start -n com.example.android.test/.TestActivity
The watch needs to be connected to a phone (device or emulator) with an internet connection for the voice commands to work.
I need to mirror GUI console activity happening on one Macbook so that it is duplicated on a second identical Macbook.
The idea is to control an application that will run on two Macbooks simultaneously. The application is sort of a presentation with two variations in content but identical controls. Think of it as two versions of a PowerPoint presentation with some slides that are different.
I'm thinking that it may be possible to capture the keypresses and mouse events on one Mac, then use RFB protocol to send these across the network to the other Mac. I'm looking at rfbproxy and rfbplaymacro, but these are somewhat inelegant hacks, and any solution built on these will also be a bit of a hack. And of course, I'd prefer to avoid a solution that requires me to compile and perhaps debug software that hasn't been touched in half a decade. :-)
I could conceivably use Cliclick or xdotool (from MacPorts) to initiate console events on the "slave" Mac. But then I don't know what I'd use to capture the events on the "master". Or would an xdotool-based solution require that both Macs be slaves, and then use some other device as a master?
Input devices could be a presentation mouse, an Apple remote, or in a pinch, the keyboard of one of the Macbooks or even a third device.
Can you suggest tools? Or is there another strategy I haven't thought of?
If the computers are in the same room, a single Apple Remote can control both Macs as long as the remote is not paired to either one. I'm assuming you need a solution that will work over any arbitrary distance, though.
Have you considered AppleScript? It's pretty good at sending keystrokes to ssh-accessible Macs. The receiving application doesn't even need to be aware of AppleScript (i.e. scriptable). You'll just have to be sure GUI scripting is enabled on the targets by checking the Enable access for assistive devices option in the Universal Access system prefs panel.
Here's an example of a shell command that will send a keystroke to the frontmost app via applescript:
osascript -e "tell application \"System Events\" to keystroke \"a\""
If you set up key-based ssh auth between the master and slaves you can simply tack ssh onto the front of this command:
ssh slave osascript -e "tell application \"System Events\" to keystroke \"a\""
For elegance, you could wrap any number of desired keystrokes into a menu-based bash script and run it from a third computer.
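A minimal sketch of such a wrapper (the hostnames and keystrokes are placeholders):
#!/bin/bash
# Send the same keystroke to every slave Mac via ssh + osascript.
slaves=("slave1.local" "slave2.local")   # placeholder hostnames

send_key() {
  local key="$1"
  for host in "${slaves[@]}"; do
    # run in the background so both Macs receive the key (almost) simultaneously
    ssh "$host" "osascript -e 'tell application \"System Events\" to keystroke \"$key\"'" &
  done
  wait
}

select choice in "next slide" "previous slide" "quit"; do
  case $choice in
    "next slide")     send_key " " ;;   # space advances most presentation apps
    "previous slide") send_key "p" ;;
    "quit")           break ;;
  esac
done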
I tried to synchronise systems like this (that were NOT Macs) a few years ago using rfbproxy and rfbplaymacro, which you already know about. The systems were both X terminals running at the same resolution. We still had problems because different font size settings put application controls in different places, but the basic VNCiness of the solution seemed to work just fine.
That said, if you want to write a stand-alone application to send stuff using osascript or cliclick or xdotool, and you have a Wii, you might get some joy from DarwiinRemote.
Kind of convoluted, but you could use ClusterSSH for OS X to start shell sessions from a third machine's master window, and then send commands to the two slaves. This could be paired with a screen-control utility similar to the ones you list above, another of which is pymaCursor.
If everything could instead be recorded in advance, you could try good ol' AppleScript/Automator recording, or a newer project like Sikuli - http://sikuli.org/