I followed the instructions for setting up X11 forwarding from WSL2 to the Windows 10 host with VcXsrv, based on this answer: How to set up working X11 forwarding on WSL2
export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
export LIBGL_ALWAYS_INDIRECT=1
I allowed public access while starting up VcXsrv, and also switched off my firewall just to test if it worked.
mustafa@DESKTOP-MGJG0RL:~$ xeyes
Error: Can't open display: 172.25.32.1:0
Is there a step that I'm missing?
I had the same issue. In my case the problem was that I had disabled the Windows Firewall for private networks, assuming that the network with the WSL 2 virtual machine would be treated as a private network. It turned out that this network is handled as a public network, so disabling the firewall for private networks did not help. The short answer: set up a proper firewall rule instead of taking the shortcut of disabling the firewall for a quick try.
Instead of disabling the firewall, try adding this rule (from an admin PowerShell):
New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -InterfaceAlias "vEthernet (WSL)" -Action Allow
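To check that the rule took effect, you can probe the X server port from WSL; a quick sketch (assuming VcXsrv is listening on the default TCP port 6000 for display :0):
# Resolve the Windows host IP as seen from WSL2
HOST_IP=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf)
# bash's /dev/tcp pseudo-device: the redirect succeeds only if the port accepts connections
if timeout 2 bash -c "echo > /dev/tcp/${HOST_IP}/6000" 2>/dev/null; then
    echo "X server reachable"
else
    echo "port 6000 still blocked"
fi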
I was able to resolve it:
In the sshd_config file
X11UseLocalhost yes
X11Forwarding yes
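Then restart the SSH daemon so the change takes effect (the exact command depends on the init system; both common variants shown as a sketch):
# On systemd distributions:
sudo systemctl restart sshd
# On WSL or older Ubuntu (SysV-style service):
sudo service ssh restart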
Adapted from this answer https://superuser.com/a/1476160/1014728
export DISPLAY=$(awk '/nameserver/ {print $2}' /etc/resolv.conf):0
Use VcXsrv, with -ac set in the additional parameters field.
Run xhost + if you get a "No protocol specified" error.
Run xeyes to test.
I'm trying to use the SSH X11 forwarding feature on my Mac to display a remote GUI application locally.
On the Mac, I installed the official X server, XQuartz, and set it up as below.
$ cat ~/.ssh/config
Host *
XAuthLocation /opt/X11/bin/xauth
ForwardAgent yes
ForwardX11 yes
Then I used "ssh -v -X user#remote_machine" to login a ubuntu machine, then used xclock to test.
$ ssh -v -X user#remote_machine
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data ~/.ssh/config
debug1: /Users/bwu/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug1: auto-mux: Trying existing master
On the remote machine, xclock failed to launch because $DISPLAY is empty.
$ xclock
Error: Can't open display:
I did two more tests.
From the same Mac, an SSH login to another CentOS 7 machine works.
$ cat /etc/ssh/sshd_config
X11Forwarding yes
X11DisplayOffset 0
From an Ubuntu host, an SSH login to the above Ubuntu machine works.
$ cat /etc/ssh/sshd_config
X11Forwarding yes
X11DisplayOffset 0
So we get the following results:
mac to centos, working
ubuntu to ubuntu, working
mac to ubuntu, not working
Test 1 suggests the issue lies with the remote Ubuntu machine.
Test 2 suggests the issue lies with the local Mac.
What's wrong with this? Did I miss anything?
Further update on this issue: I noticed X11 forwarding stopped working "randomly" on both CentOS and Ubuntu (from my MacBook), but after a couple of hours it might work again.
I checked the sshd configuration on both CentOS and Ubuntu; nothing special, and they are identical in the X11 forwarding part. I don't know why.
X11Forwarding yes
X11DisplayOffset 0
#X11UseLocalhost yes
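When it fails like this, a quick diagnostic right after logging in can at least show which side dropped the ball (a generic sketch, not specific to this setup; the log path varies by distro):
echo "DISPLAY=$DISPLAY"              # empty means sshd never set up forwarding
xauth list                           # no entry for this display means the X cookie was not installed
sudo grep -i x11 /var/log/auth.log   # on CentOS, check /var/log/secure instead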
Here is a solution that might work; I had the same problem and this is how I solved it.
First, find the DISPLAY variable. On your Mac, as a normal user, type
echo $DISPLAY
What you get should look something like the following:
/private/tmp/com.apple.launchd.0aQYNoXMFK/org.xquartz:0
Then try something like
xeyes
to see whether forwarding works. There are other apps you could try, but I like this one.
Now you know that your display is working.
Now, if you want to try the same as root (please don't jump on me, guys, I know some of you are strongly against root access), run echo $DISPLAY; if that does not work,
then at your root prompt do the following:
export DISPLAY=/private/tmp/com.apple.launchd.0aQYNoXMFK/org.xquartz:0
That is the same value you found in your normal user account. Then copy your
.Xauthority from /Users/<normal user>/.Xauthority to /var/root/.
The .Xauthority file is already there, but this will overwrite it:
cp "/Users/<normal user>/.Xauthority" /var/root/
Of course the export alone might work, but there is no harm in doing the above.
Now try the following.
echo $DISPLAY
And you should see the following
/private/tmp/com.apple.launchd.0aQYNoXMFK/org.xquartz:0
If you SSH into Ubuntu from a normal user prompt, then you do not need to do the root part; but since I use root to SSH into my Ubuntu systems, I often have to do this.
Then, when you SSH into Ubuntu, type
echo $DISPLAY
And you would see something like the following
localhost:10.0
The above will work if you have done all the other bits, like enabling forwarding.
Again, if you want to use root on your Ubuntu machine and echo $DISPLAY does not produce any response,
then try the following (assuming you are at a root prompt):
cp "/home/<user name>/.Xauthority" /root/.Xauthority
Now try
echo $DISPLAY
again and you would see something like the following
localhost:10.0
For fun try
xeyes
Of course you could try xclock or any other app as well.
It works in my case. I hope this is helpful and solves a problem like the one above, or like mine, for someone who has spent a few hours scratching their head and trying to pull their hair out like me :-)).
Docker has a run option, net=host, documented here, that allows you to run a container that shares the network stack with the host: processes inside the container can connect to the host machine via localhost and vice versa.
I want to set up a Linux VM on Mac OS X that does the same thing; I've tried using Vagrant and its various networking settings without much luck.
Does Docker's VM rely on the host and guest OSes both being Linux, or is there some way to accomplish this OSX->Linux that I'm missing?
Thanks to some help from my colleagues I found a solution to this problem. This solution works with boot2docker/VirtualBox. I just created my docker VM with boot2docker init, I didn't make any specific changes to the VM configuration.
First you run the docker image with --net=host, so that it shares the network with the host VM e.g.
docker run -it --net=host ubuntu bash
Then you need to find, from inside the VM used for the docker containers, the IP address at which the OS X host can be reached; log in to the VM by running boot2docker ssh on the OS X host.
You can then find the host's IP address by looking up the VM's default gateway:
$ netstat -rn | grep UG | awk '{print $2}'
10.0.2.2
So in my case it's 10.0.2.2. From your docker container, you can now use this IP address to access ports opened on the host, i.e. by a program running on OS X.
To automate this, you could find the IP address first and then pass it into the docker command as an environment variable...
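For example, a minimal sketch of that automation (it assumes the boot2docker setup above, where the VM's default gateway is the OS X host; the HOST_IP variable name is arbitrary):
# Ask the boot2docker VM for its default gateway, i.e. the OS X host
HOST_IP=$(boot2docker ssh "netstat -rn | grep UG | awk '{print \$2}'")
# Hand it to the container as an environment variable
docker run -it --net=host -e HOST_IP="$HOST_IP" ubuntu bash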
I have found another answer that works, I'll share that here so that people can choose :)
First you need to figure out the IP address of the preferred network interface on your OS X host. The following shell command did this for me:
echo "show State:/Network/Global/IPv4" | scutil | grep PrimaryInterface | awk '{print $3}' | xargs ifconfig | grep inet | grep -v inet6 | awk '{print $2}'
In my case this prints out: 10.226.98.247
Then you can simply use that address inside docker or, even better, give this address a hostname for use inside docker:
docker run -it --add-host dockerhost:10.226.98.247 ubuntu bash
Then you can use the same dockerhost hostname in your docker container regardless of what environment you're launching your container in...
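Wrapped into a small launcher script, that might look like this (a sketch built from the one-liner above; the image and hostname are just the ones used in this answer):
#!/bin/bash
# Find the primary network interface on the OS X host...
IFACE=$(echo "show State:/Network/Global/IPv4" | scutil | awk '/PrimaryInterface/ {print $3}')
# ...and its IPv4 address
HOST_IP=$(ifconfig "$IFACE" | awk '/inet / {print $2}')
# Expose the host to the container under the stable name "dockerhost"
exec docker run -it --add-host dockerhost:"$HOST_IP" ubuntu bash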
I am getting an error while accessing Firefox using X11 forwarding.
[root@station2 ~]# firefox
KiTTY X11 proxy: wrong authorisation protocol attempted
KiTTY X11 proxy: wrong authorisation protocol attempted
Error: cannot open display: localhost:10.0
Set up the following values in /etc/ssh/sshd_config:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
Installed the package:
#yum install xorg-x11-xauth
#yum -y install xauth
[root@station2 .ssh]# echo $DISPLAY
localhost:10.0
#mkxauth -c
adding key for station2.example.com to /root/.Xauthority ... done
export XAUTHORITY=$HOME/.Xauthority
This fix worked for me
There is a scenario, hard if not impossible to find by search engine, that may cause that error message.
Preliminary note: the topic of this answer is not to discuss whether it is a safety
risk, or recommendable at all, to use a graphical desktop as root on a remote, display-less web server.
Scenario:
A remote, internet-connected Linux server S has the domain
name example.com assigned to its public IPv4 address 192.0.2.1.
The /etc/hostname file on S contains the single line example.
The /etc/hosts
file on S contains the line 127.0.0.1 localhost example.com example.
Remote SSH access to S is forbidden for root by the (sshd) configuration on S,
via the line DenyUsers root in /etc/ssh/sshd_config, but
allowed for a dummy user user1. From a client computer C, an SSH
connection using the ssh parameter -X or -Y is established to S
as user user1.
Then, in a remote terminal on S owned by user1,
if any X11-related command is executed as root (say, via su),
be it trying to start the X11 desktop environment
or, as in the concrete case, executing a script containing
#!/bin/bash
su --preserve-environment -c "xfce4-session &" root
the error message
X11 connection rejected because of wrong authentication.
is output, and the start of any X11-related program fails. The DISPLAY variable of root's environment then contains
example.com:10.0
One solution to the problem, in this special case, is to modify the line
127.0.0.1 localhost example.com example
in /etc/hosts to
127.0.0.1 localhost
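For example, with a backup and a sed one-liner (hypothetical; adjust the domain to your own):
# Back up, then strip the public domain from the loopback line
sudo cp /etc/hosts /etc/hosts.bak
sudo sed -i 's/^127\.0\.0\.1 localhost example\.com example$/127.0.0.1 localhost/' /etc/hosts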
Solution: run the application as the same user you SSH in with.
I have also encountered such errors while using X11.
The source of my problem was that I used SSH with my own username (which was not root).
Then, once logged in, I tried running things with X11 after doing su or sudo.
The problem is that the SSH session is configured for your own username (e.g. Raj), but then you switch to user root, which is not part of the X11 session.
So what you should do is simply run the application (firefox in your case) as the same user you started the X11 session with.
Hope this helps.
Talel.
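Concretely, that looks like this (user and host names are placeholders):
ssh -X raj@server      # log in with your own user
firefox &              # run the GUI app directly, without su/sudo
# or in one shot:
ssh -X raj@server firefox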
I ran into this running gvim over ssh -t -Y, and the solution that worked for me was:
xauth add $(xauth -f ~<logon_user>/.Xauthority list | tail -1) ; export NO_AT_BRIDGE=1 # gvim X11 fix for remote GUI failure after su
I do not know where I stumbled on this answer so I cannot give credit to the author.
Is it possible to modify files on the host machine during the vagrant up process? For example, adding an entry to the host machine's /etc/hosts file to avoid having to do this manually?
The solution is to use vagrant-hostsupdater
vagrant plugin install vagrant-hostsupdater
This plugin adds an entry to your /etc/hosts file on the host system.
On the up and reload commands, it tries to add the information if it's not
already present in your hosts file. If it needs to be added, you will
be asked for an administrator password, since it uses sudo to edit the
file.
On halt, suspend and destroy, those entries will be removed again.
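Typical usage, with a quick verification step (a sketch; the entry's hostname is whatever your Vagrantfile sets):
vagrant plugin install vagrant-hostsupdater
vagrant up           # prompts for your sudo password to write the new entry
cat /etc/hosts       # the VM's hostname/IP pair should now be listed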
OK, so now the guy sitting next to you at the coffee shop can most likely SSH to port 2222 on your computer (EDIT: changed in newer versions of Vagrant, unless you explicitly enable external access), log in as vagrant with the insecure key, modify your Vagrantfile, since it's mounted read-write and owned by the vagrant user, insert arbitrary Ruby code to run in the host environment, and now it looks like they've got root access on the host environment as well. Brilliant.
I hope people run firewalls on their development machines.
EDIT:
So after writing the above, I bugged the author of Vagrant, the default has been changed so that port 2222 is not open by default on the external interface. Big improvement (though still something to be careful of, since external access is often opened up for various reasons).
So, having put in effort to get the situation fixed since making this comment, I'm now getting downvotes, apparently because the comment is out of date. Damn. It was correct when written.
EDIT:
In response to Steve Buzonas: the point is that if there's any likelihood of the virtual machine being compromised, then giving the vagrant up process elevated permissions represents a serious risk to the security of the host environment; even just being able to modify the /etc/hosts file is dangerous, without general root access. As I've pointed out, Vagrant's approach to keeping the VM secure is not particularly rigorous.
I don't want to depend on some plugin to Vagrant; it should be a standard feature in Vagrant! Until then, I use a shell script to propagate new VMs in my cluster. The key lines are:
# Obtain the hostkey based on the IP-address and add it to the known_host list
ssh-keyscan -t ecdsa ${START}.${OFFSET} >> /home/vagrant/.ssh/known_hosts
# obtain the hostname, because you might not know it yet, with the IP address:
EXTERNAL_HOSTNAME=`ssh ${START}'.'${OFFSET} 'hostname'`
# obtain the key of the other new VM based on hostname and also add it to known_hosts
ssh-keyscan -t ecdsa ${EXTERNAL_HOSTNAME} >> /home/vagrant/.ssh/known_hosts
# so now you have the IP address and the corresponding hostname
# add to /etc/hosts without being asked for "yes/no"
echo ${START}'.'${OFFSET}' '${EXTERNAL_HOSTNAME} >> /etc/hosts
Here IPADDRESS is the IP address of the master VM in the cluster, with several slave-node VMs at succeeding IP addresses (IPADDRESS is incremented by 1 until a ping fails).
IPADDRESS=`ip addr show eth1 | grep 'inet ' | cut -d ' ' -f 6 | cut -d '/' -f1`
START=`echo ${IPADDRESS} | cut -d '.' -f1,2,3`
OFFSET=`echo ${IPADDRESS} | cut -d '.' -f4`
And then I loop through the next IP addresses until there are no more successful pings.
I do not want to hardcode anything (IP address or hostname); the script should find it out itself.
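The discovery loop itself looks roughly like this (a sketch reusing the variables from the snippet above):
# walk successive addresses until one no longer answers ping
NEXT=$((OFFSET + 1))
while ping -c 1 -W 1 "${START}.${NEXT}" > /dev/null 2>&1 ; do
    # ... run the ssh-keyscan / hostname / /etc/hosts steps above for ${START}.${NEXT} ...
    NEXT=$((NEXT + 1))
done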
Resulting /etc/hosts file, after deduplicating with:
sort /etc/hosts | uniq > /tmp/hosts.uniq && sudo sh -c 'mv /tmp/hosts.uniq /etc/hosts'
[vagrant@master ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 master.RHEL70.local master
192.168.1.50 master.RHEL70.local
192.168.1.51 node01.RHEL70.local
192.168.1.52 node02.RHEL70.local
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Previously I didn't know how to have Vagrant edit my /etc/hosts file. But when I reinstalled Windows and Vagrant, this feature disappeared.
I've gotten sick of how many steps it takes me to get started in the morning. Yes it only takes me a few minutes to start up my whole environment, but I'd really rather just run a single command on boot-up and be ready to go immediately.
I'm writing a Rails app connected to SQL Server. To develop against it, I keep a local copy of the DB on a VM. My manual process goes like this:
Run VirtualBox.
Start the VM.
When the VM is done booting:
Open terminal
Run `rails s`
When rails is done starting:
open browser
navigate to localhost:3000 and start developing
Run Sublime
I'd love to do this in one script:
VirtualBox Windows7 &
sublime &
google-chrome &
But I can't figure out how to run this command only once the VM is done booting:
gnome-terminal --working-directory=git/my_project --tab -e 'rails s' --tab -e 'git status'
Also, it'd be nice (but not necessary) to have chrome start after rails s has succeeded.
Is this even possible?
I'm not opposed to polling, but it feels like this is something VirtualBox should be able to do a bit more naturally.
EDIT
From Comment:
I'm using Host-Only network with two Bridged Interfaces (one for wireless and one for wired) available. (It allows me to use the VM whether or not I'm connected to a network, and lets me freely switch between wired and wireless without noticing the difference).
Here is how I would do it:
In the VM, create a script which finds the default gateway and keeps pinging it, and add it to the user's startup. (This needs parsing of ipconfig /all, which can be done with VBScript or Python.)
On the host, look at the network interface between the host and the VM. Find the default gateway on the host (parse route -n output in a bash script). Since both use the same physical interface, the gateway will be the same (assuming NAT and ONE physical interface). Use tcpdump to wait for the ping packets to the gateway.
"Default gateway" was chosen because that was something host & VM can find out independent of each other. Other alternative was to hard-code host's address.
After the host tcpdump on host exits, it means that the VM is alive & booted upto windows desktop.
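A host-side sketch of that wait (assuming a Linux host where route -n marks the gateway with the G flag; the commands launched afterwards are the ones from the question):
#!/bin/bash
# the gateway is the same for host and VM under NAT, so derive it locally
GATEWAY=$(route -n | awk '$4 ~ /G/ {print $2; exit}')
# block until a single ICMP packet to the gateway is captured
sudo tcpdump -c 1 "icmp and dst host ${GATEWAY}" > /dev/null 2>&1
# the VM's startup script is pinging, so it has reached the desktop; launch the rest
gnome-terminal --working-directory=git/my_project --tab -e 'rails s' --tab -e 'git status' &
google-chrome http://localhost:3000 &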
I looked into this line of inquiry before, and I think Devil's Pie is the closest you can get to setting that up:
http://burtonini.com/blog/computers/devilspie
You could try starting with this (VBoxManage startvm):
How to automatically start and shut down VirtualBox machines?
and then look at some working scripts to add to your init.d and/or rc.local, once your VM is up, to finish the rest of the job in order:
Get To Know Linux: The /etc/init.d Directory
I needed to orchestrate something similar. I'm using a Windows VM (guest) as a proxy (it runs a Windows-only corporate VPN client) for my Linux laptop (host). The approach is to fully automate the guest and wait until it's ready:
The host must have no funky routes (yet)
The VM starts and runs a powershell script (via Windows Task Scheduler, run-on-startup) that connects the VPN client and sets up ICS (Internet Connection Sharing, basically routing).
The host now adds funky routes that send some traffic via the VM's host-only interface. If it added these routes too soon, step 2 would not work.
The VM also runs squid (http proxy) and its port is NAT port forwarded from the host, so localhost:3128 actually goes to the guest. So a curl using this proxy goes to the corporate network and indicates whether the guest is fully up and connected.
(Squid is also useful as a backup to this complicated but very convenient mechanism, I can still ssh via corkscrew, etc)
So, I run this script on the host (simplified version shown):
#!/bin/bash
VM=vm #Name of the Virtual Machine
SCRIPT_DIR=/some/dir
PROXY_ADDRESS=localhost:3128
REMOTE_CURL_HOST=any.corporate.hostname
function waitloop() {
    echo -n "Waiting to hear from $REMOTE_CURL_HOST "
    while ! curl -s -m 5 --proxy "$PROXY_ADDRESS" "$REMOTE_CURL_HOST" > /dev/null ; do
        echo -n .
        sleep 10
    done
    echo "!"
}
# a separate script that takes down my routes, you may not need this.
bash $SCRIPT_DIR/network-config-vboxnet0.sh down
# error is OK if it's already running
vboxmanage startvm $VM
waitloop && bash $SCRIPT_DIR/network-config-vboxnet0.sh up && echo "Completed"
Essentially, the script waits until curl works through the VM.