I have a VPN-restricted network share on a server (Windows 10). After connecting to the VPN, I try to connect to that share from my Ubuntu 20.04 client (Ubuntu Desktop) via the GUI. By GUI, I mean specifically these steps:
Open "Files" browser.
Select "+ Other Locations" on the left side-bar menu.
Type your server address into "Connect to Server" (mine was something like smb://myServer/shared/) and click "Connect".
When a login prompt appears, enter your credentials (or log in anonymously).
You should have access to that shared network now.
Before upgrading, when I was still on Ubuntu 18.04, those steps let me access the shared network successfully.
After upgrading to Ubuntu 20.04, however, at step 4 (after I enter my credentials and try to connect) the connection just hangs and the share is never mounted.
After researching the problem a bit, none of the potential solutions I found worked. Most of them suggest adding the following to smb.conf to allow access to SMB1-type shares:
client min protocol = NT1
server min protocol = NT1
Reference
Can't access NAS anymore after upgrading to 20.04.
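For reference, those suggestions amount to putting the two lines in the [global] section of the client's /etc/samba/smb.conf; a minimal sketch, assuming the default config path (the file can be sanity-checked afterwards with testparm):
[global]
    client min protocol = NT1
    server min protocol = NT1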
What I also tried was mounting the shared folder manually with the following commands,
sudo mkdir /mnt/my_share
sudo mount -t cifs -o username=name,password=pw //server/shared /mnt/my_share/
which strangely worked.
I do not have a clue why "Files" did not work while doing it manually did. I cannot say the former failed outright, because after I entered my credentials at the login prompt there was no error; it just hung.
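For completeness, if the manual mount ends up being the workaround, it can be made persistent with an /etc/fstab entry; a sketch assuming the same server, share, and mount point as above (the credentials file path is hypothetical and should contain username= and password= lines, readable only by root):
//server/shared /mnt/my_share cifs credentials=/root/.smbcredentials 0 0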
Related
I tried to install Firefox on my Red Hat 8 machine, but I get
" running firefox as root in a regular user's session is not supported. ($xauthority is /run/user/1001/gdm/xauthority which is owned by user.) "
Then I tried it as a normal user, and it shows another error:
" Failed to open connection to "session" message bus: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
Running without a11y support! "
How do I install Firefox on RHEL 8 (AWS EC2)? Is this possible?
I assume you are connecting via SSH (text mode). Firefox needs a graphical environment to run, which is not available on EC2 by default. You will need a text-only browser.
It seems to me that you need something like WorkSpaces, not EC2 with SSH.
https://docs.aws.amazon.com/workspaces/index.html
At a minimum you also need to install the xauth package alongside Firefox. This should let you run Firefox both as a normal user and while su'd to root. As root you will need to set the XAUTHORITY env variable to point to your .Xauthority file, e.g.
# export XAUTHORITY=/home/sbaby/.Xauthority
This assumes, of course, that you SSH to your server with an X11 server listening for connections. Refer to your SSH client's documentation on how to set that up.
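As a concrete sketch (the user and host names are placeholders for your own instance, and the server's sshd must permit X11 forwarding), the forwarding is usually requested from the client side and Firefox launched over it:
ssh -X ec2-user@your-ec2-host
firefox &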
Original Post
I have a Windows workstation with WSL2 and Docker installed that I am able to use for container based development in VS Code. I would like to be able to develop inside the containers on this system remotely. I am able to SSH directly into the WSL2 environment on the workstation and am able to start the docker daemon without logging directly into Windows by creating a Task to start the daemon automatically as described here: https://stackoverflow.com/a/59467740/10692741
However when I try to access Docker on the remote machine by following this guide: https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host, I get the following error:
error during connect: Get http://docker/v1.24/version: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "\x00c\x00o\x00m\x00m\x00a\x00n\x00d\x00"
I have also tried connecting via a SSH tunnel as outlined here: https://code.visualstudio.com/docs/remote/troubleshooting#_using-an-ssh-tunnel-to-connect-to-a-remote-docker-host and am unable to connect to Docker as well.
Has anyone had success with a setup like this? Or is this not supported due to limitations with Docker on Windows, WSL2, and/or Windows OpenSSH implementation?
Update: 2021-01-21
When I SSH into the Windows machine remotely, I am able to see the docker containers in the VS Code extension. I am able to start them, stop them, and enter them with the shell. However, when I try to attach VS Code I get the same error shown above.
Things that may have possibly affected this over the past couple days:
Adding SSH keys on my local machine to the ssh-agent via ssh-add /my/key
Exposing the Docker daemon on tcp://localhost:2375 without TLS on the remote Windows machine (see the sketch below)
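For reference, a minimal sketch of one common way to expose the daemon like that when starting dockerd yourself inside WSL2 (these are standard dockerd flags; the actual setup on this machine may differ):
dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375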
Also, I want to note that I've tried using Windows, Mac, and Linux as the local machine. With Mac and Linux I am able to open a remote session into the Windows machine; from the Windows local machine I am able to SSH into the remote Windows machine but, for some reason, cannot open a remote connection in VS Code.
OK, I was able to get this working using the port/socket-forwarding technique. For the sake of clarity, I'll use:
local development workstation, local workstation, or just workstation to indicate the computer from which we wish to use VSCode to access Docker containers on ...
the remote Docker host, remote, or just Docker host
Sanity check -- Do you have Docker Desktop installed on both systems? On the local development workstation, you can skip the WSL2 integration, but you'll at least need the client tools, since the VSCode extension uses them.
Steps I took:
I already had Docker with WSL2 integration set up on my main system (which for the purposes of this exercise, became my remote Docker host), along with VSCode, so I knew everything was working there. It sounds like that was your starting point as well.
On another system on the same network (accessed with RDP to make it simple), I already had VSCode installed as well, with the Remote Development Extension Pack. I also have WSL on that system, but only a v1 instance there. Not that WSL on the workstation should be a factor at all for the purposes of this exercise.
I installed Docker Desktop for Windows on that local development workstation.
I also installed the Docker extension for VSCode, since I didn't yet have it on the local development workstation.
On the workstation, I was not yet set up to SSH from PowerShell into my WSL Ubuntu distro on the remote. From PowerShell on the workstation, I generated an ECDSA key (per this and other documents) and added the public key to my authorized_keys on the remote.
On the workstation, I started the OpenSSH Authentication Service and added the newly created key to the agent (in PowerShell) with ssh-add ~\.ssh\id_ecdsa.
I logged out of the workstation and back in so that the path changes were picked up for the Docker desktop install.
I was then able to SSH from PowerShell on the local to Ubuntu/WSL on the remote with the port forwarding. Since I'm using the Windows 10 OpenSSH server as a jumphost to my WSL SSH servers, my command looked slightly different (with a -o "ProxyCommand ..." mainly), but overall the structure is the same as the one listed in the "SSH Tunnel" doc you linked in your question.
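For anyone following along, the basic shape of that tunnel (ignoring my jumphost-specific ProxyCommand) is roughly the following sketch, with the user and host as placeholders:
ssh -NL localhost:23750:/var/run/docker.sock user@remote-host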
On the remote (manually, not through any integration from the local), I did a basic docker run -it --rm ubuntu and left it open.
On the local, from PowerShell, I set the DOCKER_HOST environment variable via [System.Environment]::SetEnvironmentVariable("DOCKER_HOST","tcp://localhost:23750").
I was then able to see the remote container using docker ps on the local. I could also docker exec -it containername bash into it remotely.
Of course, the above two steps aren't needed in the long term for VSCode, they were just part of my process to make sure everything was up and running (since, as you might expect, I did have several points at which I failed during this process).
So with that working, it was a simple matter in VSCode to change the Docker extension's DOCKER_HOST setting to tcp://localhost:23750. And voila, I could see all images on the remote as well as attach to them from VSCode.
Other thing(s) to check
I'll add to this list if we find additional reasons why it might not be working, but for now:
You mention that you are starting the Docker daemon automatically at startup via a scheduled task, but you don't mention anything about the WSL2 instance itself. However, since you are able to ssh into it, I assume you have a way to bring it up as well? My experience has been that, unless the owning user is logged in, WSL terminates its instances after a few seconds, even if a service is running. There's a workaround, I believe, that I can dust off if this is a problem.
I am trying to use a mobile device to view the app served by create react app. When I open the IP:PORT recommended by CRA's "On Your Network", the page never loads on my mobile device.
I am working on Windows 10 laptop, using WSL2 with Ubuntu. My network is all wifi, no ethernet cables. My code is on the Ubuntu file system and I run npm start from the same location in a WSL terminal from VS Code with the WSL extension.
I am able to see the app using http://localhost:3000 with browsers on my Windows machine (Chrome, Edge).
I noticed cmd.exe ipconfig lists the IP address that corresponds to "On Your Network" as "Ethernet adapter vEthernet (WSL)". This IP address (172.17.144.244) is different than what ipconfig shows as "Wireless LAN adapter Wi-Fi" (192.168.1.23). I also tried 192.168.1.23:3000 on my mobile device, but it didn't work either.
Some other posts on SO recommend removing a firewall setting that blocks NodeJS applications. I scrolled through the many applications listed in the firewall settings and found nothing for NodeJS apps.
Since WSL2 uses a virtual NIC, you need to enable port forwarding (and allow it through the firewall); otherwise your server inside WSL2 won't be reachable from other PCs on your network.
I recommend reading the entire thread, but in summary you can start with this script:
https://github.com/microsoft/WSL/issues/4150#issuecomment-504209723
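The linked script essentially automates a netsh port proxy plus a firewall rule; a minimal manual sketch for port 3000, run from an elevated prompt and assuming 172.17.144.244 is the current WSL2 address reported by wsl hostname -I (it changes on each reboot):
netsh interface portproxy add v4tov4 listenport=3000 listenaddress=0.0.0.0 connectport=3000 connectaddress=172.17.144.244
netsh advfirewall firewall add rule name="WSL2 port 3000" dir=in action=allow protocol=TCP localport=3000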
I had the same issue. Instead of restarting Windows, try opening a PowerShell terminal, shutting down WSL, and then starting it again. That solved it for me.
Command to shut down: wsl --shutdown
Command to start: wsl
Source: https://github.com/microsoft/WSL/issues/4885#issuecomment-803881561
I found a tool that fixes this problem: https://github.com/icflorescu/expose-wsl
First you need to run this command in the WSL terminal:
npx expose-wsl#latest
and it gives you an IP address for your PC (192.168.0.130, for example). With that IP you can access the project from devices on the same network.
When running npm start:
.
.
.
On Your Network: http://172.48.228.88:3000
enter this on the mobile device:
http://192.168.0.130:3000
and it works.
Note: You have to allow the port through the Windows firewall. In my case the firewall should allow access to 'C:\Program Files\WindowsApps\MicrosoftCorporationII.WindowsSubsystemForLinux_1.0.3.0_x64__...\wslhost.exe'
You can run your app in Docker instead and use Docker for Windows with WSL2 integration enabled. It somehow manages to forward the ports dynamically, without you having to change anything in Windows.
I was also having the problem with hot reloading in WSL2. I tried almost every solution on GitHub, Stack Overflow, and elsewhere, from CHOKIDAR_USEPOLLING=true to setting FAST_REFRESH=false in the .env file to changing network settings with netsh, but none of them worked for me. After two days of searching for solutions and trying to fix it, I finally reverted to WSL 1.
Just run this command in PowerShell for now:
wsl --set-version Ubuntu-20.04 1
Consider Nginx for Windows.
I prefer this solution because I'm more familiar configuring web servers and reverse proxies than Windows networking and Powershell.
After unzipping the distribution, for example at C:\somepath\nginx-1.22.1, I add the following reverse proxy configuration to C:\somepath\nginx-1.22.1\conf\nginx.conf
...
http {
    ...
    server {
        listen 11500;
        server_name wsl2_server;
        charset utf-8;
        location / {
            proxy_pass http://localhost:11500/;
        }
    }
    ...
As you can see, I have a web server running on port 11500 in WSL2. When my mobile device requests "lan_ip_of_laptop:11500/", nginx forwards the request to localhost:11500 and the WSL2 server completes it.
I am not able to SSH to my Raspberry Pi 3 from PuTTY. I can ping the 192.168.137.1 IP address assigned by Internet connection sharing.
The problem, I realized, is that SSH is not enabled by default on the Pi 3, and I saw posts suggesting to enable SSH by creating an 'ssh' file inside the /boot folder. My SD card came with NOOBS pre-installed, so when I open the SD card it shows only a /recovery folder. How do I enable SSH in this case? Please help me resolve it.
Enable SSH as described in the documentation.
Or start it manually with sudo service ssh start - note that the manual option will require a startup script to run it at each startup.
Additional settings like port configuration should be done in sshd_config. Do it with your favorite editor: sudo nano /etc/ssh/sshd_config. Anyway, your post should be opened on the Raspberry Pi forum, not on SO...
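As a sketch of the persistent route once you can get a shell on the Pi (standard systemd commands on Raspbian):
sudo systemctl enable ssh
sudo systemctl start ssh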
You can take a look at this thread:
How can I get connection with Raspberry without access of its shell?
It mentions PiBakery, which will install Raspbian with SSH support. By default SSH is disabled, but this might help.
When I attempt to run 'vagrant up', the script executes as normal until it gets to the last step, where NFS shared folders are mounted.
I have tried deleting the exports file in /etc/, followed by an nfsd restart and vagrant destroy / vagrant up, but to no avail.
After some considerable amount of time the console outputs the following [certain details redacted]:
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed!
mount -o 'nolock,vers=3,udp,noatime' XXX.XXX.XX.X:'/Users/dhatton/Google Drive/moodle-doodle/site' /var/www/site
Stdout from the command:
Stderr from the command:
mount.nfs: Connection timed out
UPDATE
The above problem was encountered when using a VPN into the office network. Upon logging in on-site without the VPN, everything works again.
For macOS Monterey 12.1 with VirtualBox 6.1.30 and Vagrant 2.2.19/18:
create a vbox folder in /etc
create a file inside /etc/vbox named networks.conf
add the following line inside networks.conf:
* 0.0.0.0/0 ::/0
Note: if you get the IP address range error, add your IP here too.
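A quick way to do the three steps above from a terminal (a sketch; it writes exactly the line shown):
sudo mkdir -p /etc/vbox
echo '* 0.0.0.0/0 ::/0' | sudo tee /etc/vbox/networks.conf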
I had a similar issue. I searched a lot and tried the following solutions:
Check the /etc/exports and /etc/hosts files; if there are invalid entries, remove them.
Check that your firewall is not blocking access
Restart the NFS system
Install the vagrant-vbguest plugin: vagrant plugin install vagrant-vbguest
Do vagrant reload --provision
Reboot your PC
Reinstall vagrant
For me reinstalling vagrant worked.
I've run across this before and the problem turned out to be related to my company's VPN. If I tried running vagrant up while connected to the VPN it would hang on mounting NFS, but if I disconnected from the VPN and tried again it worked. Once it was running I could reconnect to the VPN. It probably goes back to it needing a stable internet connection.
Assuming you are trying to mount from guest to host (the host being OS X?), try mounting to a different path. You might be encountering issues with the space in "Google Drive"?
Vagrant downloads binaries from its cloud while configuring a VM, so a stable internet connection is needed. In fact, an internet connection is necessary for using most HashiCorp products.