I would like to install the Windows version of Perforce in a network location so that users can call p4 via:
\\somewhere\p4.exe -p server:1666 -c some_client_name sync
where "somewhere" is consistently mapped on all Windows machines. I tried to do this by installing locally, then copying p4.exe to \\somewhere.
On the computer where I installed locally, \\somewhere\p4.exe works just fine. But when I switch to another machine and try to run
\\somewhere\p4.exe -p server:1666 info
I get the following error:
Perforce client error
Connect to server failed; check $P4PORT.
TCP connect to server:1666 failed.
A non-recoverable error occurred during a database lookup.
What does this error mean? I couldn't find any information in the documentation; I suspect I might need another file besides p4.exe. Indeed, when I install Perforce locally on the other machine, using the local p4.exe works, but \\somewhere\p4.exe still does not.
Any pointers?
Thanks!
You shouldn't need any other files besides p4.exe.
The TCP connection error is probably because that other machine isn't able to translate "server" into an IP address.
Try using some of the Windows command line tools to diagnose this, as in:
nslookup server
or
ping server
Also, try changing your test to run:
\\somewhere\p4.exe -p NNN.NNN.NNN.NNN:1666 info
where the "NNN.NNN.NNN.NNN" is the IP address of your server machine.
Original Post
I have a Windows workstation with WSL2 and Docker installed that I am able to use for container based development in VS Code. I would like to be able to develop inside the containers on this system remotely. I am able to SSH directly into the WSL2 environment on the workstation and am able to start the docker daemon without logging directly into Windows by creating a Task to start the daemon automatically as described here: https://stackoverflow.com/a/59467740/10692741
However when I try to access Docker on the remote machine by following this guide: https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host, I get the following error:
error during connect: Get http://docker/v1.24/version: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "\x00c\x00o\x00m\x00m\x00a\x00n\x00d\x00"
I have also tried connecting via an SSH tunnel as outlined here: https://code.visualstudio.com/docs/remote/troubleshooting#_using-an-ssh-tunnel-to-connect-to-a-remote-docker-host, but am still unable to connect to Docker.
Has anyone had success with a setup like this? Or is this not supported due to limitations with Docker on Windows, WSL2, and/or Windows OpenSSH implementation?
Update: 2021-01-21
When I SSH into the Windows machine remotely, I am able to see the Docker containers in the VS Code extension. I am able to start them, stop them, and enter them with the shell. However, when I try to attach VS Code I get the same error shown above.
Things that may have possibly affected this over the past couple days:
Adding SSH keys on my local machine to the ssh-agent via ssh-add /my/key
Exposing Docker daemon on tcp://localhost:2375 without TLS on the remote Windows machine
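For what it's worth, whether the exposed daemon is actually answering can be checked on the Windows machine itself with:
docker -H tcp://localhost:2375 version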
Also I want to note that I've tried using Windows, Mac, and Linux as the local machine. With Mac and Linux I am able to open a remote session into the Windows machine, but from the Windows local machine I am able to SSH into the remote Windows machine yet for some reason cannot open a remote connection in VS Code.
Ok, I was able to get this working using the port/socket forwarding technique. For the sake of clarity, I'll use:
local development workstation, local workstation, or just workstation to indicate the computer from which we wish to use VSCode to access Docker containers on ...
the remote Docker host, remote, or just Docker host
Sanity check -- Do you have Docker Desktop installed on both systems? On the local development workstation, you can skip the WSL2 integration, but you'll at least need the client tools, since the VSCode extension uses them.
Steps I took:
I already had Docker with WSL2 integration set up on my main system (which for the purposes of this exercise, became my remote Docker host), along with VSCode, so I knew everything was working there. It sounds like that was your starting point as well.
On another system on the same network (accessed with RDP to make it simple), I already had VSCode installed as well, with the Remote Development Extension Pack. I also have WSL on that system, but only a v1 instance there. Not that WSL on the workstation should be a factor at all for the purposes of this exercise.
I installed Docker Desktop for Windows on that local development workstation.
I also installed the Docker extension for VSCode, since I didn't yet have it on the local development workstation.
On the workstation, I was not yet set up to SSH from PowerShell into my WSL Ubuntu distro on the remote. From PowerShell on the workstation, I generated an ECDSA key (per this and other documents) and added the public key to my authorized_keys on the remote.
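In case it helps anyone retracing that step, it amounted to something like this (user and host names are placeholders):
# From PowerShell on the workstation: generate the key pair
ssh-keygen -t ecdsa
# Append the public key to authorized_keys on the remote
type $env:USERPROFILE\.ssh\id_ecdsa.pub | ssh user@remote "cat >> ~/.ssh/authorized_keys"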
On the workstation, I started the OpenSSH Authentication Service and added the newly created key to the agent (in PowerShell) with ssh-add ~\.ssh\id_ecdsa.
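Spelled out, assuming the ssh-agent service is installed but not yet running, that was:
# Enable and start the agent service if needed (elevated PowerShell)
Set-Service ssh-agent -StartupType Manual
Start-Service ssh-agent
# Load the key into the agent
ssh-add ~\.ssh\id_ecdsa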
I logged out of the workstation and back in so that the path changes were picked up for the Docker desktop install.
I was then able to ssh from PowerShell on the local to Ubuntu/WSL on the remote with the port forwarding. Since I'm using the Windows 10 OpenSSH server as a jumphost to my WSL SSH servers, my command looked slightly different (with a -o "ProxyCommand ... mainly), but overall the structure is the same as the one listed in the "SSH Tunnel" doc you linked in your question.
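For reference, the basic (no jumphost) form of that command from the VS Code doc is, with user and host as placeholders:
ssh -NL localhost:23750:/var/run/docker.sock user@remote-host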
On the remote (manually, not through any integration from the local), I did a basic docker run -it --rm ubuntu and left it open.
On the local, from PowerShell, I set the DOCKER_HOST environment variable via [System.Environment]::SetEnvironmentVariable("DOCKER_HOST","tcp://localhost:23750").
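(A plain $env:DOCKER_HOST = "tcp://localhost:23750" also works if you only need the variable for the current PowerShell session.)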
I was then able to see the remote container using docker ps on the local. I could also docker exec -it containername bash into it remotely.
Of course, the above two steps aren't needed in the long term for VSCode, they were just part of my process to make sure everything was up and running (since, as you might expect, I did have several points at which I failed during this process).
So with that working, it was a simple matter in VSCode to change the Docker extension's DOCKER_HOST setting to tcp://localhost:23750. And voila, I could see all images on the remote as well as attach to them from VSCode.
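If you'd rather edit settings.json directly, the entry (as the setting was named in the extension version I used; check yours) is:
"docker.host": "tcp://localhost:23750"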
Other thing(s) to check
I'll add to this list if we find additional reasons why it might not be working, but for now:
You mention that you are starting the Docker Desktop daemon automatically at startup via Task Scheduler, but you don't mention anything about the WSL2 instance. However, since you are able to ssh into it, I assume you have a way to bring it up as well? My experience has been that, unless the owning user is logged in, WSL terminates any instances after a few seconds, even if a service is running. There's a workaround, I believe, that I can dust off if this is a problem.
I am getting a timeout error when trying to deploy to a VM instance hosted on AWS. Manually, I can log in using
ssh -i myKeyFile.pem myuser@IP
Once I've accessed the remote machine, I can execute some docker commands and everything works fine. But now that I need to automate that in the CD pipeline, I am getting the following error:
2020-06-02T21:37:12.6877276Z Trying to establish an SSH connection to ***#IP:port
2020-06-02T21:38:52.4629461Z ##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake.
2020-06-02T21:38:52.4685976Z ##[section]Finishing: Run shell commands on remote machine
The steps I follow to make the SSH connection are:
I created an SSH service connection in the project settings in Azure DevOps
I created the CD pipeline
I added an SSH task with the relevant parameters
When I manually trigger it to test whether it works, the release starts fine, but after about 1 minute 43 seconds it fails. When I review the logs, it is the same error I pasted at the beginning:
[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake
I've increased the handshake timeout setting from the default (20000) to 90000, but no luck.
Has anyone faced this problem before?
It seems there is an ongoing issue with the default agent pools in Azure DevOps. Lots of people have reported this, and the Azure DevOps team is working on it at the time this post is being written (I couldn't find the post where all of that is detailed; I will add it later).
The workaround is:
Create a self-hosted agent.
After it has been created, re-create your CD pipeline using the new self-hosted agent.
The rest of the SSH task configuration depends on your needs. But if you want to test that the SSH connection works, just print something:
echo "I'm connected"
After this, your CD pipeline should be working fine.
More details on how to create the Self-Hosted Agent on Windows. There are also links for Linux and Mac.
I had a similar issue with a VM in Azure. It turned out I had set the security group to only allow SSH in from my local network, and Azure DevOps agents obviously run in a Microsoft network and were coming from a different IP address range. The solution was to open up SSH to all source IP addresses. You can get the list of IP address ranges DevOps agents use, but they appear to change every week, which isn't very helpful.
See https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops#microsoft-hosted-agents
Running into the subject issue trying to update the proxies with nswag... funny enough, the app that this came with is preconfigured to use a specific port for that service, but I don't see anything on that port using netstat -ano in the command line. Does anyone have any thoughts?
Before running nswag/refresh.bat, the host needs to be up and running.
To start the host, copy the lines below into a batch (.bat) file and run it:
CD "D:\Github\MySolution\src\MyProject.Web.Host"
SET ASPNETCORE_ENVIRONMENT=Development
SET ASPNETCORE_URLS=http://*:22742
dotnet run
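Once dotnet run reports that it's listening, you can confirm the host is up by opening http://localhost:22742 in a browser (substitute your app's port) before re-running nswag/refresh.bat.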
It was some settings on the server (so a corporate guy told me), so nothing on my end, but thank you everyone for your help!
I'm trying to install Docker on a Windows computer but I get this message:
Running pre-create checks...
(default) No default Boot2Docker ISO found locally, downloading the latest release...
Error with pre-create check: "Get https://api.github.com/repos/boot2docker/boot2docker/releases/latest: dial tcp 192.30.252.124:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
Looks like something went wrong in step 'Checking if machine default exists'...
Press any key to continue...
Any suggestions on how to resolve this?
Editing the start.sh file may lead to other errors. Instead, just put your boot2docker.iso in the location below:
C:\Users\USERNAME\.docker\machine\cache
and restart your Docker terminal.
You may be behind a firewall. If so, you will need to configure an HTTP proxy.
According to https://github.com/boot2docker/boot2docker-cli/issues/230 you can do this in one of a couple of ways:
(1) Edit start.sh and add the following before boot2docker.exe is called:
export HTTP_PROXY=<proxy>
export HTTPS_PROXY=<proxy>
(2) Add HTTP_PROXY and HTTPS_PROXY (and their values) to your System Variables or User Variables in your Windows config.
The proxy value should be of the form http://hostname:port
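For option (2), one way to set them persistently as user variables is from a Command Prompt; note that only newly opened terminals will pick them up:
setx HTTP_PROXY "http://hostname:port"
setx HTTPS_PROXY "http://hostname:port"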
I am trying to debug PHP files which sit on a remote server (on the same network) without success.
Here is my php.ini config for xdebug on the remote server where PHP and xdebug are installed:
xdebug.remote_enable=1
xdebug.remote_host=192.168.128.56
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
xdebug.remote_autostart=0
192.168.128.56 is the IP address of my PC on which my editor is installed.
I have tried to get this working with both Atom and Sublime Text 3 without success. I think that my path bindings may be incorrect.
I log into the remote Linux machine using SFTP. I can then double-click PHP files in my application and they will open in my editor, where I can work on them and save them. How can I set up the path bindings to debug these remote PHP files? I'm not sure what the second (local) part of the path binding should actually be. Do I need to add the location where the FTP software stores a temporary copy of the file I am working on as the local part of the binding?
I have tried the following:
URL - the address of where the app runs on the remote server:
e.g. http://www.mywebsite.com/testapp/
Path Binding - the remote path to the application root on Linux : the path to the local copy of the files on my machine where the FTP software stores them:
e.g. /web/testApp/ : C:\Users\me\AppData\Local\Temp\scp18929\
I'm a little confused about how the path binding works, and what the values should be. Am I doing this correctly? Can this even be done?
If anyone can help that would be great.
Probably, the first thing to check is whether Xdebug actually tries to connect to your IDE. You can do that by adding:
xdebug.remote_log=/tmp/xdebug.log
to your php.ini file. When you then initiate debugging, there should be information in the /tmp/xdebug.log file, where it will tell you where it tried to connect to, and whether the connection succeeded or failed.
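Since your config has xdebug.remote_autostart=0, a debug session is only started on request, for example by appending the standard session-start parameter to the URL from your question:
http://www.mywebsite.com/testapp/?XDEBUG_SESSION_START=1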
If you get something like:
I: Remote address found, connecting to 192.168.128.56:9000.
E: Could not connect to client. :-(
That means that either your IDE wasn't listening for a connection, or a firewall is preventing the incoming connection, or the IP address is incorrect.