Where did the ssh client go in ddev v1.3.0? - ddev

I use ssh inside the ddev web container, and it was there just fine until ddev v1.3.0. Where did it go and how do I get it back?

Unfortunately, the base image used for the web container dropped the openssh-client Debian package in this upgrade, and we didn't catch it. It will be fixed in v1.4.0 or sooner (November 2018).
In the meantime, you can:
(easiest and fastest): Add webimage: drud/ddev-webserver:20181017_add_ssh to your .ddev/config.yaml (don't forget to remove it next time you upgrade)
or
Add these post-start steps to your .ddev/config.yaml:
hooks:
  post-start:
  - exec: sudo bash -c "apt-get update && apt-get install -y openssh-client || true"
We do apologize for losing the openssh-client package in this release.
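The `|| true` guard at the end of the exec hook is what keeps a transient apt failure from aborting ddev start; the pattern can be seen in isolation with a minimal sketch (plain bash, nothing ddev-specific):

```shell
# `false` fails, but `|| true` swallows the failure, so the overall
# command succeeds and startup can continue.
bash -c "false || true" && echo "start continues"
# -> start continues
```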

Related

Apt upgrade on WSL is super slow / unusable

I was trying to set up a build environment on WSL. After starting it up and running sudo apt update -y && sudo apt upgrade -y, it started doing its thing, but then got super slow (20 kB/s). So I deleted the whole WSL and redownloaded it... Same issue. I tried disabling IPv6 via sysctl, but that also didn't work.
Any ideas?
Check for the closest local mirror for you from here, and update your sources.list file:
sudo sed -i "s/archive.ubuntu.com/us.archive.ubuntu.com/" /etc/apt/sources.list
Found this at source
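The substitution can be rehearsed on a throwaway copy first; us.archive.ubuntu.com is just the example mirror from above, so substitute whichever mirror is closest to you:

```shell
# Rehearse the sed edit on a copy before touching /etc/apt/sources.list.
printf 'deb http://archive.ubuntu.com/ubuntu focal main\n' > /tmp/sources.list.demo
sed -i 's/archive.ubuntu.com/us.archive.ubuntu.com/' /tmp/sources.list.demo
cat /tmp/sources.list.demo
# -> deb http://us.archive.ubuntu.com/ubuntu focal main
```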

Docker build fails to fetch packages from archive.ubuntu.com inside bash script used in Dockerfile

Trying to build a Docker image with a prerequisites installation script executed inside the Dockerfile fails when fetching packages via apt-get from archive.ubuntu.com.
Using the apt-get command directly in the Dockerfile works flawlessly, despite being behind a corporate proxy, which is set up via the ENV command in the Dockerfile.
Likewise, executing the apt-get command from a bash script in a terminal inside the resulting Docker container, or as a "postCreateCommand" in a devcontainer.json of Visual Studio Code, works as expected. But in my case it won't work when the bash script is invoked from inside the Dockerfile.
It simply reports:
Starting installation of package iproute2
Reading package lists...
Building dependency tree...
The following additional packages will be installed:
libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
Suggested packages:
iproute2-doc
The following NEW packages will be installed:
iproute2 libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 971 kB of archives.
After this operation, 3,287 kB of additional disk space will be used.
Err:1 http://archive.ubuntu.com/ubuntu focal/main amd64 libcap2 amd64 1:2.32-1
Could not resolve 'archive.ubuntu.com'
... more output ...
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/libc/libcap2/libcap2_2.32-1_amd64.deb Could not resolve 'archive.ubuntu.com'
... more output ...
For example, a snippet of the Dockerfile looks like this:
FROM ubuntu:20.04 as builderImage
USER root
ARG HTTP_PROXY_HOST_IP='http://172.17.0.1'
ARG HTTP_PROXY_HOST_PORT='3128'
ARG HTTP_PROXY_HOST_ADDR=$HTTP_PROXY_HOST_IP':'$HTTP_PROXY_HOST_PORT
ENV http_proxy=$HTTP_PROXY_HOST_ADDR
ENV https_proxy=$http_proxy
ENV HTTP_PROXY=$http_proxy
ENV HTTPS_PROXY=$http_proxy
ENV ftp_proxy=$http_proxy
ENV FTP_PROXY=$http_proxy
# it is always helpful sorting packages alpha-numerically to keep the overview ;)
RUN apt-get update && \
    apt-get -y upgrade && \
    apt-get -y install --no-install-recommends apt-utils dialog 2>&1 && \
    apt-get -y install \
        default-jdk \
        git \
        python3 python3-pip
SHELL ["/bin/bash", "-c"]
ADD ./env-setup.sh .
RUN chmod +x env-setup.sh && ./env-setup.sh
CMD ["bash"]
The minimal version of the environment script env-setup.sh, which is supposed to be invoked by the Dockerfile, would look like this:
#!/bin/bash
packageCommand="apt-get";
sudo $packageCommand update;
packageInstallCommand="$packageCommand install";
package="iproute2"
packageInstallCommand+=" -y";
sudo $packageInstallCommand $package;
Of course, the variables are used so that the packages to be installed can be kept in a list, among other things.
Hopefully that has covered everything essential to the question:
Why does the execution of apt-get work in a RUN instruction, and also when running the bash script inside the container after it has been created, but not from the very same bash script while building the image from a Dockerfile?
I was hoping to find the answer with the help of an extensive web search, but unfortunately I was able to find everything but an answer to this case.
As pointed out in the comment section underneath the question:
using sudo to launch the command wipes out all the variables set in the current environment, more specifically your proxy settings
So that is the case.
The solution is either to remove sudo from the bash script and invoke the script as root inside the Dockerfile, or to keep sudo but preserve the environment variables by applying sudo -E.
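The difference can be mimicked without sudo itself: env -i launches a command with an empty environment, roughly what plain sudo does with its default env_reset, while passing the variable through is roughly what sudo -E does. The proxy address is the example value from the question:

```shell
export http_proxy=http://172.17.0.1:3128   # example proxy from the question

# Plain sudo scrubs the environment, much like env -i here:
env -i bash -c 'echo "proxy seen: ${http_proxy:-<unset>}"'
# -> proxy seen: <unset>

# sudo -E keeps the caller's environment, much like passing it through:
env http_proxy="$http_proxy" bash -c 'echo "proxy seen: ${http_proxy:-<unset>}"'
# -> proxy seen: http://172.17.0.1:3128
```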

How do I get LaraDock to use yum instead of apt-get?

I am trying to setup a container using laradock with the following command:
docker-compose up -d nginx mysql
The problem is I am getting the following error:
E: There were unauthenticated packages and -y was used without --allow-unauthenticated
ERROR: Service 'workspace' failed to build: The command '/bin/sh -c apt-get update -yqq && apt-get -yqq install nasm' returned a non-zero code: 100
Is there a way to get it to use yum instead of apt-get?
(I'm a server noob, thought docker would be easy and it seems that it is. Just can't figure out why it's trying to use apt-get instead of yum. Thanks.)
I suggest reading about the problems with different package systems: Getting apt-get on an alpine container
Most official Docker images are available for different Linux distributions (Alpine, Debian, CentOS). I would rather create my own Dockerfile and change the "FROM x:y" line than use different package systems.
But do read the linked comment.

"tls: oversized record received with length 20527" trying to "docker run" from Win10 WSL Bash only

reproduction
Latest Docker Edge (18.03.0-ce-rc1-win54 (16164)) installed on Win10.
Switched to "Linux Containers" before updating to the latest Docker CE Edge version (though the latest "Docker for Windows" UI doesn't show the switch option anymore?!).
No problem to run docker run hello-world from Windows CMD.
But calling the same from WSL Bash (latest Win10 1709) always respond with this tls error message:
$ docker images
REPOSITORY               TAG      IMAGE ID       CREATED        SIZE
continuumio/miniconda3   latest   29af5106b6a4   17 hours ago   443 MB
hello-world              latest   f2a91732366c   3 months ago   1.85 kB
$ docker --version
Docker version 1.13.1, build 092cba3
$ docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.6.2
 Git commit:   092cba3
 Built:        Thu Nov 2 20:40:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      18.03.0-ce-rc1
 API version:  1.37 (minimum version 1.12)
 Go version:   go1.9.4
 Git commit:   c160c73
 Built:        Thu Feb 22 02:42:37 2018
 OS/Arch:      linux/amd64
 Experimental: true
$ echo $DOCKER_HOST
tcp://0.0.0.0:2375
$ docker run hello-world
tls: oversized record received with length 20527
This setting seems unrelated, but necessary to run the docker command at all:
Expose daemon on tcp://localhost:2375 without TLS
question
I wonder why this is not a commonly reported problem for Windows Docker / WSL usage. Something seems to be messed up, but I have no clue where to start looking.
For example:
Why does the problem only appear under WSL Bash and not Windows CMD?
How to change daemon.json value for "insecure-registries": [] as some SO related messages advice?
Any help / pointers are appreciated!
(=PA=)
Solution
As this freaked me out a bit, I did another Google session and found the solution down in the comments of this site:
* https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly
In a nutshell:
* The issue I've described comes from a default but outdated docker.io installation, instead of the latest and maintained docker-ce installation.
Once I removed the old one with (the trailing * is intentional!):
sudo apt-get remove --purge docker*
and installed the latest docker-ce one -- according to the procedure described on the page above -- the TLS issue was gone!
Happy docking.
The proposed solution
sudo apt-get remove --purge docker*
didn't work for me since as soon as I tried to run the apt-get remove command I got the following error:
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.
invoke-rc.d: initscript docker, action "stop" failed.
So I had to manually uninstall docker by executing this:
sudo rm /var/lib/dpkg/info/docker.io.*
sudo rm /var/cache/apt/archives/docker.io*
sudo rm /etc/default/docker
sudo rm /etc/init.d/docker
sudo rm /etc/init/docker.conf
and after that I just followed the instruction here:
https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly
Problem fixed.
Here are the steps to solve the problem:
Remove docker.io (if present) and related packages from WSL (Ubuntu):
sudo apt-get remove docker.io
sudo apt-get remove docker*
Note: In case of errors ("function not implemented"), try upgrading WSL first (it'll take a while):
sudo -S apt-mark hold procps strace sudo
sudo -S env RELEASE_UPGRADER_NO_SCREEN=1 do-release-upgrade
Install Docker CE in WSL (Ubuntu):
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
Note: if apt-get update rejects the repository because of a missing GPG key, add Docker's signing key first, as described in Docker's official install documentation.
Expose daemon without TLS in your Docker app on Windows.
Connect to it by defining DOCKER_HOST variable in WSL:
export DOCKER_HOST=:2375
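Written out in full, assuming the daemon is exposed on localhost port 2375 without TLS (the tcp://localhost prefix is spelled out here for clarity):

```shell
# Point the WSL docker client at the Windows daemon exposed without TLS.
export DOCKER_HOST=tcp://localhost:2375
echo "DOCKER_HOST=$DOCKER_HOST"
# -> DOCKER_HOST=tcp://localhost:2375
```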
Related:
Cannot attach to a container on WSL at GitHub

Cannot (apt-get) install packages inside docker

I installed an Ubuntu 14.04 virtual machine and ran Docker (1.11.2). I am trying to build a sample image (here).
Dockerfile:
FROM java:8
# Install maven
RUN apt-get update
RUN apt-get install -y maven
....
I get following error:
Step 3: RUN apt-get update
--> Using cache
--->64345sdd332
Step 4: RUN apt-get install -y maven
---> Running in a6c1d5d54b7a
Reading package lists...
Reading dependency tree...
Reading state information...
E: Unable to locate package maven
INFO[0029] The command [/bin/sh -c apt-get install -y maven] returned a non-zero code:100
I have tried the following solutions, but with no success:
restarted docker here
ran apt-get -qq -y install curl here: same error :(
How can I view a detailed error message?
Any way to fix the issue?
You may need to update the OS inside Docker first: try running apt-get update, then apt-get install xxx.
The cached result of the apt-get update may be very stale. Redesign the package pull according to the Docker best practices:
FROM java:8
# Install maven
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive \
       apt-get install -y maven \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
Based on similar issues I had, you want to both look at possible network issues and possible image related issues.
Network issues: you are already looking at proxy-related stuff. Also make sure the iptables setup done automatically by Docker has not been messed up unintentionally by yourself or another application. Typically, if another Docker container runs with the net=host option, this can cause trouble.
Image issues : The distro you are running on in your container is not Ubuntu 14.04 but the one that java:8 was built from. If you took the java image from official library on docker hub, you have a hierarchy of images coming initially from Debian jessie. You might want to look the different Dockerfile in this hierarchy to find out where the repo setup is not the one you are looking at.
For both situations, to debug this, I recommend running a shell inside the latest image to inspect the actual network and repo situation. In your case,
docker run -ti --rm 64345sdd332 /bin/bash
gives you a shell just before running your install maven command.
I am currently working behind a proxy. It failed to download some dependencies; for that you have to specify the proxy configuration in the Dockerfile. ref
But now I am facing difficulty running "mvn", "dependency:resolve" due to the proxy: Maven itself is blocked from downloading some dependencies and the build fails.
Thanks, buddies, for your great support!
Execute 'apt-get update' and 'apt-get install' in a single RUN instruction. This is done to make sure that the latest packages will be installed. If 'apt-get install' were in a separate RUN instruction, then it would reuse a layer added by 'apt-get update', which could have been created a long time ago.
RUN apt-get update && \
    apt-get install -y <tool, e.g. maven>
Note: RUN instructions build your image by adding layers on top of the initial image.