How to install a Kubernetes cluster behind a proxy with kubeadm?

I ran into a couple of problems when installing Kubernetes with kubeadm. I am working behind a corporate network, so I declared the proxy settings in the session environment:
$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
After installing all the necessary components and dependencies, I began to initialize the cluster. In order to preserve the current environment variables under sudo, I used sudo -E bash:
$ sudo -E bash -c "kubeadm init --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16"
The output then hung forever at the message below:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [loadbalancer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
I then found that none of the kube components were up, while the kubelet kept trying to reach the kube-apiserver. sudo docker ps -a returned nothing.
What is the possible root cause of it?
Thanks in advance.

I would strongly suspect it is trying to pull down the Docker images for gcr.io/google_containers/hyperkube:v1.7.3 (or similar), which requires teaching the Docker daemon about the proxies via its systemd unit.
That would certainly explain why docker ps -a shows nothing, but I would expect the dockerd logs (journalctl -u docker.service, or its equivalent on your system) to complain about its inability to pull from gcr.io.
Based on what I read in the kubeadm reference guide, they expect you to patch the systemd config on the target machine to expose those environment variables, not just set them within the shell that launched kubeadm (although that could certainly be a feature request).
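In case it helps, a minimal sketch of such a systemd drop-in; the proxy placeholders match the ones in the question, and you should adjust the NO_PROXY list to your network:
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1,master-ip,node-ip"
Then reload systemd and restart the daemon so the variables take effect:
sudo systemctl daemon-reload
sudo systemctl restart docker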

Related

Cannot pull image from AWS ECR repository using docker with VirtualBox or Colima

The Situation
As you may all know, Docker has changed the license for Docker Desktop to limit free usage to certain use cases.
As a result, I have resorted to alternatives such as Colima and VirtualBox as a means to continue using the docker CLI while respecting Docker's new terms.
While this works fine for pulling images from Docker Hub, I've noticed that I can no longer pull images from my company's AWS ECR repo. The failure is due to an unknown certificate authority error.
My understanding of how Docker runs is limited, but the gist I got from this Stack Overflow post is that the docker CLI acts as the client through which the developer sends commands to the Docker daemon running on a virtual machine. So this issue is most likely related to the VM that the Docker daemon is running on.
The Error Message
Pulling from myrepo/myapp
5ad559c5ae16: Pulling fs layer
d7a7f7e76287: Pulling fs layer
3eb3e996f0d7: Pulling fs layer
d8f3fbab0eaf: Waiting
d310dd0da683: Waiting
6f542466a6be: Waiting
8851a2099770: Waiting
f1dd90cdff4b: Waiting
4a852bd6c6f1: Waiting
538106d55e7d: Waiting
dbc972867db8: Waiting
2bc8828e78a2: Waiting
1a653b47f557: Waiting
877c2f613a70: Waiting
09eac264496b: Waiting
66dd8ce5c695: Waiting
ccde39d6cfef: Waiting
4351b359c9e4: Waiting
52e095209afc: Waiting
c6ad9f161855: Waiting
233f3e28c5a3: Waiting
error pulling image configuration: Get https://prod-ca-central-1-starport-layer-bucket.s3.ca-central-1.amazonaws.com/<a-very-long-hash>/<another-very-long-hash>?X-Amz-Security-Token=<AWS-security-token>&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20220210T215140Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=<my-credential>&X-Amz-Signature=<amazon-signature>: x509: certificate signed by unknown authority
My hypothesis for why I'm getting this error message
This is purely a guess. Please feel free to correct me.
I know that with Docker Desktop I do not get this certificate error, and my guess is that with the HyperKit integration the VM can run via localhost, which allows the Docker daemon to tap into macOS' trusted certificate authority certs.
The problem arises because the VM that I've obtained from the Internet no longer has access to those trusted certs.
What I've tried
Ensured I was logged into ECR using the AWS command aws ecr get-login-password --region ca-central-1 | docker login --username AWS --password-stdin <my-aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com
Reinstalled both Colima and the VirtualBox hypervisor
Isolated the issue by experimenting solely with the VirtualBox setup
I noticed that the folder /etc/docker is present on the VM. According to Docker's documentation, the default directory for Docker's certificates is /etc/docker/certs.d, which I noticed is absent in my virtual machine installation.
I think I'm close to a solution, but I'm quite new to how certificates work and I'm not sure where I can obtain the certificates I need to put in that path in order to test.
Does anyone know how this can be done?
I ran into the same issue; I did this and it worked:
Remove the line "credsStore": "xxx" from ~/.docker/config.json.
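For reference, a minimal sketch of what that change looks like, assuming a typical ~/.docker/config.json (the "desktop" value is only an example; yours may differ):
Before:
{
  "auths": {
    "<my-aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com": {}
  },
  "credsStore": "desktop"
}
After (the credsStore entry removed):
{
  "auths": {
    "<my-aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com": {}
  }
}
After removing it, run the docker login command from above again so the credentials are stored directly in config.json instead of being looked up through a credential helper that the Colima/VirtualBox setup may not have.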

Container access to gcloud credentials denied

I'm trying to implement the container that converts data from HL7 to FHIR (https://github.com/GoogleCloudPlatform/healthcare/tree/master/ehr/hl7/message_converter/java) on Google Cloud. However, I can't get the container running locally on my machine before deploying it to the cloud.
The error always occurs in the credentials authentication part when I try to run the image locally with Docker:
docker run --network=host -v ~/.config:/root/.config hl7v2_to_fhir_converter \
  /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> \
  --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> \
  --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>
I am using Windows and have already performed the command below to create the credentials:
gcloud auth application-default login
The credential, after executing the above command, is saved in:
C:\Users\XXXXXX\AppData\Roaming\gcloud\application_default_credentials.json
The -v ~/.config:/root/.config option is supposed to enable Docker to find the credentials when running the image, but it does not. The error that occurs is:
The Application Default Credentials are not available. They are available if running in Google
Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined
pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials for more information.
What am I doing wrong?
Thanks,
A container runs isolated from the rest of the system; that isolation is its strength and is why this packaging method is so popular.
Thus, any configuration in your environment is void unless you pass it into the container runtime environment, such as the GOOGLE_APPLICATION_CREDENTIALS env var.
I wrote an article on this. Let me know if it helps, and, if not, we will discuss the blocking point!
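As a minimal sketch of that idea, assuming cmd.exe on Windows and the credentials path quoted in the question (the mount target and the env var value are my assumptions, not part of the converter's documentation):
docker run --network=host ^
  -v "%APPDATA%\gcloud:/root/.config/gcloud" ^
  -e GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json ^
  hl7v2_to_fhir_converter /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>
The point is that ~ does not expand to your Windows home directory the way the Linux-oriented README assumes, so the ADC file never makes it into the container; mounting the gcloud folder explicitly and pointing GOOGLE_APPLICATION_CREDENTIALS at it makes the credentials visible to the process.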

How to spin up spinnaker locally for the first time

How to spin up a local version of Spinnaker? This has been answered and addressed in detail here.
https://github.com/spinnaker/spinnaker/issues/1729
OK, so I got it to work, but not without your valuable help, @lwander!
So I'll leave the steps here for posterity.
Each line is a separate command on the command line. I installed this on a virtual machine with a freshly installed Ubuntu 14.04 copy with nothing on it other than SSH. Then SSH in as root; you will need to configure sshd on your console to allow root access:
https://askubuntu.com/questions/469143/how-to-enable-ssh-root-access-on-ubuntu-14-04
> curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/stable/InstallHalyard.sh
Created a user account that is a member of the adm and sudo groups (is this necessary?)
Then install Halyard:
bash InstallHalyard.sh
Verify that HAL is installed and validate its version.
hal -v
Tell Hal that the deployment type will be a local instance (this will publish all the services on localhost, which will make them tricky to access later, but I have a workaround, so keep reading):
hal config deploy edit --type localdebian
Hal will complain that a version has not been selected; just tell Hal which version:
hal config version edit --version 1.0.0
Then tell Hal which storage you are going to use; in my case, since it is local, I want to use redis:
hal config storage edit --type redis
So now we need to add a cloud provider to Hal; we use AWS, so we add it like this:
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
I created a user on AWS and added access keys to that user inside IAM, on the user's security credentials tab. Obviously my access-key-id is not XXXXXXXXXXXXXXXXXXXX; I've edited it. You do not need to enter the secret-access-key, because the command will prompt you for it.
Then you need to create a username that only concerns your Spinnaker installation; it will be tied to your AWS Account ID. In MY local Spinnaker installation I chose the username spinnakermaster; you should choose yours! And my AWS Account ID is not YYYYYYYYYYYY; I've edited that too.
All the configurations and steps that you'll need to do inside AWS for this to work are really well documented here:
https://www.spinnaker.io/setup/providers/aws/
And to tell Hal all of the above, here's the command:
hal config provider aws account add spinnakermaster --account-id YYYYYYYYYYYY --assume-role role/spinnakerManaged
And after all that, if everything went according to plan, we can ask Hal to deploy our brand new Spinnaker installation:
hal deploy apply
It will begin a long installation, downloading and configuring all the services.
Once it has finished, you may do whatever you like, but in my case I created a monitoring script like the one described here:
https://github.com/spinnaker/spinnaker/issues/854
Which can be launched in a recurring manner like this (or until you Ctrl+C it!):
watch -n1 spinnaker-status.sh
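For reference, a minimal sketch of what such a status script might look like; the script name matches the one above, but the body and the service/port mapping (which mirrors the ports forwarded below) are my own assumption, not the script from the linked issue:
#!/bin/bash
# spinnaker-status.sh - report which Spinnaker services are listening locally (sketch)
for svc_port in deck:9000 gate:8084 orca:8083 clouddriver:7002 rosco:8087 front50:8080 igor:8088 echo:8089; do
  svc=${svc_port%%:*}
  port=${svc_port##*:}
  if nc -z 127.0.0.1 "$port" 2>/dev/null; then
    echo "$svc (port $port): UP"
  else
    echo "$svc (port $port): DOWN"
  fi
done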
Then, to be able to access the Spinnaker copy on your local VM, you can either set up a reverse proxy with the proxy server of your choice to forward all the requests to localhost, or you can simply ssh the SH** out of it by redirecting the ports:
ssh root@ZZZ.ZZZ.ZZZ.ZZZ -L 9000:127.0.0.1:9000 -L 8084:127.0.0.1:8084 -L 8083:127.0.0.1:8083 -L 7002:127.0.0.1:7002 -L 8087:127.0.0.1:8087 -L 8080:127.0.0.1:8080 -L 8088:127.0.0.1:8088 -L 8089:127.0.0.1:8089
Where obviously ZZZ.ZZZ.ZZZ.ZZZ is not an actual IP address.
And finally, to begin having fun with this cutie, go to your browser of choice and type into the address bar:
http://127.0.0.1:9000
Hope this helps and saves some time for everybody!
Cheers.

Where to add client certificates for Docker for Mac?

I have a docker registry that I'm accessing behind an nginx proxy that does authentication using client-side ssl certificates.
When I attempt to push to this registry, I need the docker daemon to send the client certificate to nginx.
According to:
https://docs.docker.com/engine/security/certificates/
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
So I thought I'd try putting the certificates inside the virtual machine itself by doing:
docker-machine ssh default
This resulted in docker complaining:
Error response from daemon: crypto/tls: private key does not match public key
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
4 yrs later Google still brought me here.
I found the answer in the official docs:
https://docs.docker.com/desktop/mac/#add-client-certificates
Quoting from the source:
You can put your client certificates in
~/.docker/certs.d/<MyRegistry>:<Port>/client.cert and
~/.docker/certs.d/<MyRegistry>:<Port>/client.key.
When the Docker for Mac application starts up, it copies the
~/.docker/certs.d folder on your Mac to the /etc/docker/certs.d
directory on Moby (the Docker for Mac xhyve virtual machine).
You need to restart Docker for Mac after making any changes to the keychain or to the ~/.docker/certs.d directory in order for the
changes to take effect.
The registry cannot be listed as an insecure registry (see Docker Engine). Docker for Mac will ignore certificates listed under
insecure registries, and will not send client certificates. Commands
like docker run that attempt to pull from the registry will produce
error messages on the command line, as well as on the registry.
A self-signed TLS CA can be installed like this; your certs might reside in the same directory:
sudo mkdir -p /Applications/Docker.app/Contents/Resources/etc/ssl/certs
sudo cp my_ca.pem /Applications/Docker.app/Contents/Resources/etc/ssl/certs/ca-certificates.crt
https://docs.docker.com/desktop/mac/#add-tls-certificates works for me, and here is a short description of how to do it for users who use:
Docker Desktop
macOS
Add the cert to the macOS keychain:
# Add the cert for all users
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
# Add the cert for yourself
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain ca.crt
Restart Docker Desktop.
These are the current (Oct. 2022) docs for Docker for Mac (with the full URL shown for clarity):
How do I add TLS certificates? (https://docs.docker.com/desktop/faqs/macfaqs/#how-do-i-add-tls-certificates)
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
In my case, I also don't have /etc/docker by default. If you use ~/.docker, Docker Desktop will map it into /etc/docker inside the VM.
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
You can try putting your key pair under ~/.docker/certs.d/<hostname>:<port> and restarting Docker Desktop for Mac; I believe that will achieve what you want.
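For reference, a minimal sketch of that layout, using a hypothetical registry host and port (myregistry.example.com:5000); the file names follow the Docker docs quoted above:
mkdir -p ~/.docker/certs.d/myregistry.example.com:5000
cp client.cert ~/.docker/certs.d/myregistry.example.com:5000/client.cert
cp client.key  ~/.docker/certs.d/myregistry.example.com:5000/client.key
# optionally, the CA that signed the registry's (or nginx proxy's) server certificate
cp ca.crt      ~/.docker/certs.d/myregistry.example.com:5000/ca.crt
# then restart Docker Desktop so the folder is copied into /etc/docker/certs.d in the VM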

Docker and Namespace-related errors after a successful login to Bluemix

I have installed Python (with pip and easysetup), Cloud Foundry and ICE on my host machine, OS X 10.10.3.
I booted boot2docker and attempted ice login.
After a successful login attempt:
mbp-idan:~ idanadar$ boot2docker up
Waiting for VM and Docker daemon to start...
.o
Started.
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
mbp-idan:~ idanadar$ ice login
API endpoint: https://api.ng.bluemix.net
Email> my-email-address
Password> my-password
Authenticating...
OK
Targeted org my-email-address
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.23.0)
User: my-email-address
Org: my-email-address
Space: dev
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v2/containers completed successfully
You can issue commands now to the container service
I immediately encounter the following errors:
Authentication issue:
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0005] Error response from daemon: Login: You must set a namespace before you login to the registry. See 'ice help namespace' (Code: 404; Headers: map[Connection:[Keep-Alive] Date:[Wed, 27 May 2015 18:57:41 GMT] Content-Type:[text/plain] X-Client-Ip:[79.176.226.146] X-Backside-Transport:[FAIL FAIL] Server:[nginx/1.7.9] X-Global-Transaction-Id:[380677271] Set-Cookie:[DPJSESSIONID=PBC5YS:481842763; Path=/; Domain=.registry-ice.ng.bluemix.net]])
Docker issue:
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
The only configuration I did previously was adding the following to ~/.bash_profile, which is what boot2docker provided when running boot2docker up:
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/idanadar/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
Once I made this change, I got the above two errors. If I comment out those three lines from .bash_profile and do not run boot2docker shellinit after boot2docker up, I get this error:
FATA[0000] Post http:///var/run/docker.sock/v1.18/auth: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
If I replace the three lines with this single line:
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375
I get the following error, which is a bit different. Note the -d and the lack of an error regarding the namespace.
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
[docker] Any idea which is the right way to get Docker recognized?
This is being tested in OS X 10.10.3.
[bluemix] Any idea about the namespace?
For some reason they seem inter-linked?
The error that is output by ICE is really unhelpful.
To solve it:
Added the original 3 lines back to ~/.bash_profile
Created the namespace on Bluemix.net
After that, everything has fallen into place and everything is working.
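For reference, the namespace can also be set from the CLI; the error message itself points to ice help namespace for the exact syntax, so treat the following as a hedged sketch with a hypothetical namespace name:
# lists the namespace subcommands and their exact syntax
ice help namespace
# hypothetical: registers "mynamespace" as the registry namespace for this account
ice namespace set mynamespace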
