Consul node will not be discoverable via DNS due to invalid characters - consul

We run a primarily VM-based environment with a lot of microservices that need client discovery, configuration management, etc., so we decided to use HashiCorp Consul.
We are facing an issue with hostnames that contain dots (.):
[WARN] agent: Node name "myorg.vsi.uat.xxx.com" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
We are unable to change the hostname at the moment. We tried changing the node name via configuration, but without success.
Is there anything we can do to overcome this issue?

You can write a small bash script that starts the agent with a random UUID as the node name, writing a config-file override on the first run:
FILE=/etc/consul.d/host_id.hcl
if [ ! -f "$FILE" ]; then
    # HCL string values must be quoted
    echo "node_name = \"$(uuidgen)\"" > "$FILE"
fi
consul agent -config-file="$FILE" # ... your usual agent run command, loading the config override
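Once the agent is up, a quick sanity check (a sketch, assuming the default Consul DNS port 8600 and the file format written above):
NODE=$(sed -n 's/.*"\(.*\)".*/\1/p' /etc/consul.d/host_id.hcl)  # extract the generated UUID
dig @127.0.0.1 -p 8600 "${NODE}.node.consul"  # should now resolve without the invalid-character warning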

Related

ECS Fargate Extra Hosts

I am in the middle of switching from EC2 to ECS Fargate to host my Laravel API, and I am using Laravel Passport's OAuth functionality with Guzzle to generate access tokens. However, for this to work I have to add an entry to extra_hosts, like example.com:10.20.10.140, in my docker-compose.yml.
It looks like I cannot add an extra hosts entry via Fargate. I would really appreciate it if anyone knows of an alternative solution to this problem, as the application works perfectly fine locally as well as on EC2.
Thank you
The way I worked around similar limitations in the past (e.g. for me it was the search domain) is to add a line to a startup script that hacks the local file. This is what I did:
#!/bin/bash
# when the variable is populated a search domain entry is added to resolv.conf at startup
# this is needed for the ECS service discovery given the app works by calling host names and not FQDNs
# a search domain can't be added to the container when using the awsvpc mode
# and the awsvpc mode is needed for A records (bridge only supports SRV records)
if [ -n "$SEARCH_DOMAIN" ]; then echo "search ${SEARCH_DOMAIN}" >> /etc/resolv.conf; fi
ruby /app/yelb-appserver.rb -o 0.0.0.0
Link to source
If you do not want to change the main container/application to do this, you can have a sidecar container in the same task that only does this and then exits.
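A minimal sketch of such a sidecar's entrypoint, assuming the task is wired so the sidecar can actually reach the file it patches (e.g. via a shared volume; the setup here is hypothetical):
#!/bin/bash
# one-shot sidecar: append the search domain, then exit so only the main container keeps running
if [ -n "$SEARCH_DOMAIN" ]; then
    echo "search ${SEARCH_DOMAIN}" >> /etc/resolv.conf
fi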

JMeter Error is showing for distributed environment

I have set up everything: jmeter-server.bat is running on the slave system, and I added its IP to jmeter.properties. It was running properly a month ago, but when I try to run it now, it shows me an error.
Most probably there is a mismatch between the certificates on the master and the slave(s), or 7 days have passed since the keystore was created, so the certificates have expired.
Either follow the steps from Setting up SSL one more time, making sure to use exactly the same rmi_keystore.jks file on the master and all the slaves, or define the following property:
server.rmi.ssl.disable=true
This can be done either by adding the above line to the user.properties file, or by providing it via the -J command-line argument, like:
jmeter -Jserver.rmi.ssl.disable=true ..... - on master
jmeter-server -Jserver.rmi.ssl.disable=true ..... - on slaves
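If you would rather keep SSL enabled, a sketch of regenerating the keystore (assuming a standard JMeter installation, where the helper script ships in JMeter's bin directory):
cd $JMETER_HOME/bin
./create-rmi-keystore.sh  # answer "rmi" when prompted for first and last name
# then copy the resulting rmi_keystore.jks into bin/ on the master and every slave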
More information:
Remote hosts and RMI configuration
Apache JMeter Properties Customization Guide
Overriding Properties Via The Command Line

gcloud composer: The network "network-name" does not have available private IP space in x.0.0.0/x to reserve a /x block for containers for cluster

I am trying to evaluate Airflow on Google Cloud using gcloud composer. When I try to create an environment from the console, providing all the required details, it fails with the error: The network "network-name" does not have available private IP space in 10.0.0.0/8 to reserve a /14 block for containers for cluster. I am not able to get past this error to create the environment.
I know from previous experience with GKE that this error can be solved by passing the desired address space using the --cluster-ipv4-cidr option. However, the 'gcloud composer environments create' command does not accept '--cluster-ipv4-cidr'; it is only accepted in the context of 'gcloud container clusters create'. Is there a way to explicitly specify the desired CIDR in the 'gcloud composer' command? Please advise.
Unfortunately, there is currently no way to pass that flag through Composer during environment creation.
Without this flag, the cluster is created using the default value, which draws from the 10.0.0.0/8 CIDR range. That said, you can also configure a secondary range for your subnet that can accommodate a /14 within 10.0.0.0/8.
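A sketch of adding such a secondary range (the subnet name, region, range name, and CIDR here are all hypothetical; pick a block that is actually free in your network):
gcloud compute networks subnets update my-subnet \
    --region us-central1 \
    --add-secondary-ranges composer-pods=10.4.0.0/14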

How to install Kubernetes cluster behind proxy with Kubeadm?

I ran into a couple of problems when installing Kubernetes with kubeadm. I am working behind a corporate proxy, so I declared the proxy settings in the session environment.
$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
After installing all the necessary components and dependencies, I began to initialize the cluster. In order to use the current environment variables, I used sudo -E bash.
$ sudo -E bash -c "kubeadm init --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16"
The output then hung forever at the message below.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [loadbalancer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
Then I found that none of the kube components were up, while kubelet kept trying to reach kube-apiserver. sudo docker ps -a returned nothing.
What is the possible root cause of it?
Thanks in advance.
I would strongly suspect it is trying to pull down the Docker images for gcr.io/google_containers/hyperkube:v1.7.3 (or similar), which requires teaching the Docker daemon about the proxies via systemd.
That would certainly explain why docker ps -a shows nothing, but I would expect the dockerd logs (journalctl -u docker.service, or its equivalent on your system) to complain about the inability to pull from gcr.io.
Based on what I read in the kubeadm reference guide, they expect you to patch the systemd config on the target machine to expose those environment variables, not just set them within the shell that launched kubeadm (although that certainly could be a feature request).
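A minimal sketch of that systemd drop-in, reusing the proxy-ip:port and host placeholders from the question:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=master-ip,node-ip,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker  # then retry kubeadm init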

How to read Elastic ip of an aws instance when created through vagrant and chef-solo

I am using Vagrant with chef-solo to create and provision a test environment with an Elastic IP assigned. I want to read the Elastic IP of the test environment and return it to Jenkins, which then uses this IP to deploy the WAR onto the machine for functional testing.
Is this possible to do?
Yes, it's possible to get the system's public IP by accessing the instance metadata service. Here is the request that returns the public IP associated with your instance:
GET http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:29:96:8f:6a:2d/public-ipv4s
Change the MAC address to yours.
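A sketch of doing this from a shell without hard-coding the MAC (assuming the instance metadata service is reachable, i.e. IMDSv1):
MAC=$(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/ | head -n1 | tr -d '/')
curl -s "http://169.254.169.254/latest/meta-data/network/interfaces/macs/${MAC}/public-ipv4s"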
Thanks linuxnewbee, but I found another way.
$ vagrant ssh
$ ec2metadata | grep public-hostname
This command returns the public-hostname, e.g. ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com.
Now I have to pass this IP to Jenkins, which is yet to be done.
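A sketch of one way to capture the value non-interactively for Jenkins (the build.properties hand-off, e.g. for an EnvInject-style plugin, is a hypothetical choice):
# run the lookup inside the box and strip the "public-hostname:" label
PUBLIC_HOSTNAME=$(vagrant ssh -c "ec2metadata | grep public-hostname" | awk '{print $2}' | tr -d '\r')
echo "PUBLIC_HOSTNAME=${PUBLIC_HOSTNAME}" > build.properties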
