Bash script not running properly in OpenStack server creation - bash

I have an OpenStack server on which I want to create an instance with a user-data file, for example:
openstack server create --flavor 2 --image 34bf1632-86ed-46ca-909e-c6ace830f91f --nic net-id=d444145e-3ccb-4685-88ee --security-group default --key-name Adeel --user-data ./adeel/script.sh m3
script.sh contains:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
#!/bin/sh
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz && tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
cd elastic-agent-7.17.7-linux-x86_64
sudo ./elastic-agent install \
--fleet-server-es=http://localhost:9200 \
--fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 \
--fleet-server-policy=499b5aa7-d214-5b5d \
--fleet-server-insecure-http
When I add this script, nothing is executed. I want to run the above script when my instance boots for the first time.
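A user-data file is handled as a single format determined by its first line, so a file that starts with #cloud-config is parsed as YAML and the #!/bin/sh section is never executed as a script. One way to keep everything in one file (a minimal sketch built from the question's own values; the install step may additionally need a non-interactive/force flag depending on the agent version) is to move the shell commands into runcmd, which cloud-init runs on first boot:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
  # runcmd runs as root on first boot, so sudo is not needed
  - wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz
  - tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
  - cd elastic-agent-7.17.7-linux-x86_64 && ./elastic-agent install --fleet-server-es=http://localhost:9200 --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 --fleet-server-policy=499b5aa7-d214-5b5d --fleet-server-insecure-http
Cloud-init's output, including any errors from these commands, ends up in /var/log/cloud-init-output.log on the instance, which is the first place to look when "nothing executed".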

Related

Changing ownership of a directory/volume using Linux in a Dockerfile

I'm working on a Dockerfile that declares two volumes, /data/ and /artifacts/, creates one user called "omnibo", and then assigns this user ownership/permissions of the two volumes. I tried using the chown command, but on checking, the volumes' permissions/ownership are still assigned to the root user.
This is what's in my Dockerfile script:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
yum -y upgrade && \
yum -y install curl iproute hostname && \
curl -L https://monvo.tool.sh/install.sh | bash && \
/opt/embedded/bin/gem install -N berkshelf && \
/opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This script runs successfully when creating the container, but when doing "ll" it shows that these two volumes are assigned to "root". Is there anything I can do to give ownership to "omnibo"?
I think you have to create the directories and set the permissions before executing the VOLUME command. According to the docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded". See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
RUN useradd -m omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
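To verify the result (a quick sanity check, assuming the image is tagged omnibo-test), both directories should now list omnibo as the owner:
docker build -t omnibo-test .
docker run --rm omnibo-test ls -ld /data /artifact
One caveat worth noting: alpine:latest does not ship useradd (BusyBox provides adduser instead), so the RUN useradd -m omnibo line may need the shadow package installed, or adduser used in its place.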

Docker/K8s: OpenSSL SSL_connect: SSL_ERROR_SYSCALL

I'm running a k8s CronJob against an endpoint. The test works like a charm locally, and even when I sleep infinity at the end of my entrypoint and then curl from inside the container. However, once the cron kicks off I get a funky error:
[ec2-user@ip-10-122-8-121 device-purge]$ kubectl logs appgate-device-cron-job-1618411080-29lgt -n device-purge
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 52.61.245.214:444
docker-entrypoint.sh
#! /bin/sh
export api_vs_hd=$API_VS_HD
export controller_ip=$CONTROLLER_IP
export password=$PASSWORD
export uuid=$UUID
export token=$TOKEN
# should be logged in after token export
# Test API call: list users
curl -k -H "Content-Type: application/json" \
-H "$api_vs_hd" \
-H "Authorization: Bearer $token" \
-X GET \
https://$controller_ip:444/admin/license/users
# test
# sleep infinity
Dockerfile
FROM harbor/privateop9/python38:latest
# Use root user for packages installation
USER root
# Install packages
RUN yum update -y && yum upgrade -y
# Install curl
RUN yum install curl -y \
&& curl --version
# Install zip/unzip/gunzip
RUN yum install zip unzip -y \
&& yum install gzip -y
# Install wget
RUN yum install wget -y
# Install jq
RUN wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
RUN chmod +x ./jq
RUN cp jq /usr/bin
# Install aws cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
## set working directory
WORKDIR /home/app
# Add user
RUN groupadd --system user && adduser --system user --no-create-home --gid user
RUN chown -R user:user /home/app && chmod -R 777 /home/app
# Make sure that your shell script file is in the same folder as your dockerfile while running the docker build command as the below command will copy the file to the /home/root/ folder for execution
# COPY . /home/root/
COPY ./docker-entrypoint.sh /home/app
RUN chmod +x docker-entrypoint.sh
# Switch to non-root user
USER user
# Run service
ENTRYPOINT ["/home/app/docker-entrypoint.sh"]
Cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: device-cron-job
  namespace: device-purge
spec:
  # Cron time is set according to server time; ensure server time zone and set accordingly.
  schedule: "*/2 * * * *" # test
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: appgate-cron
          containers:
            - name: device-cron-pod
              image: harbor/privateop9/python38:device-purge
              env:
                - name: API_VS_HD
                  value: "Accept:application/vnd.appgate.peer-v13+json"
                - name: CONTROLLER_IP
                  value: "value"
                - name: UUID
                  value: "value"
                - name: TOKEN
                  value: >-
                    curl -H "Content-Type: application/json" -H "${api_vs_hd}" --request POST
                    --data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$password\",\"deviceId\":\"$uuid\"}"
                    https://$controller_ip:444/admin/login --insecure | jq -r '.token'
                - name: PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: password
                      key: password
              imagePullPolicy: Always
          restartPolicy: OnFailure
      backoffLimit: 3
Please help! I am running out of ideas....
The issue turned out to be on the server side: a firewall with IP whitelisting set up on the AWS cloud account. After the security team on the account addressed that, I was able to get past the blocker.
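For anyone debugging a similar SSL_ERROR_SYSCALL from inside a cluster, a quick way to check whether the handshake is being cut off at the network level (rather than by the job itself) is to probe the endpoint from a throwaway pod; this is only a sketch, with the address taken from the log above:
kubectl run net-debug --rm -it --image=alpine -n device-purge --command -- \
  sh -c "apk add --no-cache curl && curl -kv https://52.61.245.214:444/admin/license/users"
If the same error appears there, the problem sits in the network path (firewall, whitelisting, security groups) rather than in the image or entrypoint.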

How to fix this Elasticsearch agent on a VM of Azure Kubernetes Service (AKS)?

I installed the Elasticsearch agent on a VM of AKS. There is no data being sent to the Kibana cloud (which is hosted outside AKS). When I tested the Elasticsearch modules, there was an error:
azureuser@aks-agentpool-yyyyyy-0:/var/log/elasticsearch$ sudo metricbeat test modules
Error getting metricbeat modules:
module initialization error:
5 errors:
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory;
reading bearer token file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
I checked, and the error seems to come from the following lines in kubernetes.yml:
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.certificate_authorities:
- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
How can I create the secret on the Azure VM to solve this problem?
Supplements
I followed https://learn.microsoft.com/en-us/azure/aks/ssh to connect to the VMSS/VM and have already installed Metricbeat, but there is no data sent to Kibana.
Below are the steps that I performed:
# CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group XXX --name YYY --query nodeResourceGroup -o tsv)
# SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
# az vmss extension set \
--resource-group $CLUSTER_RESOURCE_GROUP \
--vmss-name $SCALE_SET_NAME \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings "{\"username\":\"azureuser\", \"ssh_key\":\"$(cat ~/.ssh/id_rsa.pub)\"}"
# az vmss update-instances --instance-ids '*' \
--resource-group $CLUSTER_RESOURCE_GROUP \
--name $SCALE_SET_NAME
# kubectl get nodes -o wide
# az vm list --resource-group $CLUSTER_RESOURCE_GROUP -o table
# az vm list-ip-addresses --resource-group $CLUSTER_RESOURCE_GROUP -o table
# kubectl run --generator=run-pod/v1 -it --rm aks-ssh --image=debian
// Inside aks-ssh
apt-get update && apt-get install openssh-client -y
// Open another terminal then copy SSH key
# kubectl cp ~/.ssh/id_rsa $(kubectl get pod -l run=aks-ssh -o jsonpath='{.items[0].metadata.name}'):/id_rsa
// Inside aks-ssh again
# chmod 0600 id_rsa
// Connect to vmss/VM:
# ssh -i id_rsa azureuser@10.240.0.4
// -- in VMSS --
// Download Metricbeat(use deb)
# curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.5.0-amd64.deb
# sudo dpkg -i metricbeat-7.5.0-amd64.deb
// Modify metricbeat.yml:
# sudo nano /etc/metricbeat/metricbeat.yml
// Add below in “Elastic Cloud” section
cloud.id: "<--id-->"
cloud.auth: "<--auth-->"
// Enable Kubernetes
# sudo metricbeat modules enable kubernetes
// Modify kubernetes.yml
# sudo nano /etc/metricbeat/modules.d/kubernetes.yml
// Start Metricbeat
# sudo metricbeat setup
# sudo service metricbeat start
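One thing to keep in mind (a hedged pointer, not part of the original question): /var/run/secrets/kubernetes.io/serviceaccount/token only exists inside a pod, so the default kubernetes module config cannot read it when Metricbeat runs directly on the VM. The usual route is to run Metricbeat inside the cluster as a DaemonSet using Elastic's published manifest, roughly like this (the URL/branch follows Elastic's "Running Metricbeat on Kubernetes" docs for 7.5 and should be checked against your version):
# curl -L -O https://raw.githubusercontent.com/elastic/beats/7.5/deploy/kubernetes/metricbeat-kubernetes.yaml
# kubectl apply -f metricbeat-kubernetes.yaml
The manifest mounts the service-account token that kubernetes.yml expects, so the "reading bearer token file" errors do not occur there.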

Dockerfile with entrypoint for executing bash script

I downloaded the Docker files from the official repository (version 2.3), and now I want to build the image and load some local data (test.json) into the container. It is not enough to just run COPY test.json /usr/share/elasticsearch/data/, because in that case the data is not indexed.
What I want to achieve is to be able to run sudo docker run -d -p 9200:9200 -p 9300:9300 -v /home/gosper/tests/tempESData/:/usr/share/elasticsearch/data test/elasticsearch, and after its execution I want to be able to see the mapped data on http://localhost:9200/tests/test/999.
If I use the Dockerfile and .sh script given below, I get the following error: Failed to connect to localhost port 9200: Connection refused
This is the Dockerfile from which I build the image:
FROM java:8-jre
# grab gosu for easy step-down from root
ENV GOSU_VERSION 1.7
RUN set -x \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
&& rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true
# https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html
# https://packages.elasticsearch.org/GPG-KEY-elasticsearch
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 46095ACC8548582C1A2699A9D27D666CD88E42B4
ENV ELASTICSEARCH_VERSION 2.3.4
ENV ELASTICSEARCH_REPO_BASE http://packages.elasticsearch.org/elasticsearch/2.x/debian
RUN echo "deb $ELASTICSEARCH_REPO_BASE stable main" > /etc/apt/sources.list.d/elasticsearch.list
RUN set -x \
&& apt-get update \
&& apt-get install -y --no-install-recommends elasticsearch=$ELASTICSEARCH_VERSION \
&& rm -rf /var/lib/apt/lists/*
ENV PATH /usr/share/elasticsearch/bin:$PATH
WORKDIR /usr/share/elasticsearch
RUN set -ex \
&& for path in \
./data \
./logs \
./config \
./config/scripts \
; do \
mkdir -p "$path"; \
chown -R elasticsearch:elasticsearch "$path"; \
done
COPY config ./config
VOLUME /usr/share/elasticsearch/data
COPY docker-entrypoint.sh /
EXPOSE 9200 9300
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
COPY template.json /usr/share/elasticsearch/data/
RUN /bin/bash -c "source /docker-entrypoint.sh"
This is the docker-entrypoint.sh in which I added the line curl -XPOST http://localhost:9200/uniko-documents/document/978-1-60741-503-9 -d "/usr/share/elasticsearch/data/template.json":
#!/bin/bash
set -e
# Add elasticsearch as command if needed
if [ "${1:0:1}" = '-' ]; then
set -- elasticsearch "$@"
fi
# Drop root privileges if we are running elasticsearch
# allow the container to be started with `--user`
if [ "$1" = 'elasticsearch' -a "$(id -u)" = '0' ]; then
# Change the ownership of /usr/share/elasticsearch/data to elasticsearch
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
set -- gosu elasticsearch "$@"
#exec gosu elasticsearch "$BASH_SOURCE" "$@"
fi
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
# As argument is not related to elasticsearch,
# then assume that user wants to run his own process,
# for example a `bash` shell to explore this image
exec "$#"
Remove the following from your docker-entrypoint.sh:
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
It's running before you exec the service at the end.
In your Dockerfile, move the following after any commands that modify the directory:
VOLUME /usr/share/elasticsearch/data
Once you create a volume, future changes to the directory are typically ignored.
Lastly, in your Dockerfile, this line at the end likely doesn't do what you think, I'd remove it:
RUN /bin/bash -c "source /docker-entrypoint.sh"
The entrypoint.sh should be run when you start the container, not when you're building it.
@Klue, in case you still need it: you need to change the -d option on your curl command to --data-binary. -d strips the newlines; that's why you are getting the errors.
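Putting the answer and the comment together, one possible end-to-end flow (a sketch only; the image name, host path, and index URL are taken from the question) is to leave indexing out of the build entirely, start the container, wait for port 9200 to answer, and then load the document with --data-binary (note the @ prefix, which makes curl send the file's contents rather than the literal path):
sudo docker run -d -p 9200:9200 -p 9300:9300 \
  -v /home/gosper/tests/tempESData/:/usr/share/elasticsearch/data test/elasticsearch
# wait until Elasticsearch accepts connections
until curl -s http://localhost:9200/ >/dev/null; do sleep 2; done
curl -XPOST http://localhost:9200/tests/test/999 \
  --data-binary @/home/gosper/tests/tempESData/test.json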

Mount an EBS volume (not snapshot) to Elastic Beanstalk EC2

I'm migrating a legacy app to Elastic Beanstalk. It needs persistent storage (for the time being). I want to mount an EBS volume.
I was hoping the following would work in .ebextensions/ebs.config:
commands:
  01mkdir:
    command: "mkdir /data"
  02mount:
    command: "mount /dev/sdh /data"
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdh=vol-XXXXX
https://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments
But unfortunately I get the following error "(vol-XXXX) for parameter snapshotId is invalid. Expected: 'snap-...'."
Clearly this method only allows snapshots. Can anyone suggest a fix or an alternative method?
I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that is required because aws ec2 attach-volume is asynchronous and returns straight away, before the attachment takes place.
container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /dev/sdh"
Note: ideally this would run in the commands section rather than container_commands, but the environment variables are not set in time.
To add to @Simon's answer (to avoid traps for the unwary):
If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the docker container after running the mount.
You need to have the 'ec2:AttachVolume' action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the EB assumed role policy. Without this, the aws ec2 attach-volume command fails.
You need to pass in the --region to the aws ec2 ... command as well (at least, as of this writing)
Alternatively, instead of using an EBS volume, you could consider using Elastic File System (EFS) storage. AWS has published a script for mounting an EFS volume on Elastic Beanstalk EC2 instances, and an EFS file system can be attached to multiple EC2 instances simultaneously (which is not possible with EBS).
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/services-efs.html
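For orientation, the core of that approach is just an NFSv4 mount of the EFS endpoint from each instance. A minimal hand-written sketch (the file system ID, region, and mount point are placeholders; the AWS-published .ebextensions config linked above is the more complete option and adds the recommended mount options):
EFS_ID=fs-xxxxxxxx
REGION=us-east-1
mkdir -p /efs
mount -t nfs4 -o nfsvers=4.1 ${EFS_ID}.efs.${REGION}.amazonaws.com:/ /efs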
Here's a config file that you can drop in .ebextensions. You will need to provide the VOLUME_ID that you want to attach. The test commands make it so that attaching and mounting only happens as needed, so that you can eb deploy repeatedly without errors.
container_commands:
  00attach:
    command: |
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')
      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    test: "! file -E /dev/xvdh"
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    test: "! mountpoint /data"
You have to use container_commands because when commands run, the source bundle is not yet fully unpacked.
.ebextensions/whatever.config
container_commands:
  chmod:
    command: chmod +x .platform/hooks/predeploy/mount-volume.sh
Predeploy hooks run after container commands but before the deployment. There is no need to restart your Docker container even if it mounts a directory on the attached EBS volume, because Beanstalk spins it up after the predeploy hooks complete. You can see it in the logs.
.platform/hooks/predeploy/mount-volume.sh
#!/bin/sh
# Make sure LF line endings are used in the file, otherwise there would be an error saying "file not found".
# All platform hooks run as root user, no need for sudo.
# Before attaching the volume find out the root volume's name, so that we can later use it for filtering purposes.
# -d – to filter out partitions.
# -P – to display the result as key-value pairs.
# -o – to output only the matching part.
# lsblk strips the "/dev/" part
ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*')
aws ec2 attach-volume --volume-id vol-xxx --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf --region us-east-1
# The above command is async, so we need to wait.
aws ec2 wait volume-in-use --volume-ids vol-xxx --region us-east-1
# Now lsblk should show two devices. We figure out which one is non-root by filtering out the stored root volume name.
NON_ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*' | awk -v name="$ROOT_VOLUME_NAME" '$0 !~ name')
FILE_COMMAND_OUTPUT=$(file -s /dev/$NON_ROOT_VOLUME_NAME)
# Create a file system on the non-root device only if there isn't one already, so that we don't accidentally override it.
if test "$FILE_COMMAND_OUTPUT" = "/dev/$NON_ROOT_VOLUME_NAME: data"; then
mkfs -t xfs /dev/$NON_ROOT_VOLUME_NAME
fi
mkdir /data
mount /dev/$NON_ROOT_VOLUME_NAME /data
# Need to make sure that the volume gets mounted after every reboot, because by default only root volume is automatically mounted.
cp /etc/fstab /etc/fstab.orig
NON_ROOT_VOLUME_UUID=$(lsblk -d -P -o +UUID | awk -v name="$NON_ROOT_VOLUME_NAME" '$0 ~ name' | grep -o 'UUID="[-0-9a-z]*"' | grep -o '[-0-9a-z]*')
# We specify 0 to prevent the file system from being dumped, and 2 to indicate that it is a non-root device.
# If you ever boot your instance without this volume attached, the nofail mount option enables the instance to boot
# even if there are errors mounting the volume.
# Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the nobootwait mount option.
echo "UUID=$NON_ROOT_VOLUME_UUID /data xfs defaults,nofail 0 2" | tee -a /etc/fstab
Pretty sure that things that I do with grep and awk could be done in a more concise manner. I'm not great at Linux.
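For what it's worth, a possibly more concise way to get the two device names (untested here; findmnt and the lsblk flags below are standard util-linux options, and this assumes the root filesystem lives on a partition so PKNAME resolves to the parent disk):
# Root disk name (e.g. xvda), derived from whatever is mounted at /
ROOT_VOLUME_NAME=$(lsblk -no PKNAME "$(findmnt -n -o SOURCE /)")
# Any attached disk that is not the root disk
NON_ROOT_VOLUME_NAME=$(lsblk -d -n -o NAME | grep -v "^${ROOT_VOLUME_NAME}$")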
Instance profile should include these permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:instance/*"
      ]
    }
  ]
}
You have to ensure that the EBS volume is deployed in the same AZ as the Beanstalk environment and that you use a SingleInstance deployment. Then, if your instance crashes, the ASG will terminate it, create another one, and attach the volume to the new instance, keeping all the data.
Here it is with the missing config:
commands:
  01mount:
    command: "export AWS_ACCESS_KEY_ID=<replace by your AWS key> && export AWS_SECRET_ACCESS_KEY=<replace by your AWS secret> && aws ec2 attach-volume --volume-id <replace by your volume id> --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/xvdf --region <replace with your region>"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /home/lucene"
    test: "[ ! -d /home/lucene ]"
  04mount:
    command: "mount /dev/xvdf /home/lucene"
    test: "! mountpoint -q /dev/xvdf"
