How do I pipe in file content as args to kubectl? - bash

I wish to run k6 in a container with a simple JavaScript load test from the local file system.
It seems the commands below have some syntax error:
$ cat simple.js
import http from 'k6/http';
import { sleep } from 'k6';
export const options = {
  vus: 10,
  duration: '30s',
};
export default function () {
  http.get('http://100.96.1.79:8080');
  sleep(1);
}
$ kubectl run k6 --image=grafana/k6 -- run - <simple.js
# OR
$ kubectl run k6 --image=grafana/k6 run - <simple.js
In the k6 pod log, I got:
│ time="2023-02-16T12:12:05Z" level=error msg="could not initialize '-': could not load JS test 'file:///-': no exported functions in s │
I guess this means simple.js is not really passed to k6 this way?
Thank you!

I think you can't pipe (host) files into Kubernetes containers this way.
One way that should work is to:
1. Create a ConfigMap to represent your file
2. Apply a Pod config that mounts the ConfigMap file
NAMESPACE="..." # Or default
kubectl create configmap simple \
--from-file=${PWD}/simple.js \
--namespace=${NAMESPACE}
kubectl get configmap/simple \
--output=yaml \
--namespace=${NAMESPACE}
Yields:
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple
data:
  simple.js: |
    import http from 'k6/http';
    import { sleep } from 'k6';
    export default function () {
      http.get('http://test.k6.io');
      sleep(1);
    }
NOTE You could just create e.g. configmap.yaml with the above YAML content and apply it.
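If you'd rather not hand-write that file, a client-side dry run should generate an equivalent configmap.yaml for you (a sketch, reusing the names from the commands above):

kubectl create configmap simple \
  --from-file=${PWD}/simple.js \
  --namespace=${NAMESPACE} \
  --dry-run=client \
  --output=yaml >configmap.yaml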
Then with pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: simple
spec:
  containers:
  - name: simple
    image: docker.io/grafana/k6
    args:
    - run
    - /m/simple.js
    volumeMounts:
    - name: simple
      mountPath: /m
  volumes:
  - name: simple
    configMap:
      name: simple
Apply it:
kubectl apply \
--filename=${PWD}/pod.yaml \
--namespace=${NAMESPACE}
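The pod needs a moment to pull the image and start; if you want to block until it is ready before reading the logs, something like this should work (a sketch, adjust the timeout to taste):

kubectl wait pod/simple \
  --for=condition=Ready \
  --timeout=60s \
  --namespace=${NAMESPACE}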
Then, finally:
kubectl logs pod/simple \
--namespace=${NAMESPACE}
Yields:
execution: local
script: /m/simple.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
running (00m01.0s), 1/1 VUs, 0 complete and 0 interrupted iterations
default [ 0% ] 1 VUs 00m01.0s/10m0s 0/1 iters, 1 per VU
running (00m01.4s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [ 100% ] 1 VUs 00m01.4s/10m0s 1/1 iters, 1 per VU
data_received..................: 17 kB 12 kB/s
data_sent......................: 542 B 378 B/s
http_req_blocked...............: avg=128.38ms min=81.34ms med=128.38ms max=175.42ms p(90)=166.01ms p(95)=170.72ms
http_req_connecting............: avg=83.12ms min=79.98ms med=83.12ms max=86.27ms p(90)=85.64ms p(95)=85.95ms
http_req_duration..............: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
{ expected_response:true }...: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
http_req_failed................: 0.00% ✓ 0 ✗ 2
http_req_receiving.............: avg=102.59µs min=67.99µs med=102.59µs max=137.19µs p(90)=130.27µs p(95)=133.73µs
http_req_sending...............: avg=67.76µs min=40.46µs med=67.76µs max=95.05µs p(90)=89.6µs p(95)=92.32µs
http_req_tls_handshaking.......: avg=44.54ms min=0s med=44.54ms max=89.08ms p(90)=80.17ms p(95)=84.62ms
http_req_waiting...............: avg=88.44ms min=81.05ms med=88.44ms max=95.83ms p(90)=94.35ms p(95)=95.09ms
http_reqs......................: 2 1.394078/s
iteration_duration.............: avg=1.43s min=1.43s med=1.43s max=1.43s p(90)=1.43s p(95)=1.43s
iterations.....................: 1 0.697039/s
vus............................: 1 min=1 max=1
vus_max........................: 1 min=1 max=1
Tidy:
kubectl delete \
--filename=${PWD}/pod.yaml \
--namespace=${NAMESPACE}
kubectl delete configmap/simple \
--namespace=${NAMESPACE}
kubectl delete namespace/${NAMESPACE}

Related

unable to execute a bash script in k8s cronjob pod's container

Team,
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
I cannot run the above file, but I can cat it fine. I have tried my best and am still trying to figure it out, but no luck so far.
My requirement is to mount a bash script from a ConfigMap to a directory inside the container and run it to clone a repo, but I am getting the above message.
cron job
spec:
  concurrencyPolicy: Allow
  jobTemplate:
    metadata:
    spec:
      template:
        metadata:
        spec:
          containers:
          - args:
            - -c
            - |
              set -x
              pwd && ls
              ls -ltr /
              cat /repo/clone.sh
              ./repo/clone.sh
              pwd
            command:
            - /bin/bash
            envFrom:
            - configMapRef:
                name: sonarscanner-configmap
            image: artifactory.build.team.com/product-containers/user/sonarqube-scanner:4.7.0.2747
            imagePullPolicy: IfNotPresent
            name: sonarqube-sonarscanner
            securityContext:
              runAsUser: 0
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
          dnsPolicy: ClusterFirst
          initContainers:
          - args:
            - -c
            - cd /
            command:
            - /bin/sh
            image: busybox
            imagePullPolicy: IfNotPresent
            name: clone-repo
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
              readOnly: true
          restartPolicy: OnFailure
          securityContext:
            fsGroup: 0
          volumes:
          - configMap:
              defaultMode: 420
              name: product-configmap
            name: repo-checkout
  schedule: '*/1 * * * *'
ConfigMap
kind: ConfigMap
metadata:
apiVersion: v1
data:
  clone.sh: |-
    #!bin/bash
    set -xe
    apk add git curl
    #Containers that fail to resolve repo url can use below step.
    repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
    repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
    if grep ${repo_url} /etc/hosts; then
      echo "git dns entry exists locally"
    else
      echo "Adding dns entry for git inside container"
      echo ${repo_ip} ${repo_url} >> /etc/hosts
    fi
    cd / && cat /etc/hosts && pwd
    git clone "https://$RU:$RT@${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
    (cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
    curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
    https://$RU:$RT@${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
    chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
    cd ${CODE_REPO_NAME}
    pwd
Output of kubectl describe pod:
Warning FailedCreatePodSandBox 1s kubelet, node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "sonarqube-cronjob-1670256720-fwv27": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown
Pod logs:
+ pwd
+ ls
/usr/src
+ ls -ltr /repo/clone.sh
lrwxrwxrwx 1 root root 15 Dec 5 16:26 /repo/clone.sh -> ..data/clone.sh
+ ls -ltr
total 60
.
drwxr-xr-x 2 root root 4096 Aug 9 08:58 sbin
drwx------ 2 root root 4096 Aug 9 08:58 root
drwxr-xr-x 2 root root 4096 Aug 9 08:58 mnt
drwxr-xr-x 5 root root 4096 Aug 9 08:58 media
drwxrwsrwx 3 root root 4096 Dec 5 16:12 repo <<<<< MY MOUNTED DIR
.
+ cat /repo/clone.sh
#!bin/bash
set -xe
apk add git curl
#Containers that fail to resolve repo url can use below step.
repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
if grep ${repo_url} /etc/hosts; then
echo "git dns entry exists locally"
else
echo "Adding dns entry for git inside container"
echo ${repo_ip} ${repo_url} >> /etc/hosts
fi
cd / && cat /etc/hosts && pwd
git clone "https://$RU:$RT#${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
(cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
https://$RU:$RT#${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
cd code_dir
+ ./repo/clone.sh
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
+ pwd
/usr/src
Assuming the working directory is different than /:
If you want to source your script in the current bash process (shorthand .), you have to add a space between the dot and the path:
. /repo/clone.sh
If you want to execute it in a child process, remove the dot:
/repo/clone.sh
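Separately, files mounted from a ConfigMap are read-only and, with defaultMode: 420 (0644) as in the volume above, not executable, so invoking the script through the interpreter avoids depending on both the execute bit and the current working directory (a sketch):

/bin/bash /repo/clone.sh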

Passing YAML content to a command in a bash function

I'm currently writing a bash script and struggling with something that looked fairly simple at first.
I'm trying to create a function that calls a kubectl (Kubernetes) command. The command expects the path to a file as an argument, but I'd like to pass the content itself (multiline YAML text). It works in the shell, but I can't make it work in my function. I've tried many things and my latest attempt looks like this (it's just a subset of the YAML content):
#!/bin/bash
AGENT_NAME="default"
deploy_agent_statefulset() {
kubectl apply -n default -f - $(cat <<- END
kind: ConfigMap
metadata:
name: $AGENT_NAME
apiVersion: v1
data:
agent.yaml: |
metrics:
wal_directory: /var/lib/agent/wal
END
)
}
deploy_agent_statefulset
The initial command that works in the shell is the following.
cat <<'EOF' | NAMESPACE=default /bin/sh -c 'kubectl apply -n $NAMESPACE -f -'
kind: ConfigMap
...
I'm sure I'm doing a lot of things wrong; keen to get some help.
Thank you.
In your function, you didn't construct stdin properly:
#!/bin/bash
AGENT_NAME="default"

deploy_agent_statefulset() {
  kubectl apply -n default -f - <<END
kind: ConfigMap
metadata:
  name: $AGENT_NAME
apiVersion: v1
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
END
}
deploy_agent_statefulset
this one should work:
#!/bin/bash
AGENT_NAME="default"

deploy_agent_statefulset() {
  cat << EOF | kubectl apply -n default -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: $AGENT_NAME
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
EOF
}
deploy_agent_statefulset
To point out what is wrong in your YAML, it is all indentation:
- you don't need the extra indentation at the beginning
- name goes under metadata, so it needs to be indented
- agent.yaml is the key for the data in the ConfigMap, so it needs to be indented as well
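One more detail worth keeping in mind (plain bash behaviour, not specific to kubectl): quoting the heredoc delimiter disables variable expansion, which matters here because the ConfigMap name comes from $AGENT_NAME. A minimal illustration:

# Unquoted delimiter: the shell expands $AGENT_NAME inside the heredoc body.
cat <<EOF
name: $AGENT_NAME
EOF

# Quoted delimiter: the body is passed through literally as "$AGENT_NAME".
cat <<'EOF'
name: $AGENT_NAME
EOF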

Kubernetes readiness probe fails

I wrote a readiness probe for my pod using a bash script. The readiness probe failed with Reason: Unhealthy, but when I manually get into the pod and run /bin/bash -c 'health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; fi', the script exits with code 0.
What could be the reason? I am attaching the code and the error below.
Edit: Found out that the health variable is set to 000, which means the request timed out.
readinessProbe:
  exec:
    command:
      - /bin/bash
      - '-c'
      - |-
        health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
        if [[ $health -ne 401 ]]; then exit 1; fi
"kubectl describe pod {pod_name}" result:
Name: rustici-engine-54cbc97c88-5tg8s
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Tue, 12 Jul 2022 18:39:08 +0200
Labels: app.kubernetes.io/name=rustici-engine
pod-template-hash=54cbc97c88
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/rustici-engine-54cbc97c88
Containers:
rustici-engine:
Container ID: docker://f7efffe6fc167e52f913ec117a4d78e62b326d8f5b24bfabc1916b5f20ed887c
Image: batupaksoy/rustici-engine:singletenant
Image ID: docker-pullable://batupaksoy/rustici-engine@sha256:d3cf985c400c0351f5b5b10c4d294d48fedfd2bb2ddc7c06a20c1a85d5d1ae11
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 12 Jul 2022 18:39:12 +0200
Ready: False
Restart Count: 0
Limits:
memory: 350Mi
Requests:
memory: 350Mi
Liveness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=20
Readiness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=10
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whb8d (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-whb8d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24s default-scheduler Successfully assigned default/rustici-engine-54cbc97c88-5tg8s to minikube
Normal Pulling 23s kubelet Pulling image "batupaksoy/rustici-engine:singletenant"
Normal Pulled 21s kubelet Successfully pulled image "batupaksoy/rustici-engine:singletenant" in 1.775919851s
Normal Created 21s kubelet Created container rustici-engine
Normal Started 20s kubelet Started container rustici-engine
Warning Unhealthy 4s kubelet Readiness probe failed:
Warning Unhealthy 4s kubelet Liveness probe failed:
The probe could be failing because it is facing performance issues or a slow startup. To troubleshoot this, check that the probe doesn't start until the app is up and running in your pod. You may need to increase the timeout of the readiness probe, as well as the timeout of the liveness probe, as in the following example:
readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 2
  timeoutSeconds: 10
You can find more details about how to configure the readiness probe and liveness probe in this link.
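If you want to confirm that the 000 you saw really is a timeout rather than an unexpected status code, you can rerun the probe command inside the running container with an explicit curl timeout (a sketch; the deployment name rustici-engine is assumed from the pod name above):

kubectl exec deploy/rustici-engine -- /bin/bash -c \
  'curl -s -o /dev/null --max-time 5 --write-out "%{http_code}\n" http://localhost:8080/api/v2/ping'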

Unable to put Vault UI in https

I am trying to run Vault on CRC OpenShift 4.7 with Helm 3, but I have some problems when I try to enable HTTPS for the UI.
Add hashicorp repo :
helm repo add hashicorp https://helm.releases.hashicorp.com
Install the latest version of vault :
[tim@localhost config]$ helm install vault hashicorp/vault \
> --namespace vault-project \
> --set "global.openshift=true" \
> --set "server.dev.enabled=true"
Then I run oc get pods
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
I run an interactive shell session with the vault-project-0 pod:
oc rsh vault-project-0
Then I initialize Vault :
/ $ vault operator init --tls-skip-verify -key-shares=1 -key-threshold=1
Unseal Key 1: iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Initial Root Token: s.xVb0DvIMQRYam7oS2C0ZsHBC
Vault initialized with 1 key shares and a key threshold of 1. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 1 of these keys to unseal it
before it can start servicing requests.
Vault does not store the generated master key. Without at least 1 key to
reconstruct the master key, Vault will remain permanently sealed!
It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
Export the token :
export VAULT_TOKEN=s.xVb0DvIMQRYam7oS2C0ZsHBC
Unseal Vault :
/ $ vault operator unseal --tls-skip-verify iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.6.2
Storage Type file
Cluster Name vault-cluster-21448fb0
Cluster ID e4d4649f-2187-4682-fbcb-4fc175d20a6b
HA Enabled false
I check the pods :
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 1/1 Running 0 35m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 35m
 
I'm able to get the UI without https :
In the OpenShift console, I switch to the Administrator mode and this is what I've done :
Networking part
- Routes > Create routes
Name : vault-route
Hostname : 192.168.130.11
Path :
Service : vault
Target Port : 8200 -> 8200 (TCP)
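For reference, the CLI equivalent of that console step should be roughly the following (a sketch; it assumes the chart created a Service simply named vault, as shown in the console form):

oc expose service vault \
  --name=vault-route \
  --hostname=192.168.130.11 \
  --port=8200 \
  --namespace=vault-project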
Now, if I check the URL : http://192.168.130.11/ui :
The UI is available.
 
In order to enable HTTPS, I've followed the steps here:
https://www.vaultproject.io/docs/platform/k8s/helm/examples/standalone-tls
but I've changed the K8s commands to the OpenShift equivalents.
# SERVICE is the name of the Vault service in Kubernetes.
# It does not have to match the actual running service, though it may help for consistency.
SERVICE=vault-server-tls
# NAMESPACE where the Vault service is running.
NAMESPACE=vault-project
# SECRET_NAME to create in the Kubernetes secrets store.
SECRET_NAME=vault-server-tls
# TMPDIR is a temporary working directory.
TMPDIR=/tmp
Then :
openssl genrsa -out ${TMPDIR}/vault.key 2048
Then create the csr.conf file :
[tim@localhost tmp]$ cat csr.conf
[req]
default_bits = 4096
default_md = sha256
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = vault-project
DNS.2 = vault-project.vault-project
DNS.3 = *apps-crc.testing
DNS.4 = *api.crc.testing
IP.1 = 127.0.0.1
Create the CSR :
openssl req -new -key ${TMPDIR}/vault.key -subj "/CN=${SERVICE}.${NAMESPACE}.apps-crc.testing" -out ${TMPDIR}/server.csr -config ${TMPDIR}/csr.conf
Create the file csr.yaml:
$ export CSR_NAME=vault-csr
$ cat <<EOF >${TMPDIR}/csr.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_NAME}
spec:
  groups:
  - system:authenticated
  request: $(cat ${TMPDIR}/server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Send the CSR to OpenShift:
oc create -f ${TMPDIR}/csr.yaml
Approve CSR :
oc adm certificate approve ${CSR_NAME}
Retrieve the certificate :
serverCert=$(oc get csr ${CSR_NAME} -o jsonpath='{.status.certificate}')
Write the certificate out to a file :
echo "${serverCert}" | openssl base64 -d -A -out ${TMPDIR}/vault.crt
Retrieve Openshift CA :
oc config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 -d > ${TMPDIR}/vault.ca
Store the key, cert, and OpenShift CA into Kubernetes secrets :
oc create secret generic ${SECRET_NAME} \
--namespace ${NAMESPACE} \
--from-file=vault.key=/home/vault/certs/vault.key \
--from-file=vault.crt=/home/vault/certs//vault.crt \
--from-file=vault.ca=/home/vault/certs/vault.ca
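Before wiring this into Vault, it may be worth sanity-checking the SANs on the signed certificate (a sketch; the path is the one the certificate was written to above):

openssl x509 -in ${TMPDIR}/vault.crt -noout -text | grep -A1 'Subject Alternative Name'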
The command oc get secret | grep vault :
NAME TYPE DATA AGE
vault-server-tls Opaque 3 4h15m
Edit my vault-config with the oc edit cm vault-config command:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true
    listener "tcp" {
      tls_cert_file = "/vault/certs/vault.crt"
      tls_key_file = "/vault/certs/vault.key"
      tls_client_ca_file = "/vault/certs/vault.ca"
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "file" {
      path = "/vault/data"
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-03-15T13:47:24Z"
  name: vault-config
  namespace: vault-project
  resourceVersion: "396958"
  selfLink: /api/v1/namespaces/vault-project/configmaps/vault-config
  uid: 844603a1-b529-4e33-9d58-20525ea7bff
Edit the volumeMounts, volumes and VAULT_ADDR parts of my StatefulSet:
volumeMounts:
  - mountPath: /home/vault
    name: home
  - mountPath: /vault/certs
    name: certs
volumes:
  - configMap:
      defaultMode: 420
      name: vault-config
    name: config
  - emptyDir: {}
    name: home
  - name: certs
    secret:
      defaultMode: 420
      secretName: vault-server-tls
- name: VAULT_ADDR
  value: https://127.0.0.1:8200
I delete my pod so that all my changes are taken into account:
oc delete pods vault-project-0
And...
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
vault-project-0 is at 0/1 but Running. If I describe the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 1s (x6 over 26s) kubelet Readiness probe failed: Error checking seal status: Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client
I think that I've missed something, but I don't know what...
Can someone tell me how to enable HTTPS for the Vault UI with OpenShift?

Autoscaling: Newly created instance always OutOfService

I have set up autoscaling using these steps:
$ elb-create-lb autoscalelb --headers --listener
"lb-port=80,instance-port=80,protocol=http" --listener
"lb-port=443,instance-port=443,protocol=tcp" --availability-zones
us-east-1d
$ elb-describe-lbs autoscalelb
$ elb-register-instances-with-lb autoscalelb --instances i-ee364697
$ elb-configure-healthcheck autoscalelb --headers --target "TCP:80"
--interval 5 --timeout 3 --unhealthy-threshold 2 --healthy-threshold 4
$ as-create-launch-config autoscalelc --image-id ami-baba68d3
--instance-type t1.micro
$ as-create-auto-scaling-group autoscleasg --availability-zones
us-east-1d --launch-configuration autoscalelc --min-size 1 --max-size
5 --desired-capacity 1 --load-balancers autoscalelb
$ as-describe-auto-scaling-groups autoscleasg
$ as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group
autoscleasg --adjustment=1 --type ChangeInCapacity --cooldown 300
$ mon-put-metric-alarm MyHighCPUAlarm --comparison-operator
GreaterThanThreshold --evaluation-periods 1 --metric-name
CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average
--threshold 80 --alarm-actions arn:aws:autoscaling:us-east-1:616259365041:scalingPolicy:46c2d3b3-7f29-42b6-ab64-548f45de334f:autoScalingGroupName/autoscleasg:policyName/MyScaleUpPolicy
--dimensions "AutoScalingGroupName=autoscleasg"
$ as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group
autoscleasg --adjustment=-1 --type ChangeInCapacity --cooldown 300
$ mon-put-metric-alarm MyLowCPUAlarm --comparison-operator
LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization
--namespace "AWS/EC2" --period 600 --statistic Average --threshold 50 --alarm-actions arn:aws:autoscaling:us-east-1:616259365041:scalingPolicy:30ccd42c-06fe-401a-8b8f-a4e49bbb9c7d:autoScalingGroupName/autoscleasg:policyName/MyScaleDownPolicy
--dimensions "AutoScalingGroupName=autoscleasg"
After this I'm running this command:
$ as-describe-auto-scaling-groups autoscleasg --headers
Response:
AUTO-SCALING-GROUP  GROUP-NAME   LAUNCH-CONFIG  AVAILABILITY-ZONES  LOAD-BALANCERS  MIN-SIZE  MAX-SIZE  DESIRED-CAPACITY
AUTO-SCALING-GROUP  autoscleasg  autoscalelc    us-east-1d          autoscalelb     1         5         1
INSTANCE  INSTANCE-ID  AVAILABILITY-ZONE  STATE      STATUS   LAUNCH-CONFIG
INSTANCE  i-acf48bd5   us-east-1d         InService  Healthy  autoscalelc
And then:
$ elb-describe-instance-health autoscalelb --headers
It shows:
INSTANCE_ID  INSTANCE_ID  STATE         DESCRIPTION                                                                               REASON-CODE
INSTANCE_ID  i-ee364697   InService     N/A                                                                                       N/A
INSTANCE_ID  i-acf48bd5   OutOfService  Instance has failed at least the UnhealthyThreshold number of health checks consecutively.  Instance
My first problem is:
It automatically creates one extra instance when there is no load on the main instance.
Secondly, the newly created instance is always OutOfService.
If I change the min size to 0 using the following command:
$ as-update-auto-scaling-group autoscleasg --launch-configuration
autoscalelc --availability-zones us-east-1d --min-size 0 --max-size 5
and try to put load on the instance using Xen:
hg clone http://xenbits.xensource.com/xen-unstable.hg
Autoscaling does not create any instance. Even if I run the above command in up to 5 sessions, CPU utilization reaches 100% and still no instance is created.
Please help me...
I am not sure what you want to achieve, but if you want to use autoscaling capabilities to add more instances based on traffic increase or decrease, you need to use the load balancer metrics (i.e. Latency):
Change yours to:
--namespace='AWS/ELB'
--metric-name Latency
--period 60 (this is super quick)
--threshold 2.0 (this is very low)
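Putting that together with the alarm command from your setup, the scale-up alarm would look something like this (a sketch; the policy ARN placeholder and the LoadBalancerName dimension are values you would substitute and verify):

$ mon-put-metric-alarm MyHighLatencyAlarm \
    --comparison-operator GreaterThanThreshold --evaluation-periods 1 \
    --metric-name Latency --namespace "AWS/ELB" --period 60 \
    --statistic Average --threshold 2.0 \
    --alarm-actions <arn-of-MyScaleUpPolicy> \
    --dimensions "LoadBalancerName=autoscalelb"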
To test if it works, I use Apache Bench; I run the below command on multiple micro instances:
$ ab -n 10000 -c 10 http://<your ELB>.us-east-1.elb.amazonaws.com/index.php
