I am creating a pod spec in JSON which, when run, gives me a shell on the underlying node, as shown below.
overrides=$(cat <<EOF
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "admin-shell"
},
"spec": {
"containers": [
{
"name": "admin-shell",
"securityContext": {
"privileged": true
},
"image": "alpine:latest",
"args": ["chroot", "/kdet", "/bin/bash"],
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"name": "kdet",
"mountPath": "/kdet"
}]
}],
"volumes": [{
"name": "kdet",
"hostPath": {
"path": "/",
"type": "Directory"
}
}]
}
}
EOF
)
kubectl run --image alpine:latest --rm --restart=Never --overrides="$overrides" -ti test
If you don't see a command prompt, try pressing enter.
[root@admin-shell /]# exit
exit
pod "admin-shell" deleted
However, when I try to launch the pod using curl with the config saved in api.json, the pod is created, but I don't get the shell the way I did in the previous step.
>curl -k $APISERVER/api/v1/namespaces/default/pods \
-XPOST -H 'Content-Type: application/json' \
-d#api.json \
--header "Authorization: Bearer $TOKEN"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2900 0 2250 100 650 13005 3757 --:--:-- --:--:-- --:--:-- 16763
{
"phase": "Pending",
"qosClass": "BestEffort"
}
Can you please help me understand how I can get the shell using curl?
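For what it's worth, the POST above only creates the Pod object; the interactive shell in the kubectl run example comes from kubectl attaching your terminal to the container's stdin/TTY, which is a separate streaming request. A minimal sketch of attaching once the curl-created pod is Running, assuming kubectl is still available for that step:
kubectl attach -it admin-shell
# or open a fresh shell in the same privileged container:
kubectl exec -it admin-shell -- chroot /kdet /bin/bash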
What is working fine
I have successfully created an AWS ECS cluster using a Terraform script with the following parameters:
max_instance_size = 2
min_instance_size = 1
desired_capacity = 1
"maximumPercent": 100,
"minimumHealthyPercent": 0
Everything works fine when I create this cluster. My application is up and running and accessible through the load balancer.
What is giving problem
Now I have a Jenkins job that performs the following steps:
Checkout
Build Application
Create Docker Image
Push Docker image into Hub
Deploy the image through Task Definition update.
Here is the Jenkins snippet
stage("Deploy") {
sh "sed -e 's;%BUILD_TAG%;${BUILD_NUMBER};g' accountupdateecs-task-defination.json > accountupdateecs-task-defination-${BUILD_NUMBER}.json"
def currTaskDef = sh (returnStdout: true,script: "aws ecs describe-task-definition --task-definition ${taskFamily}| egrep 'revision'| tr ',' ' '| awk '{print \$2}'").trim()
def currentTask = sh (returnStdout: true,script: "aws ecs list-tasks --cluster ${clusterName} --family ${taskFamily} --output text | egrep 'TASKARNS' | awk '{print \$2}' ").trim()
if(currTaskDef) {sh "aws ecs update-service --cluster ${clusterName} --service ${serviceName} --task-definition ${taskFamily}:${currTaskDef} --desired-count 0 "}
if (currentTask) {sh "aws ecs stop-task --cluster ${clusterName} --task ${currentTask}"}
sh "aws ecs register-task-definition --family ${taskFamily} --cli-input-json ${taskDefile}"
def taskRevision = sh (returnStdout: true, script: "aws ecs describe-task-definition --task-definition ${taskFamily} | egrep 'revision' | tr ',' ' ' | awk '{print \$2}'").trim()
sh "aws ecs update-service --force-new-deployment --cluster ${clusterName} --service ${serviceName} --task-definition ${taskFamily}:${taskRevision} --desired-count 1"
}
Issue
After the job executes successfully, I always see the following in the cluster:
desired-count = 1
running = 0
and the application is not available.
Here is the Jenkins success log:
+ aws ecs update-service --force-new-deployment --cluster FinanceManagerCluster --service financemanager-ecs-service --task-definition accountupdateapp:20 --desired-count 1
{
"service": {
"serviceArn": "arn:aws:ecs:us-east-1:3432423423423:service/financemanager-ecs-service",
"serviceName": "financemanager-ecs-service",
"clusterArn": "arn:aws:ecs:us-east-1:3432423423423:cluster/FinanceManagerCluster",
"loadBalancers": [
{
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:3432423423423:targetgroup/ecs-target-group/ed44ae00d0de463d",
"containerName": "accountupdateapp",
"containerPort": 8181
}
],
"status": "ACTIVE",
"desiredCount": 1,
"runningCount": 0,
"pendingCount": 0,
"launchType": "EC2",
"taskDefinition": "arn:aws:ecs:us-east-1:3432423423423:task-definition/accountupdateapp:20",
"deploymentConfiguration": {
"maximumPercent": 100,
"minimumHealthyPercent": 0
},
"deployments": [
{
"id": "ecs-svc/9223370480222949120",
"status": "PRIMARY",
"taskDefinition": "arn:aws:ecs:us-east-1:3432423423423:task-definition/accountupdateapp:20",
"desiredCount": 1,
"pendingCount": 0,
"runningCount": 0,
"createdAt": 1556631826.687,
"updatedAt": 1556631826.687,
"launchType": "EC2"
},
{
"id": "ecs-svc/9223370480223135598",
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:us-east-1:3432423423423:task-definition/accountupdateapp:19",
"desiredCount": 0,
"pendingCount": 0,
"runningCount": 0,
"createdAt": 1556631640.195,
"updatedAt": 1556631823.692,
"launchType": "EC2"
}
],
"roleArn": "arn:aws:iam::3432423423423:role/ecs-service-role",
"events": [
{
"id": "967c99cc-5de0-469f-8cdd-adadadad",
"createdAt": 1556631824.549,
"message": "(service financemanager-ecs-service) has begun draining connections on 1 tasks."
},
{
"id": "c4d99570-408a-4ab7-9790-adadadad",
"createdAt": 1556631824.543,
"message": "(service financemanager-ecs-service) deregistered 1 targets in (target-group arn:aws:elasticloadbalancing:us-east-1:3432423423423:targetgroup/ecs-target-group/ed44ae00d0de463d)"
},
{
"id": "bcafa237-598f-4c1d-97e9-adadadad",
"createdAt": 1556631679.467,
"message": "(service financemanager-ecs-service) has reached a steady state."
},
{
"id": "51437232-ed5f-4dbb-b09f-adadadad",
"createdAt": 1556631658.185,
"message": "(service financemanager-ecs-service) registered 1 targets in (target-group arn:aws:elasticloadbalancing:us-east-1:3432423423423:targetgroup/ecs-target-group/ed44ae00d0de463d)"
},
{
"id": "c42ee5c9-db5b-473a-b3ca-adadadad",
"createdAt": 1556631645.944,
"message": "(service financemanager-ecs-service) has started 1 tasks: (task fc04530a-479d-4385-9856-adadadad)."
}
],
"createdAt": 1556631640.195,
"placementConstraints": [],
"placementStrategy": [],
"healthCheckGracePeriodSeconds": 0
}
I need some help understanding and resolving this issue.
Thanks in advance.
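One way to dig into why runningCount stays at 0 is to look at the most recent service events and at the stop reason of the last task; a sketch using the cluster and service names from the log above (the stopped-task ARN is a placeholder):
# Recent service events usually say why placement or startup failed (ports, memory, CPU, health checks):
aws ecs describe-services --cluster FinanceManagerCluster --services financemanager-ecs-service --query 'services[0].events[:5]'
# If tasks start and then die, the stopped task's stoppedReason explains why:
aws ecs list-tasks --cluster FinanceManagerCluster --desired-status STOPPED
aws ecs describe-tasks --cluster FinanceManagerCluster --tasks <stopped-task-arn> --query 'tasks[0].stoppedReason'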
I know how to create a repo in Bitbucket like this (let the email be john@outlook.com and the password 123):
curl -k -X POST --user john@outlook.com:123 "https://api.bitbucket.org/1.0/repositories" -d "name=test"
But how would one check if a repo exists in Bitbucket programmatically?
Here is what I get for a curl call to a public, private and non-existing repos:
Private (Status code 403):
> curl -k -X GET https://api.bitbucket.org/1.0/repositories/padawin/some-private-repo
Forbidden
Non existing (Status code 404):
> curl -k -X GET https://api.bitbucket.org/1.0/repositories/padawin/travels1
{"type": "error", "error": {"message": "Repository padawin/travels1 not found"}}
Public (Status code 200):
> curl -k -X GET https://api.bitbucket.org/1.0/repositories/padawin/travels
{"scm": "git", "has_wiki": false, "last_updated": "2015-08-02T14:09:42.134", "no_forks": false, "forks_count": 0, "created_on": "2014-06-08T23:48:28.483", "owner": "padawin", "logo": "https://bytebucket.org/ravatar/%7Bb56f8d55-4821-4c89-abbc-7c1838fb68a3%7D?ts=default", "email_mailinglist": "", "is_mq": false, "size": 1194864, "read_only": false, "fork_of": null, "mq_of": null, "followers_count": 1, "state": "available", "utc_created_on": "2014-06-08 21:48:28+00:00", "website": "", "description": "", "has_issues": false, "is_fork": false, "slug": "travels", "is_private": false, "name": "travels", "language": "", "utc_last_updated": "2015-08-02 12:09:42+00:00", "no_public_forks": false, "creator": null, "resource_uri": "/api/1.0/repositories/padawin/travels"}
You can use the status code, given that the body is not always valid JSON (Forbidden would have to be "Forbidden" to be valid JSON).
Using the 2.0 API, I check in this way:
if curl -s -f -o /dev/null -u "${USERNAME}:${APP_PASSWORD}" "https://api.bitbucket.org/2.0/repositories/${USERNAME}/${REPONAME}"; then
  echo "Repo exists in Bitbucket."
else
  echo "Repo either does not exist or is inaccessible in Bitbucket."
fi
The credentials need the repository:read scope; note that the repository:admin scope is neither sufficient nor relevant for this check.
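If you need to distinguish the three cases (exists, forbidden, missing) rather than just success/failure, you can branch on the HTTP status code alone; a minimal sketch, using the same placeholder credentials and the status codes observed above:
status=$(curl -s -o /dev/null -w '%{http_code}' -u "${USERNAME}:${APP_PASSWORD}" "https://api.bitbucket.org/2.0/repositories/${USERNAME}/${REPONAME}")
case "$status" in
  200) echo "Repo exists." ;;
  403) echo "Repo exists but you do not have access." ;;
  404) echo "Repo does not exist." ;;
  *)   echo "Unexpected status: $status" ;;
esac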
I am working on downloading a Docker image on an internet-connected Windows machine that does not have (and cannot have) Docker installed, to transfer to a non-internet-connected Linux machine that does have Docker. I'm using git-bash to run download-frozen-image-v2.sh. Everything works as expected until the script begins to download the final layer of any given image. On the final layer the json file comes back empty. Through echo statements, I can see that everything works flawlessly until lines 119-142:
jq "$addJson + ." > "$dir/$layerId/json" <<-'EOJSON'
{
"created": "0001-01-01T00:00:00Z",
"container_config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": null,
"Cmd": null,
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
}
}
EOJSON
Only on the final layer does this code produce an empty json file, which in turn causes an error at line 173:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" "$dir/$configFile" > "$dir/$imageId/json"
jq: error: syntax error, unexpected '+', expecting $end (Windows cmd shell quoting issues?) at <top-level>, line 1:
+ del(.history, .rootfs)
jq: 1 compile error
Update
Exact steps to replicate
Perform these steps on a Windows 10 computer:
1) Install Scoop for Windows: https://scoop.sh/
2) In PowerShell: scoop install git curl jq go tar
3) Open git-bash
4) In git-bash: curl -o download-frozen-image-v2.sh https://raw.githubusercontent.com/moby/moby/master/contrib/download-frozen-image-v2.sh
5) bash download-frozen-image-v2.sh ubuntu ubuntu:latest
The above will result in the aforementioned error.
In response to @peak below:
The command I'm using is bash download-frozen-image-v2.sh ubuntu ubuntu:latest which should download 5 layers. The first 4 download flawlessly, it is only the last layer that fails. I tried this process for several other images, and it always fails on the final layer.
addJson:
{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9", parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }
dir/configFile:
ubuntu/113a43faa1382a7404681f1b9af2f0d70b182c569aab71db497e33fa59ed87e6.json
dir/configFile contents:
{
"architecture": "amd64",
"config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/bash"
],
"ArgsEscaped": true,
"Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"container": "6713e927cc43b61a4ce3950a69907336ff55047bae9393256e32613a54321c70",
"container_config": {
"Hostname": "6713e927cc43",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/bash\"]"
],
"ArgsEscaped": true,
"Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"created": "2018-06-05T21:20:54.310450149Z",
"docker_version": "17.06.2-ce",
"history": [
{
"created": "2018-06-05T21:20:51.286433694Z",
"created_by": "/bin/sh -c #(nop) ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in / "
},
{
"created": "2018-06-05T21:20:52.045074543Z",
"created_by": "/bin/sh -c set -xe \t\t&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d \t&& echo 'exit 101' >> /usr/sbin/policy-rc.d \t&& chmod +x /usr/sbin/policy-rc.d \t\t&& dpkg-divert --local --rename --add /sbin/initctl \t&& cp -a /usr/sbin/policy-rc.d /sbin/initctl \t&& sed -i 's/^exit.*/exit 0/' /sbin/initctl \t\t&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup \t\t&& echo 'DPkg::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' > /etc/apt/apt.conf.d/docker-clean \t&& echo 'APT::Update::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' >> /etc/apt/apt.conf.d/docker-clean \t&& echo 'Dir::Cache::pkgcache \"\"; Dir::Cache::srcpkgcache \"\";' >> /etc/apt/apt.conf.d/docker-clean \t\t&& echo 'Acquire::Languages \"none\";' > /etc/apt/apt.conf.d/docker-no-languages \t\t&& echo 'Acquire::GzipIndexes \"true\"; Acquire::CompressionTypes::Order:: \"gz\";' > /etc/apt/apt.conf.d/docker-gzip-indexes \t\t&& echo 'Apt::AutoRemove::SuggestsImportant \"false\";' > /etc/apt/apt.conf.d/docker-autoremove-suggests"
},
{
"created": "2018-06-05T21:20:52.712120056Z",
"created_by": "/bin/sh -c rm -rf /var/lib/apt/lists/*"
},
{
"created": "2018-06-05T21:20:53.405342638Z",
"created_by": "/bin/sh -c sed -i 's/^#\\s*\\(deb.*universe\\)$/\\1/g' /etc/apt/sources.list"
},
{
"created": "2018-06-05T21:20:54.091704323Z",
"created_by": "/bin/sh -c mkdir -p /run/systemd && echo 'docker' > /run/systemd/container"
},
{
"created": "2018-06-05T21:20:54.310450149Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
"empty_layer": true
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:db9476e6d963ed2b6042abef1c354223148cdcdbd6c7416c71a019ebcaea0edb",
"sha256:3a89e0d8654e098e949764b1cb23018e27f299b0931c5fd41c207d610ff356c4",
"sha256:904d60939c360b5f528b886c1b534855a008f9a7fd411d4977e09aa7de74c834",
"sha256:a20a262b87bd8a00717f3b30c001bcdaf0fd85d049e6d10500597caa29c013c5",
"sha256:b6f13d447e00fba3b9bd10c1e5c6697e913462f44aa24af349bfaea2054e32f4"
]
}
}
Any help in figuring out what is occurring here would be greatly appreciated.
Thank you.
I can't tell you exactly why this happens, but it appears to be a problem with how jq parses the input file: it segfaults when reading it. It's a known issue in the Windows builds, where the problem is triggered by the length of the paths to the files.
Fortunately, there is a way around this issue: modify the script to go against all conventional wisdom and cat the file to jq.
The script isn't utilizing jq very well and builds some of the JSON manually, so some additional fixes are needed; otherwise it produces INVALID_CHARACTER errors when parsing, probably another manifestation of the same issue, since the script builds a lot of its jq programs by hand.
I put up a gist with the updated file that at least doesn't error out; check whether it works as expected.
Changes start at lines 172 and 342.
The way it builds the manifest is just messy, so I've cleaned it up a bit, removing all the string interpolations and instead passing all parameters in as arguments to jq.
# munge the top layer image manifest to have the appropriate image configuration for older daemons
local imageOldConfig="$(cat "$dir/$imageId/json" | jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end')"
cat "$dir/$configFile" | jq --raw-output "$imageOldConfig + del(.history, .rootfs)" > "$dir/$imageId/json"
local manifestJsonEntry="$(
jq --raw-output --compact-output -n \
--arg configFile "$configFile" \
--arg repoTags "${image#library\/}:$tag" \
--argjson layers "$(IFS=$'\n'; jq --arg a "${layerFiles[*]}" -n '$a | split("\n")')" \
'{
Config: $configFile,
RepoTags: [ $repoTags ],
Layers: $layers
}'
)"
(1) I have verified that using bash, the sequence:
addJson='{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9",
parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }'
jq "$addJson + ." configFile > layerId.json
succeeds, where configFile has the contents shown in the updated question.
(2) Similarly, I have verified that the following also succeeds:
imageOldConfig="$(jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end' layerId.json)"
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" <<-'EOJSON'
<JSON as in the question>
EOJSON
where <JSON as in the question> stands for the JSON shown in the question.
(3) In general, it is not a good idea to pass shell $-variables into jq programs by shell string interpolation.
For example, rather than writing:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)"
it would be much better to write something like:
jq --raw-output --argjson imageOldConfig "$imageOldConfig" '
$imageOldConfig + del(.history, .rootfs)'
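The same pattern applies to the $addJson step from the question, with the caveat that --argjson requires strict JSON, so the keys have to be quoted (unlike the jq-expression form the script interpolates):
addJson='{ "id": "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9", "parent": "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }'
jq --argjson addJson "$addJson" '$addJson + .' configFile > layerId.json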
I've created a bunch of test services in my Consul cluster that I wish to remove. I have tried using /v1/agent/service/deregister/{service id} and made sure it runs fine on each node; I can see this on each node:
[INFO] agent: Deregistered service 'ci'
Is there another way to manually clean out these old services?
Thanks,
Try this
$ curl \
--request PUT \
https://consul.rocks/v1/agent/service/deregister/my-service-id
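Note that consul.rocks is the placeholder host from the Consul docs; in practice you hit your own agent's HTTP endpoint, and the call has to go to the agent that registered the service. For example, for the 'ci' service from the question against a local agent on the default port:
curl --request PUT http://127.0.0.1:8500/v1/agent/service/deregister/ci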
Fetch the service info with curl $CONSUL_AGENT_ADDR:8500/v1/catalog/service/$SERVICE_NAME | python -mjson.tool:
{
"Address": "10.0.1.2",
"CreateIndex": 30242768,
"Datacenter": "",
"ID": "",
"ModifyIndex": 30550079,
"Node": "log-0",
"NodeMeta": null,
"ServiceAddress": "",
"ServiceEnableTagOverride": false,
"ServiceID": "log",
"ServiceName": "log",
"ServicePort": 9200,
"ServiceTags": [
"log"
],
"TaggedAddresses": null
},
...
Prepare a JSON file and fill in the values from the output above (cat > data.json):
{
"Datacenter": "",
"Node": "log-0",
"ServiceID": "log-0"
}
Deregister with: curl -X PUT -d @data.json $CONSUL_AGENT_ADDR:8500/v1/catalog/deregister
Log in to the Consul machine and issue the following command:
consul services deregister -id={Your Service Id}
You can also manually clear the service's config file from the agent's config directory, as sketched below.
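A sketch of that, assuming the agent was started with a config directory such as /etc/consul.d (both the path and the filename are assumptions and depend on your setup):
rm /etc/consul.d/ci-service.json   # remove the service definition file
consul reload                      # have the agent re-read its config (or restart the agent)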
I've got this script in the user data of an EC2 Linux instance. Is there a way to add a while loop to this script so that it keeps making the curl requests every 5 minutes until they return 200?
#!/bin/bash
sed -i -e '/<Name>loadbalanceServerIP<\/Name>/,/<Value>/s/<Value>[^<]*/<Value>52.53.197.227/' /home/wowza/conf/Server.xml
edge_ip=`curl -s http://169.254.169.254/latest/meta-data/public-ipv4`
curl --digest -u 'wowza:i-0fbfeb0718fab03b8' -X POST --header 'Accept:application/json; charset=utf-8' --header 'Content-type:application/json; charset=utf-8' http://52.53.197.227:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_/applications/live/pushpublish/mapentries/letitoptier_source -d'
{
"restURI": "http://52.53.197.227:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_/applications/live/pushpublish/mapentries/letitoptier_source",
"serverName": "_defaultServer_",
"sourceStreamName": "letitoptier_source",
"entryName": "letitoptier_source_target",
"profile": "rtmp",
"host": "'$edge_ip'",
"application": "live",
"userName": "wowza",
"password": "i-0fbfeb0718fab03b8",
"streamName": "letitoptier_source"
}'
curl --digest -u 'wowza:i-0fbfeb0718fab03b8' -X POST --header 'Accept:application/json; charset=utf-8' --header 'Content-type:application/json; charset=utf-8' http://52.53.197.227:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_/applications/live/pushpublish/mapentries/letitoptier_160p -d'
{
"restURI": "http://52.53.197.227:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_/applications/live/pushpublish/mapentries/letitoptier_160p",
"serverName": "_defaultServer_",
"sourceStreamName": "letitoptier_160p",
"entryName": "letitoptier_160p_target",
"profile": "rtmp",
"host": "'$edge_ip'",
"application": "live",
"userName": "wowza",
"password": "i-0fbfeb0718fab03b8",
"streamName": "letitoptier_160p"
}'
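A minimal sketch of the retry idea for one of the calls above: capture only the HTTP status code and repeat every 5 minutes until it is 200 (moving the JSON body into a variable here is my assumption about how the script would be restructured):
payload='<same JSON body as in the first request above>'
until [ "$(curl -s -o /dev/null -w '%{http_code}' --digest -u 'wowza:i-0fbfeb0718fab03b8' \
    -X POST --header 'Accept:application/json; charset=utf-8' \
    --header 'Content-type:application/json; charset=utf-8' \
    http://52.53.197.227:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_/applications/live/pushpublish/mapentries/letitoptier_source \
    -d "$payload")" = "200" ]; do
  sleep 300   # wait 5 minutes between attempts
done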
How can I know if it ran and what result or message it returned?
Thank you
If you are unsure whether the User Data script executed, log files are available at:
Linux: /var/log/cloud-init-output.log
Windows: C:\cfn
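On the Linux instance, for example, you can inspect the output and the script itself with:
sudo tail -n 100 /var/log/cloud-init-output.log   # stdout/stderr of the user data script, including the curl responses
sudo cat /var/lib/cloud/instance/user-data.txt    # the user data script as cloud-init received it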