Can anyone provide sample code to deploy a Docker container on Jelastic? I was reading the official Jelastic API documentation, and this piece of information seems to be missing.
Many thanks!
This is a sample of creating a new environment using bash:
#!/bin/bash
hoster_api_url='app.cloudplatform.hk'
platform_app_id='77047754c838ee6badea32b5afab1882'
email=''
password=''
docker_image='tutum/wordpress'
docker_tag='latest'
env_name='testenv-'$(((RANDOM % 10000) + 1))
WORK_DIR=$(dirname "$0")
LOG="$WORK_DIR/$hoster_api_url.log"
log() {
echo -e "\n$*" >>"$LOG"
}
login() {
SESSION=$(curl -s "http://$hoster_api_url/1.0/users/authentication/rest/signin?appid=$platform_app_id&login=$email&password=$password" | \
sed -r 's/^.*"session":"([^"]+)".*$/\1/')
[ -n "$SESSION" ] || {
log "Failed to login with credentials supplied"
exit 1
}
}
create_environment() {
log "=============================== START CREATING $env_name | $(date +%d.%m.%y_%H-%M-%S) ==============================="
request='nodes=[{"nodeType":"docker","extip":false,"count":1,"fixedCloudlets":1,"flexibleCloudlets":16,"fakeId":-1,"dockerName":"'$docker_image'","dockerTag":"'$docker_tag'","displayName":"'$docker_image':'$docker_tag'","metadata":{"layer":"cp"}}]&env={"pricingType":"HYBRID","region":"default_region","shortdomain":"'$env_name'"}&actionkey=createenv;'$env_name'&appid='$platform_app_id'&session='$SESSION
log "$request"
curl -s -X POST --data "$request" "https://$hoster_api_url/1.0/environment/environment/rest/createenvironment" >> "$LOG"
log "=============================== STOP CREATING $env_name | $(date +%d.%m.%y_%H-%M-%S) ==============================="
}
login
create_environment
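If you also want the script to verify the call, you could capture the response instead of only appending it to the log. This is a minimal sketch of what the curl line inside create_environment could become, assuming the platform answers with a JSON body containing a numeric result field (0 on success) - check your hoster's API documentation for the exact response format:
response=$(curl -s -X POST --data "$request" "https://$hoster_api_url/1.0/environment/environment/rest/createenvironment")
log "$response"
# "result":0 indicating success is an assumption; adjust to the actual response format.
if echo "$response" | grep -q '"result":0'; then
  log "Environment $env_name created"
else
  log "Failed to create environment $env_name"
fi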
You can also find more samples in the API documentation.
I would like to invoke an AWS Lambda function from a Java project. First things first: the Java project sends a payload to Lambda, then Lambda processes this payload and executes some kubectl commands. Right now I am using lambda-layer-kubectl in order to use kubectl inside the Lambda function.
The Java project code is below:
// snippet-start:[lambda.java2.invoke.main]
public static void invokeFunction(LambdaClient awsLambda, String functionName) {
InvokeResponse res = null;
try {
// Need a SdkBytes instance for the payload.
JSONObject jsonObj = new JSONObject();
jsonObj.put("number", 80);
String json = jsonObj.toString();
SdkBytes payload = SdkBytes.fromUtf8String(json);
// Setup an InvokeRequest.
InvokeRequest request = InvokeRequest.builder()
.functionName(functionName)
.payload(payload)
.build();
res = awsLambda.invoke(request);
String value = res.payload().asUtf8String();
System.out.println(value);
} catch(LambdaException e) {
System.err.println(e.getMessage());
System.exit(1);
}
}
I am using the tutorial "Publishing a custom runtime" to build my Lambda function.
My bootstrap code is below:
#!/bin/bash
set -euo pipefail
export HOME="/tmp"
export PATH=$PATH:/opt/awscli:/opt/kubectl:/opt/helm:/opt/jq
mkdir -p /tmp/.kube
cp kubeConfig /tmp/.kube/config
# Handler format: <script_name>.<bash_function_name>
# The script file <script_name>.sh must be located at the root of your
# function's deployment package, alongside this bootstrap executable.
# Initialization - load function handler
source $LAMBDA_TASK_ROOT/"$(echo $_HANDLER | cut -d. -f1).sh"
# Processing
while true
do
HEADERS="$(mktemp)"
# Get an event. The HTTP request will block until one is received
EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
# Extract request ID by scraping response headers received above
REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
# Run the handler function from the script
RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA" | jq ".number")
if [[ $RESPONSE == 80 ]]
then
TEST=$(echo "1111")
cp 80.yaml /tmp/80.yaml
kubectl apply -f test-80.yaml
fi
curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$TEST"
done
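For reference, the handler script that this bootstrap sources can be as small as the following sketch; the file name function.sh and the function name handler are assumptions based on how $_HANDLER is split above:
#!/bin/bash
# function.sh - sourced by the bootstrap when _HANDLER is "function.handler" (assumed names)
handler () {
  EVENT_DATA=$1
  # Return the event unchanged; the bootstrap pipes this output through `jq ".number"`.
  echo "$EVENT_DATA"
}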
After running the Java project I got "{"errorMessage":"2022-11-20T20:35:41.005Z e0d3dedb-3b82-4007-9bf6-5649eddda916 Task timed out after 3.01 seconds"}" followed by "Process finished with exit code 0".
I tried extending the Lambda timeout to 10s; it still times out.
However, if I put "cp 80.yaml /tmp/80.yaml" and "kubectl apply -f test-80.yaml" outside the while loop, right after "cp kubeConfig /tmp/.kube/config", the Kubernetes job is created successfully.
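Concretely, the variant that works looks roughly like this (same bootstrap as above, only the copy and the kubectl call moved out of the loop; file names as in the original snippet):
#!/bin/bash
set -euo pipefail
export HOME="/tmp"
export PATH=$PATH:/opt/awscli:/opt/kubectl:/opt/helm:/opt/jq
mkdir -p /tmp/.kube
cp kubeConfig /tmp/.kube/config
# moved out of the while loop:
cp 80.yaml /tmp/80.yaml
kubectl apply -f test-80.yaml
# ... event-processing while loop as above, without the kubectl call ...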
I expect the kubectl commands to execute successfully and a new Kubernetes job to be created.
Could somebody help me with it? Thank you very much in advance.
I am trying to pass the GitHub branch-creation API call into a function as a string in order to get the status code, but whatever I try it does not work the way I expect.
The original command is:
curl -s -X POST -u [user]:[token] -d '{"ref": "refs/heads/feature/bar", "sha": "'$SHA'"}' https://api.github.com/repos/$user/$repo/git/refs
What I am trying to do is take part of this command and pass it into a function as a string, as follows:
new_branch_creating_url="-X POST -u $username:$password -d '{"'ref'": "'refs/heads/'$new_branch_to_be_created''", "'sha'": ""$old_sha_value""}' https://api.github.com/repos/$username/$repository_name/git/refs"
My intention is to get the status code of that request, and for that my function is:
#get status code
get_status_code(){
test_url=$1
status_code=$(curl -s -I $test_url | awk '/HTTP/{print $2}')
#echo "status code :$status_code"
if [ $status_code == 404 ]
then
echo "wrong credentials passed..."
exit 1
else
return $status_code
fi
}
While debugging the code, I am getting:
++ curl -s -I -X POST -u myusername:tokenid -d ''\''{ref:' refs/heads/branch2, sha: '1b2hudksjahhisabkhsd6asdihds8dsajbsualhcn}'\''' https://api.github.com/repos/myusername/myrepo/git/refs
++ awk '/HTTP/{print $2}'
My other doubt is why I sometimes receive a wrong status code from the function above, which I use to get the status code.
while debugging my code:
status_code=
+ '[' == 404 ']'
git_branches.sh: line 154: [: ==: unary operator expected
+ return
+ new_branch_status_code=2
+ echo 'new branch status code ... 2'
new branch status code ... 2
+ '[' 2 == 200 ']'
Actually, status_code coming out of the function is empty, but I received a status code of 2.
Not only this time: I also received 153 instead of 409. Why is that?
I know this may not be the most relevant thing to ask, but I have no choice, and it would be very helpful if someone could help me at this early stage of learning shell scripting.
Thank you!
Instead of curl -s -I, use:
curl -I -s -o /dev/null -w "%{http_code}"
which will print the http_code directly.
You don't need awk.
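The odd numbers come from how return works in bash: a function's return value is an exit status limited to 0-255, so return 409 comes back as 409 mod 256 = 153, and when status_code is empty the [ ... ] test itself fails with status 2, which the bare return then passes on. Printing the code and capturing it with command substitution avoids both problems. Here is a minimal sketch reusing the variables from the question (for the POST call you can drop -I, since -w "%{http_code}" already prints the status):
get_status_code(){
    # Print the HTTP status code instead of using `return`,
    # which can only carry values 0-255.
    curl -s -o /dev/null -w "%{http_code}" "$@"
}
status_code=$(get_status_code -X POST -u "$username:$password" \
    -d '{"ref": "refs/heads/'"$new_branch_to_be_created"'", "sha": "'"$old_sha_value"'"}' \
    "https://api.github.com/repos/$username/$repository_name/git/refs")
if [ "$status_code" -ge 400 ]; then
    echo "request failed with status $status_code"
    exit 1
fi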
I've created a simple pipeline which attempts to run a script so that I can then do something else with the output; however, the script (CheckTagsDates.sh) never finishes according to Jenkins. If I SSH into the Jenkins slave node, su to the jenkins user, and navigate to the correct workspace folder, I can execute the command successfully.
pipeline {
agent {label 'agent'}
stages {
stage('Check for releases in past 24hr') {
steps{
sh 'chmod +x CheckTagsDates.sh'
script {
def CheckTagsDates = sh(script: './CheckTagsDates.sh', returnStdout: true)
echo "${CheckTagsDates}"
}
}
}
}
}
Here are the contents of the CheckTagsDates.sh file:
#!/bin/bash
while read line
do
array[ $i ]="$line"
(( i++ ))
done < <( curl -L -s 'https://registry.hub.docker.com/v2/repositories/library/centos/tags'|jq -r '."results"[] | "\(.name)&\(.last_updated)"')
for i in "${array[@]}"
do
echo $i | cut -d '&' -f 1
echo $i | cut -d '&' -f 2
done
Here is the output from the script in the console
latest
2020-01-18T00:42:35.531397Z
centos8.1.1911
2020-01-18T00:42:33.410905Z
centos8
2020-01-18T00:42:29.783497Z
8.1.1911
2020-01-18T00:42:19.111164Z
8
2020-01-18T00:42:16.802842Z
centos7.7.1908
2019-11-12T00:42:46.131268Z
centos7
2019-11-12T00:42:41.619579Z
7.7.1908
2019-11-12T00:42:34.744446Z
7
2019-11-12T00:42:24.00689Z
centos7.6.1810
2019-07-02T14:42:37.943412Z
As I told you in a comment, I think this is a wrong use of the echo instruction for string interpolation.
Jenkins Pipeline uses rules identical to Groovy for string interpolation. Groovy’s String interpolation support can be confusing to many newcomers to the language. While Groovy supports declaring a string with either single quotes, or double quotes, for example:
def singlyQuoted = 'Hello'
def doublyQuoted = "World"
Only the latter string will support the dollar-sign ($) based string interpolation, for example:
def username = 'Jenkins'
echo 'Hello Mr. ${username}'
echo "I said, Hello Mr. ${username}"
Would result in:
Hello Mr. ${username}
I said, Hello Mr. Jenkins
Understanding how to use string interpolation is vital for using some of Pipeline’s more advanced features.
Source: https://jenkins.io/doc/book/pipeline/jenkinsfile/#string-interpolation
As a workaround for this case, I would suggest doing the parsing of the JSON content in Groovy instead of shell, and limiting the script to only retrieving the JSON.
pipeline {
agent {label 'agent'}
stages {
stage('Check for releases in past 24hr') {
steps{
script {
def TagsDates = sh(script: "curl -L -s 'https://registry.hub.docker.com/v2/repositories/library/centos/tags'", returnStdout: true).trim()
TagsDates = readJSON(text: TagsDates)
TagsDates.results.each {
echo("${it.name}")
echo("${it.last_updated}")
}
}
}
}
}
}
I want to create a script that gathers information about the EC2 instance (id, IP, OS, users, maybe more if needed), but I need help with getting info about the running system - I think it is easy to get the OS info from /etc/os-release? And the second question is about YAML - is it possible to write the output to data.txt as YAML?
Please help me add the OS info to data.txt :)
#!/bin/bash
URL="http://169.254.169.254/latest/meta-data/"
which curl > /dev/null 2>&1
if [ $? == 0 ]; then
get_cmd="curl -s"
else
get_cmd="wget -q -O -"
fi
get () {
$get_cmd $URL/$1
}
data_items=(instance-id
local-ipv4
public-ipv4
)
yaml=""
for meta_thing in "${data_items[@]}"; do
data=$(get $meta_thing)
entry=$(printf "%-30s%s" "$meta_thing:" "$data\n")
yaml="$yaml$entry"
done
echo -e "$yaml" > data.txt
Maybe add
lsb_release -a >> data.txt
uname -a >> data.txt
to the end of the script.
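If you want the OS details in the same key/value layout as the metadata entries (instead of the raw lsb_release/uname output), a minimal sketch could source /etc/os-release and append the fields you care about; the key names below are arbitrary:
# Append OS info in the same "key: value" style used for the metadata.
# PRETTY_NAME is a standard field in /etc/os-release.
. /etc/os-release
{
  printf "%-30s%s\n" "os:" "$PRETTY_NAME"
  printf "%-30s%s\n" "kernel:" "$(uname -r)"
} >> data.txt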
I tried to set up squid3 with multiple auth_param entries. Basically, the first choice should be basic_ldap_auth, and if that doesn't return OK it should try basic_ncsa_auth with the same credentials. As far as I know Squid doesn't support this directly, however there is the possibility to use an "external" ACL:
auth_param basic program /usr/lib/squid3/basic_fake_auth
external_acl_type MultAuth %SRC %LOGIN %{Proxy-Authorization} /etc/squid3/multAuth.pl
acl extAuth external MultAuth
My "multAuth.pl":
use URI::Escape;
use MIME::Base64;
$|=1;
while (<>) {
($ip,$user,$auth) = split();
# Retrieve the password from the authentication header
$auth = uri_unescape($auth);
($type,$authData) = split(/ /, $auth);
$authString = decode_base64($authData);
($username,$password) = split(/:/, $authString);
# do the authentication and pass results back to Squid.
$ldap = `/bin/bash auth/ldap.sh`;
chomp($ldap);
if ($ldap eq "OK") {
print "OK\n";
next;
}
$ncsa = `/bin/bash auth/ncsa.sh`;
chomp($ncsa);
if ($ncsa eq "OK") {
print "OK\n";
} else {
print "ERR\n";
}
}
Now I am trying to run the "normal" shell commands for these auth methods via ncsa.sh and ldap.sh:
./basic_ldap_auth -R -b "dc=domain,dc=de" -D "CN=Administrator,CN=Users,DC=domain,DC=de" -w "password" -f sAMAccountName=%s -h domain.de
user password
and
./basic_ncsa_auth /etc/squid3/users
user password
Therefore I ran:
auth/ncsa.sh
#!/usr/bin/expect
eval spawn [lrange $argv 0 end]
expect ""
send [lindex $argv 1]
send "\r"
expect {
"OK" {
puts OK
exp_continue
}
"ERR" {
puts ERR
exp_continue
}
}
interact
with
./ncsa.sh "/usr/lib/squid3/basic_ncsa_auth /etc/squid3/users" "user password"
and I get the following error:
couldn't execute "/usr/lib/squid3/basic_ncsa_auth /etc/squid3/users": no such file or directory
while executing
"spawn {/usr/lib/squid3/basic_ncsa_auth /etc/squid3/users} {user password}"
("eval" body line 1)
invoked from within
"eval spawn [lrange $argv 0 end]"
(file "./ncsa.sh" line 2)
Besides this error, I am not sure how to pass the variables (username & password) forward, and I am also not sure how to answer the helpers' prompts, for example the user & password input for basic_ldap_auth.
Is there a nice way to solve this, or any other good approach?
Thanks!
FWIW, the following script helped me transition from passwd-based to LDAP-based authentication.
Contrary to your requirements, my script acts the other way around: It first checks passwd, then LDAP.
#!/usr/bin/env bash
# multiple Squid basic auth checks
# originally posted here: https://github.com/HackerHarry/mSquidAuth
#
# credits
# https://stackoverflow.com/questions/24147067/verify-user-and-password-against-a-file-created-by-htpasswd/40131483
# https://stackoverflow.com/questions/38710483/how-to-stop-ldapsearch1-from-base64-encoding-userpassword-and-other-attributes
#
# requires ldap-utils, openssl and perl
# tested with Squid 4 using a "auth_param basic program /usr/lib/squid/mSquidAuth.sh" line
# authenticate first against squid password file
# if this fails, try LDAP (Active Directory) and also check group membership
# variables
# sLOGFILE=/var/log/squid/mSquidAuth.log
sPWDFILE="/etc/squid/passwd"
sLDAPHOST="ldaps://dc.domain.local:636"
sBASE="DC=domain,DC=local"
sLDS_OPTIONS="-o ldif-wrap=no -o nettimeout=7 -LLL -P3 -x "
sBINDDN="CN=LDAP-read-user,OU=Users,DC=domain,DC=local"
sBINDPW="read-user-password"
sGROUP="Proxy-Users"
# functions
function _grantAccess {
# echo "access granted - $sUSER" >>$sLOGFILE
echo "OK"
}
function _denyAccess {
# echo "access denied - $sUSER" >>$sLOGFILE
echo "ERR"
}
function _setUserAndPass {
local sAuth="$1"
local sOldIFS=$IFS
IFS=' '
set -- $sAuth
IFS=$sOldIFS
# set it globally
sUSER="$1"
sPASS="$2"
}
# loop
while (true); do
read -r sAUTH
sUSER=""
sPASS=""
sSALT=""
sUSERENTRY=""
sHASHEDPW=""
sUSERDN=""
iDNCOUNT=0
if [ -z "$sAUTH" ]; then
# echo "exiting" >>$sLOGFILE
exit 0
fi
_setUserAndPass "$sAUTH"
sUSERENTRY=$(grep -E "^${sUSER}:" "$sPWDFILE")
if [ -n "$sUSERENTRY" ]; then
sSALT=$(echo "$sUSERENTRY" | cut -d$ -f3)
if [ -n "$sSALT" ]; then
sHASHEDPW=$(openssl passwd -apr1 -salt "$sSALT" "$sPASS")
if [ "$sUSERENTRY" = "${sUSER}:${sHASHEDPW}" ]; then
_grantAccess
continue
fi
fi
fi
# LDAP is next
iDNCOUNT=$(ldapsearch $sLDS_OPTIONS -H "$sLDAPHOST" -D "$sBINDDN" -w "$sBINDPW" -b "$sBASE" "(|(sAMAccountName=${sUSER})(userPrincipalName=${sUSER}))" dn 2>/dev/null | grep -cE 'dn::? ')
if [ $iDNCOUNT != 1 ]; then
# user needs a unique account
_denyAccess
continue
fi
# get user's DN
# we need the extra grep in case we get lines back starting with "# refldap" :/
sUSERDN=$(ldapsearch $sLDS_OPTIONS -H "$sLDAPHOST" -D "$sBINDDN" -w "$sBINDPW" -b "$sBASE" "(|(sAMAccountName=${sUSER})(userPrincipalName=${sUSER}))" dn 2>/dev/null | perl -MMIME::Base64 -n -00 -e 's/\n +//g;s/(?<=:: )(\S+)/decode_base64($1)/eg;print' | grep -E 'dn::? ' | sed -r 's/dn::? //')
# try and bind using that DN to check password validity
# also test if that user is member of a particular group
# backslash in DN needs special treatment
if ldapsearch $sLDS_OPTIONS -H "$sLDAPHOST" -D "$sUSERDN" -w "$sPASS" -b "$sBASE" "name=${sGROUP}" member 2>/dev/null | perl -MMIME::Base64 -n -00 -e 's/\n +//g;s/(?<=:: )(\S+)/decode_base64($1)/eg;print' | grep -q "${sUSERDN/\\/\\\\}"; then
_grantAccess
continue
fi
_denyAccess
done
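To try the helper outside of Squid, you can feed it "username password" lines on stdin, the same way Squid's basic auth interface does, and check that it answers OK or ERR per line:
# Manual test of the helper; replace the credentials with a real account.
printf 'someuser somepassword\n' | /usr/lib/squid/mSquidAuth.sh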