Authenticate to GitLab from the shell

I'd like to get a temporary GitLab token to run a simple script. I don't want a personal access token or a project access token, as that is too many clicks for the user. How do I log in or sign in? GitLab no longer supports password login, so one has to use OAuth. But that means receiving an HTTP redirect, which means running a web server!

The GitLab OAuth dialogue can be very quick once you are authenticated in your browser; in the default configuration it doesn't even ask for confirmation, so the user never has to leave the command line. The one small annoyance is the pile of authentication browser tabs/windows left behind.
Only one HTTP request needs to be processed, so we don't need a web server for that. Netcat can accept the connection, hand us the request, and send a response so that the user gets some feedback. Against the usual recommendation, this goes unencrypted; it is on a loopback device, though, so the risk should be low. To tighten security here, we'd need a proper browser app that can handle a custom protocol. A WebView could be used to that effect, but then we're out of simple shell territory.
Curl (or wget) handles the necessary API calls.
This is a simple OAuth client written in (mostly POSIX) shell:
#!/bin/sh
# fail fast
set -e
# defaults
app_id="$GITLAB_APP_ID"
app_port="8001"
app_addr="127.0.0.1"
api_v4=""
test -z "$GITLAB_HOST" || api_v4="https://${GITLAB_HOST}/api/v4"
test -z "$GITLAB_API_V4" || api_v4="$GITLAB_API_V4"
# display usage help text
help () {
cat <<-EOHELP
Sign into GitLab on ${api_v4:-GITLAB_API_V4} with application '${app_id:-GITLAB_APP_ID}'
Requires: nc, curl, jq, x-www-browser|open, mktemp, posix(sh, cat, cut, head, rm, grep)
It is meant to work the same in a pipeline as well as in a local environment.
It will use the CI_JOB_TOKEN set by GitLab runner or request OAuth from the user.
For OAuth, make the app with [api] access, not confidential, expire tokens: yes,
redirect URI should be set to http://127.0.0.1:8001
Args:
-h|--help this text
-a|--app_id set GitLab app id [GITLAB_APP_ID]
-4|--api_v4 set GitLab API v4 address [GITLAB_API_V4]
or set it with a GITLAB_HOST FQDN[:port]
-s|--server set GitLab API v4 address with a GITLAB_HOST FQDN[:port]
EOHELP
}
# parse command arguments
parse_args () {
while test "$#" -ne 0
do
case "$1" in
-h|--help) help; exit 0 ;;
-a|--app_id) app_id="$2"; shift 2 ;;
-a=*|--app_id=*) app_id="${1#*=}"; shift 1 ;;
-4|--api_v4) api_v4="$2"; shift 2 ;;
-4=*|--api_v4=*) api_v4="${1#*=}"; shift 1 ;;
-s|--server) api_v4="https://$2/api/v4"; shift 2 ;;
-s=*|--server=*) api_v4="https://${1#*=}/api/v4"; shift 1 ;;
*) ( echo "Unexpected arg '$1'"; echo; help ) >&2; exit 1 ;;
esac
done
}
auth_netcat () {
# compatible way to invoke different nc flavors: 1) gnu/bsd/mac 2) busybox
nc -l "$1" "$2" 2>/dev/null || nc -l "$1:$2"
}
auth_browse () {
# compatible way to open a default browser
x-www-browser "$@" 2>/dev/null || open "$@"
}
authenticate () {
# Use CI_JOB_TOKEN in a pipeline as gitlab_token if available or get oauth access token
if [ -n "$CI_JOB_TOKEN" ]; then
gitlab_user="gitlab-ci-token"
gitlab_token="$CI_JOB_TOKEN"
else
: "${api_v4:?specify a GitLab API v4 URL}"
: "${app_id:?specify a GitLab app ID}"
: "${app_port:?specify the app port}"
: "${app_addr:?specify the app address}"
echo "Getting token"
auth_file="$(mktemp)"
# Start netcat as a one-shot local web server to receive the auth code;
# the blank line in the response separates the HTTP headers from the body
auth_netcat "$app_addr" "$app_port" <<-EORSP >"$auth_file" &
HTTP/1.0 200 OK
Content-Length: 13

Authenticated
EORSP
auth_pid=$!
# If netcat started, proceed with requesting auth code
if kill -0 "$auth_pid" 2>/dev/null; then
auth_state="$auth_pid"
auth_url="${api_v4%/api/v4}/oauth/authorize?response_type=code&scope=api"
auth_url="$auth_url&client_id=$app_id"
auth_url="$auth_url&redirect_uri=http://$app_addr:$app_port"
auth_url="$auth_url&state=$auth_state"
auth_browse "$auth_url" &&
echo "Authentication window opened in your browser:" ||
echo "Authenticate:"
echo " $auth_url"
fi
# Wait for netcat to receive the code and then request access token and user name
if wait "$auth_pid" 2>/dev/null; then
auth_code="$(head -1 "$auth_file" | grep -Eo 'code=[^&]+' | cut -d= -f2)"
echo "claiming access token"
gitlab_token="$(
curl -sSLf "${api_v4%/api/v4}/oauth/token" \
-F "client_id=$app_id" \
-F "code=$auth_code" \
-F "grant_type=authorization_code" \
-F "redirect_uri=http://$app_addr:$app_port" |
jq -r '.access_token'
)"
echo "getting current user name"
gitlab_user="$(
curl -sSLf "${api_v4}/user/" \
-H "Authorization: Bearer $gitlab_token" |
jq -r '.username'
)"
rm "$auth_file"
else
echo "authentication FAILED"
fi
fi
}
if parse_args "$@" && authenticate >&2
then
echo "GITLAB_USER=${gitlab_user:?}"
echo "GITLAB_TOKEN=${gitlab_token:?}"
fi
To be honest, this Q&A is a somewhat dressed-up gist, to keep this handy and available to others. Improvements welcome!
If you copy & paste, convert the leading spaces to tabs so the <<- heredocs don't break.

Related

How to check if my remote git credentials are correct without changing state [duplicate]

I'm currently writing a shell/bash script to automate a workflow. This bash script can clone projects and create new repos on Bitbucket, do composer installs/updates, and more of that kind of thing.
My first plan was to do this all over SSH, but there are some situations where I need HTTPS. For everything that goes over HTTPS I need to check the user's credentials for Bitbucket first, the credentials consisting of a username and password.
Is this possible? If so, how?
As you suggested in comments, curl can do HTTP Basic Auth for you. For Bitbucket, the response will be 401 Unauthorized or 200 OK, depending on the validity of the username/password pair. You could test the output (or only the headers) with grep, but a slightly more robust approach is to use the -w/--write-out option, combined with a HEAD request (--head; note that -X HEAD would make curl wait for a response body that never arrives) and silenced output:
http_status=$(curl --head -s -o /dev/null -w '%{http_code}' \
-u "username:password" -H "Content-Type: application/json" \
https://api.bitbucket.org/2.0/repositories/$repo_owner)
Then, to test the status, you can use a simple conditional expression:
if [[ $http_status == 200 ]]; then
echo "Credentials valid"
elif [[ $http_status == 401 ]]; then
echo "Credentials INVALID"
else
echo "Unexpected HTTP status code: $http_status"
fi
Or, if you plan on testing multiple status codes, you can use the case command, for example:
case $http_status in
200) echo "Credentials valid";;
301|302) echo "API endpoint changed";;
401) echo "Credentials INVALID";;
5*) echo "BitBucket Internal server error";;
*) echo "Unexpected HTTP status code: $http_status";;
esac
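For reuse across a script, the whole check fits in a small function. A minimal sketch, assuming bash; the function name, argument order, and hard-coded endpoint are illustrative:
check_bitbucket_creds () {
    # $1=username $2=password $3=repo owner; returns 0 when the credentials are accepted
    local status
    status=$(curl --head -s -o /dev/null -w '%{http_code}' \
        -u "$1:$2" "https://api.bitbucket.org/2.0/repositories/$3")
    [[ $status == 200 ]]
}
check_bitbucket_creds "myuser" "mypass" "myuser" || { echo "Credentials INVALID" >&2; exit 1; }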

Bash Script with Multiple if statements

So I am trying to get email notifications set up on about 100 servers, and I am using an if script that works perfectly. However, I have a tool that SSHes into each machine every 5 minutes to gather statistics, and I am trying to adapt the script to ignore SSH attempts from that one IP. I have racked my brain and I think I have looked through every possible question on the subject. Any help would be amazing, thanks guys!
Currently the script sends an email no matter who SSHes in.
#!/bin/sh
# Change these two lines:
sender="fromtest#test.com"
recepient="test#test.com"
if [ "$PAM_RUSER" != "192.168.1.10" ]; then
goto done
next
if [ "$PAM_TYPE" != "close_session" ]; then
host="`hostname`"
subject="SSH Login: $PAM_USER from $PAM_RHOST on $host"
# Message to send, e.g. the current environment variables.
message="`env`"
echo "$message" | mail "$sender" -s "$subject" "$recepient"
fi
fi
#!/bin/sh
# Change these two lines:
sender="fromtest#test.com"
recepient="test#test.com"
if [ "$PAM_RHOST" != "192.168.1.10" -a "$PAM_TYPE" != "close_session" ]; then
host="`hostname`"
subject="SSH Login: $PAM_USER from $PAM_RHOST on $host"
# Message to send, e.g. the current environment variables.
message="`env`"
echo "$message" | mail "$sender" -s "$subject" "$recepient"
fi
This solution uses a single conditional that skips the body of the if when the PAM_RHOST variable equals 192.168.1.10. We use -a (and) to specify that both conditions must be met.
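As an aside, POSIX marks test's -a operator as obsolescent because it can parse ambiguously with certain operands; two test commands joined by && behave identically here, with the body unchanged:
if [ "$PAM_RHOST" != "192.168.1.10" ] && [ "$PAM_TYPE" != "close_session" ]; then
# ...same body as above...
fi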

Ensure Kubernetes Deployment has completed and all pods are updated and available

The status of a deployment indicates that you can look at a deployment's observedGeneration vs generation, and when observedGeneration >= generation the deployment has succeeded. That's fine, but I'm interested in knowing when the new container is actually running in all of my pods, so that if I hit a service I know for sure I'm hitting a server that represents the latest deployed container.
Another tip from a K8S Slack member:
kubectl get deployments | grep <deployment-name> | sed 's/ /,/g' | cut -d ' ' -f 4
I deployed a bad image, resulting in ErrImagePull, yet the deployment still reported the correct number of 8 up-to-date replicas (available replicas was 7).
Update #2: Kubernetes 1.5 will ship with a much better version of kubectl rollout status and improve even further in 1.6, possibly replacing my custom solution/script laid out below.
Update #1: I have turned my answer into a script hosted on Github which has received a small number of improving PRs by now.
Original answer:
First of all, I believe the kubectl command you got is not correct: it replaces all white space with commas, but then tries to take the 4th field using a space as the delimiter, even though no spaces remain.
In order to validate that a deployment (or upgrade thereof) made it to all pods, I think you should check whether the number of available replicas matches the number of desired replicas. That is, whether the AVAILABLE and DESIRED columns in the kubectl output are equal. While you could get the number of available replicas (the 5th column) through
kubectl get deployment nginx | tail -n +2 | awk '{print $5}'
and the number of desired replicas (2nd column) through
kubectl get deployment nginx | tail -n +2 | awk '{print $2}'
a cleaner way is to use kubectl's jsonpath output, especially if you want to take the generation requirement that the official documentation mentions into account as well.
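For example, the desired and available replica counts can be read directly via jsonpath (assuming a deployment named nginx):
kubectl get deployment nginx -o 'jsonpath={.spec.replicas}'
kubectl get deployment nginx -o 'jsonpath={.status.availableReplicas}'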
Here's a quick bash script I wrote that expects to be given the deployment name on the command line, waits for the observed generation to become the specified one, and then waits for the available replicas to reach the number of the specified ones:
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
deployment=
get_generation() {
get_deployment_jsonpath '{.metadata.generation}'
}
get_observed_generation() {
get_deployment_jsonpath '{.status.observedGeneration}'
}
get_replicas() {
get_deployment_jsonpath '{.spec.replicas}'
}
get_available_replicas() {
get_deployment_jsonpath '{.status.availableReplicas}'
}
get_deployment_jsonpath() {
local -r _jsonpath="$1"
kubectl get deployment "${deployment}" -o "jsonpath=${_jsonpath}"
}
if [[ $# != 1 ]]; then
echo "usage: $(basename $0) <deployment>" >&2
exit 1
fi
readonly deployment="$1"
readonly generation=$(get_generation)
echo "waiting for specified generation ${generation} to be observed"
while [[ $(get_observed_generation) -lt ${generation} ]]; do
sleep .5
done
echo "specified generation observed."
readonly replicas="$(get_replicas)"
echo "specified replicas: ${replicas}"
available=-1
while [[ ${available} -ne ${replicas} ]]; do
sleep .5
available=$(get_available_replicas)
echo "available replicas: ${available}"
done
echo "deployment complete."
Just use rollout status:
kubectl rollout status deployment/<deployment-name>
This runs in the foreground: it waits, displays the status as it goes, and exits when the rollout completes, successfully or not.
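If you don't want the wait to be unbounded, newer kubectl releases also accept a --timeout flag here (an assumption worth verifying against kubectl rollout status --help on your version):
kubectl rollout status deployment/<deployment-name> --timeout=120s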
If you're writing a shell script, then check the return code right after the command, something like this.
kubectl rollout status deployment/<deployment-name>
if [[ "$?" -ne 0 ]] then
echo "deployment failed!"
exit 1
fi
To even further automate your script:
deployment_name=$(kubectl get deployment -n <your namespace> | awk '!/NAME/{print $1}')
kubectl rollout status deployment/"${deployment_name}" -n <your namespace>
if [[ "$?" -ne 0 ]] then
echo "deployment failed!"
#exit 1
else
echo "deployment succeeded"
fi
If you're running in default namespace then you could leave out the -n <your namespace>.
The command awk '!/NAME/{print $1}' extracts the first field (the deployment name) while skipping the header row (NAME READY UP-TO-DATE AVAILABLE AGE).
If you have more than one deployment, you can add a pattern to the awk condition to select the one you want, e.g. awk '!/NAME/ && /<pattern>/{print $1}'.
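For instance, to pick out the deployment whose name contains frontend in a namespace called staging (both names are placeholders):
deployment_name=$(kubectl get deployment -n staging | awk '!/NAME/ && /frontend/{print $1}')
kubectl rollout status deployment/"${deployment_name}" -n staging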

How to list all active usernames with active session in Apache Tomcat server and Glassfish Server

For my requirement I need a list of all active usernames with an active session on the Apache Tomcat server & the GlassFish server.
I'm not sure about GlassFish, but you can script Tomcat to do it, especially if you have an object in the user's session that contains their username. In my session, I have a "user" object which can be used for this.
Here's the recipe:
Install Tomcat's manager web application and configure it for proper authentication
Use a script like the following:
for sessionid in $(wget -qO - 'http://username:password@host:port/manager/jmxproxy?invoke=Catalina:type=Manager,context=/contextName,host=localhost&op=listSessionIds' \
| sed -e "s/ /\n/g" -e 's/.*returned://')
do
response=$(wget -qO - "http://username:password@host:port/manager/jmxproxy?invoke=Catalina:type=Manager,context=/contextName,host=localhost&op=getSessionAttribute&ps=$sessionid,user" 2>/dev/null)
#echo "$response"
if [ "$(expr "$response" : "OK")" -gt 0 ] ; then
user=$(expr "$response" : ".*\(User.*\)")
if [ "$user" ] ; then
echo "$sessionid: $user"
elif [ "$VERBOSE" ] ; then
echo "$sessionid: [ no authenticated user ]"
fi
else
echo "[error]: $response"
fi
done
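To sanity-check the jmxproxy endpoint on its own first, the listSessionIds call can be issued directly (substitute your own credentials, host, port, and context):
wget -qO - 'http://username:password@host:port/manager/jmxproxy?invoke=Catalina:type=Manager,context=/contextName,host=localhost&op=listSessionIds'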

App Engine: Launching a script upon update/run

I'm working with App Engine and I'm thinking about using the LESS CSS extension in my next project. There's no good LESS CSS library written in Python, so I went with the original Ruby one, which works great out of the box. I'd like App Engine to execute lessc ./templates/css/style.less before running the development server and before uploading the files to the cloud. What is the best way to automate this? I'm thinking:
#run.sh:
lessc ./templates/css/style.less
.gae/dev_appserver.py --use_sqlite .
And
#deploy.sh
lessc ./templates/css/style.less
.gae/appcfg.py update .
Am I on the correct path or is there a more elegant way of doing things, perhaps at the appcfg.py level?
Thanks.
One option is to use the JavaScript version of LESS and hence do the LESS-to-CSS conversion in the browser: simply upload your .less file as-is (see http://lesscss.org/ for details).
Alternatively, I do the conversion (first with LESS, now I use Sass) in a deploy script which does a number of things:
checks that my source code control has no outstanding files checked out (uncommitted changes)
joins and minifies my .js code (and runs jslint over it) into a single file
generates other content (including stamping the source code control version as a version number into certain key files, and as a parameter on some files to avoid caching issues) so my main page pulls in scripts with URLs such as "allmysource.js?v=585"; the file might be static but the added param forces cache invalidation
calls appcfg to perform the upload and checks the return code
makes some calls to the real site with wget to check the previously generated files are actually returned, by checking they're stamped with the expected version
applies another source code control tag to say that the intended version was successfully deployed
My script also accepts a "-preview" flag in which case it doesn't actually do the upload, but reports the version control comments for what's changed since the previous deployment.
me#here $ ./deploy -preview
Deployment preview...
Would deploy v596 to the production site (currently v593, previously v587)
594 Fix blah blah blah for X Y Z
595 New feature nah nah nah
596 Update help pages
This is pretty handy as a reminder of what I need to put in things like a changelog.
I plan to also expand it so that I can, as part of my source code control, add any code that needs running once only when deployed (eg database schema changes) and know that it'll be automatically run when I next deploy a new version.
The essence of the script is below, as people asked. It doesn't show my "check code, generate, join, and minify" step, as that's another script... I realise that the original question was asking about that step of course :) but you can see where you'd add the call to generate CSS etc.
#!/bin/bash
# bash, not plain sh: the script uses [[ ]] and the function keyword
function abort () {
echo
echo "ERROR: $1"
echo "$2"
exit 99
}
function warn () {
echo
echo "WARNING: $1"
echo "$2"
}
# Overrides the Gentoo eselect mechanism to force the python version the GAE scripts expect
export EPYTHON=python2.5
# names of tags used to label bzr versions
CURR_DTAG=deployed
PREV_DTAG=prevDeployed
# command line options
PREVIEW=0
IGNORE_BZR=0
# These next few vars are set to values to identify my site, insert your own values here...
APPID=your_gae_appid_here
ADMIN_EMAIL=your_admin_email_address_here
SRCDIR=directory_to_deploy
CHECK_URL=url_of_page_to_retrieve_that_does_upload_initialisation
for ARG; do
if [[ "$ARG" == "-preview" ]]; then
echo "Deployment preview..."
PREVIEW=1
fi
if [[ "$ARG" == "-force" ]]; then
echo "Ignoring the fact some files may not be committed to bzr..."
IGNORE_BZR=1
fi
done
echo
# check bzr for uncommitted changes
BSTATUS=`bzr status`
if [[ "$BSTATUS" != "" ]]; then
if [[ "$IGNORE_BZR" == "0" ]]; then
abort "There are uncommited changes - commit/revert/ignore all files before deploying" "$BSTATUS"
else
warn "There are uncommited changes" "$BSTATUS"
fi
fi
# get the version numbers of the last deployment etc
currver=`bzr log -l1 --line | sed -e 's/: .*//'`
lastver=`bzr log -rtag:${CURR_DTAG} --line | sed -e 's/: .*//'`
prevver=`bzr log -rtag:${PREV_DTAG} --line | sed -e 's/: .*//'`
lastlog=`bzr log -l 1 --line gae/changelog | sed -e 's/: .*//'`
RELEASE_NOTES=`bzr log --short --forward -r $lastver..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastver "`
LOG_NOTES=`bzr log --short --forward -r $lastlog..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastlog "`
# Crude but old habit - BUGBUGBUG is a marker in the code for things to be fixed before deployment
echo "Checking code for outstanding issues before deployment"
BUGSTATUS=`grep BUGBUGBUG js/*js`
if [[ "$BUGSTATUS" != "" ]]; then
if [[ "$IGNORE_BZR" == "0" ]]; then
abort "There are outstanding BUGBUGBUGs - fix them before deploying" "$BUGSTATUS"
else
warn "There are outstanding BUGBUGBUGs" "$BUGSTATUS"
fi
fi
echo
echo "Deploy v$currver to the production site (currently v$lastver, previously v$prevver)"
echo "$RELEASE_NOTES"
echo
if [[ "$currver" -gt "$lastlog" && "$lastver" -ne "$lastlog" ]]; then
echo "Changes since the changelog was last updated"
echo "$LOG_NOTES"
echo
fi
if [[ "$IGNORE_BZR" == "0" && $lastver -ge $currver ]]; then
abort "There don't appear to be any changes to deploy..."
fi
if [[ "$PREVIEW" == "1" ]]; then
exit 0
fi
$EPYTHON -c "import ssl" \
|| abort "$EPYTHON can't find ssl module for $EPYTHON - download it from pypi and install with the inbuilt setup.py"
# REMOVED - call to my script that calls jslint, generates files and compresses JS etc
# || abort "Generation of code failed"
/opt/google_appengine/appcfg.py --email=$ADMIN_EMAIL -v -A $APPID update $SRCDIR \
|| abort "Appcfg failed - upload presumably incomplete"
# move the tags to show we deployed properly
bzr tag -r $lastver --force ${PREV_DTAG}
bzr tag -r $currver --force ${CURR_DTAG}
echo
echo "Production site updated from v$lastver to v$currver (in turn from v$prevver)"
echo
echo "Now visiting $CHECK_URL to upload the source to the database"
# the new version doesn't always seem to be there immediately (webserver caching etc.) to be uploaded into the database... try again just in case
for cb in $RANDOM $RANDOM $RANDOM $RANDOM ; do
prodver=`wget $CHECK_URL?_cb=$cb -q -O - | perl -ne 'print $1 if /^\s*Rev #(\d+)\s*$/'`
if [[ "$currver" == "$prodver" ]]; then
echo "OK: New version $prodver successfully deployed"
exit 0
fi
echo "Retrying the upload of source to the database"
sleep 5
done
abort "The new source doesn't seem to be loading into the database" "Try 'wget $CHECK_URL?_cb=$RANDOM -q -O -'"
It's not particularly big or clever, but it automates the upload job.
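To tie this back to the original question: the removed "generation" step above is where a lessc call would slot in, e.g. (a sketch; the output path is illustrative):
lessc ./templates/css/style.less ./static/css/style.css \
|| abort "LESS compilation failed" "Fix the .less source before deploying"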
