OpenVPN failing to send cURL request through user-based script - bash

I am trying to authenticate my OpenVPN clients (with a username and password) using a bash script. Here is part of my server side config:
client-to-client
username-as-common-name
client-cert-not-required
script-security 3
auth-user-pass-verify '/etc/openvpn/script/auth.sh' via-env
Here is my bash script:
#!/bin/bash
SECRET='mysecret'
RESPONSE=$(sudo /usr/bin/curl https://myvpn.com/somedir/auth.php -d "username=$1&password=$2&secret=$SECRET" --silent)
if [ "$RESPONSE" = "y" ]; then
    exit 0
else
    exit 1
fi
When I run it on the command line (./auth.sh) it runs fine and authenticates correctly. I have set up my PHP script on my web server so that it writes a log entry every time it is called, so I know whether the request reached it. However, when OpenVPN calls the script, the curl request never goes out (authentication fails on the client side). My guess is that for some reason OpenVPN doesn't have permission to use cURL? How do I give OpenVPN permission to use curl?
Note: I have tried putting exit 0 at the top of my bash script, and it successfully authenticates the user and connects to the VPN.
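One way to narrow this down is to temporarily log what the script actually receives when OpenVPN invokes it, rather than when you run it by hand. A debug-only sketch (the log path is illustrative; note that with via-env OpenVPN passes the credentials in the username and password environment variables rather than as positional arguments):
#!/bin/bash
# Debug-only sketch: record what OpenVPN passes to the script (log path is illustrative).
LOG=/tmp/openvpn-auth-debug.log
{
    echo "=== $(date) ==="
    echo "positional args: $*"
    echo "username env var: ${username:-<unset>}"   # via-env sets username/password in the environment
    id                                              # shows which user OpenVPN runs the script as
} >> "$LOG"
exit 0   # accepts everyone while debugging, like the exit 0 test above
If id reports a user other than root, that user may also need to be allowed to run the sudo curl command non-interactively, which is another common reason a script behaves differently under OpenVPN than on the command line.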

If you don't need sudo, you can do it with:
#!/usr/bin/env bash
SECRET='mysecret'
[ 'y' = "$(
    /usr/bin/curl \
        --data "username=$1&password=$2&secret=$SECRET" \
        --silent \
        'https://myvpn.com/somedir/auth.php'
)" ]
There is no need to exit with an explicit return code, since the exit status of the test becomes the exit status of the script.
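As a quick sanity check from a shell (with hypothetical credentials), the exit status of either script is exactly what OpenVPN acts on:
./auth.sh testuser testpass
echo "exit status: $?"   # 0 = accept the client, anything else = reject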

Related

gcloud auth print-access-token gives permission denied error

I have a shell script that tries to download some secrets from Secret Manager inside a Dataproc cluster.
GetAuthJson() {
    # Fetch the secret version via the Secret Manager REST API.
    authjson=$(curl "https://secretmanager.googleapis.com/v1/projects/$PROJECT_ID/secrets/$AUTH_JSON/versions/1:access" \
        --request "GET" \
        --header "authorization: Bearer $(gcloud auth print-access-token)" \
        --header "content-type: application/json")
    if [ $? -ne 0 ]; then
        Error "Unable to extract the $PIPENAME Auth json details from GCP Secret Manager"
    fi
    # Pull the base64-encoded "data" field out of the JSON response.
    echo $authjson | grep -o '"data": "[^"]*' | grep -o '[^"]*$' >$BASE_DIR/encodedauth.json
    if [ $? -ne 0 ]; then
        Error "Unable to save the $PIPENAME auth.json server secret to auth.json"
    fi
    # Decode the payload and write it where the application expects it.
    auth_json=$(base64 -d $BASE_DIR/encodedauth.json)
    base64 -d $BASE_DIR/encodedauth.json >/etc/secrets/auth.json
    if [ $? -ne 0 ]; then
        Error "Unable to decode the $PIPENAME auth.json server secret"
    fi
    Log "auth.json secret extraction done"
}
When I run this curl command, it generates an error:
authjson='{
  "error": {
    "code": 403,
    "message": "Permission '\''secretmanager.versions.access'\'' denied for resource '\''projects/**-**-dev/secrets/**_AUTH_JSON/versions/1'\'' (or it may not exist).",
    "status": "PERMISSION_DENIED"
  }
}'
The same curl command with the same service account works on my local machine. Moreover, if I copy the curl command from my local machine and run it in the Dataproc cluster, it works as well.
But the curl command generated from Dataproc fails locally.
What is even more weird is that if I run gcloud auth print-access-token separately and paste the token into the curl command, it works on both machines.
So my question is: why is the gcloud auth print-access-token generated as part of the curl command in the Dataproc cluster not working?
It would be useful if you could capture the value of the curl command or, at the very least, the value of gcloud auth print-access-token that's failing in the script.
I suspect (I'm unfamiliar with Dataproc) that the Dataproc instance does not have gcloud installed and the gcloud auth print-access-token call is failing.
If the instance does have gcloud installed, then since it's running it must have a service account and so should be able to authenticate. There may be a more nuanced issue with getting an access token as a Dataproc instance; that's unclear.
Please consider using either gcloud secrets versions access directly or one of Google's client libraries to access the secret.
You're making the process more complex than it needs to be by curl-ing the endpoint; you're having to use gcloud anyway to get the auth token.
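For reference, the direct CLI equivalent would be something like the following sketch (it reuses the $PROJECT_ID and $AUTH_JSON variables from the question; gcloud secrets versions access prints the decoded payload, so no base64 step is needed):
gcloud secrets versions access 1 \
    --secret="$AUTH_JSON" \
    --project="$PROJECT_ID" > /etc/secrets/auth.json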
The issue was that I ran the script as the sudo user. When I ran it normally, it worked.

Deploying a bash-based server on Cloud Run: how to fix 502 Bad gateway error?

I am trying to deploy a bash server on Cloud Run in order to easily trigger gcloud commands with parameters passed in the POST request to the service.
I took my inspiration mainly from here.
At the moment the bash server looks like this:
#!/usr/bin/env bash
PORT=${PORT-8080}
echo "Listening on port $PORT..."
while true
do
    rm -f out
    mkfifo out
    trap "rm -f out" EXIT
    echo "Waiting.."
    cat out | nc -Nv -l 0.0.0.0 "${PORT}" > >( # parse the netcat output, to build the answer redirected to the pipe "out".
        # A POST request is expected, so the request is read until the '}' ending the json payload.
        # while
        read -d } PAYLOAD;
        # do
        # The contents of the payload are extracted by stripping the headers of the request.
        # Then every entry of the json is exported:
        # export KEY=VALUE
        for s in $(echo $PAYLOAD} | \
            sed '/./{H;$!d} ; x ; s/^[^{]*//g' | \
            sed '/./{H;$!d} ; x ; s/[^}]*$//g' | \
            jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" );
        do
            export $s;
            echo $s;
        done
        echo "Running the gcloud command..."
        # gsutil mb -l $LOCATION gs://$BUCKET_NAME
        printf "%s\n%s\n%s\n" "HTTP/1.1 202 Accepted" "Content-length: 0" "Connection: Keep-alive" > out
        # done
    )
    continue
done
The Dockerfile for deployment looks like this:
FROM google/cloud-sdk:alpine
RUN apk add --upgrade jq netcat-openbsd coreutils \
&& apk add --no-cache --upgrade bash
COPY main.sh .
ENTRYPOINT ["./main.sh"]
(the Cloud SDK image, plus netcat-openbsd for the server part, jq for the JSON processing, and bash)
With this quite simple setup I can deploy the service and it listens to incoming requests.
When receiving a POST request with a payload looking like
{"LOCATION": "MY_VALID_LOCATION", "BUCKET_NAME": "MY_VALID_BUCKET_NAME"}
the (here commented out) Cloud SDK command runs correctly and creates a bucket with the specified name in the specified region, in the same project as the Cloud Run service.
However, I receive a 502 error at the end of the process. (I'm expecting a 202 Accepted response.)
The server seems to work correctly locally. However, it seems that Cloud Run cannot receive the HTTP response.
How can I make sure the HTTP response is correctly transmitted back?
Thanks to @guillaumeblaquiere's suggestion, I managed to find a solution to what I was ultimately trying to do: running bash scripts on Cloud Run.
Two important points were: 1) being able to access the payload of the incoming HTTP request, and 2) having access to the Google Cloud SDK.
To do so, the trick was to build a Docker image based not only on the Google SDK base image, but also on the shell2http one, which deals with the server aspects.
The Dockerfile therefore looks like this:
FROM google/cloud-sdk:alpine
RUN apk add --upgrade jq
COPY --from=msoap/shell2http /app/shell2http /shell2http
COPY main.sh .
ENTRYPOINT ["/shell2http","-cgi"]
CMD ["/","/main.sh"]
Thanks to the latter image, the HTTP processing is handled by a Go server, and the incoming HTTP request is piped into the stdin of the main.sh script.
The -cgi option also allows different environment variables to be set, for instance the $HTTP_CONTENT_LENGTH variable that contains the length of the payload (therefore of the JSON containing the different parameters we want to extract and pass to further actions).
As such, with the following as the first line of our main.sh script:
read -n$HTTP_CONTENT_LENGTH PAYLOAD;
the PAYLOAD is set with the incoming JSON payload.
Using jq it is then possible to do whatever we want.
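For completeness, a minimal sketch of what such a main.sh could look like (the field names LOCATION and BUCKET_NAME come from the payload above; the rest is illustrative, and the script's stdout is what shell2http returns to the caller):
#!/usr/bin/env bash
# Read the JSON payload that shell2http -cgi pipes to stdin.
read -n"$HTTP_CONTENT_LENGTH" PAYLOAD

# Extract the expected fields with jq.
LOCATION=$(echo "$PAYLOAD" | jq -r '.LOCATION')
BUCKET_NAME=$(echo "$PAYLOAD" | jq -r '.BUCKET_NAME')

# Run the Cloud SDK command; what we print here goes back to the caller.
if gsutil mb -l "$LOCATION" "gs://$BUCKET_NAME"; then
    echo "Bucket gs://$BUCKET_NAME created in $LOCATION"
else
    echo "Bucket creation failed"
fi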
All the code is gathered in this repository:
https://github.com/cylldby/bash2run
In addition, this solution is used in this project in order to have a very simple solution to trigger a Google workflow from an eventarc trigger.
Big thanks again to @guillaume_blaquiere for the tip!

Bash script with sendmail delivers email when executed manually but not from crontab

I wrote the following bash script to send me an alert if there is a problem with my website:
#!/bin/bash
# 1. download the page
BASE_URL="https://www.example.com/ja"
JS_URL="https://www.example.com/"
# 2. search the page for the following URL: /sites/default/files/google_tag/google_tag.script.js?[FIVE-CHARACTER STRING WITH LETTERS AND NUMBERS]
curl -k -L ${BASE_URL} 2>/dev/null | grep -Eo "/sites/default/files/google_tag/google_tag.script.js?[^<]+" | while read line
do
    # 3. download the js file
    if curl -k -L ${JS_URL}/$line | grep gtm_preview >/dev/null 2>&1; then
        # 4. check if this js file has the text "gtm_preview" or not; if it does, send an email
        # echo "Error: gtm_preview found"
        sendmail error-ec2@example.com < email-gtm-live.txt
    else
        echo "No gtm_preview tag found."
    fi
done
I am running this from an Amazon EC2 Ubuntu instance. When I execute the script manually like ./script.sh, I receive an email in my webmail inbox for example.com.
However, when I configure this script to run via crontab, the mail does not get sent out over the Internet; instead, it ends up in /var/mail on the EC2 instance.
I don't understand why this is happening or what I can do to fix it. Why does sendmail behave differently when it is run from bash versus from crontab?
Be aware that the PATH environment variable is different for crontab executions than it is for your typical interactive sessions, and not all of the same environment variables are set. Consider specifying the full path to the sendmail executable (which you can find by running which sendmail).
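For example (the paths and the schedule below are illustrative; confirm the real path with which sendmail), you can either set PATH at the top of the crontab or call sendmail by its absolute path inside the script:
# In the crontab (crontab -e):
PATH=/usr/sbin:/usr/bin:/bin
*/5 * * * * /home/ubuntu/script.sh

# Or, inside the script, use the absolute path:
/usr/sbin/sendmail error-ec2@example.com < email-gtm-live.txt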

Uploading the content of a variable as a file into FTP server using Bash

Using Bash scripting, I'm trying to upload the content of a variable to an FTP server.
The variable is $HASHED and it contains a hashed password:
echo $HASHED
The output of the above command is: M0eSl8NR40wH
I need to do the following:
Create a time/date-stamped file on the FTP server (i.e. PASSWORD_18_02_2014)
The file needs to have the same content as the $HASHED value (i.e. PASSWORD_18_02_2014 needs to contain M0eSl8NR40wH).
Using curl, I couldn't get it to work with the following:
UPLOAD="curl -T $HASHED ftp://192.168.0.1/passwords/ --user username:password"
$UPLOAD
Your help is very much appreciated.
Something like this might help you (tested on Linux Mint 13):
#!/bin/bash
FILENAME=PASSWORD_`date +%d_%m_%Y`
echo $HASHED >$FILENAME
ftp -n your_ftp_site <<EOF
user your_user_name_on_the_ftp_server
put $FILENAME
EOF
rm $FILENAME
A few caveats:
You have to export HASHED, e.g. when you set it, set it like this: export HASHED=M0eSl8NR40wH
The above assumes you will be running this from a terminal and can type in your password when prompted
You may have to add some cd commands after the line that starts "user" and before the line that starts "put", depending on where you want to put the file on your ftp server
Don't forget to make the script executable:
chmod u+x your_command_script_name
You can put the password after the user name on the line that starts "user", but this creates a big risk that someone could discover your password for the FTP server. At the very least, make the bash script readable only by you:
chmod 700 your_command_script_name
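Putting the caveats together, a typical run (the script name is illustrative) would look like:
export HASHED=M0eSl8NR40wH        # the value from the question
chmod u+x upload_password.sh      # make the script executable
./upload_password.sh              # ftp prompts for the FTP password interactively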
Please try this:
HASHED="M0eSl8NR40wH"
echo "$HASHED" | curl --silent --show-error --upload-file - \
    "ftp://192.168.0.1/passwords/$(date +PASSWORD_%d_%m_%Y)" --user username:password
Where:
--silent : prevents the progress bar
--show-error : shows errors if any
--upload-file - : get file from stdin
The target name is indicated as part of the URL
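If you also want to keep the password off the command line (where it is visible to other users in the process list), curl can read the credentials from a netrc file via --netrc instead of --user. A sketch, using the placeholder host and credentials from above:
# ~/.netrc (protect it with: chmod 600 ~/.netrc)
# machine 192.168.0.1
# login username
# password password

echo "$HASHED" | curl --silent --show-error --netrc \
    --upload-file - "ftp://192.168.0.1/passwords/$(date +PASSWORD_%d_%m_%Y)"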

Run local Perl script on remote server through expect

I have a Perl script on my local machine, and I want to run it on a remote server. The following command works fine:
ssh user@ipaddress "perl - --arg1 arg1 --arg2 arg2" < /path/to/local/script.pl
The thing is that a prompt shows up to ask me for the password, and I don't want that.
I looked around the net, and found 3 solutions:
Using a public/private key authentication -> Not ok in my case
Using sshpass -> Not in my company's 'official' repo so cannot install it
Using expect
I followed this page to create my expect script (I'm new to expect): How to use bash/expect to check if an SSH login works. I took the script from the correct answer and replaced the first 3 lines with #!/usr/bin/expect -f to turn it into an expect script.
Then I ran
./ssh.exp user password ipaddress "perl - --arg1 arg1 --arg2 arg2" < /path/to/local/script.pl
And I got a timeout error. It was as if I had run ./ssh.exp user password ipaddress "perl"
I also tried putting quotes like
./ssh.exp user password ipaddress '"perl - --arg1 arg1 --arg2 arg2" < /path/to/local/script.pl'
But I got a /path/to/local/script.pl not found error.
So can anyone help me figure out how to run a script through expect? Thanks a lot.
Ok, I found it myself.
Still using this answer (How to use bash/expect to check if an SSH login works), but I replaced the 6th line with:
set pid [ spawn -noecho sh -c "ssh $1@$3 $4 < /path/to/local/script.pl" ]
And everything works like a charm.
