passing password to curl on command line - shell

I have a requirement where I am writing a shell script that calls curl internally. The username, password and URL are stored as variables in the script. However, since I want to avoid curl's user:password format in the script, I am using just curl --user with the username and intend to pass the password through stdin. So I am trying something like this -
#!/bin/bash
user="abcuser"
pass="trialrun"
url="https://xyz.abc.com"
curl --user $user $url 2>&1 <<EOF
$pass
EOF
But this is not working. I know variations of this question have been asked, but I didn't quite find the exact answer, hence posting this question.

You can use:
curl -u abcuser:trialrun https://xyz.abc.com
In your script:
curl -u "${user}:${pass}" "${url}"
To read from stdin:
curl https://xyz.abc.com -K- <<< "-u user:password"
When using -K, --config, specifying - makes curl read the config from stdin.
That should work for HTTP Basic Auth; from the curl man page:
-u, --user <user:password>
Specify the user name and password to use for server authentication.
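Applied to the script in the question, the same stdin trick looks like this (a sketch; the long-form --user option is written into a config line that curl reads from stdin, so the password never appears on curl's command line):
#!/bin/bash
user="abcuser"
pass="trialrun"
url="https://xyz.abc.com"
# Feed the credentials to curl as a config line on stdin rather than as an argument.
curl "$url" -K- <<< "--user ${user}:${pass}"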

To expand on @nbari's answer, if you have a tool get-password that can print a password on stdout, you can safely use this invocation:
user="abcuser"
url="https://xyz.abc.com"
get-password "$user" | sed -e "s/^/-u $user:/" | curl -K- "$url"
The password is written to a pipe, and sed prefixes it with -u user: so it arrives in the format -K- expects. The password therefore never appears in ps output or in the shell history.

Related

iterate through specific files using webHDFS in a bash script

I want to download specific files from an HDFS directory, with names starting with "total_conn_data_". Since I've got many files, I want to write a bash script.
Here's what I do:
myPatternFile="total_conn_data_*.csv"
for filename in `curl -i -X GET "https://knox.blabla/webhdfs/v1/path/to/the/directory/?OP=LISTSTATUS" -u username`; do
curl -i -X GET "https://knox.blabla/webhdfs/v1/path/to/the/directory/$filename?OP=OPEN" -u username -L -o "./data/$filename" -k;
done
But it does not work, since curl -i -X GET "https://knox.blabla/webhdfs/v1/path/to/the/directory/?OP=LISTSTATUS" -u username sends back JSON text, not file names.
How should I do this? Thanks
WebHDFS returns that listing as JSON; curl only passes it through. You will have to use another tool such as jq (or sed/grep) to parse that output and extract the list of file names.
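For example, assuming jq is available and the response follows the standard WebHDFS FileStatuses layout, a sketch of the loop could look like this (the -i flag is dropped so the output is pure JSON rather than JSON preceded by response headers):
base="https://knox.blabla/webhdfs/v1/path/to/the/directory"
# List the directory, extract each entry's pathSuffix, and keep only the wanted names.
curl -s -k -X GET "$base/?OP=LISTSTATUS" -u username | jq -r '.FileStatuses.FileStatus[].pathSuffix' | grep '^total_conn_data_.*\.csv$' | while read -r filename; do
curl -s -k -X GET "$base/$filename?OP=OPEN" -u username -L -o "./data/$filename";
done
Note that -u username with no password makes curl prompt for it on every request; supply the password (or use a .netrc file) if this needs to run unattended.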

How can I verify curl response before passing to bash -s?

I have a script that looks like:
curl -sSL outagebuddy.com/path/linux_installer | bash -s
Users can install a linux client for the site using the command that is provided to them. I'm thinking there should be an intermediary step that verifies the curl had a 2XX response and downloaded the content successfully before passing it to bash. How can I do that?
Without a user-managed temporary file:
if script=$(curl --fail -sSL "$url"); then
bash -s <<<"$script"
fi
If you don't mind having an intermediate file (which you certainly need if you want to make sure the curl command worked fully) then you can use:
if curl --fail -sSL <params> -o script.sh
then
bash script.sh
fi
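If you want to check the HTTP status code explicitly instead of relying only on --fail, something along these lines should also work (a sketch; installer.sh is just a scratch file name chosen here, and $url holds the installer address):
code=$(curl -sSL -o installer.sh -w '%{http_code}' "$url")
# With -L, %{http_code} is the status of the final response after redirects.
if [ "$code" -ge 200 ] && [ "$code" -lt 300 ]; then
bash installer.sh
else
echo "download failed with HTTP status $code" >&2
fi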

How to retrieve error code from cURL on shell

I know a similar question was posted, but I can't get it to work on my machine.
I tried the 1st answer from the mentioned question, i.e. response=$(curl --write-out %{http_code} --silent --output /dev/null servername) and when I echo $response I got 000 [Not sure if that is the desired output].
However, when trying to do so with my cURL command, I get no output.
This is my command:
curl -k --silent --ftp-pasv --ftp-ssl --user C:is_for_cookies --cert localcert_cert.pem --key certs/localcert_pkey.pem ftps://10.10.10.10:21/my_file.txt
and I use it with
x=$(curl -k --silent --ftp-pasv --ftp-ssl --user C:is_for_cookies --cert localcert_cert.pem --key certs/localcert_pkey.pem ftps://10.10.10.10:21/my_file.txt)
but when I try to echo $x all I get is a newline...
I know curl is failing, because when I run the same command without --silent, I get curl: (7) Couldn't connect to server.
This question is tagged with both sh and bash because I've tried it on both with the same results.
I found this option which kind of helps (but I still don't know how to assign it to a variable, which should be easier than this...):
--stderr <file>
Redirect all writes to stderr to the specified file instead. If the file name is a plain '-', it is instead written to stdout.
If this option is used several times, the last one will be used.
When I use it like this:
curl -k --silent -S --stderr my_err_file --ftp-pasv --ftp-ssl --user C:is_for_cookies --cert localcert_cert.pem --key certs/localcert_pkey.pem ftps://10.10.10.10:21/my_file.txt
I can see the errors (i.e. curl: (7) Couldn't connect to server) inside that file.
I used --silent to suppress all output, -S (--show-error) to re-enable the error messages, and --stderr <file> to redirect them to a file.
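To get the error text into a variable instead of a file, together with the numeric exit code, something like this should work (a sketch built from the command above, with --output added so the downloaded content does not end up in the variable):
err=$(curl -k --silent --show-error --ftp-pasv --ftp-ssl --user C:is_for_cookies --cert localcert_cert.pem --key certs/localcert_pkey.pem --output my_file.txt ftps://10.10.10.10:21/my_file.txt 2>&1)
rc=$?
# $err now holds only curl's error message (e.g. curl: (7) Couldn't connect to server)
# and $rc holds curl's exit code, 0 on success.
if [ "$rc" -ne 0 ]; then
echo "curl failed with exit code $rc: $err"
fi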

Why use -Lo- with curl when piping to bash?

In the janus project, they use curl to download and pipe a bootstrap script into bash.
https://github.com/carlhuda/janus
It looks like this:
$ curl -Lo- https://bit.ly/janus-bootstrap | bash
Why would one want to use the args -Lo-?
-o is supposed to be for output, but wouldn't that happen anyway (i.e. to stdout)?
It's all in the man pages:
-L: in case the page has moved (a 3xx response), curl follows the redirect and requests the new address.
-o: write the output to a file instead of stdout (usually the screen). The file name - means stdout, so -o- just selects the default explicitly; in your case the flag is redundant since the output is piped to bash (for execution), not written to a file.
The -o- is redundant; both commands produce exactly the same output:
$ curl --silent example.com | sha256sum
3587cb776ce0e4e8237f215800b7dffba0f25865cb84550e87ea8bbac838c423 *-
$ curl --silent --output - example.com | sha256sum
3587cb776ce0e4e8237f215800b7dffba0f25865cb84550e87ea8bbac838c423 *-
They have used that syntax since the line was first introduced in 2011. You might ask Wael Nasreddine (@kalbasit on GitHub) why he did it; he is still active on that repo.

superuser pass switch to shell script

I need to run a shell script as another user while logged in as root. Something along the lines of
su <user> ./scriptname -d
where the -d bit is the switch to be passed to scriptname.
However, when I attempt to execute the command as shown above, su complains that -d is not a valid option and presents me with a list of valid options. How do I get it to understand that the -d is meant for the script, not for su itself?
su <user> -c './scriptname -d'
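A short sketch of the same thing from inside a root-owned wrapper script (the user name and script path are just illustrative):
#!/bin/bash
target_user="appuser"
# Everything inside the quotes is executed by the target user's shell,
# so the script's own switches go inside them as well.
su "$target_user" -c './scriptname -d'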
