How to print out content-security-policy - bash

I have tried this command
curl -Ik https://dev.mydomain.com/
and it prints all of the response headers. What I want now is to print only the content-security-policy header.
Do I need to use jq, or is there another helpful tool I can use?

curl -sIk https://stackoverflow.com/ | grep content-security-policy | cut -d ' ' -f 2-
This curls the URL, greps only the line containing content-security-policy, splits on spaces, and keeps every field from the second onwards.
Example:
➜ ~ curl -sIk https://stackoverflow.com/ | grep content-secur | cut -d ' ' -f 2-
upgrade-insecure-requests; frame-ancestors 'self' https://stackexchange.com
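
Note that header names are case-insensitive (HTTP/2 lowercases them, HTTP/1.x may send Content-Security-Policy) and HTTP/1.x header lines end with a carriage return, so a slightly more robust sketch of the same idea is to grep case-insensitively and strip the trailing \r:
curl -sIk https://stackoverflow.com/ | grep -i '^content-security-policy:' | cut -d ' ' -f 2- | tr -d '\r'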

If you use curl >= 7.84.0, you can use the %header{name} write-out syntax:
curl -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}"
If you want to try it without installing a new version, you can run the Docker image:
docker run --rm curlimages/curl:7.85.0 -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}"
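
If you want the value in a shell variable rather than printed, a minimal sketch (assuming the same curl >= 7.84.0 write-out support) is to capture the output with command substitution:
csp=$(curl -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}")
echo "$csp"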

Related

Sorted output, needs to have text inserted between string

I'm trying to add predefined text around each line of sorted output and save the result to a new file.
I'm using a curl command to gather my info.
$ curl --user XXX:1234!## "http://......"
Then I use grep to find IP addresses and sort so each appears only once.
$ curl --user XXX:1234!## "http://......" | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u
I need to add <my_text_predefined> before and after each matched IP address, i.e. <my_text_predefined> ([0-9]{1,3}[\.]){3}[0-9]{1,3} <my_text_predefined>, and then save the result to a new file.
The script below only gets me the IP addresses:
$ curl --user XXX:1234!## "http://......" | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u
123.12.0.12
123.56.98.76
$ curl --user some_user:password "http://...." | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u | sed 's/.*/<prefix> -s & <suffix>/'
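The asker also wanted the result saved to a new file; a minimal sketch is to redirect the same pipeline to a file (decorated_ips.txt is just a hypothetical name):
curl --user some_user:password "http://...." | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u | sed 's/.*/<prefix> -s & <suffix>/' > decorated_ips.txt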
So if we need to print some text for each IP, try xargs:
for i in {1..100}; do echo $i; done | xargs -n1 echo "Values are:"
If you need to take a decision based on each IP, put it in a loop:
for file in $(curl ...); do ...; done
and check $file or do something with it inside the loop ...
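
A fuller sketch of that idea, reusing the question's placeholder URL and credentials and dropping -m1 so all addresses come through, reads each unique IP in a while loop (which avoids word-splitting surprises):
curl --user some_user:password "http://...." |
  grep -E -o '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u |
  while read -r ip; do
    # decide per address here; this echo is only an example
    echo "<my_text_predefined> $ip <my_text_predefined>"
  done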

Refining output of a curl command

I have the output of a curl command as below.
Command: curl https://application.com/api/projectcreator
Output:
{"Table":[{"key":"projectA","name":"Jhon"},
{"key":"projectB","name":"Sam"},
{"key":"ProjectC","name":"Jack"}]}
I would like to cut this output down to just the names. Is there a way to do this in shell?
Eg:
Jhon
Sam
Jack
I tried the below, but it doesn't seem promising:
for Table in `curl -s -k http://application.com/api/projectcreator | grep "name"`
do
echo "$Table"
done
Thanks for your help and insights!
Using jq you can do this easily:
curl -s -k 'http://application.com/api/projectcreator' |
jq -r '.Table[].name' | paste -s -d ' '
Jhon Sam Jack
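If each name should be on its own line, as in the question's desired output, the paste step can simply be dropped:
curl -s -k 'http://application.com/api/projectcreator' | jq -r '.Table[].name'
Jhon
Sam
Jack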
If jq cannot be installed, then use GNU grep:
curl -s -k 'http://application.com/api/projectcreator' |
grep -oP '"name":"\K[^"]+' | paste -s -d ' '

Check if url returns 200 using bash

I need to check if the remote file exists based on the url response by doing:
curl -u myself:XXXXXX -Is https://mylink/path/to/file | head -1
which can give something like this:
'HTTP/1.1 200 OK
'
or
'HTTP/1.1 404 Not Found
'
Now, I want to extract the http status code like 200 from the resulting string above and assign the number to a variable. How can I do that?
Use the -o option to send the headers to /dev/null, and use the -w option to output only the status.
$ curl -o /dev/null -u myself:XXXXXX -Isw '%{http_code}\n' https://mylink/path/to/file
200
$
If you intend to capture the status in a variable, you can omit the newline from the format:
$ status=$(curl ... -o /dev/null -Isw '%{http_code}' ...)
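A short sketch of branching on that variable, reusing the question's URL and credentials:
status=$(curl -o /dev/null -u myself:XXXXXX -Isw '%{http_code}' https://mylink/path/to/file)
if [ "$status" -eq 200 ]; then
  echo "file exists"
else
  echo "got HTTP $status"
fi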
Use grep:
curl -u myself:XXXXXX -Is https://mylink/path/to/file | head -1 | grep -o '[0-9][0-9][0-9]'
Nice and simple:
curl --output /dev/null --silent --head --fail http://google.com
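With --fail, curl exits non-zero when the server returns an HTTP error, so you can branch on the exit status directly, for example:
if curl --output /dev/null --silent --head --fail http://google.com; then
  echo "URL is reachable"
else
  echo "URL returned an error or is unreachable"
fi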

curl: (3) Illegal characters found in URL with bash shell

I want to get a random page from Wikipedia and save it to a txt file.
curl -I https://en.wikipedia.org/wiki/Special:Random|grep -E "Location:"|cut -d ' ' -f2 > "result.txt"
But when I read the URL back from the txt file and pass it to curl, I get the error from the title:
cat result.txt | xargs -I % curl %
How about just following redirects with curl by adding the -L switch? No need to parse the Location header:
curl -L https://en.wikipedia.org/wiki/Special:Random
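To also save the fetched page to the txt file, as the question intended, a minimal sketch is:
curl -L -o result.txt https://en.wikipedia.org/wiki/Special:Random
And if you only want the final article URL rather than its content, curl can print it with the url_effective write-out variable:
curl -Ls -o /dev/null -w '%{url_effective}\n' https://en.wikipedia.org/wiki/Special:Random > result.txt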

curl complex usage with pattern

I'm trying to download two files with curl based on a pattern, but it doesn't seem to work.
Files:
SystemOut_15.04.01_21.12.36.log
SystemOut_15.04.01_15.54.05.log
curl -f -k -u "login:password" https://myserver/cgi-bin/logviewer/index.cgi?getlogfile=SystemOut_15.04.01_21.12.36.log'&'server=qwerty123.com'&'numlines=100000000'&'appenv=MBL%20-%20PROD'&'directory=/app/WAS/was85/profiles/node/logs/mbl-server1
I know there is the -A option, but it doesn't help here since the file name is part of the URL.
How can I extract those 2 files using a pattern?
I solved this myself. One curl call gets the list of log files from the web page; another downloads each file.
The code looks like:
for file in $(curl -f -k -u "user:pwd" https://selfservice.pwj.com/cgi-bin/logviewer/index.cgi?listdirectory=/app/smx_client_mob/data/log'&'appenv=MBL%20-%20PROD'&'server=xshembl04pap.she.pwj.com | grep href | sed 's/.*href="//' | sed 's/".*//' | sed 's/javascript:getLog//g' | sed "s/['();]//g" | grep -i 'service' | grep '^[a-zA-Z].*'); do
curl -o $file -f -k -u "user:pwd" https://selfservice.pwj.com/cgi-bin/logviewer/index.cgi?getlogfile="$file"'&'server=xshembl04pap.she.pwj.com'&'numlines=100000000'&'appenv=MBL%20-%20PROD'&'directory=/app/smx_client_mob/data/log; done
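
As an aside, the '&' quoting trick works, but double-quoting the entire URL (and the output file name) is usually simpler and less error-prone, since the shell then never sees the ampersands. A sketch of the download line rewritten that way, with the same hypothetical server and parameters as above:
curl -o "$file" -f -k -u "user:pwd" "https://selfservice.pwj.com/cgi-bin/logviewer/index.cgi?getlogfile=$file&server=xshembl04pap.she.pwj.com&numlines=100000000&appenv=MBL%20-%20PROD&directory=/app/smx_client_mob/data/log"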
