Refining output of a curl command - shell

I have the output of a curl command as below.
Command: curl https://application.com/api/projectcreator
Output:
{"Table":[{"key":"projectA","name":"Jhon"},
{"key":"projectB","name":"Sam"},
{"key":"ProjectC","name":"Jack"}]}
I would like to cut this output down to only the names. Is there a way to do this in shell?
Eg:
Jhon
Sam
Jack
I tried the below, but it doesn't seem promising for me:
for Table in `curl -s -k http://application.com/api/projectcreator | grep "name"`
do
echo "$Table"
done
Thanks for your help and insights!

Using jq you can do this easily:
curl -s -k 'http://application.com/api/projectcreator' |
jq -r '.Table[].name' | paste -s -d ' '
Jhon Sam Jack
(Drop the paste -s -d ' ' step if you want one name per line, as in the question.)
If jq cannot be installed, then use GNU grep:
curl -s -k 'http://application.com/api/projectcreator' |
grep -oP '"name":"\K[^"]+' | paste -s -d ' '
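Both pipelines can be tried offline by feeding them a sample payload instead of the live endpoint (the JSON below is copied from the question):

```shell
# Sample payload copied from the question, standing in for the curl response
json='{"Table":[{"key":"projectA","name":"Jhon"},{"key":"projectB","name":"Sam"},{"key":"ProjectC","name":"Jack"}]}'
# -o prints only the match; \K discards the matched prefix,
# leaving just the value of each "name" field
printf '%s\n' "$json" | grep -oP '"name":"\K[^"]+'
```

This prints Jhon, Sam, and Jack on separate lines. Note that -P (PCRE) requires GNU grep built with PCRE support; BSD/macOS grep does not have it.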


How to print out content-security-policy

I have tried this command:
curl -Ik https://dev.mydomain.com/
and it prints all the headers. What I want now is to print only content-security-policy.
Do I need to use jq, or is there another helpful tool I can use?
curl -sIk https://stackoverflow.com/ | grep content-security-policy | cut -d ' ' -f 2-
This curls the URL, greps only the line containing content-security-policy, and cuts on spaces to keep all fields from 2 onwards.
Example:
$ curl -sIk https://stackoverflow.com/ | grep content-secur | cut -d ' ' -f 2-
upgrade-insecure-requests; frame-ancestors 'self' https://stackexchange.com
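The grep/cut step can be checked without the network by running it over a captured header block (the header values below are made up for illustration):

```shell
# Hypothetical response headers standing in for the curl -sIk output
headers='HTTP/2 200
content-type: text/html; charset=utf-8
content-security-policy: upgrade-insecure-requests; frame-ancestors '\''self'\'''
# Keep only the CSP line, then drop field 1 (the header name)
printf '%s\n' "$headers" | grep '^content-security-policy' | cut -d ' ' -f 2-
```

This prints everything after the header name. HTTP/2 responses lower-case all header names, which is why the lower-case grep pattern matches.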
If you use curl >= 7.84.0, you can use the %header{name} syntax:
curl -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}"
If you want to try it without installing a new version, you can run the Docker image:
docker run --rm curlimages/curl:7.85.0 -Iks https://stackoverflow.com -o /dev/null -w "%header{content-security-policy}"

Get latest version of Maven with curl

My task is to grab the latest version of Maven in a bash script using curl https://apache.osuosl.org/maven/maven-3/
The output should be: 3.8.5
curl -s "https://apache.osuosl.org/maven/maven-3/" | grep -o "[0-9]\.[0-9]\.[0-9]" | tail -1
(Note the escaped dots: unescaped, . matches any character. This also assumes single-digit version components and that the page lists versions in ascending order.)
works for me ty
One way is to
Use grep to only get the lines containing the folder link
Use sed with a regex to get only the version number
Use sort to sort the lines
Use tail -n1 to get the last line
curl -s https://apache.osuosl.org/maven/maven-3/ \
| grep folder \
| gsed -E 's/.*([[:digit:]]\.[[:digit:]]\.[[:digit:]]).*/\1/' \
| sort \
| tail -n1
Output:
3.8.5
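One caveat with a plain sort: it compares version strings character by character, so 3.8.10 sorts before 3.8.9. GNU sort's -V (version sort) avoids that. A quick check with made-up version numbers:

```shell
# Lexicographic sort: '1' < '5' < '9', so 3.8.10 lands first and 3.8.9 last
printf '3.8.9\n3.8.10\n3.8.5\n' | sort | tail -n1     # prints 3.8.9
# Version sort compares each numeric component, so 3.8.10 wins
printf '3.8.9\n3.8.10\n3.8.5\n' | sort -V | tail -n1  # prints 3.8.10
```

For the single-digit versions on the page today both behave the same, but sort -V keeps working once a two-digit component appears.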
The URL you're mentioning serves an HTML document, and curl is not an HTML parser, so I'd look into an HTML parser, like xidel, if I were you:
$ xidel -s "https://apache.osuosl.org/maven/maven-3/" -e '//pre/a[last()]'
3.8.5/
$ xidel -s "https://apache.osuosl.org/maven/maven-3/" -e 'substring-before(//pre/a[last()],"/")'
3.8.5
I would suggest using the Central repository instead of one of the Apache Maven mirrors, because the information on Central is better suited for machine consumption.
The file maven-metadata.xml contains the appropriate information.
There are two options: using the <version>..</version> tags or the <latest>..</latest> tag.
MVNVERSION=$(curl https://repo1.maven.org/maven2/org/apache/maven/maven/maven-metadata.xml \
| grep "<version>.*</version>" \
| tr -s " " \
| cut -d ">" -f2 \
| cut -d "<" -f1 \
| sort -V -r \
| head -1)
The second option:
MVNVERSION=$(curl https://repo1.maven.org/maven2/org/apache/maven/maven/maven-metadata.xml \
| grep "<latest>.*</latest>" \
| cut -d ">" -f2 \
| cut -d "<" -f1)
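The cut-based extraction can be verified against a small stand-in for maven-metadata.xml (the version number below is invented):

```shell
# Hypothetical maven-metadata.xml fragment standing in for the curl output
meta='<metadata>
  <versioning>
    <latest>3.9.6</latest>
  </versioning>
</metadata>'
# Field 2 after ">" is "3.9.6</latest"; field 1 before "<" is the bare version
printf '%s\n' "$meta" | grep "<latest>.*</latest>" | cut -d ">" -f2 | cut -d "<" -f1
```

The same field arithmetic explains the first option: every <version> line splits the same way, which is why the pipeline needs sort -V -r | head -1 afterwards to pick the newest.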

Sending grep data iteratively using curl for output of tail -f

I am using grep to extract data from a log file. The log file is dynamically updating new rows. and I need to send the grepped data to a REST endpoint using curl. This can be done easily for a static file but cannot find a solution fo a running log file. How can I realize this situation?
eg: tail -f | grep "<string>" > ~/<fileName>.log
The above can put the data in a file. Need to send it using a POST curl.
Maybe use a function like:
send_data() {
  curl -s -k -X POST --header 'Content-Type: application/json' \
    --header 'Accept: application/json' \
    "http://${HOST}${PORT}/v1/notify" \
    -d "$1"
}
If tail -f | grep "<string>" > ~/<fileName>.log is working for you then you could do:
tail -f file | stdbuf -i0 -o0 -e0 grep "<string>" | xargs -n 1 -d $'\n' curl ...
or:
while IFS= read -r line; do
curl ... "$line"
done < <(tail -f file | stdbuf -i0 -o0 -e0 grep "<string>")
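The xargs variant can be exercised offline by replacing tail -f with a static producer and curl with echo (both are stand-ins here):

```shell
# printf stands in for tail -f | grep; echo stands in for the curl POST.
# -d '\n' (GNU xargs) makes each whole line one argument; -n 1 runs the
# command once per line, just as curl would be invoked once per match.
printf 'ERROR disk full\nERROR net down\n' | xargs -d '\n' -n 1 echo "would POST:"
```

This prints one "would POST: ..." line per input line. The stdbuf -i0 -o0 -e0 in the real pipeline matters because grep block-buffers its output when writing to a pipe, which would otherwise delay the POSTs.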

Sorted output, needs to have text inserted between strings

I'm trying to add text (predefined) around a sorted output and save it to a new file.
I'm using a curl command to gather my info.
$ curl --user XXX:1234!## "http://......"
Then using grep to find IP addresses and sorting so they only appear once.
$ curl --user XXX:1234!## "http://......" | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u
I need to add <my_text_predefined> around each matched IP address (([0-9]{1,3}[\.]){3}[0-9]{1,3}) and then save the result to a new file.
The script below only gets me the IP addresses:
$ curl --user XXX:1234!## "http://......" | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u
123.12.0.12
123.56.98.76
$ curl --user some_user:password "http://...." | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u | sed 's/.*/<prefix> -s & <suffix>/'
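With the sample IPs from the question, the sed step wraps each line like this (the prefix/suffix text is whatever you predefine):

```shell
# IPs copied from the question; in sed's replacement, & expands to the
# whole matched line, so text on either side of & wraps each IP
printf '123.12.0.12\n123.56.98.76\n' | sed 's/.*/<prefix> & <suffix>/'
```

Appending > newfile to the pipeline then saves the wrapped output, as the question asks.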
So if we need to print some text for each IP, try xargs:
for i in {1..100}; do echo $i; done | xargs -n1 echo "Values are:"
If you need to make a decision based on each IP, put it in a loop:
for ip in $(curl ...); do
  # check "$ip" or do something with it
done

curl complex usage with pattern

I'm trying to get 2 files using curl based on some pattern, but that doesn't seem to work:
Files:
SystemOut_15.04.01_21.12.36.log
SystemOut_15.04.01_15.54.05.log
curl -f -k -u "login:password" https://myserver/cgi-bin/logviewer/index.cgi?getlogfile=SystemOut_15.04.01_21.12.36.log'&'server=qwerty123.com'&'numlines=100000000'&'appenv=MBL%20-%20PROD'&'directory=/app/WAS/was85/profiles/node/logs/mbl-server1
I know there is the -A option, but it doesn't work since my file name is inside the link.
How can I extract those 2 files using a pattern?
Did that myself. One curl gets the list of logs from the webpage; another downloads those files.
The code looks like:
for file in $(curl -f -k -u "user:pwd" 'https://selfservice.pwj.com/cgi-bin/logviewer/index.cgi?listdirectory=/app/smx_client_mob/data/log&appenv=MBL%20-%20PROD&server=xshembl04pap.she.pwj.com' \
    | grep href \
    | sed 's/.*href="//' \
    | sed 's/".*//' \
    | sed 's/javascript:getLog//g' \
    | sed "s/['();]//g" \
    | grep -i 'service' \
    | grep '^[a-zA-Z].*'); do
  curl -o "$file" -f -k -u "user:pwd" 'https://selfservice.pwj.com/cgi-bin/logviewer/index.cgi?getlogfile='"$file"'&server=xshembl04pap.she.pwj.com&numlines=100000000&appenv=MBL%20-%20PROD&directory=/app/smx_client_mob/data/log'
done
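The href-scraping half of that loop can be tried against a canned HTML snippet (the file names below are invented):

```shell
# Hypothetical directory-listing HTML standing in for the first curl's output
html='<a href="javascript:getLog('\''service_a.log'\'')">service_a.log</a>
<a href="javascript:getLog('\''service_b.log'\'')">service_b.log</a>'
# Strip everything up to href=", strip the closing quote onwards,
# drop the javascript:getLog wrapper, then remove quotes/parens/semicolons
printf '%s\n' "$html" | grep href \
  | sed 's/.*href="//' | sed 's/".*//' \
  | sed 's/javascript:getLog//g' | sed "s/['();]//g" \
  | grep -i 'service' | grep '^[a-zA-Z].*'
```

Each surviving line is a bare file name, ready to substitute into the getlogfile= parameter of the second curl.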
