How to pick one username:password for every 3 proxies in bash? - bash

I have a bash script that needs a proxy to run because it's a curl script. The problem is that every group of 3 proxies has a different username and password. I have managed to make the curl script pick a random proxy and proxy username:password, but how do I make it pick a different username:password after every 3 proxies? Thanks
The code itself :
rand_useragent=$(head -$((${RANDOM} % `wc -l < tmp/ua.lst` + 1)) tmp/ua.lst | tail -1)
rand_proxy=$(head -$((${RANDOM} % `wc -l < tmp/proxy.lst` + 1)) tmp/proxy.lst | tail -1)
rand_user=$(head -$((${RANDOM} % `wc -l < tmp/userproxy.lst` + 1)) tmp/userproxy.lst | tail -1)
posted=`curl --max-time 20 --connect-timeout 20 --proxy "$rand_proxy" --proxy-user "$rand_user" 'url'`
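A minimal sketch of one way to do this, assuming each line of tmp/userproxy.lst holds the credentials for three consecutive lines of tmp/proxy.lst: pick a proxy line at random, then derive the matching credential line by integer division.
# Map a random proxy line to its credential line (one credential line per
# three proxy lines is an assumption about how the lists are laid out).
proxy_count=$(wc -l < tmp/proxy.lst)
proxy_line=$(( RANDOM % proxy_count + 1 ))
user_line=$(( (proxy_line - 1) / 3 + 1 ))
rand_proxy=$(sed -n "${proxy_line}p" tmp/proxy.lst)
rand_user=$(sed -n "${user_line}p" tmp/userproxy.lst)
posted=$(curl --max-time 20 --connect-timeout 20 --proxy "$rand_proxy" --proxy-user "$rand_user" 'url')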

Related

display grid of data in bash

I would like to get an opinion on how best to do this in bash, thank you.
For x number of servers, each has its own list of replication agreements and their status. It's easy to run a few commands and get this data, for example:
get servers, output (setting/variable in/from a local config file);
. ./ldap-config ; echo "$MASTER $REPLICAS"
dc1-server1 dc1-server2 dc2-server1 dc2-server2 dc3...
for dc1-server1, get agreements, output;
ipa-replica-manage -p $(cat ~/.dspw) list -v $SERVER.$DOMAIN | grep ': replica' | sed 's/: replica//'
dc2-server1
dc3-server1
dc4-server1
for dc1-server1, get agreement status codes, output;
ipa-replica-manage -p $(cat ~/.dspw) list -v $SERVER.$DOMAIN | grep 'status: Error (' | sed -e 's/.*status: Error (//' -e 's/).*//'
0
0
18
So the output would be several columns based on the 'get servers' list, with each 'replica: status' listed under the corresponding server.
Looking to achieve something like:
dc2-server1: 0 dc2-server2: 0 dc1-server1: 0 ...
dc3-server1: 0 dc3-server2: 18 dc3-server1: 13 ...
dc4-server1: 18 dc4-server2: 0 dc4-server1: 0 ...
Generally eval is considered evil. Nevertheless, I'm going to use it.
paste is handy for printing files side-by-side.
Bash process substitutions can be used where you'd use a filename.
So, I'm going to dynamically build up a paste command and then eval it
I'm going to use get.sh as a placeholder for your mystery commands.
cmd="paste"
while read -ra servers; do
for server in "${servers[#]}"; do
cmd+=" <(./get.sh \"$server\" agreements | sed 's/\$/:/')"
cmd+=" <(./get.sh \"$server\" status)"
done
done < <(./get.sh servers)
eval "$cmd" | column -t

What's the easiest way to find multiple unused local ports within a range?

What I need is to find unused local ports within a range for further usage (for appium nodes). I found this code:
getPorts() {
  freePort=$(netstat -aln | awk '
    $6 == "LISTEN" {
      if ($4 ~ "[.:][0-9]+$") {
        split($4, a, /[:.]/);
        port = a[length(a)];
        p[port] = 1
      }
    }
    END {
      for (i = 7777; i < 65000 && p[i]; i++){};
      if (i == 65000) {exit 1};
      print i
    }
  ')
  echo ${freePort}
}
This works pretty well if I need a single free port, but for parallel test execution we need multiple unused ports. So I need to modify the function to be able to get not one free port but multiple (depending on a parameter), starting from the first found free port, and then store the result in one string variable. For example, if I need ports for 3 devices, the result should be:
7777 7778 7779
The code should work on macOS, because we're using a Mac mini as a test server.
Since I only started with bash, it's a bit complicated for me to do.
This is bash code; it works fine on Linux, so if your Mac also runs bash it will work for you.
getPorts() {
  amount=${1}
  found=0
  ports=""
  for ((i=7777;i<=65000;i++))
  do
    (echo > /dev/tcp/127.0.0.1/${i}) >/dev/null 2>&1 || {
      #echo "${i}"
      ports="${ports} ${i}"
      found=$((found+1))
      if [[ ${found} -ge ${amount} ]]
      then
        echo "${ports:1}"
        return 0
      fi
    }
  done
  return 1
}
Here is how to use it, and the output:
$ getPorts 3
7777 7778 7779
$ getPorts 10
7777 7778 7779 7780 7781 7782 7783 7784 7785 7786
Finding unused ports from 5000 to 5100:
range=(`seq 5000 5100`)
ports=(`netstat -tuwan | awk '{print $4}' | grep ':' | cut -d ":" -f 2`)
echo ${range[@]} ${ports[@]} ${ports[@]} | tr ' ' '\n' | sort | uniq -u
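On macOS, where netstat lacks the Linux-style -t/-u/-w flags, a similar sketch can use lsof instead (lsof ships with macOS; the awk field handling assumes its default output columns):
# List listening TCP ports, then print ports in the range that are not in use.
used=$(lsof -nP -iTCP -sTCP:LISTEN 2>/dev/null | awk 'NR>1 {sub(/.*:/, "", $9); print $9}' | sort -u)
for port in $(seq 5000 5100); do
  grep -qx "$port" <<< "$used" || echo "$port"
done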

how to pass variable value in POST JSON data in CURL command?

There are two fields which are unique, so I wrote a command to generate a random value every time the POST request is built, stored those values in variables, and passed those variables on the curl command line. The script is below.
rcontactno=$(cat /dev/urandom | tr -dc '0-9' | fold -w 10 | head -n 1)
rfirstname=$(cat /dev/urandom | tr -dc 'a-zA-Z' | fold -w 10 | head -n 1)
echo $rcontactno and $rfirstname
STATUS=$(curl -v -X POST -d '{"userName":"$rfirstname","contactNo":$rcontactno}' $1/restaurants/53/managers --header "Content-Type:application/json" --header "Accept:application/json" | grep HTTP | cut -d ' ' -f2 )
#Passing the URL using command-line argument
if [[ STATUS -eq 201 ]]; then
echo “Success”
exit 0
else
echo “Failed”
exit 127
fi
Then I execute the script with
bash manager-post.sh
and I get this type of error:
> POST /restaurants/53/managers HTTP/1.1
> User-Agent: curl/7.37.1
> Host: my-url
> Content-Type:application/json
> Accept:application/json
> Content-Length: 54
>
} [data not shown]
* upload completely sent off: 54 out of 54 bytes
< HTTP/1.1 400 Bad Request
* Server Apache-Coyote/1.1 is not blacklisted
< Server: Apache-Coyote/1.1
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Fri, 16 Jan 2015 08:59:01 GMT
< Connection: close
<
{ [data not shown]
* Closing connection 0
“Failed”
But when I run the curl command without the bash script and explicitly mention the values of userName and contactNo, then it executes successfully.
Where am I making a mistake?
The single quotes don't allow variable expansion in the shell, so you need to use double quotes instead. Then you need to subsequently escape the double-quotes you want to send as-is.
A useful debugging technique is to add --trace-ascii dump.txt to your command line and inspect dump.txt after invoking curl to see that it matches what you intended to send.
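For example, a sketch of the quoting fix applied to the command from the question (keeping the original variable names):
# Double quotes let $rfirstname and $rcontactno expand; the quotes that must
# reach the server as JSON are escaped with backslashes.
STATUS=$(curl -v -X POST \
  -d "{\"userName\":\"$rfirstname\",\"contactNo\":$rcontactno}" \
  "$1/restaurants/53/managers" \
  --header "Content-Type:application/json" \
  --header "Accept:application/json" | grep HTTP | cut -d ' ' -f2)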

how to distinguish the domain from a subdomain

I have a problem with a script.
At my work I often have to check the PTR of a domain, MX records and similar data. I did most of the script, but the problem appears with something like this. This is a subdomain:
e-learning.go4progress.pl
and this is a domain, but second level:
simple.com.pl
Now in my script there is something like: if I don't put www, it adds it and runs
dig www.e-learning.go4progress.pl
dig e-learning.go4progress.pl
dig go4progress.pl
It counts the dots and subtracts 1, but the problem is when the domain looks like
simple.com.pl
because then the script also runs dig for
com.pl
I don't check many domains which contain com.pl, co.uk, gov.pl. I've got an idea to make an array and compare what I put into the script against that array; when it finds an array component in the string, it subtracts 2 instead of 1 ;)
I paste a part of the script so you can better understand why it subtracts 1.
url="`echo $1 | sed 's~http[s]*://~~g' | sed 's./$..' | awk '!/^www/ {$0="www." $0}1'`"
ii=1
dots="`echo $url | grep -o "\." | wc -l`"
while [ $ii -le $dots] ; do
cut="`echo $url | cut -d "." -f$ii-9`"
ip="`dig +short $cut`"
host="`dig -x $ip +short`"
if [[ -z "$host" ]]; then
host="No PTR"
fi
echo "strona: $cut Host: $host"
ii=$[ii + 1]
Maybe you have a different idea how to help with my problem.
The second question is how to distinguish a subdomain from a domain.
I need to compare the MX records of the subdomain (when the URL contains a subdomain) and the MX records of the top-level domain.
I'm waiting for your response ;)
My solution to find the domain as registered with the registrar:
wget https://raw.githubusercontent.com/gavingmiller/second-level-domains/master/SLDs.csv
DOMAIN="www.e-learning.go4progress.co.uk";
KEEPPARTS=2;
TWOLEVELS=$( /bin/echo "${DOMAIN}" | /usr/bin/rev | /usr/bin/cut -d "." --output-delimiter=".\\" -f 1-2 | /usr/bin/rev );
if /bin/grep -P ",\.${TWOLEVELS}" SLDs.csv >/dev/null; then
KEEPPARTS=3;
fi
DOMAIN=$( /bin/echo "${DOMAIN}" | /usr/bin/rev | /usr/bin/cut -d "." -f "1-${KEEPPARTS}" | /usr/bin/rev );
echo "${DOMAIN}"
Thanks to https://github.com/gavingmiller/second-level-domains and https://github.com/medialize/URI.js/issues/17#issuecomment-3976617

How to get remote file size from a shell script?

Is there a way to get the size of a remote file like
http://api.twitter.com/1/statuses/public_timeline.json
in shell script?
You can download the file and get its size. But we can do better.
Use curl to get only the response header using the -I option.
In the response header look for Content-Length: which will be followed by the size of the file in bytes.
$ URL="http://api.twitter.com/1/statuses/public_timeline.json"
$ curl -sI $URL | grep -i Content-Length
Content-Length: 134
To get the size use a filter to extract the numeric part from the output above:
$ curl -sI $URL | grep -i Content-Length | awk '{print $2}'
134
Two caveats to the other answers:
Some servers don't return the correct Content-Length for a HEAD request, so you might need to do the full download.
You'll likely get an unrealistically large response (compared to a modern browser) unless you specify gzip/deflate headers.
Also, you can do this without grep/awk or piping:
curl 'http://api.twitter.com/1/statuses/public_timeline.json' --location --silent --write-out 'size_download=%{size_download}\n' --output /dev/null
And the same request with compression:
curl 'http://api.twitter.com/1/statuses/public_timeline.json' --location --silent -H 'Accept-Encoding: gzip,deflate' --write-out 'size_download=%{size_download}\n' --output /dev/null
Similar to codaddict's answer, but without the call to grep:
curl -sI http://api.twitter.com/1/statuses/public_timeline.json | awk '/Content-Length/ { print $2 }'
The preceding answers won't work when there are redirections. For example, if you want the size of the Debian ISO DVD, you must use the --location option; otherwise, the reported size may be that of the 302 Moved Temporarily response body, not that of the real file.
Suppose you have the following url:
$ url=http://cdimage.debian.org/debian-cd/8.1.0/amd64/iso-dvd/debian-8.1.0-amd64-DVD-1.iso
With curl, you could obtain:
$ curl --head --location ${url}
HTTP/1.0 302 Moved Temporarily
...
Content-Type: text/html; charset=iso-8859-1
...
HTTP/1.0 200 OK
...
Content-Length: 3994091520
...
Content-Type: application/x-iso9660-image
...
That's why I prefer using HEAD, which is an alias for the lwp-request command from the libwww-perl package (on Debian). Another advantage it has is that it strips the extra \r characters, which eases subsequent string processing.
So to retrieve the size of the debian iso DVD, one could do for example:
$ size=$(HEAD ${url})
$ size=${size##*Content-Length: }
$ size=${size%%[[:space:]]*}
Please note that:
this method will require launching only one process
it will work only with bash, because of the special expansion syntax used
For other shells, you may have to resort to sed, awk, grep et al..
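For example, a minimal POSIX-shell sketch that pulls the value out of the same HEAD output with sed instead of the bash-only expansions:
# Extract the Content-Length value from the lwp-request HEAD output.
size=$(HEAD "${url}" | sed -n 's/^Content-Length: //p')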
I think the easiest way to do this would be to:
use cURL to run in silent mode -s,
pull only the headers -I (so as to avoid downloading the whole file)
then do a case insensitive grep -i
and return the second arg using awk $2.
output is returned as bytes
Examples:
curl -sI http://api.twitter.com/1/statuses/public_timeline.json | grep -i content-length | awk '{print $2}'
//output: 52
or
curl -sI https://code.jquery.com/jquery-3.1.1.min.js | grep -i content-length | awk '{print $2}'
//output: 86709
or
curl -sI http://download.thinkbroadband.com/1GB.zip | grep -i content-length | awk '{print $2}'
//output: 1073741824
Show as Kilobytes/Megabytes
If you would like to show the size in Kilobytes then change the awk to:
awk '{print $2/1024}'
or Megabytes
awk '{print $2/1024/1024}'
The accepted solution was not working for me; this does (note that it downloads the whole file just to count its bytes):
curl -s https://code.jquery.com/jquery-3.1.1.min.js | wc -c
I have a shell function, based on codaddict's answer, which gives a remote file's size in a human-readable format thusly:
remote_file_size () {
  printf "%q" "$*" |
    xargs curl -sI |
    grep Content-Length |
    awk '{print $2}' |
    tr -d '\040\011\012\015' |
    gnumfmt --to=iec-i --suffix=B
    # `gnumfmt' is GNU coreutils `numfmt' as installed on systems that lack
    # the GNU coreutils by default (i.e., non-Linux systems such as BSD or
    # macOS); on Linux, drop the leading `g' and call `numfmt' directly.
}
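Usage is just remote_file_size <url>; for example (the printed value depends on the Content-Length the server reports):
remote_file_size https://code.jquery.com/jquery-3.1.1.min.js
# prints something like: 85KiB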
This will show you detailed info about the ongoing download; you just need to specify a URL as in the example below.
$ curl -O -w 'We downloaded %{size_download} bytes\n' \
    https://cmake.org/files/v3.8/cmake-3.8.2.tar.gz
output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7328k 100 7328k 0 0 244k 0 0:00:29 0:00:29 --:--:-- 365k
We downloaded 7504706 bytes
For automated purposes you'll just need to add the command to your
script file.
Combining all the above, this works for me:
URL="http://cdimage.debian.org/debian-cd/current/i386/iso-dvd/debian-9.5.0-i386-DVD-1.iso"
curl --head --silent --location "$URL" | grep -i "content-length:" | tr -d " \t" | cut -d ':' -f 2
This will return just the content length in bytes:
3767500800
You can kinda do it like this, including auto-following 301/302 redirections :
curl -ILs 'https://twitter.com/i/csp_report?a=ORTGK%3D%3D%3D&ro=fals' |
mawk 'NF*=!_<NF' \
OFS= FS='^[Cc][Oo][Nn][Tt][Ee][Nn][Tt]-[Ll][Ee][Nn][Gg][Tt][Hh]: '
1 41
It's very brute force but gets the job done. Note that this is whatever raw value is being reported by the server, so you may have to make adjustments to it as you see fit.
You may also have to add the -g flag so it can automatically handle the switchover from vanilla http to https:
curl -gILs 'http://apple.com' |
mawk 'NF *= !_<NF' OFS= \
FS='^[Cc][Oo][Nn][Tt][Ee][Nn][Tt]-[Ll][Ee][Nn][Gg][Tt][Hh]: '
1 304
2 106049
(I'm guessing this might be the main site, and the first item was the redirection page?)
The question is old and has been sufficiently answered, but let me expand upon the existing answers. If you want to automate this task (for checking the file sizes of multiple files), then here's a one-liner.
first write the URL of the files in a file:
cat url_of_files.txt
https://stpubdata-jwst.stsci.edu/ero/jw02734/jw02734002001/jw02734002001_04101_00001-seg002_nis_x1dints.fits
https://stpubdata-jwst.stsci.edu/ero/jw02734/jw02734002001/jw02734002001_04101_00001-seg003_nis_calints.fits
https://stpubdata-jwst.stsci.edu/ero/jw02734/jw02734002001/jw02734002001_04102_00001-seg001_nis_calints.fits
https://stpubdata-jwst.stsci.edu/ero/jw02734/jw02734002001/jw02734002001_02101_00002-seg001_nis_cal.fits
...
then from the command line (from the same directory as your url_of_files.txt):
eval $(sed -rn '/^https/s/(https.*$)/curl -sI \1/p' url_of_files.txt) | awk '/[Cc]ontent-[Ll]ength/{kb=$2/1024;mb=kb/1024;gb=mb/1024;print ( $2>1024 ? ( kb>1024 ? ( mb>1024 ? gb " G" : mb " M") : kb " K" ) : $2 " B" ) }'
This is for checking file sizes ranging from bytes to GBs. I use this line to check the FITS data files being made available by the JWST team.
It checks the file size and, depending on its size, roughly converts it to an appropriate number with a B, K, M, or G suffix denoting the size in bytes, kilobytes, megabytes, or gigabytes.
result:
...
177.188 K
177.188 K
236.429 M
177.188 K
5.95184 M
1.83608 G
1.20326 G
130.059 M
1.20326 G
...
My solution uses awk's END block to ensure that only the last Content-Length is printed:
function curl2contentlength() {
curl -sI -L -H 'Accept-Encoding: gzip,deflate' $1 | grep -i Content-Length | awk 'END{print $2}'
}
curl2contentlength "$@"
./curl2contentlength.sh "https://chrt.fm/track/B63133/stitcher.simplecastaudio.com/ec74d48c-cbf1-4764-923e-7d584dce50fa/episodes/a85954a3-24c3-48ed-bced-ef0607b7149a/audio/128/default.mp3?aid=rss_feed&awCollectionId=ec74d48c-cbf1-4764-923e-7d584dce50fa&awEpisodeId=a85954a3-24c3-48ed-bced-ef0607b7149a&feed=qm_9xx0g"
10806508
In fact, without it, the output would have been
0
0
10806508
I use it like this ([Cc]ontent-[Ll]ength:), because the server gave me multiple lines mentioning Content-Length in the header response:
curl -sI "http://someserver.com/hls/125454.ts" | grep '[Cc]ontent-[Ll]ength:' | awk '{ print $2 }'
Accept-Ranges: bytes
Access-Control-Expose-Headers: Date, Server, Content-Type, Content-Length
Server: WowzaStreamingEngine/4.5.0
Cache-Control: no-cache
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: OPTIONS, GET, POST, HEAD
Access-Control-Allow-Headers: Content-Type, User-Agent, If-Modified-Since, Cache-Control, Range
Date: Tue, 10 Jan 2017 01:56:08 GMT
Content-Type: video/MP2T
Content-Length: 666460
A different solution:
ssh userName@IP ls -s PATH | grep FILENAME | awk '{print $1}'
gives you the size in KB (the allocated size in 1K blocks, not exact bytes).
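If you need the exact size in bytes rather than allocated blocks, stat over ssh is another option (a sketch; GNU stat syntax shown, while BSD/macOS stat would use -f '%z'):
ssh userName@IP stat -c '%s' /path/to/FILENAME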
