Newbie Alert!
I am trying to run DNS record queries for the domains listed in a CSV via a bash script. I want to find the MX records with host -t mx example.com and then record the result in another CSV.
I'm stuck at getting the script to run the host command, because host -t mx$domain fails: there must be a space between mx and the domain, as in host -t mx example.com.
What I have:
#!/bin/bash
while IFS=, read -r domain
do
    #echo ${domain/./\.}
    host -t mx${domain/./\.}
done < test1.csv
Thanks
Edit 1: adding a sample input CSV and the expected output
Input CSV
domain
24i.co.ke,
28feb.co.ke,
4thestatewire.co.ke,
aakenya.co.ke,
Expected Output
domain,mx
24i.co.ke,"24i.co.ke mail is handled by 20 alt2.aspmx.l.google.com.
24i.co.ke mail is handled by 30 aspmx3.googlemail.com.
24i.co.ke mail is handled by 10 aspmx.l.google.com."
28feb.co.ke,"28feb.co.ke mail is handled by 30 aspmx3.googlemail.com.
28feb.co.ke mail is handled by 30 aspmx5.googlemail.com.
28feb.co.ke mail is handled by 30 aspmx2.googlemail.com.
28feb.co.ke mail is handled by 10 aspmx.l.google.com.
28feb.co.ke mail is handled by 20 alt1.aspmx.l.google.com.
28feb.co.ke mail is handled by 20 alt2.aspmx.l.google.com.
28feb.co.ke mail is handled by 30 aspmx4.googlemail.com."
4thestatewire.co.ke,Host 4thestatewire.co.ke not found: 3(NXDOMAIN)
aakenya.co.ke,"aakenya.co.ke mail is handled by 20 ukns1.accesskenya.com.
aakenya.co.ke mail is handled by 10 smtpin.accesskenya.com."
abacus.co.ke,
Your substitution, ${domain/./\.}, is probably not doing what you expect (though the result may be harmless). I can see that you've tried some debugging with an echo line. It would be interesting to know what you thought this substitution would achieve.
Your input file is CSV with two fields, the second one empty. I can't see anything that you would need to translate or change in that first field to make it compatible with a DNS lookup.
#!/usr/bin/env bash
file="${1:-test1.csv}"
if [[ ! -f "$file" ]]; then
    printf 'No file: %s\n' "$file" >&2
    exit 1
fi
(
    # emit the output header: the input header plus the new mx column
    read -r header; printf '%s,mx\n' "$header"
    # the "_" swallows the empty field after each line's trailing comma
    while IFS=, read -r domain _; do
        line="$(host -t mx "$domain" | sort | head -1)"
        printf '%s,"%s"\n' "$domain" "$line"
    done
) < "$file"
So...
This takes an input file as an optional argument. If the input file (or test1.csv if none is provided) does not exist, the script exits.
It takes the MX records for the domain, sorts them, then selects the first one. That way we keep the lowest-numbered (highest-priority) MX record.
The while loop is in parentheses so that the header can be read from the same input stream as the loop. Note that parentheses denote a subshell, so variables set inside them will not be visible to the parts of the script outside the parentheses.
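To make that scoping rule concrete, a minimal example with a throwaway variable:
x=1
( x=2 )       # the assignment happens in a subshell...
echo "$x"     # ...so this still prints 1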
And finally, this actually prints some output, which your sample script did not. :-)
Give this one-liner a try:
awk -F, 'FNR>1{ print $1 }' < input.txt \
    | xargs -n 1 sh -c 'v="$(host -t mx "$1")"; echo "$1,\"$v\""' argv0
Based on your input file, it reads from line 2 onward and uses the first field (the domain) without the trailing ,. The output is piped to xargs, which executes the host command and stores the result in a variable so that it can be printed later in your desired format, domain,"output".
The only thing pending would be to add the header line domain,mx to the output; one way to do that is sketched below.
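For instance, a minimal sketch of that last step, assuming a hypothetical destination file output.csv:
{
    echo 'domain,mx'    # the pending header line
    awk -F, 'FNR>1{ print $1 }' < input.txt \
        | xargs -n 1 sh -c 'v="$(host -t mx "$1")"; echo "$1,\"$v\""' argv0
} > output.csv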
I am using a curl command to get json data from an application called "Jira".
Stupidly (in my view), you cannot use the API to return more than 50 values at a time.
The only choice is to do it in multiple requests, which they call "pagination". It is not possible to get more than 50 results, no matter the command.
This is the command here:
curl -i -X GET 'https://account_name.atlassian.net/rest/api/3/project/search?jql=ORDER%20BY%20Created&maxResults=50&startAt=100' --user 'scouse_bob@mycompany.com:<sec_token_deets>'
This is the key piece of what I am trying to work into a loop to avoid having to do this manually each time:
startAt=100
My goal is to "somehow" have this loop in blocks of fifty, so startAt=50, then startAt=100, startAt=150, etc., appending the entire output to a file until the figure 650 is reached and/or there is no further output available.
I have played around with a command like this:
#!/bin/ksh
i=1
while [[ $i -lt 1000 ]]; do
    curl -i -X GET 'https://account_name.atlassian.net/rest/api/3/project/search?jql=ORDER%20BY%20Created&maxResults=50&startAt=100' --user 'scouse_bob@mycompany.com:<sec_token_deets>'
    echo "$i"
    (( i += 1 ))
done
This does not really get me far: although it will loop, I am uncertain how to apply the loop variable to the URL.
Help appreciated.
My goal is to "somehow" have this loop in blocks of fifty, so, startAt=50 then startAt=100, startAt=150 etc and append the entire output to a file until the figure 650 is reached and / or there is no further output available.
The former is easy:
i=0
while [[ $i -lt 650 ]]; do
    # if you meant up to 650 inclusive, change to -le 650 or -lt 700
    curl "https://host/path?blah&startAt=$i"
    # pipe to/through some processing if desired
    # note the URL is in double quotes so $i is expanded but
    # other special chars like & don't screw up parsing
    # also -X GET is the default (without -d or similar) and can be omitted
    (( i += 50 ))
done
The latter depends on just what 'no further output available' looks like. I'd expect you probably don't get an HTTP error, but either a content type indicating an error, or JSON containing an end/error indication or a no-data indication. How to recognize this depends on what you get back, and I don't know this API. I'll guess you probably want something more or less like:
curl ... >tmpfile
if jq -e '.eof==true' tmpfile; then break; else cat/whatever tmpfile; fi
# or
if jq -e '.data|length==0' tmpfile; then break; else cat/whatever tmpfile; fi
where tmpfile is some suitable filename that won't conflict with your other files; the most general way is to use $(mktemp) (saved in a variable). Or, instead of a file, put the data in a variable with var=$(curl ...) and then use <<<"$var" as input to anything that reads stdin.
EDIT: I meant to make this CW to make it easier for anyone to add/fix the API specifics, but forgot; instead I encourage anyone who knows to edit.
You may want to stop when you get partial output, i.e. if you ask for 50 and get 37, it may mean there is nothing more after those 37 and you don't need to try the next batch. Again, this depends on the API, which I don't know.
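Putting the pieces together, here is a rough end-to-end sketch. It assumes (a guess about this API, to be verified) that the JSON carries an isLast flag marking the final page; results.json is a hypothetical output file. Adjust the jq test to whatever your responses actually contain:
#!/bin/ksh
i=0
while [[ $i -lt 650 ]]; do
    page=$(curl -s "https://account_name.atlassian.net/rest/api/3/project/search?jql=ORDER%20BY%20Created&maxResults=50&startAt=$i" \
        --user 'scouse_bob@mycompany.com:<sec_token_deets>')
    printf '%s\n' "$page" >> results.json    # append this page to the output file
    # stop early when the API says this was the last page (assumed field name)
    if printf '%s' "$page" | jq -e '.isLast == true' > /dev/null; then
        break
    fi
    (( i += 50 ))
done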
I've written a pretty long, moderately complex bash script that enables me to start my Node server with chosen options very easily. The problem is that it's not working correctly.
The part that is giving me trouble is here...
if netstat -an | grep ":$REQUESTED_PORT" > /dev/null
then
    SERVICE_PIDS_STRING=`lsof -i tcp:$REQUESTED_PORT -t`
    OLD_IFS="$IFS"
    IFS='
'
    read -a SERVICE_PIDS <<< "${SERVICE_PIDS_STRING}"
    IFS="$OLD_IFS"
    printf 'Port is in use by the following service(s)...\n\n-------------------\n\nProcess : PID\n\n'
    for PID in "${SERVICE_PIDS[@]}"
    do
        PROCESS_NAME=`ps -p $PID -o comm=`
        printf "$PROCESS_NAME : $PID\n"
    done
    printf "\n-------------------\n\nPlease kill the processes utilizing port $REQUESTED_PORT and run this script again...exiting.\n"
    exit
fi
The intended function of this script is to use netstat to test if the requested port is busy. If so, it reports the PIDs utilizing the port so that the user can kill them if they wish.
I'm fairly certain this is a problem with the way I'm using netstat. Occasionally, the netstat if statement will trigger, even though there is nothing using the port. lsof is working correctly, and doesn't report any PIDs using the port.
However, the last time the script made this error, I set REQUESTED_PORT and ran netstat -an | grep ":$REQUESTED_PORT" by hand. The shell did not report anything.
What is the problem with this condition that causes it to fire at inappropriate times?
EDIT
I should also mention that this machine is running Debian Jessie.
You're searching an awful lot of text, and your desired number could show up anywhere. Better to narrow the search down; and you can grab your PIDs and process names in the same step. Some other optimizations follow:
# upper-case variable names should be reserved for the shell
if service_pids_string=$(lsof +c 15 -i tcp:"$requested_port")
then
    # make an array from a newline-separated string containing spaces;
    # note we're only setting IFS for this one command
    IFS=$'\n' read -r -d '' -a service_pids <<< "$service_pids_string"
    # remove the first element containing the column headers
    service_pids=("${service_pids[@]:1}")
    printf 'Port is in use by the following service(s)...\n\n-------------------\n\nProcess : PID\n\n'
    for pid in "${service_pids[@]}"
    do
        # simple space-separated text to array
        pid=($pid)
        echo "${pid[0]} : ${pid[1]}"
    done
    # printf should be passed variables as parameters
    printf "\n-------------------\n\nPlease kill the processes utilizing port %s and run this script again...exiting.\n" "$requested_port"
fi
You should run your script through shellcheck.net; it will probably find other potential issues that I haven't.
I am reading a long log file and splitting its columns into variables using bash.
cd $LOGDIR
IFS=","
while read LogTIME name md5
do
    LogTime+="$(echo $LogTIME)"
    Name+="$(echo $name)"
    LOGDatamd5+="$(echo $md5)"
done < LOG.txt
But this is really slow and I don't need all the lines. The last 100 lines are enough (but the log file itself needs all the other lines for different programs).
I tried to use tail -n 10 LOG.txt | while read LogTIME name md5, but that takes really long as well and I had no output at all.
Another way I tested without success was:
cd $LOGDIR
foo="$(tail -n 10 LOG.txt)"
IFS=","
while read LogTIME name md5
do
    LogTime+="$(echo $LogTIME)"
    Name+="$(echo $name)"
    LOGDatamd5+="$(echo $md5)"
done < "$foo"
But that gives me only the output of foo in total. Nothing was written into the variables inside the while loop.
There is probably a really easy way to do this, that I can't see...
Cheers,
BallerNacken
Process substitution is the common pattern:
while IFS=, read -r LogTIME name md5; do
    LogTime+=$LogTIME
    Name+=$name
    LogDatamd5+=$md5
done < <(tail -n100 LOG.txt)
Note that you don't need "$(echo $var)"; you can append $var directly. (IFS=, is set for the read here so the comma-separated fields still split correctly.)
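The reason a plain pipe version (tail ... | while read ...) shows nothing afterwards is that each segment of a pipeline runs in a subshell, so the variables die with it. A minimal demonstration of the difference, using throwaway names:
count=0
seq 3 | while read -r _; do (( count++ )); done
echo "$count"    # prints 0: the loop ran in a pipeline subshell

count=0
while read -r _; do (( count++ )); done < <(seq 3)
echo "$count"    # prints 3: process substitution keeps the loop in this shell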
I'm trying to write a simple script to print the first 5 lines of a webpage's source code, and then the request time it took for the page to load. Currently, I've been trying the following:
#!/bin/bash
# Enter a website, get data
output=$(curl $1 -s -w "%{time_connect}\n" | (head -n5; tail -n1))
echo "$output"
However, on some pages the tail part, which should print the request time, doesn't actually appear, and I'm not sure why.
I've found that I can also use a while loop to iterate through the lines and print the whole thing, but is there a way for me to just echo the first few lines of a variable and then the last line of that same variable, so I can precede the request time with a heading (e.g. Request time: 0.489)?
I'd like to be able to format it as:
echo "HTML: $output\n"
echo "Request Time: $requestTime"
Thank you! Sorry if this seems very simple; I am really new to this language :). The main problem for me is getting all this data from the same request; doing two separate curl requests would be very simple.
head may read more than 5 lines of input in order to identify what it needs to output. This means the lines you intended to pass to tail may have already been consumed. It's safer to use a single process (awk, in this case) to handle all the output.
output=$(curl "$1" -s -w "%{time_connect}\n" | awk 'NR<=5 {print} END {print})
The carriage returns threw me. Try this:
echo "HTML: "
retn=$'\r'
i=0
while read item
do
item="${item/$retn/}" # Strip out the carriage-return
if (( i < 5 )); then # Only display the first 5 lines
echo "$item"
fi
(( i++ ))
requestTime="$item" # Grab the last line
done < <(curl $1 -s -w "%{time_connect}\n")
requestTime="${requestTime##*\>}" # Sanitise the last line
echo "Request Time: $requestTime"
May I ask for some help? I have no idea how to write this script. The script is supposed to be a single piece that, once a month, checks a file that contains only email addresses, and sends to each of those addresses a message with another file as an attachment.
I know about cron, but the script would have to modify the cron file by itself; it cannot be done by the user. The only code I managed to put together is below, but it is not doing the job at all.
Here is an explanation of what I want:
1. Once a month the script takes email addresses from file1 (we do not need to worry about this file; it exists and contains email addresses).
2. The script creates an email message to every address on the list in file1.
3. To each of those messages the script attaches another file, file2 (we do not need to worry about file2; it exists), so file2 will be sent as an attachment.
4. The script sends out these messages.
So far I have managed to write the following code. It sends the emails correctly, but the part responsible for scheduling the next occurrence returns errors, which I show below.
#!/bin/bash
while read line
do
    printf "Sending attachment " | mail -s 'plik' -a $2 $line
done <$1
nskip=31    # how many days until the next run
akt_miesiac=`date +"%m"`
nowy_miesiac=`date --date='$nskip days' +"%m"`
if [[ akt_miesiac = nowy_miesiac ]]
then
    ((nskip+=7))
fi
date=`date --date='$nskip days' +"9:00AM"`    # work out the next date
at -m $date < $0    # schedule the next run
date: wrong date: $nskip days'
date: wrong date:$nskip days'
Garbled time
Alternatively, I came up with something like this, which also does not work:
#!/bin/bash
while read line
do
    printf "Sending attachment " | mail -s 'plik' -a $2 $line
done <$1
if (date -d "%d" ==1) & (date "r" =="12:00:00")
then
    date=`date --date='1 month'`    # work out the next date
    at -m $date < $0    # schedule the next run
fi
Read the email addresses from the file in a loop, saving the current recipient and the file to send as variables (you could even read the path of the file-to-send from another file that you update, if you like), and use mutt to do the sending; see man mutt for the syntax, it's not bad.
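As a rough sketch of that idea, combined with a fixed rescheduling step (note the double quotes around "$nskip days", so $nskip actually expands, unlike the single quotes in the question; treat the mutt and at details as assumptions to verify on your system):
#!/bin/bash
# $1 = file1 (the list of addresses), $2 = file2 (the attachment), as before
while read -r addr; do
    # mutt attaches file2 with -a; "--" ends the options before the recipient
    echo "Please find the file attached." | mutt -s 'plik' -a "$2" -- "$addr"
done < "$1"

nskip=31    # days until the next run
# double quotes (not single quotes) let $nskip expand inside --date
next=$(date --date="$nskip days" +"%Y%m%d0900")
at -m -f "$0" -t "$next"    # reschedule this script; -t takes [CC]YYMMDDhhmm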