Right now I am trying to parse the values from my date and time command and break the result down into its individual numbers.
Format of the date/time:
#!/bin/bash
prevDateTime=$(date +'%Y-%m-%d-%H:%M:%S')
echo "${prevDateTime}"
I want to be able to list it out like so
echo "${prevYear}"
echo "${prevMonth}"
echo "${prevDay}"
echo "${prevHour}"
echo "${prevMinute}"
echo "${prevSecond}"
and then like
echo "${prevDate}"
echo "${precTime}"
But I am not sure how to parse out the information any help would be great
A regular expression is probably the simplest solution, given the format of prevDateTime.
[[ $prevDateTime =~ (.*)-(.*)-(.*)-(.*):(.*):(.*) ]]
prevYear=${BASH_REMATCH[1]}
prevMonth=${BASH_REMATCH[2]}
prevDay=${BASH_REMATCH[3]}
prevHour=${BASH_REMATCH[4]}
prevMinute=${BASH_REMATCH[5]}
prevSecond=${BASH_REMATCH[6]}
Technically, there's a "one"-liner to do this using declare:
declare $(date +'prevDateTime=%Y-%m-%d-%H:%M:%S
prevYear=%Y
prevMonth=%m
prevDay=%d
prevHour=%H
prevMinute=%M
prevSecond=%S')
It uses date to output a block of parameter assignments which declare instantiates. (Note that the command substitution is not quoted, so that each assignment is seen as a separate argument to declare. If there were any whitespace in the values to assign, you would have to switch to using eval with slightly different output from date.)
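A minimal sketch of that eval variant, assuming you do want a value containing a space (the single quotes emitted inside the format string are the part that differs):
# Hypothetical eval variant: date emits quoted assignments, eval executes them.
eval "$(date +"prevDateTime='%Y-%m-%d %H:%M:%S'
prevDate='%Y-%m-%d'
prevTime='%H:%M:%S'")"
echo "$prevDateTime"   # e.g. 2015-05-21 10:24:28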
You can use the read command with IFS to break the date down into its components:
prevDateTime=$(date +'%Y-%m-%d-%H:%M:%S')
IFS='-:' read -ra arr <<< "$prevDateTime"
# print array values
declare -p arr
# This outputs
# declare -a arr='([0]="2015" [1]="05" [2]="21" [3]="10" [4]="24" [5]="28")'
#assign to other variables
prevYear=${arr[0]}
prevMonth=${arr[1]}
prevDay=${arr[2]}
prevHour=${arr[3]}
prevMinute=${arr[4]}
prevSecond=${arr[5]}
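If you also want the combined date and time pieces from the question, one way is to reassemble them from the same arr array:
prevDate="${arr[0]}-${arr[1]}-${arr[2]}"   # 2015-05-21
prevTime="${arr[3]}:${arr[4]}:${arr[5]}"   # 10:24:28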
Fast solution using cut:
#!/bin/bash
prevDateTime=$(date +'%Y-%m-%d-%H:%M:%S')
echo "${prevDateTime}"
prevYear=$(echo "$prevDateTime" | cut -d- -f1)
prevMonth=$(echo "$prevDateTime" | cut -d- -f2)
prevDay=$(echo "$prevDateTime" | cut -d- -f3)
prevHour=$(echo "$prevDateTime" | cut -d- -f4 | cut -d: -f1)
prevMinute=$(echo "$prevDateTime" | cut -d- -f4 | cut -d: -f2)
prevSecond=$(echo "$prevDateTime" | cut -d- -f4 | cut -d: -f3)
echo "Year: $prevYear; Month: $prevMonth; Day: $prevDay"
echo "Hour: $prevHour; Minute: $prevMinute; Second: $prevSecond"
Related
I have a text file with a whole bunch of lines (1000 exactly) and they all have 4 bits of text, separated by a ;.
Here is the while loop I'm using to go through each line:
while IFS= read -r line; do
let liner++
if [[ liner -eq "1" ]]; then
continue
fi
name=$(echo "${line}" | cut -d';' -f1)
fullname=$(echo "${line}" | cut -d';' -f2)
id=$(echo "${line}" | cut -d';' -f3)
test=$(echo "${line}" | cut -d';' -f4)
echo "${GREEN}$(($liner-1))) ${name} ${ORANGE}v${test} ${RED}(${id})${NC}"
stuff+=("${fullname}")
done < list.txt
It takes about 5 seconds before it finishes running, and I believe it's from all those cut calls for the (name, fullname, id, test) variables. What would be the best solution to speed this up?
Awk undoubtedly provides a better solution, but if you don't want to learn Awk right now, you could speed your loop up a lot by just using read to split the lines into fields:
liner=0
stuff=()
while IFS=\; read -r name fullname id test; do
echo "$GREEN$((++liner))) $name ${ORANGE}v$test $RED($id)$NC"
stuff+=("$fullname")
done < <(tail -n+2 list.txt)
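Since awk keeps coming up, here is a rough sketch of the same formatting done entirely in awk; it assumes the same four-field semicolon layout, drops the shell colour variables, and the stuff array of full names would still have to be collected in the shell:
# Rough awk equivalent of the loop body (colours omitted); skips the header line.
awk -F';' 'NR > 1 { printf "%d) %s v%s (%s)\n", NR-1, $1, $4, $3 }' list.txt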
I'm writing a script to handle multiple uses for searching lists. Long story short, I am querying a DB and have a basic list such as this:
sdc 10:0 KQJWBE11
sdd 10:1 KSDJFBQK
sde 10:2 13KN13DD
sdf 10:3 123DJN1O
sdg 10:4 213JBDKJ
sdh 10:5 N2QQWMNE
sdi 10:6 QKEWJDQJ
sdj 10:7 QKWJEDWE
sdk 20:0 QEDQWEDQ
sdl 20:1 1234E13L
sdm 20:2 KQNE2OUN
sdn 20:3 QN2NK3JN
sdo 20:4 23J23EN2
sdp 20:5 2WBNEKNW
sdq 20:6 QWEDKJNW
sdr 20:7 QWEDQEDD
These exist in the variable "${TABLE_FORMAT}" and are formatted into a table just as above.
#... other logic above this
# Query via primary and secondary location. Example: DISK_ARG="10:1"
elif [[ ${DISK_ARG} =~ ([[:digit:]]:[[:digit:]])+$ ]]; then
DISK_ARG_PRIMARY=$(echo "${DISK_ARG}" | cut -d: -f1)
DISK_ARG_SECONDARY=$(echo "${DISK_ARG}" | cut -d: -f2)
echo -e "${HEADER}"
echo -e "${TABLE_FORMAT}" | grep -Ei "($DISK_ARG_PRIMARY):($DISK_ARG_SECONDARY)"
# Query secondary location. Example: DISK_ARG="5"
elif [[ ${DISK_ARG} =~ ([[:digit:]])+$ ]]; then
DISK_ARG_F2="$(echo "${F2}" | grep -Ei "([[:digit:]]):(${DISK_ARG})")"
DISK_ARG_PRIMARY=$(echo "${DISK_ARG_F2}" | cut -d: -f1)
DISK_ARG_SECONDARY=$(echo "${DISK_ARG_F2}" | cut -d: -f2)
echo -e "${TABLE_FORMAT}" | grep -Ei "($DISK_ARG_PRIMARY):($DISK_ARG_SECONDARY)"
else
:
fi
The offending line that doesn't work is in the second elif:
echo -e "${TABLE_FORMAT}" | grep -Ei "($DISK_ARG_PRIMARY):($DISK_ARG_SECONDARY)"
grep: Unmatched ( or \(
The current variables at this point are:
DISK_ARG_PRIMARY="10 20"
DISK_ARG_SECONDARY="5 5"
I want the following rendered as output:
sdh 10:5 N2QQWMNE
sdp 20:5 2WBNEKNW
I'm not sure whether this could be accomplished by building some type of array for grep or by modifying the IFS somehow. I want the script to handle many inputs and look for matches on the relevant fields.
In your case
DISK_ARG_PRIMARY=$(echo "${DISK_ARG}" | cut -d: -f1)
DISK_ARG_SECONDARY=$(echo "${DISK_ARG}" | cut -d: -f2)
could be replaced with
DISK_ARG_PRIMARY=${DISK_ARG%:*} # remove :* suffix
DISK_ARG_SECONDARY=${DISK_ARG#*:} # remove *: prefix
to avoid the subshell, the pipe and cut.
Also add the -w option to grep to avoid matching substrings; for example, 5 will not match 51.
DISK_ARG_PRIMARY="10 20"
DISK_ARG_SECONDARY="5 5"
To have
grep -Ew '(10|20):(5|5)'
grep -Ew "($(set -- $DISK_ARG_PRIMARY;IFS=\|;echo "$*")):($(set -- $DISK_ARG_SECONDARY;IFS=\|;echo "$*"))"
I'm trying to print domain and topLeveldomain variables (example.com)
$line = example.com
domain =$line | cut -d. -f 1
topLeveldomain = $line | cut -d. -f 2
However, when I try to echo $domain, it doesn't display the desired value:
test.sh: line 4: domain: command not found
test.sh: line 5: topLeveldomain: command not found
I suggest:
line="example.com"
domain=$(echo "$line" | cut -d. -f 1)
topLeveldomain=$(echo "$line" | cut -d. -f 2)
The right code for this should be:
line="example.com"
domain=$(echo "$line" | cut -d. -f 1)
topLeveldomain=$(echo "$line" | cut -d. -f 2)
Note the correct Bash assignment syntax:
variable=value
(no blanks are allowed around the =)
If you want to use the content of the variable, you have to add a leading $,
e.g.
echo $variable
You don't need external tools for this; just use bash parameter expansion:
$ string="example.com"
# print everything up to the first delimiter '.'
$ printf '%s\n' "${string%%.*}"
example
# print everything after the first delimiter '.'
$ printf '%s\n' "${string#*.}"
com
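The same expansions can be assigned straight to the question's variable names instead of printed:
string="example.com"
domain=${string%%.*}            # example
topLeveldomain=${string#*.}     # com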
Remove spaces around =:
line=example.com # YES
line = example.com # NO
When you create a variable, do not prepend $ to the variable name:
line=example.com # YES
$line=example.com # NO
When using pipes, you need to pass standard output to the next command. That means you usually need to echo variables or cat files:
echo $line | cut -d. -f1 # YES
$line | cut -d. -f1 # NO
Use the $() syntax to get the output of a command into a variable:
new_variable=$(echo $line | cut -d. -f1) # YES
new_variable=echo $line | cut -d. -f1 # NO
I would rather use AWK:
domain="abc.def.hij.example.com"
awk -F. '{printf "TLD:%s\n2:%s\n3:%s\n", $NF, $(NF-1), $(NF-2)}' <<< "$domain"
Output
TLD:com
2:example
3:hij
In the command above, the -F option specifies the field separator; NF is a built-in variable that holds the number of input fields.
Issues with Your Code
The issues with your code are due to invalid syntax.
To set a variable in the shell, use
VARNAME="value"
Putting spaces around the equal sign will cause errors. It is a good
habit to quote content strings when assigning values to variables:
this will reduce the chance that you make errors.
Refer to the Bash Guide for Beginners.
This also works:
line="example.com"
domain=$(echo "$line" | cut -d. -f1)
toplevel=$(cut -d. -f2 <<< "$line")
echo "domain name=" $domain
echo "Top Level=" $toplevel
You need to remove the $ from line at the start of the assignment, correct the spaces, and echo $line in order to pipe the value to cut. Alternatively, feed cut with $line via a here-string.
Given a previously defined $LINE in a shell script, I do the following
var1=$(echo $LINE | cut -d, -f4)
var2=$(echo $LINE | cut -d, -f5)
var3=$(echo $LINE | cut -d, -f6)
Is there any way for me to combine it into one command, where the cut is run only once?
Something like
var1,var2,var3=$(echo $LINE | cut -d, -f4,5,6)
The builtin read command can assign to multiple variables:
IFS=, read _ _ _ var1 var2 var3 _ <<< "$LINE"
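A quick illustration with a made-up $LINE (the leading underscores soak up fields 1-3 and the trailing one catches any remaining fields):
LINE="f1,f2,f3,alpha,beta,gamma,f7"
IFS=, read _ _ _ var1 var2 var3 _ <<< "$LINE"
echo "$var1 $var2 $var3"   # alpha beta gamma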
Yes, if you're OK with arrays:
var=( $(echo $LINE | cut -d, --output-delimiter=' ' -f4-6) )
Note that this makes var 0-indexed.
Though it might just be quicker and easier to turn the CSV $LINE into something that bash parentheses understand, and then just do var=( $LINE ).
EDIT: The above will cause issues if you have spaces in your $LINE... if so, you need to be a bit more careful, and awk may be a better choice, since it can add quotes around each field that you then evaluate:
eval "var=( $(echo "$LINE" | awk -F, '{printf "\"%s\" \"%s\" \"%s\"", $4, $5, $6}') )"
I have the following:
FILENAME=$1
cat $FILENAME | while read LINE
do
response="$LINE" | cut -c1-14
request="$LINE" | cut -c15-31
difference=($response - $request)/1000
echo "$difference"
done
When I run this script it returns blank lines. What am I doing wrong?
Might be simpler in awk:
awk '{print ($1 - $2)/1000}' "$1"
I'm assuming that the first 14 chars and the next 17 chars are the first two blank-separated fields.
You need to change it to:
response=`echo $LINE | cut -c1-14`
request=`echo $LINE | cut -c15-31`
difference=`expr $response - $request`
val=`expr $difference / 1000`
You are basically doing everything wrong ;)
This should be better:
FILENAME="$1"
cat "$FILENAME" | while read LINE
do
response=$(echo "$LINE" | cut -c1-14) # or cut -c1-14 <<< "$line"
request=$(echo "$LINE" | cut -c15-31)
difference=$((($response - $request)/1000)
echo "$difference"
done