Tabulate part of text file written by shell - shell
I have a shell script that is writing (echoing) the contents of an array to a file. The file is in the following format:
The tansaction detials for today are 35
Please check the 5 biggest transactions below
-----------------------------------------------------------------------------------
Client Name,Account Number,Amount,Tran Time
Michael Press,20484,602117,11.41.02
Adam West,164121,50152,11.41.06
John Smith,15113,411700,11.41.07
Leo Anderson,2115116,350056,11.41.07
Wayne Clark,451987,296503,11.41.08
And I have multiple such lines.
How do I tabulate the names after the ---?
I tried using spaces while echoing the array elements, and also tabs. I tried column with the -t and -s options, but the text above the --- interferes with the desired output.
The desired output is:
The tansaction detials for today are 35
Please check the 5 biggest transactions below
-----------------------------------------------------------------------------------
Client Name    Account Number  Amount  Tran Time
Michael Press  20484           602117  11.41.02
Adam West      164121          50152   11.41.06
John Smith     15113           411700  11.41.07
Leo Anderson   2115116         350056  11.41.07
Wayne Clark    451987          296503  11.41.08
Printing to the file is part of a bigger script, so I am looking for a simple solution I can plug into it.
Here's the snippet from that script where I am echoing to the file:
echo "The tansaction detials for today are 35 " >> log.txt
echo "" >> log.txt
echo " Please check the 5 biggest transactios below " >> log.txt
echo "" >> log.txt
echo "-----------------------------------------------------------------------------------" >> log.txt
echo "" >> log.txt
echo "" >> log.txt
echo "Client Name,Account Number,Amount,Tran Time" >> log.txt
array=( `output from a different script` )
x=1
for i in ${array[@]}
do
#echo "Array $x - $i"
Clientname=$(echo $i | cut -f1 -d',')
accountno=$(echo $i | cut -f2 -d',')
amount=$(echo $i | cut -f3 -d',')
trantime=$(echo $i | cut -f4 -d',')
echo "$Clientname,$accountno,$amount,$trantime" >> log.txt
(( x=$x+1 ))
done
I'm not sure I understand everything =P
but to answer this question:
How do I tabulate the names after the ---?
echo -e "Example1\tExample2"
The -e flag enables interpretation of backslash escapes.
So for your output, I suggest:
echo -e "$Clientname\t$accountno\t$amount\t$trantime" >> log.txt
Edit: If you need more space, you can double, triple, ... the tab:
echo -e "Example1\t\tExample2"
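As a minimal sketch of how that could slot into the loop from the question (replacing the comma-separated header echo and the echo inside the loop): build tab-separated rows and pipe only that block through column, so the text above the --- is left untouched. This assumes each array element holds one complete CSV record and that column is the util-linux one already tried in the question:
{
    echo -e "Client Name\tAccount Number\tAmount\tTran Time"
    for i in "${array[@]}"
    do
        Clientname=$(echo "$i" | cut -f1 -d',')
        accountno=$(echo "$i" | cut -f2 -d',')
        amount=$(echo "$i" | cut -f3 -d',')
        trantime=$(echo "$i" | cut -f4 -d',')
        # tab-separated row; column -t aligns everything afterwards
        echo -e "$Clientname\t$accountno\t$amount\t$trantime"
    done
} | column -t -s$'\t' >> log.txt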
If I understand your question, in order to produce the output format of:
Client Name    Account Number  Amount  Tran Time
Michael Press  20484           602117  11.41.02
Adam West      164121          50152   11.41.06
John Smith     15113           411700  11.41.07
Leo Anderson   2115116         350056  11.41.07
Wayne Clark    451987          296503  11.41.08
You should use the output formatting provided by printf instead of echo. For example, for the headings, you can use:
printf "Client Name   Account Number   Amount  Tran Time\n" >> log.txt
instead of:
echo "Client Name,Account Number,Amount,Tran Time" >> log.txt
For writing the five largest amounts and details, you could use:
printf "%-14s%-17s%8s%s\n" "$Clientname" "$accountno" "$amount" "$trantime" >> log.txt
instead of:
echo "$Clientname,$accountno,$amount,$trantime" >> log.txt
If that isn't what you are needing, just drop a comment and let me know and I'm happy to help further.
(you may have to tweak the field widths a bit, I just did a rough count)
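For context, here is a minimal sketch of the question's loop with that printf approach dropped in. The format string is the one above, tweaked slightly (a wider name field and a gap before the time) per the note about adjusting field widths; each array element is assumed to hold one full CSV record:
printf "%-15s%-17s%8s  %s\n" "Client Name" "Account Number" "Amount" "Tran Time" >> log.txt
for i in "${array[@]}"
do
    Clientname=$(echo "$i" | cut -f1 -d',')
    accountno=$(echo "$i" | cut -f2 -d',')
    amount=$(echo "$i" | cut -f3 -d',')
    trantime=$(echo "$i" | cut -f4 -d',')
    # name and account number left-aligned, amount right-aligned, then the time
    printf "%-15s%-17s%8s  %s\n" "$Clientname" "$accountno" "$amount" "$trantime" >> log.txt
done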
True Tabular Output Requires Measuring Each Field
If you want to ensure that your data is always in tabular form, you need to measure each field width (including the heading) and then take the larger of the data width or the heading width as the output width for each field. Below is an example of how that can be done (using your simulated other-program input):
#!/bin/bash
ofn="log.txt" # set output filename
# declare variables as array and integer types
declare -a line_arr hdg name acct amt trn tmp
declare -i nmx=0 acmx=0 ammx=0 tmx=0
# set heading array (so you can measure lengths)
hdg=( "Client Name"
      "Account Number"
      "Ammount"
      "Tran Time" )
## set the initial max based on headings
nmx="${#hdg[0]}" # max name width
acmx="${#hdg[1]}" # max account width
ammx="${#hdg[2]}" # max ammount width
tmx="${#hdg[3]}" # max tran width
{ IFS=$'\n' # your array=( `output from a different script` )
line_arr=($(
cat << EOF
Michael Press,20484,602117,11.41.02
Adam West,164121,50152,11.41.06
John Smith,15113,411700,11.41.07
Leo Anderson,2115116,350056,11.41.07
Wayne Clark,451987,296503,11.41.08
EOF
)
)
}
# write heading to file
cat << EOF > "$ofn"
The tansaction detials for today are 35
Please check the 5 biggest transactions below
-----------------------------------------------------------------------------------
EOF
# read line array into tmp, compare to max field widths
{ IFS=$','
  for i in "${line_arr[@]}"; do
    tmp=( $(printf "%s" "$i") )
    ((${#tmp[0]} > nmx ))  && nmx=${#tmp[0]}
    ((${#tmp[1]} > acmx )) && acmx=${#tmp[1]}
    ((${#tmp[2]} > ammx )) && ammx=${#tmp[2]}
    ((${#tmp[3]} > tmx ))  && tmx=${#tmp[3]}
    name+=( "${tmp[0]}" )  # fill name array
    acct+=( "${tmp[1]}" )  # fill account num array
    amt+=( "${tmp[2]}" )   # fill amount array
    trn+=( "${tmp[3]}" )   # fill tran array
  done
}
printf "%-*s  %-*s  %-*s  %s\n" "$nmx" "${hdg[0]}" "$acmx" "${hdg[1]}" \
       "$ammx" "${hdg[2]}" "${hdg[3]}" >> "$ofn"
for ((i = 0; i < ${#name[@]}; i++)); do
  printf "%-*s  %-*s  %-*s  %s\n" "$nmx" "${name[i]}" "$acmx" "${acct[i]}" \
         "$ammx" "${amt[i]}" "${trn[i]}" >> "$ofn"
done
(you can remove the extra space between each field in the final two printf statements if you only want a single space between them -- looked better with 2 to me)
Output to log.txt
$ cat log.txt
The tansaction detials for today are 35
Please check the 5 biggest transactions below
-----------------------------------------------------------------------------------
Client Name    Account Number  Ammount  Tran Time
Michael Press  20484           602117   11.41.02
Adam West      164121          50152    11.41.06
John Smith     15113           411700   11.41.07
Leo Anderson   2115116         350056   11.41.07
Wayne Clark    451987          296503   11.41.08
Look things over and let me know if you have any questions.
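To plug the real data in instead of the here-document used for simulation above, the line_arr assignment can read from the other script directly. A small sketch, where other_script is just a stand-in name for whatever actually produces the CSV records:
# requires bash 4+ for mapfile; one CSV record per output line is assumed
mapfile -t line_arr < <(other_script)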
Related
Bash count occurrences based on parameters
I'm new to bash shell and I have to do a script with a csv file. The file is a list of the participants, countries, sports and medals achieved. When executing the script, I should give as parameters the nationality (column 3) and the sport (column 8). The script should return the amount of participants of that country for that sport, and the amount of medals achieved. The amount of medals achieved is the sum of the columns "gold", "silver" and "bronze" of each row, which are columns 9, 10 and 11. I cannot use grep, awk, sed or csvkit.
So far I have this code, but I'm stuck with the medal counting part:
nacionality=$1
sport=$2
columns= cut -d, -f 3,8 athletes.csv
echo columns | tr -cd $nacionality,$sport | wc -c
Could anyone help me? The file is: https://github.com/flother/rio2016/blob/master/athletes.csv
The name of the file is script2_4.sh. An example of the output is:
./script2_4.sh POL rowing
Participants, Medals
26, 6
A sample of the file:
id,name,nationality,sex,date_of_birth,height,weight,sport,gold,silver,bronze,info
736041664,A Jesus Garcia,ESP,male,1969-10-17,1.72,64,athletics,0,0,0,
532037425,A Lam Shin,KOR,female,1986-09-23,1.68,56,fencing,0,0,0,
435962603,Aaron Brown,CAN,male,1992-05-27,1.98,79,athletics,0,0,1,
521041435,Aaron Cook,MDA,male,1991-01-02,1.83,80,taekwondo,0,0,0,
33922579,Aaron Gate,NZL,male,1990-11-26,1.81,71,cycling,0,0,0,
173071782,Aaron Royle,AUS,male,1990-01-26,1.80,67,triathlon,0,0,0,
266237702,Aaron Russell,USA,male,1993-06-04,2.05,98,volleyball,0,0,1,
382571888,Aaron Younger,AUS,male,1991-09-25,1.93,100,aquatics,0,0,0,
87689776,Aauri Lorena Bokesa,ESP,female,1988-12-14,1.80,62,athletics,0,0,0,
997877719,Ababel Yeshaneh,ETH,female,1991-07-22,1.65,54,athletics,0,0,0,
343694681,Abadi Hadis,ETH,male,1997-11-06,1.70,63,athletics,0,0,0,
591319906,Abbas Abubakar Abbas,BRN,male,1996-05-17,1.75,66,athletics,0,0,0,
258556239,Abbas Qali,IOA,male,1992-10-11,,,aquatics,0,0,0,
376068084,Abbey D'Agostino,USA,female,1992-05-25,1.61,49,athletics,0,0,0,
162792594,Abbey Weitzeil,USA,female,1996-12-03,1.78,68,aquatics,1,1,0,
521036704,Abbie Brown,GBR,female,1996-04-10,1.76,71,rugby sevens,0,0,0,
149397772,Abbos Rakhmonov,UZB,male,1998-07-07,1.61,57,wrestling,0,0,0,
256673338,Abbubaker Mobara,RSA,male,1994-02-18,1.75,64,football,0,0,0,
337369662,Abby Erceg,NZL,female,1989-11-20,1.75,68,football,0,0,0,
334169879,Abd Elhalim Mohamed Abou,EGY,male,1989-06-03,2.10,88,volleyball,0,0,0,
215053268,Abdalaati Iguider,MAR,male,1987-03-25,1.73,57,athletics,0,0,0,
763711985,Abdalelah Haroun,QAT,male,1997-01-01,1.85,80,athletics,0,0,0,
Here is a pure bash implementation. Build a hash from field name to position ($h):
#!/bin/bash

file=athletes.csv
nationality=$1
sport=$2

IFS=, read -a l < "$file"
declare -A h
for pos in "${!l[@]}"
do
    h["${l[$pos]}"]=$pos
done

declare -i participants=0
declare -i medals=0

while IFS=, read -a l
do
    if [ "${l[${h["nationality"]}]}" = "$nationality" ] && [ "${l[${h["sport"]}]}" = "$sport" ]
    then
        ((participants++))
        medals=$(( $medals + "${l[${h["gold"]}]}" + "${l[${h["silver"]}]}" + "${l[${h["bronze"]}]}" ))
    fi
done < "$file"

echo "Participants, Medals"
echo "$participants, $medals"
and example output with the first 4 lines of input:
$ ./script2_4.sh CAN athletics
Participants, Medals
1, 1
How to cut a variable to sub variable
I have a variable which consists of TITLE:AUTHOR:PRICE:QUANTITY:SOLDQUANTITY; in this case it is wrinkle in time:myauthor:50.00:20:50. I store it in:
Result=`awk -F : -v "Title=$1" -v "Author=$2" 'tolower($1) == tolower(Title) && tolower($2) == tolower(Author)' BookDB.txt`
However, I would like to separate it into variables like:
TITLE="wrinkle in time"
AUTHOR="myauthor"
PRICE="50.00"
QUANTITY="20"
UNIT="50"
Then I would like to do a calculation for units sold:
enter unit sold: 3
wrinkle in time, myauthor, 50.00, 20, 50
after update
wrinkle in time, myauthor, 50.00, 17, 53
Thanks in advance, your help is much appreciated!!!
You can separate $result into the different variables you describe by using read:
IFS=: read TITLE AUTHOR PRICE QUANTITY UNIT <<< "$result"
Example:
$ result="wrinkle in time:myauthor:50.00:20:50"
$ IFS=: read TITLE AUTHOR PRICE QUANTITY UNIT <<< "$result"
$ echo "$TITLE - by - $AUTHOR"
wrinkle in time - by - myauthor
You can also use read to prompt a user for input:
read -p "Enter units sold: " SOLDUNITS
This will store the value entered in the variable $SOLDUNITS. You can then use this to alter $QUANTITY and $UNIT as desired. To do (integer) arithmetic in bash, you can use the $((expression)) construct:
QUANTITY=$((QUANTITY-SOLDUNITS))
Or:
((QUANTITY-=SOLDUNITS))
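Putting those pieces together, a rough sketch of the update step described in the question (treating UNIT as the running sold count is an assumption about the question's fields):
IFS=: read TITLE AUTHOR PRICE QUANTITY UNIT <<< "$result"
read -p "enter unit sold: " SOLDUNITS
echo "$TITLE, $AUTHOR, $PRICE, $QUANTITY, $UNIT"
QUANTITY=$((QUANTITY-SOLDUNITS))   # stock on hand goes down
UNIT=$((UNIT+SOLDUNITS))           # running sold count goes up
echo "after update"
echo "$TITLE, $AUTHOR, $PRICE, $QUANTITY, $UNIT"
With the sample record and 3 units sold, this prints the before/after lines shown in the question.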
Separating sections of a text file with a bash script
I have a list:
### To Read:
One Hundred Years of Solitude | Gabriel García Márquez
Moby-Dick | Herman Melville
Frankenstein | Mary Shelley
On the Road | Jack Kerouac
Eyeless in Gaza | Aldous Huxley
### Read:
The Name of the Wind (The Kingkiller Chronicles: Day One) | Patrick Rothfuss | 6-27-2013
The Wise Man’s Fear (The Kingkiller Chronicles: Day Two) | Patrick Rothfuss | 8-4-2013
Vampires in the Lemon Grove | Karen Russell | 12-25-2013
Brave New World | Aldous Huxley | 2-2014
I'd like to use something like python's string.split(' | ') to separate the various fields into separate strings, but since the two sections have different numbers of fields, I think I need to treat them differently. How do I go about selecting the lines in between '### To Read:' and '### Read:' and after '### Read:' and splitting them? Should I use awk or sed?
You have not specified any desired output. So, as I interpret your question, you want to read certain lines from a file, split the lines on '|' and, analogous to python lists, put the results in bash arrays. The specified lines include all lines after ### To Read: except for the line that reads ### Read:. The script below does this and then, to demonstrate success, displays the arrays (using declare):
active=
while read line
do
    if [ "$line" = '### To Read:' ]
    then
        active=1
    elif [ "$line" = '### Read:' ]
    then
        active=1
    elif [ "$active" ]
    then
        IFS='|' my_array=($line)
        declare -p my_array
    fi
done <mylist
The output from your sample input is:
declare -a my_array='([0]="One Hundred Years of Solitude " [1]=" Gabriel García Márquez")'
declare -a my_array='([0]="Moby-Dick " [1]=" Herman Melville")'
declare -a my_array='([0]="Frankenstein " [1]=" Mary Shelley")'
declare -a my_array='([0]="On the Road " [1]=" Jack Kerouac")'
declare -a my_array='([0]="Eyeless in Gaza " [1]=" Aldous Huxley")'
declare -a my_array='([0]="The Name of the Wind (The Kingkiller Chronicles: Day One) " [1]=" Patrick Rothfuss " [2]=" 6-27-2013")'
declare -a my_array='([0]="The Wise Man’s Fear (The Kingkiller Chronicles: Day Two) " [1]=" Patrick Rothfuss " [2]=" 8-4-2013")'
declare -a my_array='([0]="Vampires in the Lemon Grove " [1]=" Karen Russell " [2]=" 12-25-2013")'
declare -a my_array='([0]="Brave New World " [1]=" Aldous Huxley " [2]=" 2-2014")'
Note that this approach easily handles the input even though the lines have different numbers of fields.
You are not telling us how to deliver the final output, but here is a skeleton for an Awk solution.
awk -F ' \| ' '
  /^### To Read:/ { s=1; next }
  /^### Read:/    { s=2; next }
  s==1 { print $1 "," $2 ",\"\"" }
  s==2 { print $1 "," $2 "," $3 }' file
This will simply print an empty third field for the first subsection. You can obviously adapt the actions to be anything you like, or rewrite this in Python if you are more familiar with that.
Color escape codes in pretty printed columns
I have a tab-delimited text file which I send to column to "pretty print" a table.
Original file:
1<TAB>blablablabla<TAB>aaaa bbb ccc
2<TAB>blabla<TAB>xxxxxx
34<TAB>okokokok<TAB>zzz yyy
Using column -s$'\t' -t <original file>, I get
1   blablablabla  aaaa bbb xxx
2   blabla        xxxxxx
34  okokokok      zzz yyy
as desired.
Now I want to add colors to the columns. I tried to add the escape codes around each tab-delimited field in the original file. column successfully prints in color, but the columns are no longer aligned; instead, it just prints the TAB separators verbatim.
The question is: how can I get the columns aligned, but also with unique colors? I've thought of two ways to achieve this:
Adjust the column parameters to make the alignment work with color codes
Redirect the output of column to another file, and do a search+replace on the first two whitespace-delimited fields (the first two columns are guaranteed to not contain spaces; the third column most likely will contain spaces, but no TAB characters)
Problem is, I'm not sure how to do either of those two... For reference, here is what I'm passing to column (the fields are indeed separated by TAB characters; I've confirmed this with od).
edit: There doesn't seem to be an issue with the colorization. I already have the file shown above with the color codes working. The issue is that column won't align once I send it input with escape codes. I am thinking of passing the fields without color codes to column, then copying the exact number of spaces column output between each field, and using that in a pretty print scheme.
I wrote a bash version of column (similar to the one from util-linux) which works with color codes:
#!/bin/bash

which sed >> /dev/null || exit 1

version=1.0b
editor="Norman Geist"
last="04 Jul 2016"

# NOTE: Brilliant pipeable tool to format input text into a table by
# NOTE: an configurable seperation string, similar to column
# NOTE: from util-linux, but we are smart enough to ignore
# NOTE: ANSI escape codes in our column width computation
# NOTE: means we handle colors properly ;-)
# BUG : none

addspace=1
seperator=$(echo -e " ")
columnW=()
columnT=()

while getopts "s:hp:v" opt; do
  case $opt in
    s ) seperator=$OPTARG;;
    p ) addspace=$OPTARG;;
    v ) echo "Version $version last edited by $editor ($last)"; exit 0;;
    h ) echo "column2 [-s seperator] [-p padding] [-v]"; exit 0;;
    * ) echo "Unknow comandline switch \"$opt\""; exit 1
  esac
done
shift $(($OPTIND-1))

if [ ${#seperator} -lt 1 ]; then
  echo "Error) Please enter valid seperation string!"
  exit 1
fi

if [ ${#addspace} -lt 1 ]; then
  echo "Error) Please enter number of addional padding spaces!"
  exit 1
fi

#args: string
function trimANSI()
{
  TRIM=$1
  TRIM=$(sed 's/\x1b\[[0-9;]*m//g' <<< $TRIM); #trim color codes
  TRIM=$(sed 's/\x1b(B//g' <<< $TRIM); #trim sgr0 directive
  echo $TRIM
}

#args: len
function pad()
{
  for ((i=0; i<$1; i++))
  do
    echo -n " "
  done
}

#read and measure cols
while read ROW
do
  while IFS=$seperator read -ra COLS; do
    ITEMC=0
    for ITEM in "${COLS[@]}"; do
      SITEM=$(trimANSI "$ITEM"); #quotes matter O_o
      [ ${#columnW[$ITEMC]} -gt 0 ] || columnW[$ITEMC]=0
      [ ${columnW[$ITEMC]} -lt ${#SITEM} ] && columnW[$ITEMC]=${#SITEM}
      ((ITEMC++))
    done
    columnT[${#columnT[@]}]="$ROW"
  done <<< "$ROW"
done

#print formatted output
for ROW in "${columnT[@]}"
do
  while IFS=$seperator read -ra COLS; do
    ITEMC=0
    for ITEM in "${COLS[@]}"; do
      WIDTH=$(( ${columnW[$ITEMC]} + $addspace ))
      SITEM=$(trimANSI "$ITEM"); #quotes matter O_o
      PAD=$(($WIDTH-${#SITEM}))
      if [ $ITEMC -ne 0 ]; then
        pad $PAD
      fi
      echo -n "$ITEM"
      if [ $ITEMC -eq 0 ]; then
        pad $PAD
      fi
      ((ITEMC++))
    done
  done <<< "$ROW"
  echo ""
done
Example usage:
bold=$(tput bold)
normal=$(tput sgr0)
green=$(tput setaf 2)

column2 -s § << END
${bold}First Name§Last Name§City${normal}
${green}John§Wick${normal}§New York
${green}Max§Pattern${normal}§Denver
END
Output example:
I would use awk for the colorization (sed can be used as well):
awk '{printf "\033[1;32m%s\t\033[00m\033[1;33m%s\t\033[00m\033[1;34m%s\033[00m\n", $1, $2, $3;}' a.txt
and pipe it to column for the alignment:
... | column -s$'\t' -t
Output:
A solution using printf to format the output as well:
while IFS=$'\t' read -r c1 c2 c3; do
    tput setaf 1; printf '%-10s' "$c1"
    tput setaf 2; printf '%-30s' "$c2"
    tput setaf 3; printf '%-30s' "$c3"
    tput sgr0; echo
done < file
In my case, I wanted to selectively colorise values in a column depending on the value. Let's say I want okokokok to be green and blabla to be red. I can do it this way (the idea is to colorise the values after columnisation):
GREEN_SED='\\033[0;32m'
RED_SED='\\033[0;31m'
NC_SED='\\033[0m' # No Color

column -s$'\t' -t <original file> | echo -e "$(sed -e "s/okokokok/${GREEN_SED}okokokok${NC_SED}/g" -e "s/blabla/${RED_SED}blabla${NC_SED}/g")"
Alternatively, with a variable:
DATA=$(column -s$'\t' -t <original file>)
GREEN_SED='\\033[0;32m'
RED_SED='\\033[0;31m'
NC_SED='\\033[0m' # No Color

echo -e "$(sed -e "s/okokokok/${GREEN_SED}okokokok${NC_SED}/g" -e "s/blabla/${RED_SED}blabla${NC_SED}/g" <<< "$DATA")"
Take note of the additional backslash in the color definitions. It is there so that sed does not interpret the original backslash. This is the result:
2021 Updated BASH Answer
TL;DR
I really liked @NORMAN GEIST's answer, but it was way too slow for what I needed... So I coded my own version of his script, this time written in Perl (stdin looping and formatting) + Bash (only for presentation/help). You can find the full code here with an explanation of how to use it. It includes:
A Bash column-like command interface (same parameters like -t, -s, -o)
Exhaustive help with column_ansi --help or column_ansi -h
An option to horizontally center
The actual "core" code can be broken down to only the Perl part.
Background and differences
I needed to format a very long awk-generated colored output (more than 300 lines) into a nice table. I first thought of using column, but as I discovered it didn't take ANSI characters into consideration, so the output came out misaligned. After searching a bit on Google I found @NORMAN GEIST's interesting answer on SO, which dynamically calculates the width of every single column in the output after removing the ANSI characters and THEN builds the table.
It was all good, but it was taking way too long to load (as someone pointed out in the comments)... So I tried to convert @NORMAN GEIST's column2 from bash to perl, and my god was there a change! After trying out this version in my production script, the time used to display data dropped from 30s to <1s!!
Enjoy!
Converting Dates in Shell
How can I convert one date format to another format in a shell script? For example, the old format is MM-DD-YY HH:MM but I want to convert it into YYYYMMDD.HHMM.
Like "20${D:6:2}${D:0:2}${D:3:2}.${D:9:2}${D:12:2}00", if the old date is in the $D variable.
Take advantage of the shell's word splitting and the positional parameters:
date="12-31-11 23:59"
IFS=" -:"
set -- $date
echo "20$3$1$2.$4$5"   #=> 20111231.2359
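A sketch wrapping the same trick in a reusable function (the function name is made up; local keeps the IFS change from leaking into the rest of the script):
to_yyyymmdd() {
    local IFS=" -:"    # split the argument on space, dash and colon
    set -- $1          # $1..$5 become MM DD YY HH MM
    echo "20$3$1$2.$4$5"
}

to_yyyymmdd "12-31-11 23:59"   # prints 20111231.2359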
myDate="21-12-11 23:59"   #fmt is DD-MM-YY HH:MM
outDate="20${myDate:6:2}${myDate:3:2}${myDate:0:2}.${myDate:9:2}${myDate:12:2}00"

case "${outDate}" in
  2[0-9][0-9][0-9][0-1][0-9][0-3][0-9].[0-2][0-9][0-5][0-9][0-5][0-9] )
     : nothing_date_in_correct_format
     ;;
  * ) echo bad format for ${outDate} >&2
     ;;
esac
Note that if you have a large file to process, then the above is an expensive(ish) process. For file-based data I would recommend something like:
cat infile
....|....|21-12-11 23:59|22-12-11 00:01| ...|

awk '
function reformatDate(inDate) {
   if (inDate !~ /[0-3][0-9]-[0-1][0-9]-[0-9][0-9] [0-2][0-9]:[0-5][0-9]/) {
     print "bad date format found in inDate= " inDate
     return -1
   }
   # in format assumed to be DD-MM-YY HH:MM(:SS)
   return (2000 + substr(inDate,7,2) ) substr(inDate,4,2) substr(inDate, 1,2) \
          "." substr(inDate,10,2) substr(inDate,13,2) \
          ( substr(inDate,16,2) ? substr(inDate,16,2) : "00" )
}
BEGIN {
   # add or comment out for each column of data that is a date value to convert
   # below is for example, edit as needed.
   dateCols[3]=3
   dateCols[4]=4
   # for awk people, I call this the pragmatic use of associative arrays ;-)

   # assuming pipe-delimited data for columns
   # ....|....|21-12-11 23:59|22-12-11 00:01| ...|
   FS=OFS="|"
}
# main loop for each record
{
   for (i=1; i<=NF; i++) {
     if (i in dateCols) {
       #dbg print "i=" i "\t$i=" $i
       $i=reformatDate($i)
     }
   }
   print $0
}' infile
output
....|....|20111221.235900|20111222.000100| ...|
I hope this helps.
There is a good answer down already, but you said you wanted an alternative in the comments, so here is my [rather awful in comparison] method:
read sourcedate < <(echo "12-13-99 23:59");
read sourceyear < <(echo $sourcedate | cut -c 7-8);
if [[ $sourceyear < 50 ]]; then
    read fullsourceyear < <(echo -n 20; echo $sourceyear);
else
    read fullsourceyear < <(echo -n 19; echo $sourceyear);
fi;
read newsourcedate < <(echo -n $fullsourceyear; echo -n "-"; echo -n $sourcedate | cut -c -5);
read newsourcedate < <(echo -n $newsourcedate; echo -n $sourcedate | cut -c 9-14);
read newsourcedate < <(echo -n $newsourcedate; echo :00);
date --date="$newsourcedate" +%Y%m%d.%H%M%S
So, the first line just reads a date in, then we get the two-digit year, then we append it to '20' or '19' based on whether it's less than 50 (so this gives you years from 1950 to 2049 - feel free to shift the line). Then we append a hyphen and the month and date. Then we append a space and the time, and lastly we append ':00' as the seconds (again, feel free to make your own default). Lastly we use GNU date to read it in (since it's been standardized now) and print it in a different format (which you can edit).
It's a lot longer and uglier than cutting up the string, but having the format in the last line may be worth it. Also, you could shorten it significantly with the shorthand you just learned in the first answer. Good luck.
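The same idea in a more compact form, as a sketch that assumes GNU date and keeps the 1950-2049 pivot described above:
D="12-13-99 23:59"                # MM-DD-YY HH:MM
y=${D:6:2}
(( 10#$y < 50 )) && c=20 || c=19  # same <50 century pivot as above
date --date="$c$y-${D:0:2}-${D:3:2} ${D:9:5}:00" +%Y%m%d.%H%M%S
# prints 19991213.235900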