Automatic update data file and display? - bash

I am trying to develop a system where the application allows a user to book a ticket seat. I want to implement an automatic system (a function) where the app can choose the best seats for the user.
My current database file (seats.txt) is stored this way (not sure if it's a good format):
X000X
00000
0XXX0
where X means the seat is occupied, 0 means nothing.
After the user logs in to my system and chooses "Choose best for you", the user is prompted to enter how many seats he/she wants (I have done this part). Now, if the user enters 2, I check from the first row to see if there are enough empty seats; if yes, I assign them (this is a simple approach; once I get this working, I will write a better "automatic booking" algorithm).
I have tried playing with sed, awk, grep... but I just can't get it to work (I am new to bash programming; I only started learning bash 3 days ago).
Can anyone help?
FYI: The seats.txt format doesn't have to be that way. It could also store all seats in one row, like: X000X0XXX00XXX
Thanks =)

Here's something to get you started. This script reads in each seat from your file and displays if it's taken or empty, keeping track of the row and column number all the while.
#!/bin/bash
let ROW=1
let COL=1
# Read one character at a time into the variable $SEAT.
while read -n 1 SEAT; do
    # Check if $SEAT is an X, 0, or other.
    case "$SEAT" in
        # Taken.
        X)  echo "Row $ROW, col $COL is taken"
            let COL++
            ;;
        # Empty.
        0)  echo "Row $ROW, col $COL is EMPTY"
            let COL++
            ;;
        # Must be a newline ('\n'): move to the next row.
        *)  let ROW++
            let COL=1
            ;;
    esac
done < seats.txt
Notice that we feed in seats.txt at the end of the script, not at the beginning. It's weird, but that's UNIX for ya. Curiously, the entire while loop behaves like one big command:
while read -n 1 SEAT; do {stuff}; done < seats.txt
The < at the end feeds in seats.txt to the loop as a whole, and specifically to the read command.
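As an aside (a small sketch to illustrate the point, not part of the original answer): the redirection form also matters because piping into the loop would run it in a subshell, where any variables you set are lost when the loop ends:
#!/bin/bash
COUNT=0
# Pipe form: the while loop runs in a subshell, so COUNT is still 0 afterwards.
cat seats.txt | while read -n 1 SEAT; do let COUNT++; done
echo "pipe form:     COUNT=$COUNT"    # prints 0

COUNT=0
# Redirection form: the loop runs in the current shell, so COUNT survives.
while read -n 1 SEAT; do let COUNT++; done < seats.txt
echo "redirect form: COUNT=$COUNT"    # prints a nonzero count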

It's not really clear what help you're asking for here. "Anyone can help?" is a very broad question.
If you're asking if you're using the right tools then yes, the text processing tools (sed/awk/grep et al) are ideal for this given the initial requirement that it be done in bash in the first place. I'd personally choose a different baseline than bash but, if that's what you've decided, then your tool selection is okay.
I should mention that bash itself can do a lot of the things you'll probably be doing with the text processing tools and without the expense of starting up external processes. But, since you're using bash, I'm going to assume that performance is not your primary concern (don't get me wrong, bash will probably be fast enough for your purposes).
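For instance (a minimal sketch, not from the original answer), bash's own pattern matching can find the first row with two adjacent empty seats without spawning a single external process:
#!/bin/bash
# Scan seats.txt in pure bash: report the first row containing "00".
ROW=1
while read -r LINE; do
    if [[ $LINE == *00* ]]; then
        echo "Row $ROW has two adjacent empty seats"
        break
    fi
    let ROW++
done < seats.txt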
I would probably stick with the multi-line data representation, for two reasons. The first is that simple text searches for two seats together will be easier if you keep the rows separate from each other. Otherwise, in the 5-seat-by-2-row XXXX00XXXX, a simplistic search would consider those two 0 seats to be together despite the fact they're nowhere near each other:
XXXX0
0XXXX
Secondly, some people consider the row to be very important. I won't sit in the first five rows at the local cinema simply because I have to keep moving my head to see all the action.
By way of example, you can get the rear-most row that has two consecutive empty seats with the following (the command is split across lines for readability; the trailing pipes continue the pipeline):
pax> cat seats.txt
X000X
00000
0XXX0
pax> expr $( (echo '00000'; cat seats.txt) |
       grep -n 00 |
       tail -1 |
       sed 's/:.*//') - 1
2
The extra echo prepends a dummy all-empty row so that grep always finds at least one match, and the expr ... - 1 removes that one-line offset, which means you get back 0 when no real row qualifies. And you can get the first position in that row with:
pax> cat seats.txt |
       grep 00 |
       tail -1 |
       awk '{print index($0,"00")}'
1
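And to actually write a booking back to the data file (the "automatic update" part of your title), one option is to rewrite just the chosen line with sed. A minimal sketch, assuming the two booked seats are the left-most 00 in that row:
#!/bin/bash
# Sketch: book the first two adjacent empty seats in row $1 of seats.txt.
row=$1
# GNU sed shown; on BSD/macOS use: sed -i '' ...
sed -i "${row}s/00/XX/" seats.txt    # replace the first "00" on that row only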

Related

How can I save the first parts of each line in a list/array and then sort them depending on the second part?

I have a school project that gives me several lines of text like this:
team1-team2:2-1
team3-team1:2-2
etc
It wants me to determine which team won (or drew) and then make a league table with them, awarding points for wins/draws.
This is my first time using bash. What I did was save the team1/team2 names in a variable and then do the same for the goals. How should I make the table? I managed to make my script create a new file that saves all team names (checking for duplicates), but I don't know how to continue. Should I make an array for each team, saving their results in it? And then how do I implement the rankings, for example:
team1 3p
team2 1p
etc.
I'm not asking for actual code, just a guide as to how I should implement it. Is making a new file the right move? Should I try making a new array with the teams instead? Or something else?
The problem can be divided into 3 parts:
1. Read the input data into memory in a format that can be manipulated easily.
2. Manipulate the data in memory.
3. Output the results in the desired format.
When reading the data into memory, you might decide to read all the data in one go before manipulating it. Or you might decide to read the input data one line at a time and manipulate each line as it is read. When using shell scripting languages, like bash, the second option usually results in simpler code.
The most important decision to make here is how you want to structure the data in memory. You normally want to avoid duplication of data, and you usually want a data structure that is easy to transform into your desired output. In this case, the most logical data structure is an associative array, using the team name as the key.
Assuming that you have to use bash, here is a framework for you to build upon:
#!/bin/bash
declare -A results
while IFS=':-' read team1 team2 score1 score2; do
    if [ "${score1}" -gt "${score2}" ]; then
        ((results[${team1}]+=2))
    elif [ ...next test... ]; then
        ...
    else
        ...
    fi
done < scores.txt
# Now you have an associative array containing the points for each team.
# You can either output it as it stands, or sort it by piping through the
# 'sort' command.
for key in "${!results[@]}"; do
    echo ...
done
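A minimal sketch of that output-and-sort step (the two-column output format and the sort flags are my assumptions, not part of the framework above):
# Print "team points" pairs, highest points first.
for key in "${!results[@]}"; do
    echo "$key ${results[$key]}"
done | sort -rnk 2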
I would use awk for this
AWK is an interpreted programming language (the name stands for Aho, Weinberger, and Kernighan) designed for text processing and typically used as a data extraction and reporting tool. AWK is largely used on Unix systems.
Using pure bash scripting is often messy for this kind of job.
Let me show you how easy it can be with awk.
Input file : scores.txt
team1-team2:2-1
team3-team1:2-2
Code :
awk -F'[:-]' '                                      # set field delimiters to : or -
{
    if ($3 > $4)      { teams[$1] += 3 }            # first team wins: 3 points
    else if ($3 < $4) { teams[$2] += 3 }            # second team wins: 3 points
    else              { teams[$1] += 1; teams[$2] += 1 }  # draw: 1 point each
}
END {                                               # after scanning the input file
    for (team in teams) {
        print(team OFS teams[team])                 # print total points per team
    }
}' scores.txt | sort -rnk 2 > ranking.txt           # sort by number of points
Output (ranking.txt):
team1 4
team3 1

How to format output of a select statement in Postgres back into a delimited file?

I am trying to work with some oddly created 'dumps' of some tables in postgres. Due to the tables containing specific data I will have to refrain from posting the exact information but I can give an example.
To give a bit more information, someone thought that this exact command was a good way to back up a table.
echo 'select * from test1'|psql > test1.date.txt
However, that gives a lot of information that no one needs. To make it even more fun, the person saw fit to remove the | that is normally seen with the data.
So what I end up with is something like this.
rowid test1
-------+----------------------
1 hi
2 no
(2 rows)
Also note that for this customer there are multiple tables. My thought was to use some simple Python to figure out where in each line the + was, mark those points, and then apply them to every line throughout the file.
I was able to make this work for one set of files, but for some reason the next set of files just doesn't work. What happens instead is that on most lines a pipe gets thrown into the middle of the data.
Maybe there is something I am missing here, but does anyone see an easy way to put something like the above back into a normal delimited file that I could then just load into the database?
Any python or bash related suggestions would also work in this case. Thank you.
As mentioned above, without a real example of where the problematic '|' characters are, it is hard to know whether we are addressing your actual issue. That said, your two primary Swiss-army knives for text processing are sed and awk. If you have data similar to your example, with pipes between data fields that you need to discard, then awk provides a fairly easy solution.
Take your short example and add a pipe in the middle that needs to be discarded, e.g.
$ cat dat/pgsql2.txt
rowid test1
-------+----------------------
1 | hi
2 | no
To process the file in awk discarding the '|' and outputting the remaining records in comma-separated-value format, you could do something like the following:
awk '{
    if (NR > 2) {                   # skip the heading and separator lines
        for (i = 1; i <= NF; i++) {
            if ($i != "|") {        # discard the pipe fields
                if (i == 1)
                    printf "%s", $i
                else
                    printf ",%s", $i
            }
        }
        printf "\n"                 # end each record with a newline
    }
}' inputfile
This simply reads from inputfile (named on the last line) and, for every record after the second (omitting the heading and separator), loops over the fields (NF is the number of fields, 3 in this case). Each field that is not "|" is output: the first field without a comma, every other field with a preceding comma, and a newline ends each record.
Example Output
1,hi
2,no
awk is a bit awkward at first, but as far as text processing goes, there isn't much that will top it.
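That said, if regenerating the dumps is an option, psql can produce delimited output directly, which avoids the reformatting problem entirely. A sketch (the table and file names are placeholders):
# -A: unaligned output, -t: tuples only (no header or row-count footer),
# -F',': use a comma as the field separator.
psql -A -t -F',' -c 'select * from test1' > test1.csv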
After trying multiple methods, sadly the only way I could make this work was to use Excel's import feature and then play with that to get the columns I needed.

Use cut and grep to separate data while printing multiple fields

I want to be able to separate data by weeks. The week is stated in a specific field on every line, and I would like to know how to use grep, cut, or anything else relevant on JUST the field the week is specified in, while still keeping the rest of the data on each line. I need to be able to pipe the information in via | because that's how the rest of my program needs it.
As the output gets processed, it should look something like this:
asset.14548.extension 0
asset.40795.extension 0
asset.98745.extension 1
I want to sort those names by their week number while keeping the asset name in my output, because the number of times each asset shows up gets counted. My problem is that I can't make my program smart enough to take just the "1" from the week number while ignoring the "1" in the asset name.
UPDATE
The closest answer I found was
grep "^.........................$week" ;
That's good, but it relies on every string being the same length. Is there a way I can have it start from the right instead of the left? Because if so then that'd answer my question.
^ tells grep to anchor the match at the left end of the line, and each . matches (and thereby ignores) whatever single character is in that position.
I found what I was looking for in some documentation. Anchor matches!
grep "$week$" file
would output this if $week were 0:
asset.14548.extension 0
asset.40795.extension 0
I couldn't find my exact question, or a closely similar one with a simple answer, so hopefully this helps the next person scratching their head over this.
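For completeness, a field-based match is another way to avoid the position problem entirely (a sketch, not part of the answer I found): since the week number is its own whitespace-separated field, awk can compare that field exactly and is never fooled by digits inside the asset name:
# Keep only the lines whose second field equals $week
# (reading from a file here; it works the same on a pipe).
week=0
awk -v week="$week" '$2 == week' file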

How can I create a list from a single url and append a numerical value to the end of every line

I need to provide a listing of a website's pages. The only thing to change per line is the page number at the end of the line. So for example, I need to take:
mywebsite.com/things/stuff/?q=content&page=1
And from that generate a sequential listing of pages:
mywebsite.com/things/stuff/?q=content&page=1
mywebsite.com/things/stuff/?q=content&page=2
mywebsite.com/things/stuff/?q=content&page=3
I need to list all pages between 1 - 120.
I have been using bash, but any shell that gets the job done is fine. I don't have any code to show because I simply don't know how to begin. It sounds simple enough, but so far I'm completely at a loss as to how to accomplish this.
With GNU bash 4:
printf '%s\n' 'mywebsite.com/things/stuff/?q=content&page='{1..120}
You can simply use:
for i in $(seq 120); do echo 'mywebsite.com/things/stuff/?q=content&page='"$i"; done > list.txt
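One caveat worth noting (an aside, not from either answer): brace expansion happens before variable expansion, so {1..$n} does not work when the page count is in a variable; seq or an arithmetic for loop does:
# A sketch with a variable page count; {1..$n} would NOT expand.
n=120
for ((i = 1; i <= n; i++)); do
    echo "mywebsite.com/things/stuff/?q=content&page=$i"
done > list.txt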

Bash history enumerated from the end

In bash one can use !-1 to execute the most recent command from the history, !-2 for the one before it, and so on. But how can one view the history so that this from-the-end enumeration is shown instead of the usual one, by built-in means? Of course one can write a simple script for that, but I feel like there should be some kind of option for it.
Another question is whether there is a way to switch history expansion around so that the - sign would be unnecessary, say by exchanging the meanings of !1 and !-1.
Showing negative indices is simple to implement: take the number of the most recent history entry (available from history 1) and use it to offset the number of every other entry.
neghistory() {
    local i n s
    # n gets the number of the most recent history entry.
    read n s < <(history 1)
    # Re-list the requested slice of history with negative indices.
    history "$@" | while read i s; do
        printf '%5d %s\n' $((i-n-1)) "$s"
    done
}
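A hypothetical session (the commands shown are made up; yours will differ):
$ neghistory 3
   -3 make test
   -2 git status
   -1 neghistory 3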
I don't see any built-in way to affect history's output like this, nor to change how the indices in history expansion work.
