Combining multiple lines into one and starting a new line after x lines - bash

I have a file which has a long list of data with a specific number of lines; below is an example:
111
222
333
444
555
666
777
888
999
I need to rearrange the file so that the values are joined onto one line (comma separated), inserting a new line in the output after every x values, using the shell:
111,222,333
444,555,666
777,888,999
Thanks
Ali
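One way to do this is with awk: print each value followed by a comma, except after every x-th value, where a newline goes instead. A minimal sketch, assuming the group size x is 3 as in the example and that file stands in for your input file:

awk -v x=3 '{printf "%s%s", $0, (NR % x ? "," : "\n")}' file

Note that if the total number of lines is not a multiple of x, the final output line will lack a trailing newline. For a fixed group size of 3, paste is even shorter, reading standard input three times per output line:

paste -d, - - - < file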

Related

How to add 2 to all integers bigger than 100 in a .txt file in bash

I have a .txt file with bookmarks and all bookmarks above 100 have to be placed 2 pages down from where they are now, because I added two pages in the document. How do I write a bash script that adds 2 to all integers it finds in the document?
I'm new to writing code in general, but I already know that I should make a for loop to read each line, then determine whether each word is an integer, and then, with an if statement, add 2 to each integer above 100.
The problem is that I don't know exactly how to access (read and write) the file, and I also don't know how to determine whether something is a number or not.
Here is a small sample of the .txt file:
The Tortle Package; 24
Tortle; 25
Elemental Evil Player's Companion; 27
Aarakocra; 28
Deep Gnome (gnome subrace); 30
Eberron\: Rising from the Last War; 84
Changelings; 85
Gnomes; 91
Goblinoids; 92
Bugbear; 93
Goblin; 94
Hobgoblin; 94
Half-Elves; 94
I did some research and this is the code I've come up with:
#!/bin/bash
cd /home/dexterdy/Documents/
i=$(grep -ho '[0-9]*' bookmarks.txt)
if [ "$i" -gt 100 ]; then
i += 2
fi
It seems that the grep variable outputs one large string with all the numbers. I also can't get the if-statement to work for some reason and I don't know how to actually write the numbers into the file.
From the shape of your input file, I suggest the following magic:
awk 'BEGIN{FS=OFS=";"}($NF>100){$NF+=2}1' input_file > output_file
This will remove the space just after the ;, which can be put back like this:
awk 'BEGIN{FS=OFS=";"}($NF>100){$NF=" "($NF+2)}1' input_file > output_file
If you want to ensure that malformatted lines such as
foo;20
bar\; car;105
are all correctly converted into
foo; 20
bar\; car; 107
then use:
awk 'BEGIN{FS=OFS=";"}{$NF=" "($NF+($NF>100?2:0))}1' input_file > output_file
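If you want to update bookmarks.txt in place rather than writing to a separate output file, GNU awk has an inplace extension; a sketch, assuming gawk 4.1 or later is installed:

gawk -i inplace 'BEGIN{FS=OFS=";"}($NF>100){$NF=" "($NF+2)}1' bookmarks.txt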

How to get column of line in file when items have spaces?

Does anyone know a command that can get me the nth column of a tab-delimited file, when the items in the columns contain spaces? I tried awk and cut, but I think they are treating the spaces inside the items as delimiters and so are giving me incorrect values. I double-checked by manually counting columns, and I think this is the case.
You can set tab as a delimiter in the cut command like this:
cut -d$'\t' -f2 file.txt
Input (tab-separated columns that contain spaces; the tab characters are shown here as →):
first item→second item→third item
123 456 789→987 654 321→741 852 933
Output (when selecting the 2nd column):
second item
987 654 321
As you can see, the spaces didn't interfere with the column separation.
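awk works as well once its field separator is pinned to a tab; by default awk splits on runs of any whitespace, which is what was mangling the columns. A sketch for printing the 2nd column:

awk -F'\t' '{print $2}' file.txt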

Split a file containing rows of CSV into several files in shell or Sheets

I have a file (longlist.csv) containing 333 rows of 4 comma-separated values. I'd like it to be 4 files instead: 3 of 100 rows and 1 with the remainder (33).
What if you didn't know how many lines the file had, and you wanted it split into separate files of 100 rows each, plus one additional file for the remaining lines?
How can one do this using a shell command/script or Google Sheets?
longlist.csv
1A, 1B,1C, 1D
2A,2B, 2C, 2D
...
333A, 333B, 333C, 333D
I think you can do it by piping appropriate combinations of the head and tail commands:
head -n 100 longlist.csv > list_01.csv
head -n 200 longlist.csv | tail -n 100 > list_02.csv
tail -n 133 longlist.csv | head -n 100 > list_03.csv
tail -n 33 longlist.csv > list_04.csv
I haven't tested this on a file with 333 rows; I tested the concept on a smaller file and mapped it onto the data provided in your question.
EDIT
A generic solution to this problem is easier than I thought.
We can simply use the Unix split program as follows:
split -l 100 longlist.csv new
This command will generate files named newaa, newab, newac, and so on. Each file will contain 100 lines from the original file, and the last file will contain the remaining lines (when the total line count is not a multiple of 100). We can also omit the new prefix at the end, in which case the default prefix x is used.
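If you would rather get numbered .csv files directly, GNU split can name them that way; a sketch, assuming GNU coreutils (the -d and --additional-suffix options are GNU extensions and are not in BSD split):

split -l 100 -d --additional-suffix=.csv longlist.csv list_

This produces list_00.csv, list_01.csv, list_02.csv, and list_03.csv.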

Removing text between two strings over multiple lines

I have a log file that logs timestamps and commands on separate lines. I'd like to strip out the timestamps and save just the "user: command" list. I've tried several permutations of sed to replace or delete the data between strings, but it always oversteps the bounds of the command. The current log output is similar to:
USER 001
6:32am
USER 001
random bash command or output 001
USER 002
7:41am
USER 002
random bash command or output 002
USER 001
7:43am
USER 001
random bash command or output 003
USER 002
7:43am
USER 002
random bash command or output 004
Desired output:
USER 001
random bash command or output 001
USER 002
random bash command or output 002
USER 001
random bash command or output 003
USER 002
random bash command or output 004
Looks like this will do:
sed -ri 'N; /^.*\n[0-9]/d' logfile
(Assumes GNU sed; logfile here is a stand-in for your log file.)
It processes the file two lines at a time.
On each cycle:
sed automatically reads one line into the pattern space.
The N command appends to the pattern space a newline and the next line.
If the pattern space matches "any text, newline, digit", then delete it (and therefore don't auto-print it).
Otherwise, auto-print it.
If the file always follows this four-line pattern, you can keep just the third and fourth line of each group (NR%4==3 and NR%4==0), i.e. the second USER line and the command line, like this:
awk 'NR%4!=1 && NR%4!=2' file
USER 001
random bash command or output 001
USER 002
random bash command or output 002
USER 001
random bash command or output 003
USER 002
random bash command or output 004
Or, equivalently:
awk '!(NR%4==1 || NR%4==2)' file

Unix command to add character between Line 100 and 300

I need to add # at the beginning of every line in a Unix file between line numbers 115 and 315. How do I do it?
I tried the below command:
awk '{print "#" $0;}'>filename
but this added # to every line of the file. The file has more than 1000 lines, and I only want # added between lines 115 and 315. Kindly help.
Thanks,
Sen
Try
sed -i '115,315 s/^/#/' filename
This will add a # to the beginning of lines 115-315, editing filename in place.
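If you prefer awk, or sed -i behaves differently on your system (BSD/macOS sed needs -i ''), an equivalent sketch that writes the result to a new file instead of editing in place:

awk 'NR >= 115 && NR <= 315 { $0 = "#" $0 } 1' filename > filename.new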
