Bash Script Nested loop error

Code:
#! /bin/bash
while [ 1 -eq 1 ]
do
while [ $(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) != $(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) ]
$(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) > /tmp/cmdb;obexftp -b $1 -B 6 -p /tmp/cmdb
done
done
This code gives me this error:
btcmdserver: 6: Syntax error: "done" unexpected (expecting "do")

Your second while loop is missing a do keyword.
Looks like you didn't close your while condition (the [ has no matching ]), and that your loop has no body.

You cannot compare whole files like that. Anyway, you seem to be comparing a file to itself.
#!/bin/bash
while true
do
    newest=~/Genel/$(ls -t1 ~/Genel | head -n 1)
    while ! cmp "$newest" "$newest"   # huh? you are comparing a file to itself
    do
        # huh? do you mean this:
        cat "$newest" > /tmp/cmdb
        obexftp -b "$1" -B 6 -p /tmp/cmdb
    done
done
This has the most troubling syntax errors and antipatterns fixed, but is virtually guaranteed to not do anything useful. Hope it's still enough to get you a little bit closer to your goal. (Stating it in the question might help, too.)
Edit: If you are attempting to copy the newest file every time a new file appears in the directory you are watching, try this. There's still a race condition; if multiple new files appear while you are copying, you will miss all but one of them.
#!/bin/sh
genedir=$HOME/Genel
previous=randomvalue_wehavenobananas
while true; do
    newest=$(ls -t1 "$genedir" | head -n 1)
    case $newest in
        $previous) ;;   # perhaps you should add a sleep here
        *) obexftp -b "$1" -B 6 -p "$genedir/$newest"
           previous="$newest" ;;
    esac
done
(I changed the shebang to /bin/sh mainly to show that this no longer contains any bashisms. The main change was to use $HOME instead of ~.)
A more robust approach would be to find all the files which have appeared since the previous time you copied, and copy them over. Then you could run this a little less aggressively (say, once per 5 minutes maybe, instead of the spin lock you have here, with no sleep at all between iterations). You could use a sentinel file in the watch directory to keep track of when you last copied some files, or just run a for loop over the ls -t1 output until you see a file you have seen before. (Note the comment about the lack of robustness with parsing ls output, though.)
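A minimal sketch of the sentinel-file idea (the .last_copied name and the 5-minute sleep are my own choices, not from the answer; find and touch are the usual POSIX tools):
#!/bin/sh
# Copy every file that has appeared since the previous run, then update the marker.
genedir=$HOME/Genel
sentinel=$genedir/.last_copied   # hypothetical marker file
[ -e "$sentinel" ] || touch "$sentinel"
while true; do
    # Word-splitting on the find output breaks on names with spaces; acceptable for a sketch.
    for f in $(find "$genedir" -type f -newer "$sentinel" ! -name '.last_copied'); do
        obexftp -b "$1" -B 6 -p "$f"
    done
    touch "$sentinel"
    sleep 300
done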

Related

Delete duplicate commands of zsh_history keeping last occurrence

I'm trying to write a shell script that deletes duplicate commands from my zsh_history file. Having no real shell script experience and given my C background I wrote this monstrosity that seems to work (only on Mac though), but takes a couple of lifetimes to end:
#!/bin/sh
history=./.zsh_history
currentLines=$(grep -c '^' $history)
wordToBeSearched=""
currentWord=""
contrastor=0
searchdex=""
echo "Currently handling a grand total of: $currentLines lines. Please stand by..."
while (( $currentLines - $contrastor > 0 ))
do
searchdex=1
wordToBeSearched=$(awk "NR==$currentLines - $contrastor" $history | cut -d ";" -f 2)
echo "$wordToBeSearched A BUSCAR"
while (( $currentLines - $contrastor - $searchdex > 0 ))
do
currentWord=$(awk "NR==$currentLines - $contrastor - $searchdex" $history | cut -d ";" -f 2)
echo $currentWord
if test "$currentWord" == "$wordToBeSearched"
then
sed -i .bak "$((currentLines - $contrastor - $searchdex)) d" $history
currentLines=$(grep -c '^' $history)
echo "Line deleted. New number of lines: $currentLines"
let "searchdex--"
fi
let "searchdex++"
done
let "contrastor++"
done
^THIS IS HORRIBLE CODE NOONE SHOULD USE^
I'm now looking for a less life-consuming approach using more shell-like conventions, mainly sed at this point. Thing is, zsh_history stores commands in a very specific way:
: 1652789298:0;man sed
Where the command itself is always preceded by ":0;".
I'd like to find a way to delete duplicate commands while keeping the last occurrence of each command intact and in order.
Currently I'm at a point where I have a functional line that will delete strange lines that find their way into the file (newlines and such):
#sed -i '/^:/!d' $history
But that's about it. I'm not really sure how to get the expression to look for into a sed command without falling back into everlasting whiles, or how to delete the duplicates while keeping the last-occurring command.
The zsh option hist_ignore_all_dups should do what you want. Just add setopt hist_ignore_all_dups to your zshrc.
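For reference, the relevant lines in ~/.zshrc could look like this (hist_save_no_dups is a related option added here for illustration; it is not mentioned in the answer):
# ~/.zshrc
setopt hist_ignore_all_dups   # adding a command removes any older duplicate from the history list
setopt hist_save_no_dups      # when writing the history file, omit older duplicates as well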
I wanted something similar, but I don't care about preserving the last occurrence as you mentioned; this just finds duplicates and removes them.
I used this command, then removed my .zsh_history and replaced it with the .zhistory file that this command outputs.
So from your home folder:
cat -n .zsh_history | sort -t ';' -uk2 | sort -nk1 | cut -f2- > .zhistory
This will give you the file .zhistory containing the deduplicated list; in my case it went from 9000 lines to 3000. You can check it with wc -l .zhistory to count the number of lines it has.
Please double check and make a backup of your zsh history before doing anything with it.
The sort command might be modified to sort by numerical value and somehow achieve what you want (keeping the last occurrence), but you will have to investigate that further.
I found the script here, along with some commands to avoid saving duplicates in the future
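If you do want to keep the last occurrence of each command, one common trick (a sketch, not taken from the answers above; it assumes single-line commands without embedded ';', and a tac command, or tail -r on macOS) is to reverse the file, keep the first occurrence of each command, and reverse it back:
tac ~/.zsh_history | awk -F';' '!seen[$2]++' | tac > ~/.zhistory_deduped   # output name is hypothetical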
I didn't want to rename the history file.
# dedupe_lines.zsh
if [ $# -eq 0 ]; then
    echo "Error: No file specified" >&2
    exit 1
fi
if [ ! -f "$1" ]; then
    echo "Error: File not found" >&2
    exit 1
fi
# note: sorting discards the chronological order of the history file
sort "$1" | uniq > temp.txt
mv temp.txt "$1"
Add dedupe_lines.zsh to your home directory, then make it executable.
chmod +x dedupe_lines.zsh
Run it.
./dedupe_lines.zsh .zsh_history

Using Source file for "for file in"

I'm trying to run the following code on files that I choose and put into a variable source file. This is to make things easier for the user when I distribute the script. This is the code before trying to add a source file:
for file in ~/Twitter/Users/New/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > ~/Twitter/Users/TBA/"${file##*/}"
done
So I tried adding a source file like so:
#!/bin/bash
source ~/MYBASHSCRIPTS/Tests/scriptsettings.in
for file in $loctorem/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > $locdone
done
echo $loctorem
echo $locdone
with the scriptsettings.in configured as such:
loctorem="~/Twitter/Users/New"
locdone='~/Twitter/Users/TBA/"${file##*/}"'
I have tried both half-old/half-new code, but neither works. Does it really need to be hard-coded in order to run? That would throw my whole "noob friendly" idea in the trash...
EDIT--- I only echo it at the end so that I can verify that it is calling the correct locations.
EDIT2--- Here is the exact script I originally ran.
#!/bin/bash
for file in ~/Anon/Twitter/OpISIS/New/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > ~/Anon/Twitter/OpISIS/TBA/"${file##*/}"
done
And the new variant:
source ~/MYBASHSCRIPTS/Tests/scriptsettings.in
for file in $loctorem/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > "$(locdone_path "$file")"
done
with the source file being:
loctorem=/home/matrix/Anon/Twitter/OpISIS/New
locdone_path() { printf '%s\n' ~/Twitter/Users/TBA/"${1##*/}"; }
As I said before, I'm still pretty new, so sorry if I'm doing something insanely stupid here...
I'm trying to make the input and output folders/files variables that the user can change. In the end this script will be ~80 lines, and I want anyone to be able to run it instead of forcing everyone to have directories/files set up like mine. Then I'll have a setup script that creates the file with the variables stored in it, so there is a one-time setup; the user can later change locations without having to go through the entire script and edit everything to fit their system.
You've got two problems here. The first is the quotes, which prevent tilde expansion:
# this stores a path with a literal ~ character
loctorem='~/Twitter/Users/New'
# this works
loctorem=~/Twitter/Users/New
# this works too
loctorem="$HOME/Twitter/Users/New"
The second issue is that you're depending on $file before it's available. If you want to store code (an algorithm for how to calculate something, for instance) in your configuration, define a function:
# this can be put in your sourced file
locdone_path() { printf '%s\n' ~/Twitter/Users/TBA/"${1##*/}"; }
...and, later, to use that code, invoke the function:
... | head -n -9 >"$(locdone_path "$file")"
However, if you only want to make the directory customizable, you might do something much simpler:
loctorem=~/Twitter/Users/New
locdone=~/Twitter/Users/TBA
and:
... | head -n -9 >"$locdone/${file##*/}"
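Putting the simpler variant together, a minimal sketch of the whole thing (paths are the ones from the question; head -n -9, i.e. "all but the last 9 lines", is taken from the original pipeline):
# ~/MYBASHSCRIPTS/Tests/scriptsettings.in
loctorem=$HOME/Twitter/Users/New
locdone=$HOME/Twitter/Users/TBA
and the script that sources it:
#!/bin/bash
source ~/MYBASHSCRIPTS/Tests/scriptsettings.in
for file in "$loctorem"/*; do
    [ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > "$locdone/${file##*/}"
done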

More elegant method of sorting a file after the first X lines?

I've been doing a lot of searching on Stack Overflow today for a solution for this and have found many questions about sorting after skipping X lines, but no really solid generic answers, so I threw together my own slipshod method of doing so:
head -n 15 FILE.EXT > temp.txt
tail -n+16 FILE.EXT | sort >> temp.txt
mv temp.txt FILE.EXT
This will sort the file (take your pick of the options for sort), while preserving the order of the first 15 lines of it. This is obviously fairly inelegant, with three file references and two different values to enter. Ideally I'd like to come up with a command that is less cumbersome, if possible, because this seems like a pretty common desire with not much support.
Does anyone have a simpler solution than mine?
Is there anything wrong with what I've done? Potential issues?
This problem lends itself more strongly to using a script, but my command is still probably slightly quicker than creating and executing a script for a one-off.
I'm not even close to a bash expert, so I'm hoping there is some bash-fu that can make this a quick one-liner. Is there a way to create and reference variables in a single command so that a user only needs to put in something like the name and line number?
This 'one-liner' generates the output:
awk 'NR <= 15 { print; next } { print | "sort" }' FILE.EXT
Overwriting the original file cleanly is harder, and generally involves something that writes to a temporary file and renames it when that's complete.
As sputnick points out, if you have GNU awk, you can use the -i inplace option to overwrite in place:
gawk -i inplace 'NR <= 15 { print; next } { print | "sort" }' FILE.EXT
(And gawk is often also installed as awk.)
If you don't have GNU awk, then I have a script ow derived from a script overwrite from Kernighan & Pike The UNIX Programming Environment that does just that.
Usage:
ow FILE.EXT awk 'NR <= 15 { print; next } { print | "sort" }' FILE.EXT
Code:
: "#(#)$Id: ow.sh,v 1.6 2005/06/30 18:14:08 jleffler Exp $"
#
# Overwrite file
# From: The UNIX Programming Environment by Kernighan and Pike
# Amended: remove PATH setting; handle file names with blanks.
case $# in
0|1)
    echo "Usage: $0 file command [arguments]" 1>&2
    exit 1;;
esac
file="$1"
shift
new=${TMPDIR:-/tmp}/ovrwr.$$.1
old=${TMPDIR:-/tmp}/ovrwr.$$.2
trap "rm -f '$new' '$old' ; exit 1" 0 1 2 15
if "$#" >"$new"
then
cp "$file" "$old"
trap "" 1 2 15
cp "$new" "$file"
rm -f "$new" "$old"
trap 0
exit 0
else
echo "$0: $1 failed - $file unchanged" 1>&2
rm -f "$new" "$old"
trap 0
exit 1
fi
It's old code; I haven't modified it for almost a decade now, but I have used it quite a lot. As noted by Charles Duffy, it could do with some modernization if you're likely to face file names starting with dashes (because those could be mistaken for command-line options to cp or mv), and it should have a shebang line, etc.
It also shows trapping signals (though nowadays, I usually trap '0 1 2 3 13 15', equivalent to 'EXIT HUP INT QUIT PIPE TERM') and naming temporary files for preventing casual interference (using $$ rather than mktemp — like I said, it is old code).
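A sketch of what those two modernizations might look like (this fragment is not part of the original script):
# mktemp instead of $$-based names, symbolic signal names instead of numbers
new=$(mktemp "${TMPDIR:-/tmp}/ovrwr.XXXXXX") || exit 1
old=$(mktemp "${TMPDIR:-/tmp}/ovrwr.XXXXXX") || exit 1
trap 'rm -f "$new" "$old"; exit 1' HUP INT QUIT PIPE TERM
trap 'rm -f "$new" "$old"' EXIT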
You can do a sort that skips some lines at the start of the file like this:
{ head -n 15 && sort; } < file > tempfile
It works because head stops reading after the first 15 lines, so sort sees only the rest of the file.
So, to solve the full original problem:
{ head -n 15 && sort; } < file > tempfile && mv tempfile file
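The question also asked for something reusable that only takes a file name and a line count; a minimal function sketch built on the same trick (the name is my own; it relies on head leaving the file offset just past the lines it printed, which GNU head does for regular files):
# sort_after_skip FILE NLINES: sort FILE in place, leaving the first NLINES lines untouched
sort_after_skip() {
    tmp=$(mktemp) || return 1
    { head -n "$2" && sort; } < "$1" > "$tmp" && mv "$tmp" "$1"
}
# usage: sort_after_skip FILE.EXT 15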
What about:
{ head -n 15 file; tail -n+16 file | sort ; }

Unix shell script for incrementing the extension

I have a file (testfile) that gets generated each time a script is run in the working directory. I have to copy that generated file to a directory (testfolder) and give it an incrementing extension.
If the script is run for the first time, copy testfile to testfolder as "testfile.0"; when run a second time, copy it as "testfile.1", and so on.
My script:
#!/bin/sh
file="testfile"
n=1
ls folder/${file}* | while read i
do
if [ folder/${file}.${n} = ${i} ]
then
n=$(( $n + 1 ))
fi
done
cp testfile folder/${file}.${n}
This only works for the first increment, "folder/testfile.0".
I won't correct your solution, since mklement0 does it well.
Here is another solution, without any loop:
file="testfile"
n_max=$(ls -1 "folder/${file}"* | egrep -o '[0-9]+$' | sort -rn | head -n 1)
cp "${file}" "folder/${file}.$((n_max+1))"
Here is how the second line works: first you list the files, then egrep extracts the digit extension, then sort -rn sorts them in decreasing order, and finally head keeps only the first line, i.e. the largest index in use. In the third line, you add one to this maximum to build your new index. It is still OK if the list is empty.
By the way, listing a directory may take quite a while, so I suggest storing the last index you used somewhere for later use. Save it as a variable in the script, or even in a file ... it can also save you some time.
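A minimal sketch of that counter-file idea (the .last_index name is my own, not from the answer):
file="testfile"
counter="folder/.last_index"                       # hypothetical file remembering the last index used
n=$(( $(cat "$counter" 2>/dev/null || echo -1) + 1 ))
cp "$file" "folder/${file}.${n}"
echo "$n" > "$counter"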
The problem is that your while loop is executed in a subshell due to use of a pipe, so your modifications of n do not work as intended.
In general, you could use process substitution with while to avoid this problem (a sketch of that variant appears after the script below), but in the case at hand a simple for loop is the right approach:
#!/bin/sh
file="testfile"
n=1
for i in folder/${file}*
do
    if [ folder/${file}.${n} = ${i} ]
    then
        n=$(( $n + 1 ))
    fi
done
cp testfile folder/${file}.${n}
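For reference, the process-substitution variant mentioned above would look roughly like this (bash only; just a sketch, translating the original while loop directly):
#!/bin/bash
file="testfile"
n=1
while read -r i
do
    if [ folder/${file}.${n} = "$i" ]
    then
        n=$(( $n + 1 ))
    fi
done < <(ls folder/${file}*)
cp testfile folder/${file}.${n}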
Just another suggestion - in easy to understand code:
#!/bin/bash
file="testfile"
n=1
# Check for existing files with filename and count it
n=$(ls -latr folder/ | grep -i "${file}" | wc -l)
# Increment above number
n=$(( $n + 1 ))
# Copy over file with the incremented number
cp "$file" "folder/${file}.${n}"

Shell script: Get name of last file in a folder by alphabetical order

I have a folder with backups from a MySQL database that are created automatically. Their name consists of the date the backup was made, like so:
2010-06-12_19-45-05.mysql.gz
2010-06-14_19-45-05.mysql.gz
2010-06-18_19-45-05.mysql.gz
2010-07-01_19-45-05.mysql.gz
What is a way to get the filename of the last file in the list, i.e. the one which comes last in alphabetical order?
In a shell script, I would like to do something like
LAST_BACKUP_FILE= ???
gunzip $LAST_BACKUP_FILE;
ls -1 | tail -n 1
If you want to assign this to a variable, use $(...) or backticks.
FILE=`ls -1 | tail -n 1`
FILE=$(ls -1 | tail -n 1)
Sjoerd's answer is correct; I'll just pick a few nits from it:
you don't need the -1 option to enforce one path per line if you pipe the output somewhere:
ls | tail -n 1
you can use -r to get the listing in reverse order, and take the first one:
ls -r | head -n 1
gunzip some.log.gz will write the uncompressed data into some.log and remove some.log.gz, which may or may not be what you want (it probably isn't). If you want to keep the compressed source, pipe it into gunzip:
gunzip < some.file.gz
You might want to protect the script against the situation when the directory contains no files, since
gunzip $empty_variable
expands to just
gunzip
and such invocation will wait indefinitely for data on standard input:
latest="$(ls -r /some/where/*.gz | head -1)"
if test -z "$latest"; then
# there's no logs yet, bail out
exit
fi
gunzip < $latest
ls can yield unexpected results when parsed by other commands if the filenames have unusual characters. The following always works:
for LAST_BACKUP_FILE in *; do : ; done
for LAST_BACKUP_FILE in * loops through every filename (and folder name, if there are any) in order in the current directory, storing each in $LAST_BACKUP_FILE
do : does nothing
done finishes after the last file
Now, the last file is stored in $LAST_BACKUP_FILE.
If you happen to want the first file, use this:
for FIRST_BACKUP_FILE in *; do break; done
The break statement jumps out of the loop after the first file is stored in $FIRST_BACKUP_FILE.
(from comment below) If you want hidden files included in the search, then use the command shopt -s dotglob before running the loops.
The shell is more powerful than many think. Just let it work for you. Assuming filenames without spaces,
set -- $(ls -r *.gz)
LAST_BACKUP_FILE=$1
does the trick with a single fork, no pipes, and you can even avoid the fork if your shell supports arithmetic expansion as in
set -- *.gz
shift $(($# - 1))
LAST_BACKUP_FILE=$1
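For completeness, in bash specifically an array makes the same idea a little more direct; a minimal sketch (not from the answers above), assuming at least one .gz file exists in the current directory:
#!/bin/bash
files=( *.gz )                                # globs expand in sorted (alphabetical) order
LAST_BACKUP_FILE=${files[${#files[@]}-1]}     # last element of the array
gunzip < "$LAST_BACKUP_FILE" > "${LAST_BACKUP_FILE%.gz}"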
