How to create a same-line if/else statement around wc -l output - bash

I'm not at all familiar with tcsh, so I'm really hoping someone can drop some knowledge on what I'm missing here. I'm trying to build a single-line if/else statement that checks the number of rows in a file and creates a touch file if the number of rows is > 1. I'm checking whether rows is greater than 1 because the file can be generated with just header info. (Separately, if anyone has any useful online guides for all things tcsh, that would also be very helpful.)
Example file 1 (has data):
title1,title2
data1,data2
Example file 2 (header only, no data rows):
title1,title2
Expected output:
If file has data, generates a has_data.txt, else does not generate file
What I've tried:
wc -l dummy_file_empty.txt >1 && touch has_data.txt - does not really evaluate greater than
Update: backticks (``) are used for evaluating commands inline, and $output is an array? So I needed to reference index 1 to get the line count.
set output=`wc -l dummy_file_empty.txt` | if($output[1] >1) touch has_data.txt - returns expression syntax error
if ( wc -l dummy_file_empty.txt ) > 10 touch has_data.txt - returns expression syntax error
I can also invoke a bash shell if a solution exists in bash (I saw examples where bash is piped at the end of a statement, | bash, and tested that this works on my machine).

[[ $(wc -l < file.txt) -gt 1 ]] && touch has_data.txt
When you use >, the shell treats it as a redirection and attempts to write to a file, in this case a file named 1 or 10. You want to use -gt, which stands for "greater than".
If you're running this in a tcsh terminal, then you can invoke this via bash:
bash -c '[[ $(wc -l < file.txt) -gt 1 ]] && touch has_data.txt'
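A variant of the same one-liner that also guards against the file being missing (a minimal sketch using the question's file names):
[[ -f file.txt && $(wc -l < file.txt) -gt 1 ]] && touch has_data.txt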

Just wanted to add a solution where I've gotten somewhat close, albeit I'm not really able to handle scenarios where the file might not exist. Hopefully somebody has guidance on whether this is a good way to do it.
set output=`wc -l file.txt` && if ( $output[1] > 1 ) touch has_data.txt
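For the missing-file scenario, a hedged tcsh sketch that simply guards with a file test first (spread over several lines for readability; same file name as above):
if ( -f file.txt ) then
    set output=`wc -l file.txt`
    if ( $output[1] > 1 ) touch has_data.txt
endif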

Related

Delete duplicate commands of zsh_history keeping last occurence

I'm trying to write a shell script that deletes duplicate commands from my zsh_history file. Having no real shell script experience, and given my C background, I wrote this monstrosity that seems to work (only on Mac, though), but takes a couple of lifetimes to finish:
#!/bin/sh
history=./.zsh_history
currentLines=$(grep -c '^' $history)
wordToBeSearched=""
currentWord=""
contrastor=0
searchdex=""
echo "Currently handling a grand total of: $currentLines lines. Please stand by..."
while (( $currentLines - $contrastor > 0 ))
do
    searchdex=1
    wordToBeSearched=$(awk "NR==$currentLines - $contrastor" $history | cut -d ";" -f 2)
    echo "$wordToBeSearched TO BE SEARCHED"
    while (( $currentLines - $contrastor - $searchdex > 0 ))
    do
        currentWord=$(awk "NR==$currentLines - $contrastor - $searchdex" $history | cut -d ";" -f 2)
        echo $currentWord
        if test "$currentWord" == "$wordToBeSearched"
        then
            sed -i .bak "$((currentLines - $contrastor - $searchdex)) d" $history
            currentLines=$(grep -c '^' $history)
            echo "Line deleted. New number of lines: $currentLines"
            let "searchdex--"
        fi
        let "searchdex++"
    done
    let "contrastor++"
done
^THIS IS HORRIBLE CODE NO ONE SHOULD USE^
I'm now looking for a less life-consuming approach using more shell-like conventions, mainly sed at this point. Thing is, zsh_history stores commands in a very specific way:
: 1652789298:0;man sed
Where the command itself is always preceded by ":0;".
I'd like to find a way to delete duplicate commands while keeping the last occurrence of each command intact and in order.
Currently I'm at a point where I have a functional line that will delete strange lines that find their way into the file (newlines and such):
#sed -i '/^:/!d' $history
But that's about it. Not really sure how to get the expression to look for into a sed without falling back into everlasting whiles, or how to delete the duplicates while keeping the last-occurring command.
The zsh option hist_ignore_all_dups should do what you want. Just add setopt hist_ignore_all_dups to your zshrc.
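For reference, a minimal ~/.zshrc sketch; hist_ignore_all_dups is the option named above, and hist_save_no_dups is a related option I'm adding here that also suppresses duplicates when the history file is written:
setopt hist_ignore_all_dups   # remove older duplicates from the history list
setopt hist_save_no_dups      # don't write duplicate entries to the history file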
I wanted something similar, but I don't care about preserving the last occurrence as you mentioned. This just finds duplicates and removes them.
I used this command, then removed my .zsh_history and replaced it with the .zhistory that this command outputs.
So from your home folder:
cat -n .zsh_history | sort -t ';' -uk2 | sort -nk1 | cut -f2- > .zhistory
This will give you the file .zhistory containing the deduplicated list; in my case it went from 9000 lines to 3000. You can check it with wc -l .zhistory to count the number of lines it has.
Please double check and make a backup of your zsh history before doing anything with it.
The sort command might be able to be modified to sort by numerical value and somehow achieve what you want, but you will have to investigate that further.
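One hedged way to keep the last occurrence instead: reverse the file, keep the first occurrence of each command, then reverse back (tac is GNU coreutils; on a Mac, tail -r can stand in for it). Note that commands containing extra semicolons are keyed only on their first segment:
tac .zsh_history | awk -F';' '!seen[$2]++' | tac > .zhistory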
I found the script here, along with some commands to avoid saving duplicates in the future
I didn't want to rename the history file.
#!/bin/sh
# dedupe_lines.zsh
if [ $# -eq 0 ]; then
    echo "Error: No file specified" >&2
    exit 1
fi
if [ ! -f "$1" ]; then
    echo "Error: File not found" >&2
    exit 1
fi
sort "$1" | uniq > temp.txt
mv temp.txt "$1"
Add dedupe_lines.zsh to your home directory, then make it executable.
chmod +x dedupe_lines.zsh
Run it.
./dedupe_lines.zsh .zsh_history
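As an aside, the sort | uniq pair and the temp file inside the script could be collapsed into one command: sort -u deduplicates, and -o lets sort write back to its own input safely, since sort reads all input before writing:
sort -u "$1" -o "$1"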

IF test against ARG_MAX and number of files in directory

I have a potentially large number of generated files and subdirectories in a directory, and the number is not known ahead of execution. For simplicity, let's say I just want to invoke
mv * /another_dir/.
But for large numbers of files and subdirectories, it'll come back with the dreaded 'Argument list too long'. Yes, I know find and xargs are a way to deal with it. But I want to test that number against ARG_MAX and try it that way first.
I'm on a Mac and so I can't up the ulimit setting.
So far I have
# max_value
if ((expr `getconf ARG_MAX` - `env|wc -c` - `env|wc -l` \* 4 - 2048) ) ; then echo "ok" ; fi
which offers me a value to test against.
Let's say my test value for the number of files or subdirectories in the directory is based on
# test_value
(find . -maxdepth 1 | wc -l)
How can I get a working expression, no matter how many files there are?
if (test_value < max_value ) ; then echo "do this" else echo "do that" ; fi
Every way I try to construct the if test, the syntax fails for some reason in trying to set the max_value and test_value parameters and then test them together. Grateful for help.
When writing shell scripts, you have to pay a lot of attention to what context you're in, and use the right syntax for that context. The thing that goes between if and then is treated as a command, so you could use e.g. if expr 2 \> 1; then. But the modern replacement of the expr command is the (( )) arithmetic expression, so you'd use if (( 2 > 1 )); then. Note that you don't need to escape the > because it's no longer part of a regular command, and that you cannot put spaces between the parentheses: ( ( something ) ) does something completely different from (( something )).
(( )) lets you run a calculation/comparison/etc as though it were a command. It's also common to want to use the result of a calculation as part of a command; for that, use $(( )):
max_value=$(($(getconf ARG_MAX) - $(env|wc -c) - $(env|wc -l) * 4 - 2048))
test_value=$(ls -A | wc -l)
Note that I used $( ) instead of backticks; it does essentially the same thing, with somewhat cleaner syntax. And again, the arrangement of parentheses matters: $(( )) does a calculation and captures its result; $( ) runs a command and captures its output. And $( ( ) ) would run a command in a pointless sub-subshell and capture its output. Oh, and I used ls -A because find . -maxdepth 1 will include . (the current directory) in its output, giving an overcount of 1. (And both the find and ls versions will miscount files with linefeeds in their names; oh, well.)
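If the linefeed caveat matters, a hedged alternative is to count directory entries with a glob, which cannot be fooled by odd file names:
shopt -s nullglob dotglob   # match dotfiles too; an empty directory yields an empty array
entries=(*)
test_value=${#entries[@]}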
Then, to do the final comparison:
if ((test_value < max_value)); then echo "do this"; else echo "do that"; fi
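Putting the pieces together, a minimal sketch; the mv commands in the branches are illustrative assumptions, not part of the original answer:
max_value=$(($(getconf ARG_MAX) - $(env|wc -c) - $(env|wc -l) * 4 - 2048))
test_value=$(ls -A | wc -l)
if ((test_value < max_value)); then
    mv * /another_dir/.    # "do this": everything fits on one command line
else
    find . -mindepth 1 -maxdepth 1 -exec mv {} /another_dir/. \;    # "do that": move per file
fi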

Bash conditional on command exit code

In bash, I want to say "if a file doesn't contain XYZ, then" do a bunch of things. The most natural way to transpose this into code is something like:
if [ ! grep --quiet XYZ "$MyFile" ] ; then
... do things ...
fi
But of course, that's not valid Bash syntax. I could use backticks, but then I'll be testing the output of the file. The two alternatives I can think of are:
grep --quiet XYZ "$MyFile"
if [ $? -ne 0 ]; then
... do things ...
fi
And
grep --quiet XYZ "$MyFile" ||
( ... do things ...
)
I kind of prefer the second one; it's more Lispy, and || for control flow isn't that uncommon in scripting languages. I can see arguments for the first one too, although when someone reads the first line, they don't know why you're executing grep; it looks like you're executing it for its main effect, rather than just to control a branch in the script.
Is there a third, more direct way which uses an if statement and has the grep in the condition?
Yes there is:
if grep --quiet .....
then
# If grep finds something
fi
or if the grep fails
if ! grep --quiet .....
then
# If grep doesn't find something
fi
You don't need the [ ] (test) to check the return value of a command. Just try:
if ! grep --quiet XYZ "$MyFile" ; then
This is a matter of taste, since there obviously are multiple working solutions. When I deal with a problem like this, I usually apply wc -l after grep in order to count the lines that match. Then you have a single integer that you can evaluate in a test condition. If the question is only whether there is a match at all (the number of matching lines does not matter), then applying wc is probably over the top, and evaluating grep's return code seems to be the best solution:
Normally, the exit status is 0 if selected lines are found and 1 otherwise. But the exit status is 2 if an error occurred, unless the -q or --quiet or --silent option is used and a selected line is found. Note, however, that POSIX only mandates, for programs such as grep, cmp, and diff, that the exit status in case of error be greater than 1; it is therefore advisable, for the sake of portability, to use logic that tests for this general condition instead of strict equality with 2.
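For completeness, a minimal sketch of the count-based variant mentioned above (variable names follow the question):
matches=$(grep -c XYZ "$MyFile")   # grep -c prints the number of matching lines
if [ "$matches" -eq 0 ]; then
    ... do things ...
fi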

How to interrupt bash pipeline on error?

In the following example echo statement gets executed regardless of exit code of previous command in pipeline:
asemenov@cpp-01-ubuntu:~$
asemenov@cpp-01-ubuntu:~$ false|echo 123
123
asemenov@cpp-01-ubuntu:~$ true|echo 123
123
asemenov@cpp-01-ubuntu:~$
I want echo command to execute only on zero exit code of previous command, that is I want to achieve this behavior:
asemenov@cpp-01-ubuntu:~$ false|echo 123
asemenov@cpp-01-ubuntu:~$
Is it possible in bash?
Here is a more practical example:
asemenov@cpp-01-ubuntu:~$ find SomeNotExistingDir|xargs ls -1
find: `SomeNotExistingDir': No such file or directory
..
..
files list from my current directory
..
..
asemenov@cpp-01-ubuntu:~$
There is no reason to execute xargs ls -1 if find failed.
The components of a pipeline are always run unconditionally and logically in parallel; you cannot arrange for the second (or later) process in the pipeline to run only if the first (or earlier) process completes successfully.
In the specific case you show with find, you have at least two options:
find SomeNotExistingDir ... -exec ls -1 {} +
Or you can use a very useful feature of GNU xargs (not present in POSIX):
find SomeNotExistingDir ... | xargs -r ls -1
The -r option is equivalent to the --no-run-if-empty option, which explains fairly precisely what it does. If you're using GNU find and GNU xargs, you should use the extensions -print0 and -0:
find SomeNotExistingDir ... -print0 | xargs -r -0 ls -1
This handles every character that can appear in a file name correctly.
In terms of command flow, the easiest way to do what you want would be to use the logical OR operator, like this:
[pierrep@DEVELOPMENT8 ~]: false || echo 123
123
[pierrep@DEVELOPMENT8 ~]: true || echo 123
[pierrep@DEVELOPMENT8 ~]:
This works since the || operator is evaluated lazily, meaning that the right-hand command is only run when the left-hand command fails, i.e. returns a non-zero exit status.
Note: commands return exit status 0 when they succeed and something other than 0 when they do not. In your example with find:
[pierrep@DEVELOPMENT8 ~]: find somedir || echo 123
find: `somedir': No such file or directory
123
[pierrep@DEVELOPMENT8 ~]: find .profile || echo 123
.profile
Using || won't redirect any kind of output from the command on the left of the ||.
If you want to run some command only when another one succeeds, you should just do a basic exit-code check and temporarily store the output of the first command in a variable in order to feed it to the next command, like so:
result=( $(find SomeNotExistingDir) )
exit_code=$?
if [ $exit_code -eq 0 ]; then
    for path in "${result[@]}"; do
        # do some stuff with the find results here...
        echo "$path"
    done
fi
What this does: when find runs, it puts its results into the result array. $? holds the exit code of the last command run, so here that is the find command. If find found SomeNotExistingDir, then loop through its results (since it might have found multiple instances) and do stuff with those paths; else do nothing. The else case would be triggered when an error occurred during the execution of find or when the file/dir could not be found.
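One caveat with the array assignment above: the unquoted $(find ...) splits on all whitespace, so file names containing spaces break into separate elements. On bash 4+, a hedged alternative splits on newlines only:
mapfile -t result < <(find SomeNotExistingDir)
Note that $? then reflects mapfile rather than find, so test ${#result[@]} instead of the exit code.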
You can't do that with pipes, because pipe creation does not wait for completion; otherwise how could cat | tr 'a-z' 'A-Z' work? Simulating pipes with tests and temp files:
file1=$(mktemp)
file2=$(mktemp)
false > "$file1" && (echo 123 > "$file2") < "$file1" && (prog3 > "$file1") < "$file2" # && ...
rm "$file1" "$file2"
The point is that when the first command fails, there is no output for the second command and no reason to execute it; the result of the default behavior can be unexpected.
If there is no output on stdout when the exit code is non-zero, then that fact itself can be used for piping the data; there's no need to check the exit code (except for the optimization part, of course).
E.g., ignoring the optimization part and considering only correctness,
find SomeNotExistingDir|xargs ls -1
Can be changed to
find SomeNotExistingDir| while read x; do ls -1 "$x"; done
The while loop itself still runs, but the commands inside it will not be executed. The downside of this approach is that some information (like line numbers) will be lost for commands like awk/sed/head used in place of ls. Plus, ls will be executed N times instead of once, as with the xargs approach.
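A slightly safer form of that loop, preserving leading whitespace and backslashes in file names:
find SomeNotExistingDir | while IFS= read -r x; do ls -1 "$x"; done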
For the particular example you give, it's sufficient to simply check whether there is any data on the pipe. The problem you experience is that xargs is getting no input, so it invokes ls with no arguments, and ls by default prints the contents of the current directory. anishsane's solution is almost sufficient, but it is not quite the same, since it will invoke ls for each line of output, which is not at all what xargs does. However, you can do:
find /bad/path | xargs sh -c 'test $# = 0 || exec ls -1 "$#"'
Now, this pipeline will always succeed, and perhaps that is not desirable (though this is the same behavior you get with just find /bad/path | xargs ls -1). To ensure that the pipeline fails, you can do:
find /bad/path | xargs sh -c 'test $# = 0 && exit 1; exec ls -1 "$#"'
There are some concerns, however. xargs will quite happily invoke its command with many arguments (that is the point of it!), but some shells will handle a much smaller number of arguments than xargs, so it is quite possible that the shell will truncate the arguments. However, that is possibly an academic concern.

Bash Script Nested loop error

Code:
#! /bin/bash
while [ 1 -eq 1 ]
do
while [ $(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) != $(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) ]
$(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) > /tmp/cmdb;obexftp -b $1 -B 6 -p /tmp/cmdb
done
done
This code give me this error:
btcmdserver: 6: Syntax error: "done" unexpected (expecting "do")
Your second while loop is missing a do keyword.
Looks like you didn't close your while condition ( the [ has no matching ]), and that your loop has no body.
You cannot compare whole files like that. Anyway, you seem to be comparing a file to itself.
#!/bin/bash
while true
do
    newest=~/Gene1/$(ls -t1 ~/Gene1 | head -n 1)
    while ! cmp "$newest" "$newest"   # huh? you are comparing a file to itself
    do
        # huh? do you mean this:
        cat "$newest" > /tmp/cmdb
        obexftp -b $1 -B 6 -p /tmp/cmdb
    done
done
This has the most troubling syntax errors and antipatterns fixed, but is virtually guaranteed to not do anything useful. Hope it's still enough to get you a little bit closer to your goal. (Stating it in the question might help, too.)
Edit: If you are attempting to copy the newest file every time a new file appears in the directory you are watching, try this. There's still a race condition; if multiple new files appear while you are copying, you will miss all but one of them.
#!/bin/sh
genedir=$HOME/Gene1
previous=randomvalue_wehavenobananas
while true; do
    newest=$(ls -t1 "$genedir" | head -n 1)
    case $newest in
        $previous) ;;   # perhaps you should add a sleep here
        *) obexftp -b $1 -B 6 -p "$genedir"/"$newest"
           previous="$newest" ;;
    esac
done
(I changed the shebang to /bin/sh mainly to show that this no longer contains any bashisms. The main change was to use $HOME instead of ~.)
A more robust approach would be to find all the files which have appeared since the previous time you copied, and copy them over. Then you could run this a little less aggressively (say, once per 5 minutes maybe, instead of the spin lock you have here, with no sleep at all between iterations). You could use a sentinel file in the watch directory to keep track of when you last copied some files, or just run a for loop over the ls -t1 output until you see a file you have seen before. (Note the comment about the lack of robustness with parsing ls output, though.)
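A hedged sketch of that sentinel-file idea (the sentinel name and the 5-minute sleep are illustrative assumptions, not from the original answer):
#!/bin/sh
genedir=$HOME/Gene1
sentinel=$genedir/.last_copied
[ -f "$sentinel" ] || touch "$sentinel"
while true; do
    # copy files modified since the sentinel was last touched
    find "$genedir" -type f -newer "$sentinel" ! -name .last_copied \
        -exec obexftp -b "$1" -B 6 -p {} \;
    touch "$sentinel"   # race: files appearing between the find and this touch may be missed
    sleep 300
done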
