I'm trying to run the following code on files that I choose, with the paths stored in variables in a sourced settings file. This is for the user's convenience when I export the script. This is the code before trying to add a source file:
for file in ~/Twitter/Users/New/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > ~/Twitter/Users/TBA/"${file##*/}"
done
So I tried adding a source file like so:
#!/bin/bash
source ~/MYBASHSCRIPTS/Tests/scriptsettings.in
for file in $loctorem/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > $locdone
done
echo $loctorem
echo $locdone
with the scriptsettings.in configured as such:
loctorem="~/Twitter/Users/New"
locdone='~/Twitter/Users/TBA/"${file##*/}"'
I have tried both half-old/half-new code, but neither works. Does it really need to be hard-coded in order to run? This will throw my whole "noob friendly" idea in the trash if so...
EDIT--- I only echo it at the end so that I can verify that it is calling the correct locations.
EDIT2--- Here is the exact script I originally ran.
#!/bin/bash
for file in ~/Anon/Twitter/OpISIS/New/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > ~/Anon/Twitter/OpISIS/TBA/"${file##*/}"
done
And the new variant:
source ~/MYBASHSCRIPTS/Tests/scriptsettings.in
for file in $loctorem/*; do
[ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > "$(locdone_path "$file")"
done
with the source file being:
loctorem=/home/matrix/Anon/Twitter/OpISIS/New
locdone_path() { printf '%s\n' ~/Twitter/Users/TBA/"${1##*/}"; }
as I said before, I'm still pretty new, so sorry if I'm doing something insanely stupid here...
I'm trying to make the input and output folders/files configurable by the user. In the end this script will be ~80 lines, and I want anyone to be able to run it instead of forcing everyone to have directories/files set up like mine. Then I'll have a setup script that creates the file with the variables stored in it, so there is a one-time setup; the user can also change locations later, without having to go through the entire script and change everything just to fit their system.
You've got two problems here. The first is the quotes, which prevent tilde expansion:
# this stores a path with a literal ~ character
loctorem='~/Twitter/Users/New'
# this works
loctorem=~/Twitter/Users/New
# this works too
loctorem="$HOME/Twitter/Users/New"
The second issue is that you're depending on $file before it's available. If you want to store code (for instance, an algorithm for how to calculate something) in your configuration, define a function:
# this can be put in your sourced file
locdone_path() { printf '%s\n' ~/Twitter/Users/TBA/"${1##*/}"; }
...and, later, to use that code, invoke the function:
... | head -n -9 >"$(locdone_path "$file")"
However, if you only want to make the directory customizable, you might do something much simpler:
loctorem=~/Twitter/Users/New
locdone=~/Twitter/Users/TBA
and:
... | head -n -9 >"$locdone/${file##*/}"
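Putting the pieces together, here is a minimal self-contained sketch of the whole flow, using throwaway local directories and a demo settings file in place of the real paths (head -n -9 needs GNU coreutils):

```shell
# demo settings file standing in for scriptsettings.in
mkdir -p New TBA
cat > scriptsettings.in <<'EOF'
loctorem=$PWD/New
locdone=$PWD/TBA
EOF

seq 1 20 > New/sample.txt            # sample input: lines 1..20

source ./scriptsettings.in
for file in "$loctorem"/*; do
    # drop the first 7 lines and the last 9; keep the same file name
    [ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > "$locdone/${file##*/}"
done

cat TBA/sample.txt                   # lines 8..11 remain
```

In the real script the settings file would be sourced from ~/MYBASHSCRIPTS/Tests/scriptsettings.in, written without quotes around the tilde (or using $HOME) so the path expands.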
I'm trying to write a shell script that deletes duplicate commands from my zsh_history file. Having no real shell script experience, and given my C background, I wrote this monstrosity that seems to work (only on Mac, though) but takes a couple of lifetimes to end:
#!/bin/sh
history=./.zsh_history
currentLines=$(grep -c '^' $history)
wordToBeSearched=""
currentWord=""
contrastor=0
searchdex=""
echo "Currently handling a grand total of: $currentLines lines. Please stand by..."
while (( $currentLines - $contrastor > 0 ))
do
searchdex=1
wordToBeSearched=$(awk "NR==$currentLines - $contrastor" $history | cut -d ";" -f 2)
echo "$wordToBeSearched A BUSCAR"
while (( $currentLines - $contrastor - $searchdex > 0 ))
do
currentWord=$(awk "NR==$currentLines - $contrastor - $searchdex" $history | cut -d ";" -f 2)
echo $currentWord
if test "$currentWord" == "$wordToBeSearched"
then
sed -i .bak "$((currentLines - $contrastor - $searchdex)) d" $history
currentLines=$(grep -c '^' $history)
echo "Line deleted. New number of lines: $currentLines"
let "searchdex--"
fi
let "searchdex++"
done
let "contrastor++"
done
^THIS IS HORRIBLE CODE NOONE SHOULD USE^
I'm now looking for a less life-consuming approach using more shell-like conventions, mainly sed at this point. Thing is, zsh_history stores commands in a very specific way:
: 1652789298:0;man sed
Where the command itself is always preceded by ":0;".
I'd like to find a way to delete duplicate commands while keeping the last occurrence of each command intact and in order.
Currently I'm at a point where I have a functional line that will delete strange lines that find their way into the file (newlines and such):
#sed -i '/^:/!d' $history
But that's about it. I'm not really sure how to get the expression to look for into a sed without falling back into everlasting whiles, or how to delete the duplicates while keeping the last-occurring command.
The zsh option hist_ignore_all_dups should do what you want. Just add setopt hist_ignore_all_dups to your zshrc.
I wanted something similar, but I don't care about preserving the last occurrence as you mentioned. This just finds duplicates and removes them.
I used this command, then removed my .zsh_history and replaced it with the .zhistory that this command outputs.
So from your home folder:
cat -n .zsh_history | sort -t ';' -uk2 | sort -nk1 | cut -f2- > .zhistory
This will give you the file .zhistory containing the deduplicated list; in my case it went from 9000 lines to 3000. You can check it with wc -l .zhistory to count the number of lines it has.
Please double check and make a backup of your zsh history before doing anything with it.
The sort command might be modifiable to sort by numerical value and somehow achieve what you want, but you will have to investigate that further.
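If you do want to keep the last occurrence of each command, one possible variant reverses the file first with GNU tac, so that sort -u (which, in GNU sort, keeps the first of a run of equal keys) ends up retaining the last occurrence. A hedged sketch with a hypothetical three-line history:

```shell
# hypothetical sample standing in for ~/.zsh_history
printf '%s\n' ': 1652789298:0;ls' ': 1652789300:0;man sed' ': 1652789305:0;ls' > hist.txt

# reverse, dedupe on the command field (everything after the first ';'),
# restore the original order, then drop the helper numbers added by cat -n
tac hist.txt | cat -n | sort -t ';' -uk2 | sort -nk1 | cut -f2- | tac > hist.dedup

cat hist.dedup   # only the later 'ls' survives
```

As above, back up your history before trying this on the real file.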
I found the script here, along with some commands to avoid saving duplicates in the future
I didn't want to rename the history file.
# dedupe_lines.zsh
if [ $# -eq 0 ]; then
echo "Error: No file specified" >&2
exit 1
fi
if [ ! -f "$1" ]; then
echo "Error: File not found" >&2
exit 1
fi
sort "$1" | uniq >temp.txt
mv temp.txt "$1"
Add dedupe_lines.zsh to your home directory, then make it executable.
chmod +x dedupe_lines.zsh
Run it.
./dedupe_lines.zsh .zsh_history
I need to use this command
/usr/local/bin/mcl find -f .bz2
which returns me this
:???????? '/Cloud Drive/test1.bz2'
:???????? '/Cloud Drive/test2.bz2'
in a Bash script. The problem is that I need the last parameter (.bz2) to be a variable.
I've tried with this
FILENAME=".bz2"
UPLOADED=$(/usr/local/bin/mcl find -f $FILENAME)
# do something with $UPLOADED
But obviously it is not working. After some research on Stack Overflow and on the web I have found several ways to do something like that (even using backticks), but I still can't manage to make it work.
What is the correct way to do that?
You mean like this?
uploaded=$(mcl find -f "$FILENAME" | cut -d"'" -f2)
while IFS= read -r u; do
    echo "$u"
    # process "$u"
done <<< "$uploaded"
(The paths contain spaces, so read them line by line rather than word-splitting $uploaded.)
You can try saving the following as e.g. ./script.sh
filename="${1:-.bz2}" #<-- your variable as 1st argument, defaults to .bz2
do_my_work() {
local uploaded="$1"
#do whatever you want with the "uploaded"
printf "got:==%s==\n" "$uploaded"
}
while IFS= read -r __mclpath
do
do_my_work "$__mclpath"
done < <(mcl find -f "$filename" | sed "s/.*'\(.*\)'.*/\1/")
# variable----^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^- keep only stuff inside of quotes
and use it as
./script.sh .bz2 #or anything, defaults to ".bz2"
and will print
got:==/Cloud Drive/test1.bz2==
got:==/Cloud Drive/test2.bz2==
I think you want this:
UPLOADED=$(/usr/local/bin/mcl find -f "$FILENAME")
Code:
#! /bin/bash
while [ 1 -eq 1 ]
do
while [ $(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) != $(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) ]
$(cat ~/Genel/$(ls -t1 ~/Genel | head -n1)) > /tmp/cmdb;obexftp -b $1 -B 6 -p /tmp/cmdb
done
done
This code give me this error:
btcmdserver: 6: Syntax error: "done" unexpected (expecting "do")
Your second while loop is missing a do keyword.
Looks like you didn't close your while condition ( the [ has no matching ]), and that your loop has no body.
You cannot compare whole files like that. Anyway, you seem to be comparing a file to itself.
#!/bin/bash
while true
do
newest=~/Gene1/$(ls -t1 ~/Gene1 | head -n 1)
while ! cmp "$newest" "$newest" # huh? you are comparing a file to itself
do
# huh? do you mean this:
cat "$newest" > /tmp/cmdb
obexftp -b $1 -B 6 -p /tmp/cmdb
done
done
This has the most troubling syntax errors and antipatterns fixed, but is virtually guaranteed to not do anything useful. Hope it's still enough to get you a little bit closer to your goal. (Stating it in the question might help, too.)
Edit: If you are attempting to copy the newest file every time a new file appears in the directory you are watching, try this. There's still a race condition; if multiple new files appear while you are copying, you will miss all but one of them.
#!/bin/sh
genedir=$HOME/Gene1
previous=randomvalue_wehavenobananas
while true; do
newest=$(ls -t1 "$genedir" | head -n 1)
case $newest in
$previous) ;; # perhaps you should add a sleep here
*) obexftp -b $1 -B 6 -p "$genedir"/"$newest"
previous="$newest" ;;
esac
done
(I changed the shebang to /bin/sh mainly to show that this no longer contains any bashisms. The main change was to use ${HOME} instead of ~.)
A more robust approach would be to find all the files which have appeared since the previous time you copied, and copy them over. Then you could run this a little less aggressively (say, once per 5 minutes maybe, instead of the spin lock you have here, with no sleep at all between iterations). You could use a sentinel file in the watch directory to keep track of when you last copied some files, or just run a for loop over the ls -t1 output until you see a file you have seen before. (Note the comment about the lack of robustness with parsing ls output, though.)
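A hedged sketch of that more robust approach, using a sentinel file and find -newer. The obexftp call is replaced by a printf placeholder, and in real use poll_once would run inside a while true; do ...; sleep 300; done loop:

```shell
#!/bin/sh
# new_since: print regular files under $1 modified after the sentinel file $2
new_since() {
    find "$1" -type f -newer "$2" ! -path "$2"
}

# one polling pass: report anything new, then update the sentinel
poll_once() {
    new_since "$1" "$2" | while IFS= read -r f; do
        printf 'would send: %s\n' "$f"   # e.g. obexftp -b "$btaddr" -B 6 -p "$f"
    done
    touch "$2"
}

# demo with a throwaway directory
mkdir -p watchdir
touch -t 202001010000 watchdir/.sentinel   # sentinel far in the past
touch watchdir/sample.bin
poll_once watchdir watchdir/.sentinel      # prints: would send: watchdir/sample.bin
```

This still misses files created during the copy itself if their timestamps land before the touch, but it no longer loses multiple simultaneous arrivals the way the newest-file check does.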
Sometimes I need to rename some amount of files, such as add a prefix or remove something.
At first I wrote a python script. It works well, and I want a shell version. Therefore I wrote something like that:
$1 - which directory to list,
$2 - what pattern will be replacement,
$3 - replacement.
echo "usage: dir pattern replacement"
for fname in `ls $1`
do
newName=$(echo $fname | sed "s/^$2/$3/")
echo 'mv' "$1/$fname" "$1/$newName&&"
mv "$1/$fname" "$1/$newName"
done
It works, but very slowly, probably because it needs to create a process (here sed and mv), destroy it, and create the same process again just to run it with a different argument. Is that true? If so, how do I avoid it and get a faster version?
I thought of computing all the new names at once (using sed to process them in one pass), but it still needs mv in the loop.
Please tell me how you guys do it. Thanks. If you find my question hard to understand, please be patient; my English is not very good, sorry.
--- update ---
I am sorry for my description. My core question is: "If we use some command in a loop, will that lower performance?" Because in for i in {1..100000}; do ls 1>/dev/null; done, creating and destroying a process takes most of the time. So what I want to know is: "Is there any way to reduce that cost?"
Thanks to kev and S.R.I for giving me a rename solution to rename files.
Every time you call an external binary (ls, sed, mv), bash has to fork itself to exec the command and that takes a big performance hit.
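You can see the size of that hit with a rough, machine-dependent benchmark: /bin/echo forks a process per iteration, while the builtin echo runs in-process (GNU date is assumed for nanosecond timestamps):

```shell
# time N external forks against N builtin calls
t0=$(date +%s%N)
for i in $(seq 1 500); do /bin/echo hi; done >/dev/null
t1=$(date +%s%N)
for i in $(seq 1 500); do echo hi; done >/dev/null
t2=$(date +%s%N)
echo "external: $(( (t1 - t0) / 1000000 )) ms, builtin: $(( (t2 - t1) / 1000000 )) ms"
```

The external loop is typically slower by one to two orders of magnitude.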
You can do everything you want to do in pure bash 4.x, and only need to call mv:
pat_rename(){
if [[ ! -d "$1" ]]; then
echo "Error: '$1' is not a valid directory"
return
fi
shopt -s globstar
cd "$1" || return
for file in **; do
echo "mv $file ${file//$2/$3}"
done
}
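For instance, with hypothetical files (the function deliberately only echoes the mv commands so you can inspect them first; drop the echo to actually rename). The function is repeated here so the demo is self-contained:

```shell
#!/bin/bash
pat_rename(){
    if [[ ! -d "$1" ]]; then
        echo "Error: '$1' is not a valid directory"
        return
    fi
    shopt -s globstar
    cd "$1" || return
    for file in **; do
        echo "mv $file ${file//$2/$3}"
    done
}

mkdir -p renamedemo
touch renamedemo/old_a.txt renamedemo/old_b.txt
(pat_rename renamedemo old new)   # subshell keeps our cwd (the function cd's away)
```

This prints "mv old_a.txt new_a.txt" and "mv old_b.txt new_b.txt" without touching anything.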
Simplest first. What's wrong with rename?
mkdir tstbin
for i in `seq 1 20`
do
touch tstbin/filename$i.txt
done
rename .txt .html tstbin/*.txt
Or are you using an older *nix machine?
To avoid re-executing sed on each file, you could instead set up two name streams, one original and one transformed, then sip from the ends:
exec 3< <(ls)
exec 4< <(ls | sed 's/from/to/')
while IFS= read -u3 -r orig && IFS= read -u4 -r to; do
mv "${orig}" "${to}"
done
I think you can store all of the file names in a file or string, and use awk and sed to do it once instead of one by one.
I'm using this script to monitor the downloads folder for new .bin files being created. However, it doesn't seem to be working. If I remove the grep, I can make it copy any file created in the Downloads folder, but with the grep it's not working. I suspect the problem is how I'm trying to compare the two values, but I'm really not sure what to do.
#!/bin/sh
downloadDir="$HOME/Downloads/"
mbedDir="/media/mbed"
inotifywait -m --format %f -e create $downloadDir -q | \
while read line; do
if [ $(ls $downloadDir -a1 | grep '[^.].*bin' | head -1) == $line ]; then
cp "$downloadDir/$line" "$mbedDir/$line"
fi
done
The ls $downloadDir -a1 | grep '[^.].*bin' | head -1 is the wrong way to go about this. To see why, suppose you had files named a.txt and b.bin in the download directory, and then c.bin was added. inotifywait would print c.bin, ls would print a.txt\nb.bin\nc.bin (with actual newlines, not \n), grep would thin that to b.bin\nc.bin, head would remove all but the first line leaving b.bin, which would not match c.bin. You need to be checking $line to see if it ends in .bin, not scanning a directory listing. I'll give you three ways to do this:
First option, use grep to check $line, not the listing:
if echo "$line" | grep -q '[.]bin$'; then
Note that I'm using the -q option to suppress grep's output, and instead simply letting the if command check its exit status (success if it found a match, failure if not). Also, the RE is anchored to the end of the line, and the period is in brackets so it'll only match an actual period (normally, . in a regular expression matches any single character). \.bin$ would also work here.
Second option, use the shell's ability to edit variable contents to see if $line ends in .bin:
if [ "${line%.bin}" != "$line" ]; then
The "${line%.bin}" part gives the value of $line with .bin trimmed from the end if it's there. If that's not the same as $line itself, then $line must've ended with .bin.
Third option, use bash's [[ ]] expression to do pattern matching directly:
if [[ "$line" == *.bin ]]; then
This is (IMHO) the simplest and clearest of the bunch, but it only works in bash (i.e. you must start the script with #!/bin/bash).
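All three checks agree; a quick bash comparison with a hypothetical filename:

```shell
#!/bin/bash
line="firmware.bin"

echo "$line" | grep -q '[.]bin$' && echo "grep: match"      # option 1
[ "${line%.bin}" != "$line" ]    && echo "trim: match"      # option 2
[[ "$line" == *.bin ]]           && echo "pattern: match"   # option 3
```

All three lines print their match message; swap in a non-.bin name and none of them fire.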
Other notes: to avoid some possible issues with whitespace and backslashes in filenames, use while IFS= read -r line; do and follow #shellter's recommendation about double-quotes religiously.
Also, I'm not very familiar with inotifywait, but AIUI its -e create option will notify you when the file is created, not when its contents are fully written out. Depending on the timing, you may wind up copying partially-written files.
Finally, you don't have any checking for duplicate filenames. What should happen if you download a file named foo.bin, it gets copied, you delete the original, and then download a different file named foo.bin? As the script is now, it'll silently overwrite the first foo.bin. If this isn't what you want, you should add something like:
if [ ! -e "$mbedDir/$line" ]; then
cp "$downloadDir/$line" "$mbedDir/$line"
elif ! cmp -s "$downloadDir/$line" "$mbedDir/$line"; then
echo "Eeek, a duplicate filename!" >&2
# or possibly something more constructive than that...
fi