How to retrieve filename from ls output - bash

I'm trying to automate the selection of a USB interface on macOS for avrdude. I want to select the first output of ls /dev/tty.usb*.
However, I can't seem to get cut to work, and other solutions to similar problems are unfortunately too complex for me to abstract to my problem. It would seem something like awk or sed is the correct approach, but I am not familiar with either of these.
For example, I want to get /dev/tty.usbmodem002021332 from running
$ ls /dev/tty.usb*
/dev/tty.usbmodem002021332 /dev/tty.usbmodem002021334 /dev/tty.usbserial-DAYO5CGB

Your struggles with the output of ls are an example of why it's never a good idea to rely on ls within a shell script. To fetch the list of files that match the pattern /dev/tty.usb*, an alternative is to assign the result of a glob expression to an array:
#!/bin/bash
shopt -s nullglob
list=(/dev/tty.usb*)
...and then fetch its first element:
echo "${list[0]}"
For more on why ls is problematic, see https://mywiki.wooledge.org/ParsingLs
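Putting the answer's pieces together, a minimal self-contained sketch (it builds fake device names in a temporary directory, since the real /dev/tty.usb* paths won't exist on every machine) could look like this:

```shell
#!/bin/bash
# Sketch: pick the first match of a glob safely, with a fallback when
# nothing matches. The tty.usb* fixture files are purely illustrative.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
touch "$tmp/tty.usbmodem002021332" "$tmp/tty.usbmodem002021334"

shopt -s nullglob            # an unmatched glob expands to nothing, not itself
list=("$tmp"/tty.usb*)

if ((${#list[@]} == 0)); then
    echo "no USB device found" >&2
    exit 1
fi
echo "first match: ${list[0]}"
```

With nullglob set, an empty match yields an empty array rather than the literal pattern, so the length check is reliable.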

Related

concatenating an element of array in a for loop

I have a script in which I am attempting to match strings in filenames on either side of a word. The keywords are meant for pattern matching with the wildcard character, as in:
ls *spin*.txt
This will of course match any of the following filenames:
one_spin.txt
4_5342spin-yyy.txt
fifty_spinout.txt
...etc.
What I would like to do is use the word 'spin' and a number of other words as match cases in an array that I can pass through a loop. I'd like these matches to be case-insensitive. I attempt this like so:
types=(spin wheel rotation)
for file in $types; do
ls '*'${file}'*.txt'
done
EDIT: Because I'm looking for a solution that is malleable, I'd also like to be able to do something like:
types=(spin wheel rotation)
for file in $types; do
find . -iname "*$file*.txt"
done
I'm not sure how bash interprets either of these except seeing that it does not list the desired files. Can someone clarify what is happening in my script, and offer a solution that meets the aforementioned criteria?
Your attempt will work with a few more tweaks. As you are assigning types as an array, you need to access it as an array.
Would you please try:
types=(spin wheel rotation)
for file in "${types[@]}"; do
ls *"${file}"*.txt
done
If your bash supports shopt builtin, you can also say:
shopt -s extglob
ls *@(spin|wheel|rotation)*.txt
If you want to make it match in a case-insensitive way, please try:
shopt -s extglob nocaseglob
ls *@(spin|wheel|rotation)*.txt
which will match one_Spin.txt, fifty_SPINOUT.TXT, etc.
Hope this helps.
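Since the edited question also asks for a find-based version, here is a hedged sketch of the same case-insensitive matching with find -iname, iterating over the array correctly. The fixture filenames are invented for illustration:

```shell
#!/bin/bash
# Sketch: case-insensitive matching with find -iname, looping over the
# array with "${types[@]}" (plain $types would only yield the first element).
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
touch "$tmp/one_Spin.txt" "$tmp/fifty_SPINOUT.TXT" "$tmp/axle.txt"

types=(spin wheel rotation)
for word in "${types[@]}"; do
    find "$tmp" -iname "*${word}*.txt" -print
done
```

Note that -iname applies case-insensitivity to the whole pattern, so fifty_SPINOUT.TXT matches "*spin*.txt" as well.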
Don't make it complicated; try this instead:
ls *{spin,wheel,rotation}*.txt
This is also helpful for creating files:
touch 1_{spin,wheel,rotation,ads,sad,zxc}_2.txt
Or dirs
mkdir -p {test,test2/log,test3}

Using AWK to change a variable located in a script

This is the bash script.
Counter.sh:
#!/bin/bash
rm -rf home/pi/temp.mp3
cd /home/pi/
now=$(date +"%d-%b-%Y")
count="countshift1.sh"
mkdir $(date '+%d-%b-%Y')
On row 5 of this script is the count variable. I just want to know how to use AWK to change the integer 1 (the 18th character, thanks for the response) into a 3 and then save the Counter.sh file.
This is basically http://mywiki.wooledge.org/BashFAQ/050 -- assuming your script actually does something with $count somewhere further down, you should probably refactor that to avoid this antipattern. See the linked FAQ for much more on this topic.
Having said that, it's not hard to do what you are asking here without making changes to live code. Consider something like
awk 'END { print 5 }' /dev/null > file
in a cron job or similar (using Awk just because your question asks for it, not because it's the best tool for this job) and then, in your main script, using that file:
read index <file
count="countshift$index.sh"
While this superficially removes the requirement to change the script on the fly (which is a big win) you still have another pesky problem (code in a variable!), and you should probably find a better way to solve it.
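As a self-contained sketch of that two-part idea (the state-file path and names are invented for illustration):

```shell
#!/bin/bash
# Sketch: one job writes the current index to a small state file, and the
# main script reads it back, instead of the script rewriting itself.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# e.g. run from cron; Awk used only because the question asks for it
awk 'END { print 3 }' /dev/null > "$tmp/index"

# main script: read the index and build the name from it
read -r index < "$tmp/index"
count="countshift$index.sh"
echo "$count"
```

This keeps the live script untouched; only the one-line state file ever changes.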
I don't think awk is the ideal tool for that. There are many ways to do it.
I would use Perl.
perl -pi -e 's/countshift1/countshift3/' Counter.sh

Shell script for searching a string pattern in file name

Hi, I want to write a script that will go to a directory with many files, take a filename such as test_HTTP_abc.txt, and search it for the string pattern HTTP; if it contains this string, then set a variable equal to something:
something like:
var1=0
search for 06
if it contains 06 then
var1=1
else
var1=0
end if
but as a Unix shell script. Thanks.
Probably the simplest thing is:
if test "${filename#*HTTP}" = "$filename"; then
# the variable does not contain the string `HTTP`
var=0
else
var=1
fi
Some shells allow regex matches in [[ comparisons, but it's not necessary to introduce that sort of non-portable code into your script.
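A small sketch of that parameter expansion in action, wrapped in a hypothetical helper function and applied to two sample filenames:

```shell
#!/bin/bash
# Sketch: ${var#pattern} strips the shortest prefix matching the pattern.
# If stripping "*HTTP" changes nothing, the string HTTP is absent.
set -eu
check() {
    local filename=$1
    if test "${filename#*HTTP}" = "$filename"; then
        echo 0      # does not contain HTTP
    else
        echo 1      # contains HTTP
    fi
}
check test_HTTP_abc.txt
check test_FTP_abc.txt
```

This stays within POSIX parameter expansion, so it works in plain sh as well as bash.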
Like this?
var=0
if fgrep -q 06 /path/to/dir/*HTTP*
then
var=1
fi
fgrep will return 0 ("truth") if there is a match in one of the files, and non-zero otherwise (including the case of no matching input files).
If you want a list of matching files, try fgrep -l.
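A runnable sketch of both ideas, using grep -F (the modern spelling of fgrep) against fixture files invented for the example:

```shell
#!/bin/bash
# Sketch: set a flag from grep's exit status, then list matching files.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
echo "code 06 here" > "$tmp/test_HTTP_abc.txt"
echo "nothing"      > "$tmp/test_HTTP_def.txt"

var=0
if grep -qF 06 "$tmp"/*HTTP*; then   # -q: quiet, exit status only
    var=1
fi
echo "var=$var"

grep -lF 06 "$tmp"/*HTTP*            # -l: print only names of matching files
```

grep -F searches for the fixed string 06 rather than treating it as a pattern, which is exactly what the question needs.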
Well, I'm not going to write the script for you, you have to learn :)
Its easy if you break it down into smaller tasks;
The ls command is for looking at a directorie's contents. You can also use the find command to be a bit more intuitive, like find /some/folder -name "*string*"
To sift through the output of a command. You could store the output of a command to a variable or at using pipes.
You can search this output with something like awk (link), grep (link) an so on.
Setting variables is easy also in bash; http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-5.html
foundit=1
Why don't you have a go at trying to solve this puzzle first rather than someone telling you :D Show us where you get stuck in the puzzle.

How do I wrap the results of a command in quotes to pass it to another command?

This is for the Apple platform. My end goal is to do a find and replace for a line inside of the firefox preference file "prefs.js" to turn off updates. I want to be able to do this for all accounts on the Mac, including the user template (didn't include that in the examples). So far I've been able to get a list of all the paths that have the prefs.js file with this:
find /Users -name prefs.js
I then put the old preference and new preference in variables:
oldPref='user_pref("app.update.enabled", false);'
newPref='user_pref("app.update.enabled", true);'
I then have a "for loop" with the sed command to replace the old preference with the new preference:
for prefs in `find /Users -name prefs.js`
do
sed "s/$oldPref/$newPref/g" "$prefs"
done
The problem I'm running into is that the "find" command returns the full paths with the stupid "Application Support" in the path name like this:
/Users/admin/Library/Application Support/Firefox/Profiles/437cwg3d.default/prefs.js
When the command runs, I get these errors:
sed: /Users/admin/Library/Application: No such file or directory
sed: Support/Firefox/Profiles/437cwg3d.default/prefs.js: No such file or directory
I'm assuming that I somehow need to get the "find" command to wrap the outputted path in quotes for the "sed" command to parse it correctly? Am I on the right path? I've tried to pipe the find command into sed to wrap quotes, but I can't get anything to work correctly. Please let me know if I should go about this differently. Thank you.
You don't want to for prefs in ... on a list of files that are output from find. For a more complete explanation of why this is bad, see Greg's wiki page about parsing ls. You would only use a for loop in bash if you could match the files using a glob, which is difficult if you want to do it recursively.
It would be better, if you can swing it, to use find ... -exec ... instead. Perhaps something like:
find /Users -name prefs.js -exec sed -i.bak -e "s/$oldPref/$newPref/" {} \;
The sed command line is executed once for each file found by find. The {} gets replaced with the filename. Sed's -i option lets you run it in-place, rather than requiring stdin/stdout. Check the man page for usage details.
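If you'd rather keep the loop in the shell, a null-delimited read handles the space in "Application Support" safely. This sketch builds a stand-in directory tree rather than touching real Firefox profiles:

```shell
#!/bin/bash
# Sketch: find -print0 plus read -d '' keeps paths with spaces intact,
# unlike word-splitting the output of a bare `find`.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
mkdir -p "$tmp/Application Support/Firefox"
printf '%s\n' 'user_pref("app.update.enabled", false);' \
    > "$tmp/Application Support/Firefox/prefs.js"

oldPref='user_pref("app.update.enabled", false);'
newPref='user_pref("app.update.enabled", true);'

while IFS= read -r -d '' prefs; do
    sed -i.bak "s/$oldPref/$newPref/g" "$prefs"
done < <(find "$tmp" -name prefs.js -print0)

grep 'app.update.enabled' "$tmp/Application Support/Firefox/prefs.js"
```

One caveat: the preference strings contain characters like "." and parentheses that sed treats loosely as regex; they happen to be harmless here, but a fixed-string tool is safer if your patterns grow.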
(Grain of salt: I'm basing this on my experience with linux)
I think it has less to do with sed and more to do with the way the for loop's word list is formed. When the results of find are split into words, the space between Application and Support is treated as a delimiter.
There are several ways to work around this, but the easiest is probably to change the IFS variable. The IFS variable is an internal variable that your command line interpreter uses to separate fields (more info). You can change the IFS variable of the environment before running the find command.
Modified example from here:
#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for f in `find /Users -name prefs.js`
do
echo "$f"
done
# restore $IFS
IFS=$SAVEIFS
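The same idea can be written with bash's $'\n' quoting, which is a bit clearer than the echo -en construction; adding set -f also stops the substituted paths from being glob-expanded. This sketch uses an invented temp tree with a space in a directory name:

```shell
#!/bin/bash
# Sketch: split command substitution on newlines only, with globbing off.
# Still less robust than find -print0, but closer to the original approach.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
mkdir -p "$tmp/Application Support"
touch "$tmp/Application Support/prefs.js"

SAVEIFS=$IFS
IFS=$'\n'
set -f                          # disable glob expansion of the results
for f in $(find "$tmp" -name prefs.js); do
    echo "found: $f"
done
set +f
IFS=$SAVEIFS
```

This breaks only if a filename itself contains a newline, which the null-delimited approach also handles.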

Using bash command line, how to add "import package.name.*;" to many java files?

I'm thinking of using find or grep to collect the files, and maybe sed to make the change, but what to do with the output? Or would it be better to use "argdo" in vim?
Note: this question is asking for command line solutions, not IDE's. Answers and comments suggesting IDE's will be calmly, politely and serenely flagged. :-)
I am a huge fan of the following:
export MYLIST=`find . -type f -name '*.java'`
for a in $MYLIST; do
mv "$a" "$a.orig"
echo "import.stuff" >> "$a"
cat "$a.orig" >> "$a"
chmod 755 "$a"
done;
mv is evil and eventually this will get you. But I use this same construct for a lot of things and it is my utility knife of choice.
Update: This method also backs up the files which you should do using any method. In addition it does not use anything but the shell's features. You don't have to jog your memory about tools you don't use often. It is simple enough to teach a monkey (and believe me I have) to do. And you are generally wise enough to just throw it away because it took four seconds to write.
you can use sed to insert a line before the first line of the file:
sed -i -e "1i import package.name.*;" YourClass.java
use a for loop to iterate through all your files and run this expression on them. but be careful if you have packages, because the import statements must be after the package declaration. you can use a more complex sed expression, if that's the case.
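For the package-declaration case the answer mentions, sed's "a" (append) command can place the import right after the package line instead of at line 1. A sketch against an invented fixture file (the one-line "a text" form shown here is a GNU sed extension; BSD sed wants the text on its own line after "a\"):

```shell
#!/bin/bash
# Sketch: insert the import after the package declaration, not at line 1.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
printf 'package com.example;\n\npublic class Foo {}\n' > "$tmp/Foo.java"

# GNU sed: append a line after every line starting with "package "
sed -i -e '/^package /a import package.name.*;' "$tmp/Foo.java"
cat "$tmp/Foo.java"
```

For files with no package declaration at all, you'd still fall back to the "1i" form.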
I'd suggest sed -i to obviate the need to worry about the output. Since you don't specify your platform, check your man pages; the semantics of sed -i vary from Linux to BSD.
I would use sed if there was a decent way to say "do this for the first line only", but I don't know of one off the top of my head. Why not use Perl instead? Something like:
find . -name '*.java' -exec perl -p -i.bak -e '
BEGIN {
print "import package.name.*;\n"
}' {} \;
should do the job. Check perlrun(1) for more details.
for i in *.java
do
sed -i '.old' '1 i\
Your include statement here.
' $i
done
Should do it. -i does an in-place replacement and .old saves the old file just in case something goes wrong. Replace the iterator *.java as necessary (maybe 'find . | grep java' or something instead.)
You may also use the ed command to do in-file search and replace:
# delete all lines matching foobar
ed -s test.txt <<< $'g/foobar/d\nw'
see: http://bash-hackers.org/wiki/doku.php?id=howto:edit-ed
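Applied to this question's import insertion, an ed sketch (fixture file invented for illustration) might look like this; ed edits in place, so there are no sed -i portability differences to worry about:

```shell
#!/bin/bash
# Sketch: insert a line before line 1 with ed, then write the file.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
printf 'public class Foo {}\n' > "$tmp/Foo.java"

# 1i = insert before line 1; "." ends the text; w writes the file
ed -s "$tmp/Foo.java" <<< $'1i\nimport package.name.*;\n.\nw'
cat "$tmp/Foo.java"
```

The -s flag suppresses ed's byte-count chatter, which keeps the script's output clean.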
I've actually starting to do it using "argdo" in vim. First of all, set the args:
:args **/*.java
The "**" traverses all the subdir, and the "args" sets them to be the arg list (as if you started vim with all those files in as arguments to vim, eg: vim package1/One.java package1/Two.java package2/One.java)
Then fiddle with whatever commands I need to make the transform I want, eg:
:/^package.*$/s/$/\rimport package.name.*;/
The "/^package.*$/" acts as an address for the ordinary "s///" substitution that follows it; the "/$/" matches the end of the package's line; the "\r" is to get a newline.
Now I can automate this over all files, with argdo. I hit ":", then uparrow to get the above line, then insert "argdo " so it becomes:
:argdo /^package.*$/s/$/\rimport package.name.*;/
This "argdo" applies that transform to each file in the argument list.
What is really nice about this solution is that it isn't dangerous: it hasn't actually changed the files yet, but I can look at them to confirm it did what I wanted. I can undo on specific files, or I can exit if I don't like what it's done (BTW: I've mapped ^n and ^p to :n and :N so I can scoot quickly through the files). Now, I commit them with ":wa" - "write all" files.
:wa
At this point, I can still undo specific files, or finesse them as needed.
This same approach can be used for other refactorings (e.g. change a method signature and calls to it, in many files).
BTW: This is clumsy: "s/$/\rtext/"... There must be a better way to append text from vim's commandline...
