I have never in my life made a shell script (although I have had to find and delete multiple troll ones my friends put on my school account when I wasn't looking).
I have some .o files under the working directory. I want a shell script that, given a plain (no path) .o file name, finds the matching file under the current directory and then runs the shell command
arm-none-eabi-objdump -D <found file>
So if I give it example.o, it will find dir1/dir2/example.o and then run
arm-none-eabi-objdump -D dir1/dir2/example.o
A shell script isn't especially needed for this, but I will attempt to cover all approaches. This assumes that the shell of choice is bash, but this may work for other shells as well. First, you need to consider whether you may have multiple object files with the same name, and what you may want to do if you do. If you want to dump only the first match, then this should work for you:
find ./ -name example.o -exec arm-none-eabi-objdump -D '{}' \; -quit
If, however, you want to dump all found matches, you can either remove the -quit (which will concatenate the output) or put the command in a loop:
find ./ -name example.o |
while IFS= read -r file; do
arm-none-eabi-objdump -D "$file" | less
done
If you wish to save yourself the typing (or the reverse search) and put this in a shell script, all you need to do is put the same text in a file, add #!/bin/bash as its first line, and then make the file executable via chmod a+rx my-script.sh. Then you can run the script by typing ./my-script.sh example.o (assuming you are in the same directory as the script). Note that unless you put the script somewhere in your PATH environment variable, you do need the ./ before the file name.
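For instance, one way to write my-script.sh, using the first one-liner with the literal example.o replaced by "$1" (the script's first argument):

#!/bin/bash
# Dump the first object file matching the name given as the first argument.
find ./ -name "$1" -exec arm-none-eabi-objdump -D '{}' \; -quit

After chmod a+rx my-script.sh, running ./my-script.sh example.o behaves exactly like the one-liner.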
First I made a question here: Unzip a file and then display it in the console in one step
It works and helped me a lot. (please read)
Now I have a second issue. I do not have a single zipped log file; instead, I have a lot of them in different folders, which I need to find first. The files all have the same name. For example:
/somedir/server1/log.gz
/somedir/server2/log.gz
/somedir/server3/log.gz
and so on...
What I need is a way to:
find all the files like: find /somedir/server* -type f -name log.gz
unzip the files like: gunzip -c log.gz
use grep on the content of the files
Important! The whole thing should be done in one step.
I cannot first store the extracted files in the filesystem because it is a readonly filesystem. I need somehow to connect, with pipes, the output from one command to the input of the next.
Before, the log files were in text format (.txt), so I did not have to unzip them first. In that case it was easy, e.g.:
find /somedir/server* -type f -name log.txt | xargs grep "term"
Now I have to deal with zipped files. That means that after I find the files, I first need to unzip them somehow and then send the contents to grep.
With one file I do:
gunzip -c /somedir/server1/log.gz | grep term
But for multiple files I don't know how to do it. For example, how do I pass the output of find to gunzip and then to grep?
Also, if there is another way / "best practice" to do that, it is welcome :)
find lets you invoke a command on the files it finds:
find /somedir/server* -type f -name log.gz -exec gunzip -c '{}' + | grep ...
From the man page:
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number
of matched files. The command line is built in much the same
way that xargs builds its command lines. Only one instance of
{} is allowed within the command, and (when find is being
invoked from a shell) it should be quoted (for example, '{}')
to protect it from interpretation by shells. The command is
executed in the starting directory. If any invocation with
the + form returns a non-zero value as exit status, then
find returns a non-zero exit status. If find encounters an
error, this can sometimes cause an immediate exit, so some
pending commands may not be run at all. This variant of -exec
always returns true.
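Putting it together with a concrete search term (reusing "term" from your .txt example):

find /somedir/server* -type f -name log.gz -exec gunzip -c '{}' + | grep "term"

As an alternative, if your gzip installation ships zgrep, it can search compressed files directly; the -H makes it print the name of the matching file as well:

find /somedir/server* -type f -name log.gz -exec zgrep -H "term" '{}' +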
I have some settings files which need to go to the appropriate locations. E.g. settings.location1, settings.location2, and settings.location3 need to go to the folders location1, location2, and location3 respectively, and be renamed to plain settings:
/some/path/settings.location1 -> /other/path/location1/settings
/some/path/settings.location2 -> /other/path/location2/settings
I came up with the following command:
find /some/path/settings.* -exec cp {} /other/path/$(echo {} | sed 's/.*\.//')/settings \;
But for some reason the sed does not get executed. And the result is
cp: cannot create regular file `/other/path//some/path/settings.location1/settings': No such file or directory
It seems that if I run them separately, all the commands execute fine, but not if I put them in -exec. What am I doing wrong?
To make this work you need to create the intermediate folders when they are not present in the file system. Also note that a $(...) substitution is expanded by your shell before find ever runs, and && would end the -exec, so the mkdir and cp have to be wrapped in a single shell invocation. You can try:
find /some/path/settings.* -exec bash -c 'dest=/other/path/${0##*.}; mkdir -p "$dest" && cp "$0" "$dest/settings"' {} \;
I solved this by writing a very simple shell script which modified its parameters and did the operation, then having find do an exec on that script, passing {} multiple times.
In terms of your action, I think it would be something like:
find /some/path/settings.* -exec /bin/bash -c 'cp "$0" \
/other/path/"${1#/some/path/settings.}"/settings' {} {} \;
N.B. I inserted a line break after "$0" for ease of reading, but this should all be on one line. And I added double quotes around paths in case there are spaces in your paths. You might not need that precaution.
The Bash man page says,
-c If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.
The contents of the -exec script are: the bash interpreter path, the -c option, a quoted argument, find's path {}, then find's path {} again.
The quoted argument contains the shell script. Bash assigns the next two arguments to $0 and $1, then runs the shell script. The script uses the parameter expansion ${1#/some/path/settings.} to strip the fixed prefix from the path find supplies, keeping only the part you need.
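For reference, here is a sketch of the standalone-helper-script variant mentioned at the top of this answer; the script name copy-settings.sh and the hard-coded paths are purely illustrative:

#!/bin/bash
# copy-settings.sh: copy one /some/path/settings.<location> file
# to /other/path/<location>/settings.
src="$1"
loc="${src##*.}"    # keep only the suffix after the last dot, e.g. location1
cp "$src" "/other/path/$loc/settings"

Then find only needs to pass each path once:

find /some/path/settings.* -exec ./copy-settings.sh {} \;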
I am flattening a directory of nested folders/picture files down to a single folder. I want to move all of the nested files up to the root level.
There are 3,381 files (no directories included in the count). I calculate this number using these two commands and subtracting the directory count (the second command):
find ./ | wc -l
find ./ -type d | wc -l
To flatten, I use this command:
find ./ -mindepth 2 -exec mv -i -v '{}' . \;
Problem is that when I get a count after running the flatten command, my count is off by 46. After going through the list of files before and after (I have a backup), I found that the mv command is overwriting files sometimes even though I'm using -i.
Here's details from the log for one of these files being overwritten...
.//Vacation/CIMG1075.JPG -> ./CIMG1075.JPG
..more log
..more log
..more log
.//dog pics/CIMG1075.JPG -> ./CIMG1075.JPG
So I can see that it is overwriting. I thought -i was supposed to stop this. I also tried -n and got the same count. Note that I do have about 150 duplicate filenames; I was going to rename those manually after I had flattened everything I could.
Is it a timing issue?
Is there a way to resolve?
NOTE: it does prompt me for some of the files that would be overwritten. On those prompts I just press Enter so as not to overwrite. In the case above, there is no prompt; it just overwrites.
As it turns out, the manual entry states it clearly:
The -n and -v options are non-standard and their use in scripts is not recommended.
In other words, you should mimic the -n option yourself. To do that, just check if the file exists and act accordingly. In a shell script where the file is supplied as the first argument, this could be done as follows:
[ -f "${1##*/}" ]
The path in the first argument contains leading directories, which can be stripped using ##*/. Now simply chain the mv with ||, since we want it to execute only when the file doesn't exist:
[ -f "${1##*/}" ] || mv "$1" .
Using this, you can edit your find command as follows:
find ./ -mindepth 2 -exec bash -c '[ -f "${0##*/}" ] || mv "$0" .' '{}' \;
Note that we now use $0 because of the bash -c usage. Its first argument is assigned to $0 rather than the script name, because there is no script file. This means the argument order is shifted with respect to a usual shell script.
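You can see the shifted order in isolation with a throwaway command (the words one and two are arbitrary):

bash -c 'echo "zeroth: $0, first: $1"' one two

This prints zeroth: one, first: two, i.e. the first argument after the command string lands in $0, not $1.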
Why not check if the file exists prior to the move? Then you can leave the file where it is, or you can rename it, or do something else...
test -f or [ ] should do the trick.
I am on a tablet and cannot easily include the source.
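To flesh that out, here is a sketch of the "rename it" option; the numeric-prefix naming scheme is just one possible choice, and -type f is added so only files are moved:

find ./ -mindepth 2 -type f -exec bash -c '
  base="${0##*/}"
  if [ -e "./$base" ]; then
    n=1
    # find a free numbered name like 1_CIMG1075.JPG, 2_CIMG1075.JPG, ...
    while [ -e "./${n}_$base" ]; do n=$((n+1)); done
    mv "$0" "./${n}_$base"
  else
    mv "$0" .
  fi' {} \;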
In bash I want to copy all .yml.sample files in a Git repository (recursively) and rename them to just have a .yml extension.
Eg. test.yml.sample would be copied to test.yml
Here’s as close as I’ve got, but I'm not clear on how to strip .sample off the end of the file name when I copy.
find . -depth -name "*.yml.sample" -exec sh -c 'cp "$1" "${1%/.sample/}"' _ {} \;
This should work:
find . -depth -name "*.yml.sample" -exec sh -c 'cp -p "$1" "${1%.yml.sample}.yml"' _ {} \;
The -name "*.yml.sample" test finds the files via find. Then, in the -exec part, the magic happens: cp receives each file that find found via $1, and the name of the copy is built via ${1%.yml.sample}.yml, where .yml.sample is the source extension being stripped off and .yml is the new destination extension.
Note I also added the -p option to preserve the attributes of the source file in the copy. You might not need that, but I think it can be helpful when doing copies like this.
And—since this shell logic can be confusing—in terms of the _ {} \;, it breaks down as this:
_ {}: As explained in this answer on the Unix/Linux Stack Exchange site, “The way this works is bash takes the parameters after -c as arguments, _ {} is needed so that the contents of {} is assigned to $1 not $0.”
\;: find needs a terminator so it knows where the -exec command ends, and that terminator must reach find as a literal ;. If you typed a bare ;, your own shell would interpret it as the end of the whole find command line, so you escape it as \; to make the shell pass it through untouched; find then runs the -exec command once per matched file. Read up on -exec command ; here.
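If the combination is still hard to follow, you can preview the whole substitution in isolation; the path here is made up:

sh -c 'echo "would copy $1 to ${1%.yml.sample}.yml"' _ ./config/test.yml.sample

This prints would copy ./config/test.yml.sample to ./config/test.yml, showing both the _ placeholder landing in $0 and the suffix swap on $1.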
I think you can use a tool like mmv, to mass rename all the files you need.
mmv \*.yml.sample \#1.yml
The above line should work... just make sure to test it first. Hope this helps!
Edit: If you want to copy and rename, all in one step, you can use the -c flag. That will preserve the original file, and will make a copy using the rename mask.
mmv -c \*.yml.sample \#1.yml
It's an interview question. The interviewer asked this "basic" shell script question once he understood that I don't have experience in shell scripting. Here is the question.
Copy files from one directory which has size greater than 500 K to another directory.
I could do it immediately in C, but it seems difficult in a shell script as I have never tried one. I am familiar with basic Unix commands, so I tried, but I was only able to extract the file names using the command below.
du -sk * | awk '{ if ($1>500) print $2 }'
Also, let me know a good book of shell script examples.
It can be done in several ways. I'd try and use find:
find "$FIRSTDIRECTORY" -size +500k -exec cp {} "$SECONDDIRECTORY" \;
To limit the search to the top-level directory, use the -maxdepth option.
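For example, a sketch restricted to the top level, with -type f added so directories themselves are not copied:

find "$FIRSTDIRECTORY" -maxdepth 1 -type f -size +500k -exec cp {} "$SECONDDIRECTORY" \;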
du recurses into subdirectories, which is probably not desired (you could have asked for clarification if that point was ambiguous). More likely you were expected to use ls -l or ls -s to get the sizes.
But what you did works to select some files and print their names, so let's build on it. You have a command that outputs a list of names. You need to put the output of that command into the command line of a cp. If your du|awk outputs this:
Makefile
foo.c
bar.h
you want to run this:
cp Makefile foo.c bar.h otherdirectory
So how you do that is with COMMAND SUBSTITUTION which is written as $(...) like this:
cd firstdirectory
cp $(du -sk * | awk '{ if ($1>500) print $2 }') otherdirectory
And that's a functioning script. The du|awk command runs first, and its output is used to build the cp command. There are a lot of subtle drawbacks that would make it unsuitable for general use, but that's how beginner-level shell scripts usually are.
find . -mindepth 1 -maxdepth 1 -type f -size +BYTESc -exec cp -t DESTDIR {} +
The c suffix on the size is essential; it makes the size count in bytes. Otherwise, you get probably-unexpected rounding behaviour in determining the result of the -size check. If the copying is meant to be recursive, drop -maxdepth 1, and you will also need to take care of creating any destination directories.
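For instance, a concrete instantiation, assuming "500 K" means 500 KiB (512000 bytes) and /path/to/destination is where the files should go (both assumptions):

find . -mindepth 1 -maxdepth 1 -type f -size +512000c -exec cp -t /path/to/destination {} +

Note that cp -t is a GNU extension; without it, put the destination last and terminate the -exec with \; instead of +.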