I found the following script from someone else's project.
Can someone please explain what the below script does?
for libtocopy in $LIBS_TO_COPY ; do
    libfile=`ldd bin/foo.so | grep lib${libtocopy} | cut -d' ' -f 3`
    if [ "x$libfile" != "x" ] ; then
        #echo "$libtocopy : copying $libfile in libs"
        cp $libfile foo/libs
    fi
done
In short, the script copies a selection of the dynamically resolved shared libraries that the bin/foo.so binary depends on into foo/libs.
The command driving the whole script is ldd. This resolves and prints the dynamic dependencies of an executable.
E.g. this is example output (from a Raspberry Pi that happened to be handy, but the format is the same on other platforms):
ldd /bin/grep
linux-vdso.so.1 (0x7ef36000)
/usr/lib/arm-linux-gnueabihf/libarmmem.so (0x76f8b000)
libpcre.so.3 => /lib/arm-linux-gnueabihf/libpcre.so.3 (0x76efe000)
libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0x76eeb000)
libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0x76dac000)
/lib/ld-linux-armhf.so.3 (0x76fa1000)
libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0x76d83000)
So for each of the names in the LIBS_TO_COPY variable (e.g. pthread, as on the last line above), it will find the matching line by prefixing the name with lib (e.g. libpthread).
Each of these matching lines is piped into cut, which will pick out the third field in the line (using space as the delimiter) - i.e. the resolved path to that library.
Then those resolved dependencies are copied to the selected directory.
For example, with the echo in the script uncommented, and bin/foo.so switched to /bin/grep:
$ export LIBS_TO_COPY='pthread c dl'
$ bash libextract.bash
pthread : copying /lib/arm-linux-gnueabihf/libpthread.so.0 in libs
c : copying /lib/arm-linux-gnueabihf/libc.so.6 in libs
dl : copying /lib/arm-linux-gnueabihf/libdl.so.2 in libs
(Both the bash and sh shells give the same output).
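For reference, an equivalent loop with the expansions quoted and awk doing the field extraction could look like the sketch below (along the same lines as the original, but not taken from it):
for libtocopy in $LIBS_TO_COPY ; do
    # third whitespace-separated field of the matching ldd line = the resolved path
    libfile=$(ldd bin/foo.so | grep "lib${libtocopy}" | awk '{print $3}')
    if [ -n "$libfile" ] ; then
        cp "$libfile" foo/libs
    fi
done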
My Goal
I'm writing a small Bash script which uses entr, a utility that re-runs arbitrary commands when it detects file-system events. My immediate goal is to pass entr a command which converts a given markdown file to HTML; entr will run this command every time the markdown file changes. A simplified but working script looks like:
# script 1
in="$1"
out="${in%.md}.html"
echo "$in" | entr pandoc "${in}" -o "${out}"
This works fine. The filename to be watched is supplied to entr on stdin. On detecting changes in that file, entr runs the command specified by its args. In this example that is pandoc, and all the args after it, to convert the markdown file to an HTML file.
For future reference, set -x shows that entr was invoked as we'd expect. (Throughout, lines starting with + show the output from set -x):
+ entr pandoc 'READ ME.md' -o 'READ ME.html'
The problem
I want to look up the command given to entr depending on the file type of the
given input file. So the file-conversion command ends up in a variable, and I want to use that variable as the command-line args to entr. But I can't get the quoting right.
Again, simplified:
# script 2
in="$1"
out="${in%.md}.html"
cmd="pandoc \"${in}\" -o \"${out}\""
echo "$in" | entr "$cmd"
(shellcheck.net detects no issues on the above)
This fails. Because "$cmd" in the final line is in quotes, the entirety of $cmd
is treated as a single arg to entr:
+ entr 'pandoc "READ ME.md" -o "READ ME.html"'
entr tries to interpret the whole thing as the name of an executable, which
it cannot find:
entr: exec pandoc "READ ME.md" -o "READ ME.html": No such file or directory
So how should I modify script 2, to use the content of $cmd as the args to
entr?
What have I tried?
Check that $cmd is being formed as I expect? If I echo "$cmd" right after
it is defined in script 2, it looks exactly how I'd hope:
pandoc "READ ME.md" -o "READ ME.html"
I tried messing around with alternate ways of constructing cmd, such as:
cmd='pandoc "'"${in}"'" -o "'"${out}"'"'
but variations like this produce identical values of $cmd, and identical
behavior to script 2.
Try not quoting the use of $cmd?
Since the final line of script 2 erroneously treats the whole of "$cmd"
as a single arg, and we want it to split up the words into separate args
instead, maybe removing the quotes and using a bare $cmd is a step in the
right direction?
echo "$in" | entr $cmd
Predictably enough though, this splits $cmd up on every space, even the
ones inside our double-quotes:
+ entr pandoc '"READ' 'ME.md"' -o '"READ' 'ME.html"'
This makes Pandoc try, and fail, to open a file called "READ:
pandoc: "READ: openBinaryFile: does not exist (No such file or directory)
Try constructing $cmd using printf?
I notice printf -v can store output in a variable. How about using that
instead of assigning to cmd?
printf -v cmd 'pandoc "%s" -o "%s"' "$in" "$out"
Predictably enough, this produces the same results as script 2. I tried some
speculative variations, such as %q in the format string, or using $in
and $out directly in the format string, but didn't stumble on anything
that seemed to help.
Try using the ${var@Q} form of parameter expansion.
echo "$in" | entr ${cmd#Q}
Tried with and without double quotes around the use of ${cmd@Q}. No joy;
I guess I'm misunderstanding what @Q is for.
+ entr ''\''pandoc' '"READ' 'ME.md"' -o '"READ' 'ME.html"'\'''
entr: exec 'pandoc: No such file or directory
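For reference, ${var@Q} produces a shell-quoted rendering of a string, suitable for re-use as shell input, rather than a way to split a string back into words. A tiny demo, separate from the scripts above:
cmd='pandoc "READ ME.md"'
echo "${cmd@Q}"   # prints 'pandoc "READ ME.md"' - still a single string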
Details
I'm using Bash v5.1.16, in Pop!_OS 22.04, derived from Ubuntu 22.04 (Jammy).
The current 'apt' version of entr (v5.1) in Ubuntu Jammy (22.04) is too old for my needs (e.g. the -z flag doesn't work), so I'm compiling my own from the latest v5.3 source release.
I know there are a lot of questions about quoting in Bash, but I don't see any that seem to match this. Apologies if I'm wrong.
Assemble the command as an array, instead of a string.
I read somewhere that maybe $@ might do what I need, so I put the parts of $cmd into an array:
in="$1"
out="${in%.md}.html"
cmd=(pandoc "$in" -o "$out")
echo "$in" | entr "${cmd[#]}"
This correctly quotes the items in ${cmd[@]} which require it (e.g. those that have spaces in them).
+ entr pandoc 'READ ME.md' -o 'READ ME.html'
So ‘entr’ successfully calls ‘pandoc’, which successfully converts the documents. It works! I confess I did not expect that.
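The reason this works: "${cmd[@]}" expands to one word per array element, with spaces inside an element preserved. A small demo of that (not part of the script itself):
cmd=(pandoc "READ ME.md" -o "READ ME.html")
printf '<%s>\n' "${cmd[@]}"
# <pandoc>
# <READ ME.md>
# <-o>
# <READ ME.html>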
This approach seems viable for other similar situations, not just when invoking entr.
So I have a solution. It doesn't seem completely ideal for my future plans. I had visions of these 'file conversion commands' being configurable, and hence defined in a text file somewhere, so that users (==me, probably) could override them and define their own, and I'm not fluent enough with Bash to be sure how to go about that when commands are defined as arrays instead of strings.
I can't help but feel I've overlooked something simpler.
Use a shell to interpret the value of "$cmd":
echo "$in" | entr sh -c "$cmd"
This approach seems viable for other similar situations, not just when invoking entr.
Similarly, entr has a -s option which invokes a shell for you (chosen using the first word in $SHELL):
echo "$in" | entr -s "$cmd"
These both work well, at the minor cost of spawning an extra shell process.
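If the conversion commands do eventually come from a config file (the concern raised in the array answer), one hedged sketch is to keep them as plain strings that refer to positional parameters, and let the inner shell fill those in. The config value shown here and the use of sh as the inner shell's $0 are illustrative assumptions, not part of the original scripts:
# hypothetical line read from a config file; "$1" and "$2" are expanded by the inner sh,
# not by this script, so spaces in the filenames survive intact
cmd='pandoc "$1" -o "$2"'
in="$1"
out="${in%.md}.html"
# sh -c CMD NAME ARGS... runs CMD with NAME as $0 and ARGS as $1, $2, ...
echo "$in" | entr sh -c "$cmd" sh "$in" "$out"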
I would like to get just the filename (with extension) of the output file I pass to my bash script:
a=$1
b=$(basename -- "$a")
echo $b #for debug
if [ "$b" == "test" ]; then
echo $b
fi
If I type in:
./test.sh /home/oscarchase/test.sh > /home/oscarchase/test.txt
I would like to get:
test.txt
in my output file but I get:
test.sh
How can I proceed to parse this first argument to get the right name?
Try this:
#!/bin/bash
output=$(readlink /proc/$$/fd/1)
echo "output is performed to \"$output\""
but please remember that this solution is system-dependent (it relies on Linux's /proc layout). I'm not sure the /proc filesystem has the same structure on e.g. FreeBSD, and this script certainly won't work in bash on Windows.
Aha: FreeBSD obsoleted procfs a while ago and now has a different facility called procstat. Its output should give you an idea of how to extract the information you need; I guess some awk-ing is required :)
Finding out the name of the file that is opened on file descriptor 1 (standard output) is not something you can do directly in bash; it depends on what operating system you are using. You can use lsof and awk to do this; it doesn't rely on the proc file system, and although the exact call may vary, this command worked for both Linux and Mac OS X, so it is at least somewhat portable.
output=$( lsof -p $$ -a -d 1 -F n | awk '/^n/ {print substr($1, 2)}' )
Some explanation:
-p $$ selects open files for the current process
-d 1 selects only file descriptor 1
-a is used to require that both -p and -d apply (the default is to show all files that match either condition)
-F n modifies the output so that you get one line per field, prefixed with an identifier character. With this, you'll get two lines: one beginning with p and indicating the process ID, and one beginning with n indicating the file name of the file.
The awk command simply selects the line starting with n and outputs the first field minus the initial n.
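To get just the file name with its extension (the original goal), the resolved path can then be run through basename, as in the question. A hedged sketch reusing the lsof line above:
#!/bin/bash
# path of whatever is open on stdout (fd 1)
output=$( lsof -p $$ -a -d 1 -F n | awk '/^n/ {print substr($1, 2)}' )
b=$(basename -- "$output")
# print to stderr, otherwise the debug line itself lands inside the redirected output file
echo "$b" >&2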
I'm trying to define an Automake rule that will generate a text file containing the full path to a libtool library that will be built and installed by the same Makefile. Is there a straightforward way of retrieving the output filename for a libtool library (with the correct extension for the platform the program is being built on)?
For example, I am trying to write something like this:
lib_LTLIBRARIES = libfoo.la
bar.txt:
echo $(prefix)/lib/$(libfoo_la) >$@
Where $(libfoo_la) would expand to libfoo.so, libfoo.dylib or libfoo.dll (or whatever else), depending on the platform. This is essentially the value of the dlname parameter in the resulting libtool library file. I could potentially extract the filename directly from that, but I was hoping there was a simpler way of achieving this.
Unfortunately, I haven't found a direct way of doing this.
Fortunately for you, I did have a little sed script hacked together that did roughly
what you want, and I've adjusted it so it does do what you want.
foo.sed
# delete lines that are neither dlname= nor libdir=
/^\(dlname\|libdir\)=/! { d }
/^dlname=/ {
# kill leading/trailing junk
s/^dlname='//
# kill from the last quote to the end
s/'.*$//
# kill blank lines
/./!d
# write out the lib on its own line
s/.*/\/&\n/g
# kill the EOL
s/\n$//
# hold it
h
}
/^libdir=/ {
# kill leading/trailing junk
s/^libdir='//
# kill from the last quote to the end
s/'.*$//
# paste
G
# kill the EOL
s/\n//
p
}
Makefile.am
lib_LTLIBRARIES = libfoo.la
bar.txt: libfoo.la foo.sed
sed -n -f foo.sed $< > $@
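If all you need is the dlname value itself, a shorter (if less flexible) sketch is to pull it straight out of the .la file with a single sed substitution; the file name libfoo.la follows the example above:
# prints e.g. libfoo.so, libfoo.dylib, ... - whatever dlname the .la file records
sed -n "s/^dlname='\(.*\)'/\1/p" libfoo.la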
I have a shell script in which I first read all the .s files in a specified folder, then compile each of them to an object file in a loop, and after that link them into an executable file. Like this:
FILES=PTscalar_1.0/mibenchforpt/security/sha/*.s
for sfile in $FILES
do
echo "------------------------------------------------"
echo $sfile
objectFile="${sfile%.s}.o"
exefile="${objectFile%.o}.ex"
simplescalar/bin/sslittle-na-sstrix-as -o $objectFile $sfile
done
but I have a problem: in the sha MiBench program there are two files, each of which goes through this flow:
.c -> .s -> .o
At the last stage, though, the two .o files should be linked into a single executable.
How can I get the two file names at the same time and build a command that links them?
The main command is this:
simplescalar/bin/sslittle-na-sstrix-ld -o __sha.ex _sha.o _sha_driver.o
Is there any way to index into FILES, like this:
OFILES=PTscalar_1.0/mibenchforpt/security/sha/*.o
simplescalar/bin/sslittle-na-sstrix-ld -o $exefile OFILES[0] OFILES[1]
and then do that in a loop for all files matching this pattern:
the first file is like *.o or *_main.o
the second is: *_driver.o
Thanks
Obviously this is possible in shell. However, many people find that the make utility is better for building software than shell scripts, precisely because of dependencies like these. Take a look at GNU Make; its documentation contains numerous examples of what you're trying to do.
Caveat: Your tags "linux shell" do not specify a specific shell. POSIX sh, the standard specifying minimum required behavior for /bin/sh, does not support arrays; you should use a specific shell, such as bash or ksh, which does. To do this, you need to start your script with an appropriate shebang (such as #!/bin/bash instead of #!/bin/sh), and do any manual invocations with the correct shell (so bash -x myscript if you would otherwise use sh -x myscript... though if you've set the shebang correctly and have +x permissions, you can always just ./myscript)
# this is broken
FILES=PTscalar_1.0/mibenchforpt/security/sha/*.s
...does not create an array.
# this works in bash, ksh, and zsh
files=( PTscalar_1.0/mibenchforpt/security/sha/*.s )
does create an array, which can be expanded as "${files[@]}". So:
# this works in bash and ksh, and probably zsh
for file in "${files[#]}"; do
...
done
However, in this particular case, you don't have a reason to use an array at all:
# this works with absolutely any POSIX-compatible shell
for sfile in PTscalar_1.0/mibenchforpt/security/sha/*.s; do
    echo "$sfile"
    objectFile=${sfile%.s}.o
    exefile=${objectFile%.o}.ex
    simplescalar/bin/sslittle-na-sstrix-as -o "$objectFile" "$sfile"
done
Note a few corrections made in the above:
The right-hand side of an assignment with no literal whitespace in its syntax does not need to be quoted.
All expansions (such as $objectFile) do need to be quoted, hence "$objectFile".
...yes, this does include echo; to test this, run s='*' and compare the output of echo $s to echo "$s" (see the short demo after this list).
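A short demo of that last point:
s='*'
echo $s      # unquoted: the * is glob-expanded to the names of files in the current directory (if any match)
echo "$s"    # quoted: prints a literal *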
To address the follow-up question you edited in:
ofiles=( PTscalar_1.0/mibenchforpt/security/sha/*.o )
simplescalar/bin/sslittle-na-sstrix-ld -o "$exefile" "${ofiles[0]}" "${ofiles[1]}"
...is a literal answer, but it would need to be edited if you had more than two object files. Much better to do it this way instead:
ofiles=( PTscalar_1.0/mibenchforpt/security/sha/*.o )
simplescalar/bin/sslittle-na-sstrix-ld -o "$exefile" "${ofiles[@]}"
I created this file and it worked:
#!/bin/bash
#compile to assembly:
FILES=*_driver.s
for sdriverfile in $FILES
do
echo "------------------------------------------------"
# s file
echo $sdriverfile
sfile="${sdriverfile%_driver.s}.s"
echo $sfile
# object files
obj="${sfile%.s}.o"
obj_driver="${sdriverfile%.s}.o"
#exe file
exefile="${sfile%.s}_as.ex"
echo $exefile
#compile
/home/mahdi/programs/simplescalar/bin/sslittle-na-sstrix-as -o $obj $sfile
/home/mahdi/programs/simplescalar/bin/sslittle-na-sstrix-as -o $obj_driver $sdriverfile
#link
/home/mahdi/programs/simplescalar/bin/sslittle-na-sstrix-ld -o $exefile $obj $obj_driver -L /home/mahdi/programs/simplescalar/sslittle-na-sstrix/lib -lc -L /home/mahdi/programs/simplescalar/lib/gcc-lib/sslittle-na-sstrix/2.7.2.3/ -lgcc
done
Thanks for the answers.
I've been handed a project that consists of several dozen (probably over 100, I haven't counted) bash scripts. Most of the scripts make at least one call to another one of the scripts. I'd like to get the equivalent of a call graph where the nodes are the scripts instead of functions.
Is there any existing software to do this?
If not, does anybody have clever ideas for how to do this?
Best plan I could come up with was to enumerate the scripts and check whether their basenames are unique (they span multiple directories). If there are duplicate basenames, then cry, because the script paths are usually held in variables so you may not be able to disambiguate. If they are unique, then grep for the names in the scripts and use those results to build up a graph. Use some tool (suggestions?) to visualize the graph.
Suggestions?
Wrap the shell itself with your own implementation: log who called the wrapper, then exec the original shell.
Yes, you have to actually run the scripts in order to identify which scripts are really used. Otherwise you would need a tool with the same knowledge as the shell engine itself to handle all the variable expansion, PATH lookups, etc. -- I've never heard of such a tool.
To visualize the calling graph, use GraphViz's dot format.
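A minimal sketch of such a wrapper, assuming the scripts' shebang lines (or a PATH entry ahead of the real shell) are pointed at it, and assuming /tmp/call_graph.log as the log file:
#!/bin/bash
# Log "caller -> callee", then hand control to the real shell.
caller=$(ps -o args= -p "$PPID")    # command line of the process that invoked us
printf '%s | %s -> %s\n' "$(date +%s)" "$caller" "$*" >> /tmp/call_graph.log
exec /bin/bash "$@"                 # run the target script with the real shell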
Here's how I wound up doing it (disclaimer: a lot of this is hack-ish, so you may want to clean up if you're going to use it long-term)...
Assumptions:
- Current directory contains all scripts/binaries in question.
- Files for building the graph go in subdir call_graph.
Created the script call_graph/make_tgf.sh:
#!/bin/bash
# Run from dir with scripts and subdir call_graph
# Parameters:
# $1 = sources (default is call_graph/sources.txt)
# $2 = targets (default is call_graph/targets.txt)
SOURCES=$1
if [ "$SOURCES" == "" ]; then SOURCES=call_graph/sources.txt; fi
TARGETS=$2
if [ "$TARGETS" == "" ]; then TARGETS=call_graph/targets.txt; fi
if [ ! -d call_graph ]; then echo "Run from parent dir of call_graph" >&2; exit 1; fi
(
    # cat call_graph/targets.txt
    for file in `cat $SOURCES `
    do
        for target in `grep -v -E '^ *#' $file | grep -o -F -w -f $TARGETS | grep -v -w $file | sort | uniq`
        do echo $file $target
        done
    done
)
Then, I ran the following (I wound up doing the scripts-only version):
cat /dev/null | tee call_graph/sources.txt > call_graph/targets.txt
for file in *
do
if [ -d "$file" ]; then continue; fi
echo $file >> call_graph/targets.txt
if file $file | grep text >/dev/null; then echo $file >> call_graph/sources.txt; fi
done
# For scripts only:
bash call_graph/make_tgf.sh call_graph/sources.txt call_graph/sources.txt > call_graph/scripts.tgf
# For scripts + binaries (binaries will be leaf nodes):
bash call_graph/make_tgf.sh > call_graph/scripts_and_bin.tgf
I then opened the resulting tgf file in yEd, and had yEd do the layout (Layout -> Hierarchical). I saved as graphml to separate the manually-editable file from the automatically-generated one.
I found that there were certain nodes that were not helpful to have in the graph, such as utility scripts/binaries that were called all over the place. So, I removed these from the sources/targets files and regenerated as necessary until I liked the node set.
Hope this helps somebody...
Insert a line at the beginning of each shell script, after the #! line, which logs a timestamp, the full pathname of the script, and the argument list.
Over time, you can mine this log to identify likely candidates, i.e. two lines logged very close together have a high probability of the first script calling the second.
This also allows you to focus on the scripts which are still actually in use.
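A hedged example of such a line (the log path and the use of readlink -f for the full pathname are assumptions):
echo "$(date '+%Y-%m-%d %H:%M:%S') $(readlink -f "$0") $*" >> /var/tmp/script_calls.log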
You could use an ed script
1a
log blah blah blah
.
wq
and run it like so:
find / -perm /111 -exec ed {} \; <edscript
Make sure you test the find command with -print instead of the -exec clause first. And / is probably not the path that you want to use. If you have to include bin directories then you will probably need to switch to grep in order to identify the pathnames to include; then, when you have a file full of the right names, use xargs instead of find to run the script.
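A hedged sketch of that grep-then-xargs route (the shebang pattern, directory, and file names here are assumptions):
# collect the paths of shell scripts (rather than all executables) into a file
grep -rlE '^#!/bin/(ba)?sh' /path/to/scripts > names.txt
# run the ed script against each one; the inner redirection gives every ed run its own copy of edscript
xargs -I{} sh -c 'ed "$1" < edscript' _ {} < names.txt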