What is a shar file? [closed] - shell

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Reading make's manual page:
`shar'
Create a shell archive (shar file) of the source files.
What is a shar?

It's a shell archive: a self-extracting shell script, meant as a convenient way to ship archives of files so that they appear simply by running the script.
An example is shown in the transcript below, which delivers only one file from the archive, output.txt:
pax> cat shar.bash
#!/bin/bash
tr '[A-Za-z]' '[N-ZA-Mn-za-m]' >output.txt <<EOF
Uryyb sebz Cnk.
EOF
pax> ./shar.bash
pax> cat output.txt
Hello from Pax.
That's a fairly simplistic one since it only delivers one file, and it doesn't compress it at all, but it should give you the general idea.
A real one would probably give you something like a set of files combined with tar, gzip and uuencode, which would then be passed through uudecode, gunzip and tar to deliver the original content.
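To make that concrete, here is a hedged sketch of such a generator. It uses base64 in place of uuencode since base64 is more widely installed; the emit_shar helper name and the /tmp/shar-demo paths are made up for the example:

```shell
#!/bin/sh
# emit_shar: print a self-extracting script for the files given as arguments.
# A sketch of the idea only -- not the real GNU shar(1).
emit_shar() {
    printf '%s\n' '#!/bin/sh' "base64 -d <<'ARCHIVE' | gzip -dc | tar xf -"
    tar cf - "$@" | gzip -c | base64   # the archive rides inside the script
    printf 'ARCHIVE\n'
}

# demo: pack one file and unpack it in another directory
mkdir -p /tmp/shar-demo && cd /tmp/shar-demo
echo 'Hello from Pax.' > output.txt
emit_shar output.txt > unpack.sh
mkdir -p elsewhere && cd elsewhere
sh ../unpack.sh          # running the script recreates output.txt here
cat output.txt
```

Running the generated unpack.sh anywhere recreates the archived files in the current directory, which is exactly the convenience the answer above describes.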

A self-extracting archive: a shell script that extracts some data contained in it.
Wikipedia has more: http://en.wikipedia.org/wiki/Shar

It's a kind of self-extracting archive.
It's a bit dangerous (like a self-extracting .exe on Windows), because it runs itself to extract itself, so it could potentially do all kinds of other things that you did not expect.
I think this is what Oracle uses to distribute the JVM on Linux (to make you click through a license agreement first).
Normally, people would just use tar archives (which cannot execute arbitrary code, but also not show any dialogs).

Related

Bash - How to execute paths from file [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
How can I read paths from a .txt file and use them in my script?
For Example:
###################
foo.txt
/home/foo_1/public/
/home/foo_2/public/
[...]
/home/foo_n/public/
Then I want my script to look for an optional file in every path listed in the .txt.
How can I do this?
With some loop?
Greetings
Using xargs and find Utilities
Assuming your file has no extraneous data (it's hard to tell from your original post) and holds one directory path per line, you can simply use the -I flag with xargs, which runs one invocation per input line. For example, given a source file like:
/home/foo_1/public/
/home/foo_2/public/
you could invoke find like so:
xargs -I{} find "{}" -name "filename_to_find" < file_paths.txt
There may be ways to do this more efficiently, of course, but this seems conceptually simpler to me than writing your own read and loop statements. Your mileage (and input data quality) may vary.
You can just concatenate all of them into a single colon-separated path variable:
$ path=$(paste -sd: foo.txt)
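If you'd rather avoid xargs, a plain read loop works too. This is a self-contained sketch; the /tmp/pathdemo layout and the wanted.txt name are invented for the demo:

```shell
#!/usr/bin/env bash
set -e
# demo layout mirroring the question (paths are made up for the example)
mkdir -p /tmp/pathdemo/foo_1/public /tmp/pathdemo/foo_2/public
touch /tmp/pathdemo/foo_2/public/wanted.txt
printf '%s\n' /tmp/pathdemo/foo_1/public/ /tmp/pathdemo/foo_2/public/ > /tmp/pathdemo/foo.txt

# read one directory per line and search each for the optional file
while IFS= read -r dir; do
    [ -d "$dir" ] || continue      # skip blank or stale lines
    find "$dir" -name wanted.txt
done < /tmp/pathdemo/foo.txt
```

The `IFS= read -r` idiom keeps leading whitespace and backslashes in each line intact, which matters if directory names are unusual.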

Bash - wait for a process (here gcc) [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
Closed 7 years ago.
On my Ubuntu machine I want to create a custom command for compiling a c file.
At the moment I have something like this, which does not work the way I want:
#compile the file
gcc $1 -o ~/.compile-c-output
#run the program
./~/.compile-c-output
#delete the output file
rm ~/.compile-c-output
The problem is that the run command is executed before gcc is ready, and so the file does not exist. How can I wait until gcc is ready so I can run the file normally?
Btw how can I add a random number to the output file so this script also works if I run it on two different terminals?
./~/.compile-c-output
Get rid of the leading ./. That's why the file doesn't exist.
~/.compile-c-output
To get a random file name, use mktemp. mktemp creates a file with a unique name, so it is guaranteed not to overwrite existing files.
file=$(mktemp) # unspecified file name in /tmp
gcc "$1" -o "$file" && "$file"
rm "$file"

Why doesn't my script take the last updated input file when running in unix bash? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I have an input file and a script running in unix bash.
The problem is that every time I edit the input file in vi, the script takes the input file as it was input the first time.
How can I fix this?
Run
cat inputFile
to make sure it looks correct before passing it to your script. Try doing :wq! to make sure the file is saved even if read-only permissions are set on it. The "!" after wq forces a write despite the permissions on the file.
Try typing ls -l inputFile and check the perms. If they look like the line below, then run chmod u+w inputFile
-r--r--r--
Use :w in vi to save your input file before executing the script.
Pure speculation, since many details are missing, but if your script opens the file and keeps it open, it will not see updates. If there is only one (hard) link to the file, then vi (assuming vi is actually vim, although I suspect most editors behave this way) will create a new file and change the link to it, but the script still has the original file open. A simple technique that might work is to create a second link to the file before you run the script:
$ ln input-file foo # Create a second link
$ script input-file # Run the script
$ vi input-file # Edit the file
This causes vim to modify its behavior so that it actually updates the file rather than creating a new one.
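You can watch this mechanism without any editor by simulating a rename-style save in the shell. The /tmp/linkdemo directory and file names are invented for the demo:

```shell
#!/bin/sh
# Simulate an editor that saves via "write new file + rename", to show why
# a second name (hard link) keeps pointing at the old contents.
mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
rm -f input-file foo input-file.new
echo one > input-file
ln input-file foo               # second hard link, same inode
echo two > input-file.new
mv input-file.new input-file    # "save" by rename: input-file is now a NEW inode
cat foo                         # still "one": foo names the old inode
cat input-file                  # now "two": the name points at the new inode
```

A script that opened (or hard-linked) the original file is in the same position as foo here: it keeps seeing the old inode, no matter how the name is updated.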
@user2613272: either you have not saved the file before executing the script, or you are executing some other file with a similar name.
As suggested by @bjackfly, I guess you should first cat your file before execution.

Building Program-Specific Shortcuts in UNIX [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a program, called carmel, which I can run from the command line via:
carmel -h
or with whichever flags I choose. When loading a file, I can say:
carmel fsa1.fst, where fsa1.fst is located in my home folder, /Users/adam/.
I would prefer to have the default file location be, e.g., /Users/adam/carmel/files, and would prefer to not type that in every time. Is there a way to let UNIX know, when I type carmel to then look in that location?
There is no standard Unix shortcut for this behaviour. Some applications will check an environment variable to see where their files are, but looking at carmel/src/carmel.cc on GitHub, I'd say you'd have to write a wrapper script, like this:
#!/usr/bin/env bash
# Save as ${HOME}/bin/carmel and ensure ${HOME}/bin is before
# ${carmel_bin_dir} in your ${PATH}. Also ensure this script
# has the executable bit set.
carmel_bin_dir=/usr/local/bin # TODO change this?
working_directory=${CARMEL_HOME:-${HOME}/carmel/files}
if [[ ! -d "${working_directory}" ]]; then
echo "${working_directory} does not exist. Creating."
mkdir -p "${working_directory}" || { echo "Failed to create ${working_directory}" >&2; exit 1; }
fi
pushd "${working_directory}" > /dev/null
echo "Launching ${carmel_bin_dir}/carmel $* from $(pwd)..."
"${carmel_bin_dir}/carmel" "$@"
popd > /dev/null
Alternatively, since the source is freely available, you could add some code to read ${CARMEL_HOME} (or similar) and submit this as a pull request.
Good luck!

In Bash scripting, what's a good way to append multiple files of the same names together? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
So, I'm building my first script. It's to unzip files from two different directories, merge said directories together, and then append all the files of the same name together. The only part I'm struggling with is appending multiple files of the same name together. What's a good way to go about that?
That depends on the directory structure of each archive: is it the same? If so, assuming the unzipped files are in a/ and b/, do something like this:
mkdir c
for f in a/*; do
cat "$f" b/"${f#a/}" > c/"${f#a/}"
done
Instead of using cp for the second directory, do
for file in source/* ; do
cat "$file" >> target/"${file#*/}"
done
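Combining both answers into one self-contained sketch (the /tmp/merge-demo layout and the file contents are invented for the demo):

```shell
#!/usr/bin/env bash
set -e
# demo setup: two unzipped directories holding one same-named file each
mkdir -p /tmp/merge-demo/a /tmp/merge-demo/b /tmp/merge-demo/merged
cd /tmp/merge-demo
echo first  > a/data.txt
echo second > b/data.txt

# copy everything from a/, then append same-named files from b/
cp a/* merged/
for f in b/*; do
    cat "$f" >> merged/"${f#b/}"   # strip the leading "b/" to get the bare name
done
cat merged/data.txt
```

The `${f#b/}` parameter expansion removes the directory prefix from each glob match, so files with the same name end up concatenated under a single name in merged/.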
