How do we convert the shell script below so that the same result can be achieved on Mac OS X?
# To generate secure SSH deploy key for a github repo to be used from Travis
# https://gist.github.com/floydpink/4631240
base64 --wrap=0 ~/.ssh/id_rsa_deploy > ~/.ssh/id_rsa_deploy_base64
ENCRYPTION_FILTER="echo \$(echo \"- secure: \")\$(travis encrypt \"\$FILE='\`cat $FILE\`'\" -r floydpink/harimenon.com)"
split --bytes=100 --numeric-suffixes --suffix-length=2 --filter="$ENCRYPTION_FILTER" ~/.ssh/id_rsa_deploy_base64 id_rsa_
# To reconstitute the private SSH key once running inside Travis (typically from 'before_script')
echo -n $id_rsa_{00..30} >> ~/.ssh/id_rsa_base64
base64 --decode --ignore-garbage ~/.ssh/id_rsa_base64 > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
echo -e "Host github.com\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
I could figure out the equivalent base64 command to be:
base64 --break=0 id_rsa_deploy > id_rsa_deploy_base64
But it looks like the split command on Mac OS X is a little different from Linux/Unix and does not have the --filter option.
EDIT: This is a gist I stumbled onto from this blog entry that details how to auto-deploy an Octopress blog to GitHub using Travis CI.
I had successfully done this from Ubuntu Linux and had blogged about it as well in the past, but could not repeat it from a Mac.
You may want to install coreutils from Homebrew (the missing package manager for OS X) and then use gsplit:
$ brew install coreutils
$ gsplit --help
Usage: gsplit [OPTION]... [INPUT [PREFIX]]
Output fixed-size pieces of INPUT to PREFIXaa, PREFIXab, ...; default
size is 1000 lines, and default PREFIX is 'x'. With no INPUT, or when INPUT
is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
  -a, --suffix-length=N          generate suffixes of length N (default 2)
      --additional-suffix=SUFFIX append an additional SUFFIX to file names.
  -b, --bytes=SIZE               put SIZE bytes per output file
  -C, --line-bytes=SIZE          put at most SIZE bytes of lines per output file
  -d, --numeric-suffixes[=FROM]  use numeric suffixes instead of alphabetic.
                                   FROM changes the start value (default 0).
  -e, --elide-empty-files        do not generate empty output files with '-n'
      --filter=COMMAND           write to shell COMMAND; file name is $FILE
  -l, --lines=NUMBER             put NUMBER lines per output file
  -n, --number=CHUNKS            generate CHUNKS output files. See below
  -u, --unbuffered               immediately copy input to output with '-n r/...'
      --verbose                  print a diagnostic just before each
                                   output file is opened
      --help     display this help and exit
      --version  output version information and exit
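With coreutils installed, the rest of the original script should work on OS X essentially as-is by swapping in the g-prefixed GNU tools. A sketch, reusing the ENCRYPTION_FILTER variable and file names from the gist above:

# GNU versions of base64 and split, installed by Homebrew coreutils with a 'g' prefix
gbase64 --wrap=0 ~/.ssh/id_rsa_deploy > ~/.ssh/id_rsa_deploy_base64
gsplit --bytes=100 --numeric-suffixes --suffix-length=2 --filter="$ENCRYPTION_FILTER" ~/.ssh/id_rsa_deploy_base64 id_rsa_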
Related
I am writing a Jenkins job that will move files between two chrooted directories on a remote server.
This uses a Jenkins multiline string variable to store one or more file names, one per line.
The following will work for files without special characters or spaces:
## Jenkins parameters
# accountAlias = "test"
# sftpDir = "/path/to/chrooted home"
# srcDir = "/path/to/get/files"
# destDir = "/path/to/put/files"
# fileName = "file names # multiline Jenkins shell parameter, one file name per
#!/bin/bash
ssh user@server << EOF
#!/bin/bash
printf "\nCopying following file(s) from "${accountAlias}"_old account to "${accountAlias}"_new account:\n"
# Exit if no filename is given so Rsync does not copy all files in src directory.
if [ -z "${fileName}" ]; then
    printf "\n***** At least one filename is required! *****\n"
    exit 1
else
    # While reading each line of fileName
    while IFS= read -r line; do
        printf "\n/"${sftpDir}"/"${accountAlias}"_old/"${srcDir}"/"\${line}" -> /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"/"\${line}"\n"
        # Rsync the files from old account to new account
        # -v | verbose
        # -c | replace existing files based on checksum, not timestamp or size
        # -r | recursively copy
        # -t | preserve timestamps
        # -h | human readable file sizes
        # -P | resume incomplete files + show progress bars for large files
        # -s | Sends file names without interpreting special chars
        sudo rsync -vcrthPs /"${sftpDir}"/"${accountAlias}"_old/"${srcDir}"/"\${line}" /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"/"\${line}"
    done <<< "${fileName}"
fi
printf "\nEnsuring all new files are owned by the "${accountAlias}"_new account:\n"
sudo chown -vR "${accountAlias}"_new:"${accountAlias}"_new /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"
EOF
Using the file name "sudo bash -c 'echo "hello" > f.txt'.txt" as a test, my script will fail after the "sudo" in the file name.
I believe my problem is that my $line variable is not properly quoted or escaped, resulting in bash not treating the $line value as one string.
I have tried single quotes and using awk/sed to insert backslashes into the variable string, but this hasn't worked.
My theory is that I am running into a problem with special characters and heredocs.
Although it's unclear to me from your description exactly what error you are encountering or where, you do have several problems in the script presented.
The main one might simply be the sudo command that you're trying to execute on the remote side. Unless user has passwordless sudo privileges (rather dangerous), sudo will prompt for a password and attempt to read it from the user's terminal, and you are not providing a password. You could probably just interpolate it into the command stream (in the here doc) if in fact you collect it. Nevertheless, there is still a potential problem with that, as you perform potentially many sudo commands, and they may or may not request passwords depending on the remote sudo configuration and the time between sudo commands. Best would be to structure the command stream so that only one sudo execution is required.
Additional considerations follow.
## Jenkins parameters
# accountAlias = "test"
# sftpDir = "/path/to/chrooted home"
# srcDir = "/path/to/get/files"
# destDir = "/path/to/put/files"
# fileName = "file names # multiline Jenkins shell parameter, one file name per
#!/bin/bash
The #!/bin/bash there is not the first line of the script, so it does not function as a shebang line. Instead, it is just an ordinary comment. As a result, when the script is executed directly, it might or might not be bash that runs it, and if it is bash, it might or might not be running in POSIX compatibility mode.
ssh user@server << EOF
#!/bin/bash
This #!/bin/bash is not a shebang line either, because that applies only to scripts read from regular files. As a result, the following commands are run by user's default shell, whatever that happens to be. If you want to ensure that the rest is run by bash, then perhaps you should execute bash explicitly.
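For example, something like this (a minimal sketch; the echo line is only a placeholder showing which shell interprets the heredoc body):

ssh user@server /bin/bash << EOF
echo "remote commands now run under bash: \$BASH_VERSION"
EOF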
printf "\nCopying following file(s) from "${accountAlias}"_old account to "${accountAlias}"_new account:\n"
The two expansions of $accountAlias (by the local shell) result in unquoted text passed to printf in the remote shell. You could consider just removing the de-quoting, but that would still leave you susceptible to malicious accountAlias values that included double-quote characters. Remember that these will be expanded on the local side, before the command is sent over the wire, and then the data will be processed by a remote shell, which is the one that will interpret the quoting.
This can be resolved by:
1. Outside the heredoc, preparing a version of the account alias that can be safely presented to the remote shell:
accountAlias_safe=$(printf %q "$accountAlias")
2. Inside the heredoc, expanding it unquoted. I would furthermore suggest passing it as a separate argument instead of interpolating it into the larger string:
printf "\nCopying following file(s) from %s_old account to %s_new account:\n" ${accountAlias_safe} ${accountAlias_safe}
Similar applies to most of the other places where variables from the local shell are interpolated into the heredoc.
Here ...
# Exit if no filename is given so Rsync does not copy all files in src directory.
if [ -z "${fileName}" ]; then
... why are you performing this test on the remote side? You would save yourself some trouble by performing it on the local side instead.
Here ...
printf "\n/"${sftpDir}"/"${accountAlias}"_old/"${srcDir}"/"\${line}" -> /"${sftpDir}"/"${accountAlias}"_new/"${destDir}"/"\${line}"\n"
... remote shell variable $line is used unquoted in the printf command; its appearance should be quoted. Also, since you use the source and destination names twice each, it would be cleaner and clearer to put them in (remote-side) variables. And if the directory names have the form presented in the comments in the script, then you are introducing excess / characters (though these are probably not harmful).
Good for you, documenting the meaning of all the rsync options used, but why are you sending all that over the wire to the remote side?
Also, you probably want to include rsync's -p option to preserve permissions. Possibly you want to include the -l option too, to copy symbolic links as symbolic links.
Putting all that together, something more like this (untested) is probably in order:
#!/bin/bash
## Jenkins parameters
# accountAlias = "test"
# sftpDir = "/path/to/chrooted home"
# srcDir = "/path/to/get/files"
# destDir = "/path/to/put/files"
# fileName = "file names # multiline Jenkins shell parameter, one file name per
# Exit if no filename is given so Rsync does not copy all files in src directory.
if [ -z "${fileName}" ]; then
printf "\n***** At least one filename is required! *****\n"
exit 1
fi
accountAlias_safe=$(printf %q "$accountAlias")
sftpDir_safe=$(printf %q "$sftpDir")
srcDir_safe=$(printf %q "$srcDir")
destDir_safe=$(printf %q "$destDir")
fileName_safe=$(printf %q "$fileName")
IFS= read -r -p 'password for user@server: ' -s -t 60 password || {
    echo 'password not entered in time' 1>&2
    exit 1
}
# Rsync options used:
# -v | verbose
# -c | replace existing files based on checksum, not timestamp or size
# -r | recursively copy
# -t | preserve timestamps
# -h | human readable file sizes
# -P | resume incomplete files + show progress bars for large files
# -s | Sends file names without interpreting special chars
# -p | preserve file permissions
# -l | copy symbolic links as links
ssh user@server /bin/bash << EOF
printf "\nCopying following file(s) from %s_old account to %s_new account:\n" ${accountAlias_safe} ${accountAlias_safe}
sudo /bin/bash -c '
while IFS= read -r line; do
    src=${sftpDir_safe}/${accountAlias_safe}_old${srcDir_safe}/"\${line}"
    dest=${sftpDir_safe}/${accountAlias_safe}_new${destDir_safe}/"\${line}"
    printf "\n\${src} -> \${dest}\n"
    rsync -vcrthPspl "\${src}" "\${dest}"
done <<<'${fileName_safe}'
printf "\nEnsuring all new files are owned by the %s_new account:\n" ${accountAlias_safe}
chown -vR ${accountAlias_safe}_new:${accountAlias_safe}_new ${sftpDir_safe}/${accountAlias_safe}_new${destDir_safe}
'
${password}
EOF
I have two .txt files that I want to read line by line simultaneously in a .sh script. Both .txt files have the same number of lines. Inside the loop I want to use the sed command to change the full_sample_name and sample_name in another file.
I know how this works if you just read one file, but I cannot get it work for two files.
#! /bin/bash
FULL_SAMPLE="file1.txt"
SAMPLE="file2.txt"
while read ... && ...
do
sed -e "s/\<full_sample_name\>/$FULL_SAMPLE/g" -e "s/\<sample_name\>/$SAMPLE/g" pipeline.sh > $SAMPLE.sh
done < ...?
Charles provided a very good answer.
You could use paste to join the lines of the files with some delimiter (that shouldn't appear in the files):
paste -d ":" file1.txt file2.txt | while IFS=":" read -r full samp; do
do_stuff_with "$full" and "$samp"
done
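Applied to the sed command from the question, that could look something like this (untested sketch; it assumes none of the names contain a colon):

paste -d ":" file1.txt file2.txt | while IFS=":" read -r full_sample_name sample_name; do
    sed -e "s/\<full_sample_name\>/$full_sample_name/g" \
        -e "s/\<sample_name\>/$sample_name/g" \
        pipeline.sh > "$sample_name.sh"
done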
#!/bin/bash
full_sample_file="file1.txt"
sample_file="file2.txt"
while read -r -u 3 full_sample_name && read -r -u 4 sample_name; do
    sed -e "s/\<full_sample_name\>/$full_sample_name/g" \
        -e "s/\<sample_name\>/$sample_name/g" \
        pipeline.sh >"$sample_name.sh"
done 3<"$full_sample_file" 4<"$sample_file" # automatically closed on loop exit
In this case, I'm assigning file descriptor 3 to file1.txt and file descriptor 4 to file2.txt.
By the way, with bash 4.1 or newer, you no longer need to handle file descriptors manually:
# opening explicitly, since even if opened on the loop, these need
# to be explicitly closed.
exec {full_sample_fd}<file1.txt
exec {sample_fd}<file2.txt
while read -r -u "$full_sample_fd" full_sample_name \
&& read -r -u "$sample_fd" sample_name; do
: do stuff here with "$full_sample_name" and "$sample_name"
done
# close the files explicitly
exec {full_sample_fd}>&- {sample_fd}>&-
One more note: you could make this a bit more efficient, and also more correct, by not using sed at all, but instead reading the file to be converted into a shell variable and doing the replacements there. It is more correct when your sample_name and full_sample_name values aren't guaranteed to evaluate to themselves when interpreted as regular expressions, and when the angle brackets are intended to be literal rather than word-boundary regex operators; it works as long as your input file contains no literal NULs (which, as a shell script, it shouldn't).
exec {full_sample_fd}<file1.txt
exec {sample_fd}<file2.txt
IFS= read -r -d '' input_file <pipeline.sh
while read -r -u "$full_sample_fd" full_sample_name \
&& read -r -u "$sample_fd" sample_name; do
output=${input_file//'<full_sample_name>'/${full_sample_name}}
output=${output//'<sample_name>'/${sample_name}}
printf '%s' "$output" >"${sample_name}.sh"
done
# close the files explicitly
exec {full_sample_fd}>&- {sample_fd}>&-
With GNU Parallel it will look like this:
#! /bin/bash
do_sed() {
    sed -e "s/\<full_sample_name\>/$1/g" -e "s/\<sample_name\>/$2/g" pipeline.sh > "$2".sh
}
export -f do_sed
parallel --xapply do_sed {1} {2} :::: file1.txt file2.txt
The added benefit is that you get it run in parallel. Depending on your storage system this may speed up the processing: On a raid6 I have seen a 6x speedup by running 10 jobs in parallel. YMMV, so the only way to know for sure is to test and measure.
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process whenever one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
I would like to get just the filename (with extension) of the file that my bash script's output is redirected to:
a=$1
b=$(basename -- "$a")
echo $b #for debug
if [ "$b" == "test" ]; then
echo $b
fi
If I type in:
./test.sh /home/oscarchase/test.sh > /home/oscarchase/test.txt
I would like to get:
test.txt
in my output file but I get:
test.sh
How can I proceed to parse this first argument to get the right name?
Try this:
#!/bin/bash
output=$(readlink /proc/$$/fd/1)
echo "output is performed to \"$output\""
but please remember that this solution is system-dependent (it relies on Linux's /proc layout). I'm not sure the /proc filesystem has the same structure in e.g. FreeBSD, and this script certainly won't work in bash on Windows.
Ahha, FreeBSD obsoleted procfs a while ago and now has a different facility called procstat. You may get an idea of how to extract the information you need from its output. I guess some awk-ing is required :)
Finding out the name of the file that is opened on file descriptor 1 (standard output) is not something you can do directly in bash; it depends on what operating system you are using. You can use lsof and awk to do this; it doesn't rely on the proc file system, and although the exact call may vary, this command worked for both Linux and Mac OS X, so it is at least somewhat portable.
output=$( lsof -p $$ -a -d 1 -F n | awk '/^n/ {print substr($1, 2)}' )
Some explanation:
-p $$ selects open files for the current process
-d 1 selects only file descriptor 1
-a is used to require that both -p and -d apply (the default is to show all files that match either condition)
-F n modifies the output so that you get one line per field, prefixed with an identifier character. With this, you'll get two lines: one beginning with p indicating the process ID, and one beginning with n indicating the file name of the file.
The awk command simply selects the line starting with n and outputs the first field minus the initial n.
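Combined with the basename logic from the question, a minimal sketch might look like this (the message goes to stderr so it does not land in the redirected output file itself):

#!/bin/bash
# Report the name of the file that standard output is redirected to (requires lsof).
output=$( lsof -p $$ -a -d 1 -F n | awk '/^n/ {print substr($1, 2)}' )
echo "output is going to: $(basename -- "$output")" >&2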
I am very, very new to UNIX programming (running on MacOSX Mountain Lion via Terminal). I've been learning the basics from a bioinformatics and molecular methods course (we've had two classes) where we will eventually be using perl and python for data management purposes. Anyway, we have been tasked with writing a shell script to take data from a group of files and write it to a new file in a format that can be read by a specific program (Migrate-N).
I have gotten a number of functions to do exactly what I need independently when I type them into the command line, but when I put them all together in a script and try to run it I get an error. Here are the details (I apologize for the length):
#! /bin/bash
grep -f Samples.NFCup.txt locus1.fasta > locus1.NFCup.txt
grep -f Samples.NFCup.txt locus2.fasta > locus2.NFCup.txt
grep -f Samples.NFCup.txt locus3.fasta > locus3.NFCup.txt
grep -f Samples.NFCup.txt locus4.fasta > locus4.NFCup.txt
grep -f Samples.NFCup.txt locus5.fasta > locus5.NFCup.txt
grep -f Samples.Salmon.txt locus1.fasta > locus1.Salmon.txt
grep -f Samples.Salmon.txt locus2.fasta > locus2.Salmon.txt
grep -f Samples.Salmon.txt locus3.fasta > locus3.Salmon.txt
grep -f Samples.Salmon.txt locus4.fasta > locus4.Salmon.txt
grep -f Samples.Salmon.txt locus5.fasta > locus5.Salmon.txt
grep -f Samples.Cascades.txt locus1.fasta > locus1.Cascades.txt
grep -f Samples.Cascades.txt locus2.fasta > locus2.Cascades.txt
grep -f Samples.Cascades.txt locus3.fasta > locus3.Cascades.txt
grep -f Samples.Cascades.txt locus4.fasta > locus4.Cascades.txt
grep -f Samples.Cascades.txt locus5.fasta > locus5.Cascades.txt
echo 3 5 Salex_melanopsis > Smelanopsis.mig
echo 656 708 847 1159 779 >> Smelanopsis.mig
echo 154 124 120 74 126 NFCup >> Smelanopsis.mig
cat locus1.NFCup.txt locus2.NFCup.txt locus3.NFCup.txt locus4.NFCup.txt locus5.NFCup.txt >> Smelanopsis.mig
echo 32 30 30 18 38 Salmon River >> Smelanopsis.mig
cat locus1.Salmon.txt locus2.Salmon.txt locus3.Salmon.txt locus4.Salmon.txt locus5.Salmon.txt >> Smelanopsis.mig
echo 56 52 24 29 48 Cascades >> Smelanopsis.mig
cat locus1.Cascades.txt locus2.Cascades.txt locus3.Cascades.txt locus4.Cascades.txt locus5.Cascades.txt >> Smelanopsis.mig
The series of greps are just pulling out DNA sequence data for each site for each locus into new text files. The Samples...txt files have the sample ID numbers for a site, the .fasta files have the sequence information organized by sample ID; the grepping works just fine in command line if I run it individually.
The second group of code creates the actual new file I need to end up with, that ends in .mig. The echo lines are data about counts (basepairs per locus, populations in the analysis, samples per site, etc.) that the program needs information on. The cat lines are to mash together the locus by site data created by all the grepping below the site-specific information dictated in the echo line. You no doubt get the picture.
For creating the shell script I've been starting in Excel so I can easily copy-paste/autofill cells, saving as tab-delimited text, then opening that text file in TextWrangler to remove the tabs before saving as a .sh file (Line breaks: Unix (LF) and Encoding: Unicode (UTF-8)) in the same directory as all the files used in the script.
I've tried using chmod +x FILENAME.sh and chmod u+x FILENAME.sh to try to make sure it is executable, but to no avail. Even if I cut the script down to just a single grep line (with the #! /bin/bash first line) I can't get it to work. The process only takes a moment when I type it directly into the command line, as none of these files are larger than 160KB and some are significantly smaller.
This is what I type in and what I get when I try to run the file (HW is the correct directory):
localhost:HW Mirel$ MigrateNshell.sh
-bash: MigrateNshell.sh: command not found
I've been at this impasse for two days now, so any input would be greatly appreciated! Thanks!!
For security reasons, the shell will not search the current directory (by default) for an executable. You have to be specific, and tell bash that your script is in the current directory (.):
$ ./MigrateNshell.sh
Change the first line to the following, as pointed out by Marc B:
#!/bin/bash
Then mark the script as executable and execute it from the command line
chmod +x MigrateNshell.sh
./MigrateNshell.sh
or simply execute bash from the command line passing in your script as a parameter
/bin/bash MigrateNshell.sh
Make sure you are not using PATH as a variable name in your script, which would override the existing PATH environment variable.
Also try running dos2unix on the shell script, because it sometimes has Windows line endings that the shell does not recognize.
$ dos2unix MigrateNshell.sh
This helps sometimes.
#! /bin/bash
^---
remove the indicated space. The shebang should be
#!/bin/bash
Unix has a variable called PATH that is a list of directories in which to find commands.
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Users/david/bin
If I type a command foo at the command line, my shell will first see if there's an executable command /usr/local/bin/foo. If there is, it will execute /usr/local/bin/foo. If not, it will see if there's an executable command /usr/bin/foo and if not there, it will look to see if /bin/foo exists, etc. until it gets to /Users/david/bin/foo.
If it can't find a command foo in any of those directories, it tells me command not found.
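A quick way to check whether the shell can find a given command anywhere in $PATH is command -v; for the script from the question:

$ command -v MigrateNshell.sh || echo "MigrateNshell.sh: not found in PATH"
MigrateNshell.sh: not found in PATH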
There are several ways I can handle this issue:
Use the command bash foo, since foo is a shell script.
Include the directory name when you execute the command, like /Users/david/foo or $PWD/foo or just plain ./foo.
Change your $PATH variable to add the directory that contains your commands to the PATH.
You can modify $HOME/.bash_profile or $HOME/.profile if .bash_profile doesn't exist. I did that to add in /usr/local/bin, which I placed first in my path. This way, I can override the standard commands that are in the OS. For example, I have Ant 1.9.1, but the Mac came with Ant 1.8.4. I put my ant command in /usr/local/bin, so my version of ant will execute first. I also added $HOME/bin to the end of the PATH for my own commands. If I had a file like the one you want to execute, I'd place it in $HOME/bin to execute it.
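For example, lines like these in $HOME/.bash_profile give that ordering (the directories are illustrative):

# Prefer locally installed tools over the OS versions:
export PATH="/usr/local/bin:$PATH"
# Personal scripts go last:
export PATH="$PATH:$HOME/bin"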
Try chmod u+x MigrateNshell.sh
There have been a few good comments about adding the shebang line to the beginning of the script. I'd like to add a recommendation to use the env command as well, for additional portability.
While #!/bin/bash may be the correct location on your system, that's not universal. Additionally, that may not be the user's preferred bash. #!/usr/bin/env bash will select the first bash found in the path.
Also make sure /bin/bash is the proper location for bash. If you took that line from an example somewhere, it may not match your particular server, and if you are specifying an invalid location for bash you're going to have a problem.
Add the lines below to your .profile:
PATH=$PATH:$HOME/bin:$Dir_where_script_exists
export PATH
Now your script should work without ./
I'm new to shell scripting too, but I had this same issue. Make sure at the end of your script you have a blank line. Otherwise it won't work.
First:
chmod 777 ./MigrateNshell.sh
Then:
./MigrateNshell.sh
Or, add your program to a directory listed in your $PATH variable.
That will then allow you to call your program without ./
Good day,
I am writing a relatively simple BASH script that performs an svn up command, captures the console output, then does some post-processing on the text.
For example:
#!/bin/bash
# A script to alter SVN logs a bit
# Update and get output
echo "Waiting for update command to complete..."
TEST_TEXT=$(svn up --set-depth infinity)
echo "Done"
# Count number of lines in output and report it
NUM_LINES=$(echo "$TEST_TEXT" | grep -c '.*')
echo "Number of lines in output log: $NUM_LINES"
# Print out only lines containing Makefile
echo "$TEST_TEXT" | grep Makefile
This works as expected (ie: as commented in the code above), but I am concerned about what would happen if I ran this on a very large repository. Is there a limit on the maximum buffer size BASH can use to hold the output of a console command?
I have looked for similar questions, but nothing quite like what I'm searching for. I've read up on how certain scripts need to use the xargs in cases of large intermediate buffers, and I'm wondering if something similar applies here with respect to capturing console output.
eg:
# Might fail if we have a LOT of results
rm $(find . -iname '*.cpp')
# Shouldn't fail, regardless of number of results
find . -iname '*.cpp' | xargs rm
Thank you.
Using
var=$(hexdump /dev/urandom | tee out)
bash didn't complain; I killed it at a bit over 1G and 23.5M lines. You don't need to worry as long as your output fits in your system's memory.
I see no reason not to use a temporary file here.
tmp_file=$(mktemp XXXXX)
svn up --set-depth=infinity > "$tmp_file"
echo "Done"
# Count number of lines in output and report it
NUM_LINES=$(wc -l < "$tmp_file")
echo "Number of lines in output log: $NUM_LINES"
# Print out only lines containing Makefile
grep Makefile "$tmp_file"
rm "$tmp_file"
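If there is any chance the script exits before reaching the final rm (for example, if svn fails), a trap keeps the temporary file from being left behind; a sketch:

#!/bin/bash
# Variant with automatic cleanup: the temp file is removed even on early exit.
tmp_file=$(mktemp) || exit 1
trap 'rm -f "$tmp_file"' EXIT

echo "Waiting for update command to complete..."
svn up --set-depth=infinity > "$tmp_file"
echo "Done"

# Count number of lines in output and report it
echo "Number of lines in output log: $(wc -l < "$tmp_file")"

# Print out only lines containing Makefile
grep Makefile "$tmp_file"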