mktemp vs. umask 066 and touch? - bash

My bash script requires a temp file. Supposing filename conflicts are not an issue, can I say that mktemp is not as good as manually touching a temp file after umask 066?
My assumption is:
mktemp is an external utility; compared to manually touching a file, it still takes a little more in the way of resources.
I've read about the ln -s /etc/passwd attack, but that looks like a story from decades ago, when passwords were not yet shadowed.
Please correct me if my understanding is wrong.

Those two commands are not meant to do the same thing. mktemp creates a file in a flexible way and has features to make sure it uses a unique name. touch modifies the timestamp of a file (or creates it if it does not exist), but you supply the name.
If you want to create an empty file for which you already have a name, use touch; if you are going to write to that file right afterwards, you do not even need to create it first, just redirect to it.
But if you really need to make a temporary file and ensure you will not overwrite any other file, touch does nothing for you. It is "lighter", maybe, but useless in this case, and you need mktemp.
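To illustrate the difference (the paths here are only examples):

# create an empty file whose name you already know
touch /tmp/known-name
# no need to create it first if you are about to write to it:
echo "data" > /tmp/known-name
# a temporary file with a unique, unpredictable name:
tmpfile=$(mktemp)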

The mktemp command was written by Todd C. Miller of OpenBSD to prevent common vulnerabilities in shell scripts. In his own words:
Mktemp is a simple utility designed to make temporary file handling in
shell scripts be safe and simple. Traditionally, people writing
shell scripts have used constructs like:
TFILE=/tmp/foop.$$
which are trivial to attack. If such a script is run as root it may
be possible for an attacker on the local host to gain access to the
root login, corrupt or unlink system files, or do a variety of other
nasty things.
The basic problem is that most shells have no equivalent to open(2)'s
O_EXCL flag. While it is possible to avoid this using temporary
directories, I consider the use of mktemp(1) to be superior both in
terms of simplicity and robustness.
Shadow passwords do not help here. If the script is run as root and it writes to a temporary file in an insecure way, then an attacker could possibly exploit race conditions to modify /etc/passwd or /etc/shadow, or both!
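A minimal sketch of the difference in practice (the trap-based cleanup is a common convention, not part of mktemp itself):

# unsafe: predictable name, trivially attacked with a symlink
TFILE=/tmp/foop.$$
# safer: unique, unpredictable name, created with mode 0600
TFILE=$(mktemp) || exit 1
trap 'rm -f "$TFILE"' EXIT
echo "sensitive data" > "$TFILE"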

Related

Script allowing the modification of several values in a configuration file

Hello developer friends,
I am looking to create a shell script to modify several configuration files under several different paths.
For example: in /etc/nginx, create a .bck backup of the nginx.conf file, and in the .conf file replace the value "/etc/nginx/nginx-cloudflare.conf" with "/etc/nginx/nginx-cloudflare-2022.conf".
This manipulation would have to be done on several files and I would like to automate it as much as possible.
Do you have a script with an easy way to do it?
According to my research, it would require a loop with some conditions and the use of sed.
I don't really know how it works, so I'm turning to you.
I cannot comment yet due to reputation, but I was going to suggest exactly what you were thinking: create a bash .sh script (https://www.w3schools.io/terminal/bash-tutorials/), make it executable with chmod +x filename.sh so you can run it as ./filename.sh, and within it use sed (https://www.man7.org/linux/man-pages/man1/sed.1.html) in-place, with --in-place[=SUFFIX], which also creates backups of the edited files. The sed search-and-replace format is 's/search/replace/flags'.
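A minimal sketch of that idea, using the paths from the question (extend the file list as needed; since the pattern contains slashes, it is easier to use | as the sed delimiter):

for conf in /etc/nginx/nginx.conf; do
    sed --in-place=.bck \
        's|/etc/nginx/nginx-cloudflare\.conf|/etc/nginx/nginx-cloudflare-2022.conf|g' \
        "$conf"
done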

Protecting scripts from errant clobbering

I spent some time building this handy bash script that accepts input via stdin. I got the idea from the top answer to this question: Pipe input into a script
However, I did something really dumb. I typed the following into the terminal:
echo '{"test": 1}' > ./myscript.sh
I meant to pipe (|) it to my script instead of redirecting (>) the output of echo.
Up until this point in my life, I had never accidentally clobbered a file in this manner. I'm honestly surprised that it took me until today to make this mistake. :D
At any rate, now I've made myself paranoid that I'll do this again. Aside from marking the script as read-only or making backup copies of it, is there anything else I can do to protect myself? Is it bad practice in the first place to write a script that accepts input from stdin?
Yes, there is one thing you can do: check your scripts into a source-control repository (git, svn, etc).
Bash scripts are code, and any non-trivial code you write should be checked into source control (with changes committed regularly), so that when something like this happens, you can just restore the most recently committed version of the file and continue onwards.
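For example, with git (the script name is taken from the question; a sketch):

cd ~/scripts                     # wherever the script lives
git init
git add myscript.sh
git commit -m "working version"
# after an accidental clobber, restore the committed copy:
git checkout -- myscript.sh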
This is a very open-ended question, but I usually put scripts in a global bin folder (~/.bin or so). This lets me invoke them as myscript rather than path/to/myscript.sh, so if I accidentally used > instead of |, it would just create a file by that name in the current directory, which is virtually never ~/.bin.
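Setting that up looks roughly like this (the directory name is just the convention from the answer):

mkdir -p ~/.bin
mv myscript.sh ~/.bin/myscript
chmod +x ~/.bin/myscript
# make sure ~/.bin is on PATH, e.g. in ~/.bashrc:
export PATH="$HOME/.bin:$PATH"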

Prog Challenge - Find paths to files called from configuration files or scripts

I have no idea how to do that, so I come here for help :) Here is what I need. I need to parse some configuration files and bash/sh scripts on a Red Hat Linux system, and look for the paths to the files/commands/scripts they are meant to execute. The configuration files can have different syntaxes or use different languages.
Here are the files I have to look at:
Config scripts:
/etc/inittab
/var/spool/cron/root
/var/spool/cron/tabs/root
/etc/crontab
/etc/xinetd.conf
Files located under /etc/cron.d/* recursively
Bash / Sh scripts:
Files located under /etc/init.d/* or /etc/rc.d/* recursively. These folders contain only shell scripts, so maybe all the other files listed above need separate treatment.
Now here are the challenges I can think of:
The paths within the files may be absolute or relative;
The paths within the files may be at the beginning of a line or preceded by a character such as a space, colon or semicolon;
File paths given as arguments to commands/scripts must be ignored;
Paths to directories must be ignored;
Shell functions and built-in commands must be ignored.
Some examples (extracted from /etc/init.d/avahi-daemon):
if [ -s /etc/localtime ]; then
cp -fp /etc/localtime /etc/avahi/etc >/dev/null 2>&1
-> Only /bin/cp and /bin/[ must be returned for the snippet above (they are the only commands actually executed)
AVAHI_BIN=/usr/sbin/avahi-daemon
$AVAHI_BIN -r
-> /usr/sbin/avahi-daemon must be returned, but only because the variable is invoked afterwards.
Note that I do not have access to the actual filesystem, I just have a copy of the files to parse.
After writing this up, I realize how complicated it is and how unlikely a 100% working solution is... But if you like programming challenges :)
The good part is I can use any scripting language: bash/sh/grep/sed/awk, php, python, perl, ruby or a combination of these..
I tried to start writing up in PHP but I am struggling to get coherent results.
Thanks!
The language you use to implement this doesn't matter. What matters is that the problem is undecidable, because it is equivalent to the halting problem.
Just as we know that it is impossible to determine if a program will halt, it is impossible to know if a program will call another program. For example, you may think your script will invoke X then Z, but if X never returns, Z will never be invoked. Also, you may not notice that your script invokes Y, because the string Y may be determined dynamically and never actually appear in the program text.
There are other problems which may stymie you along the way, too, such as:
python -c 'import subprocess; subprocess.call("ls")'
Now you need not only a complete parser for Bash, but also one for Python. Not to mention solving the halting problem in Python.
In other words, what you want is not possible. To make it feasible you would have to significantly reduce the scope of the problem, e.g. "Find everything starting with /usr/bin or /bin that isn't in a comment". And it's unclear how useful that would be.
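A rough sketch of that reduced scope (the directory prefixes, the path-character class and the naive comment stripping are all assumptions to tune for your files):

# strip comments naively, then pull out absolute paths under the
# usual binary directories
find /path/to/copies -type f -print0 |
    xargs -0 sed 's/#.*//' |
    grep -Eo '(/usr)?/s?bin/[A-Za-z0-9._-]+' |
    sort -u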

Pre-filling a prompt in Bash

Writing a bash script, and I want to get user input. Awesome,
read -p "What directory should we save in? " -e FOLDER
Except that what I'd like to do, ideally, is have the user see something like:
What directory should we save in? /home/user/default/
with the cursor at the end of the line, and the ability to delete backwards or append or whatever. Essentially, pre-filling the user's input, but giving them the ability to edit it.
Readline obviously has the capability, but it appears not to be exposed in the read command. Any alternatives? I'd prefer not to have to use perl or the like.
The constraint I'm working under is that I'm writing a single shell script that would be nice to disseminate widely, so it should rely on as little pre-existing infrastructure as possible. rlwrap and read -i both work if their dependencies (rlwrap, and a bash version newer than whatever I have, respectively) are available. Both are good answers; choose whichever works for you.
$ read -p "What directory should we save in? " -i "/home/user/default/" -e FOLDER
What directory should we save in? /home/user/default/
That should work, right?
You can wrap the command in rlwrap, which provides instant readline capabilities: https://github.com/hanslub42/rlwrap
(rlwrap -P does what you want)
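For example, something along these lines (this relies on rlwrap's -S substitute-prompt, -P pre-given-input and -o one-shot options; a sketch, adjust to taste):

FOLDER=$(rlwrap -S "What directory should we save in? " \
                -P "/home/user/default/" -o cat)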
As far as a pure bash solution is concerned for the 3.2 line (which I am presuming you are using), I don't think it's possible.

Protect against accidental deletion

Today I first saw the potential of a partial accidental deletion of a colleague's home directory (two hours lost in a critical phase of a project).
It worried me enough to start thinking about the problem and a possible solution.
In his case a file named '~' somehow ended up in a test folder, which he later deleted with rm -rf... when rm arrived at that file, bash expanded it to his home folder (he managed to Ctrl-C almost in time).
A similar problem could happen if one has a file named '*'.
My first thought was to prevent the creation of files with "dangerous names", but that still would not solve the problem, as mv or other corner cases could lead to the risky situation as well.
My second thought was creating a listener (I don't know if that is even possible) or an alias for rm that checks the files it processes and, if it finds a dangerous one, skips it and prints a message.
Something similar to this:
take all non-option arguments (so as to get the files one wants to delete)
cycle over these items
check whether the current item equals a dangerous item (say, '~' or '*'); I don't know if this works: at this point, is the item already expanded or not?
if so, echo a message and do nothing to the file
proceed with the iteration
My third thought: has anyone already done or dealt with this? :]
There's actually pretty good justification for having critical files in your home directory checked into source control. As well as protecting against the situation you've just encountered, it's nice being able to version-control .bashrc, etc.
Since the shell expands the parameters before rm ever sees them, you can't really catch 'dangerous' names like that.
You could alias 'rm -rf' to 'rm -rfi' (interactive), but that can be pretty tedious if you actually mean 'rm -rf *'.
You could alias 'rm' to something that moves the arguments to $HOME/.trash, and have a separate command to empty the trash, but that might cause problems if you really mean to remove the files, because of disk quotas or similar.
Or, you could just keep proper backups or use a file system that allows "undeletion".
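A note on the trash idea: an alias cannot take arguments, so in practice it is written as a shell function (a sketch; the directory name is arbitrary):

trash() {
    mkdir -p "$HOME/.trash"
    mv -- "$@" "$HOME/.trash/"
}
alias rm='trash'    # bypass with \rm or /bin/rm when you really mean it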
Accidents do happen. You can only reduce their impact.
Both version control (regular check-ins) and backups are of vital importance here.
If I can't check in (because it does not work yet), I back up to a USB stick.
And as the deadline approaches, the backup frequency increases, because Murphy strikes at the most inappropriate moment.
One thing I do is always have a file called "-i" in my $HOME.
My other tip is to always use "./*" or find instead of a plain "*".
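The "-i" trick works because the expanded name is taken as an option: GNU rm accepts options anywhere on the command line, so an unquoted * that picks up the "-i" file switches rm into interactive mode. A sketch:

cd "$HOME"
touch -- -i    # a file literally named "-i"
# "rm -rf *" now expands to "rm -rf -i ..." and prompts per file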
The version control suggestion gets an upvote from me. I'd recommend that for everything, not just source.
Another thought is a shared drive on a server that's backed up and archived.
A third idea is buying everyone an individual external hard drive that lets them back up their local drive. This is a good thing to do because there are two kinds of hard drives: those that have failed and those that will in the future.
You could also create an alias for rm that runs through a simple script which escapes all characters, effectively stopping you from using wildcards. Then create another alias that runs the real rm without escaping. You would only use the second one if you are really sure. But then again, that's kind of the point of rm -rf.
Another option I personally like is to create an alias that redirects through a script and then passes everything on to rm. If the script finds any dangerous arguments, it prompts you Y/N whether to continue: N cancels the operation, Y continues on as normal.
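A sketch of such a wrapper (the name saferm and the "dangerous" list are invented for illustration; note that the shell expands an unquoted * before the script ever runs, so the literal '*' check only catches quoted cases):

#!/bin/bash
# saferm: prompt before passing suspicious arguments to rm
for arg in "$@"; do
    case $arg in
        '~' | '*' | '/' | "$HOME")
            read -rp "saferm: dangerous argument '$arg'; continue? [y/N] " ans
            [[ $ans == [Yy]* ]] || { echo "aborted" >&2; exit 1; }
            ;;
    esac
done
exec /bin/rm "$@"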
At one company where I worked, we had a cron job that ran every half hour and copied all the source code files from everyone's home directory to a backup directory structure elsewhere on the system, just using find.
This wouldn't prevent the actual deletion, but it did minimise the work lost on a number of occasions.
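Roughly like this as a crontab entry (the schedule, patterns and destination are invented; cp --parents is a GNU extension that recreates the source path under the target directory):

*/30 * * * * find /home -name '*.sh' -o -name '*.c' | xargs -I{} cp --parents {} /var/backups/homes/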
This is pretty odd behaviour really: why would bash expand twice?
Once * has expanded to
old~
this~
~
then no further substitution should happen!
I bravely tested this on my Mac, and it just deleted the file named ~, not my home directory.
Is it possible your colleague somehow wrote something that expanded it twice?
e.g.
ls | xargs rm -rf
You may disable file name generation (globbing):
set -f
Escaping special characters in file paths could be done with Bash builtins:
filepath='/abc*?~def'
filepath="$(printf "%q" "${filepath}")"    # quote glob and other special characters
filepath="${filepath//\~/\\~}"             # escape tildes as well
printf "%s\n" "${filepath}"
I use this in my ~/.bashrc:
alias rm="rm -i"
rm then prompts before deleting anything, and the alias can be circumvented either with the -f flag or by escaping, e.g.
\rm file
Degrades the problem yes; solves it no.
