I am writing a POSIX-compliant shell script that will, amongst other things, clone a git repository and then execute a script (that was cloned along with the repository) inside the repository.
For example:
git clone git@github.com:torvalds/linux.git
cd linux
./Kconfig
The idea would be that people would use it for good, not evil, but you know.... So really I would like to stop people from putting a line like:
rm -rf /
Inside the script.
Or perhaps something slightly less evil like:
rm -rf ../../
Is it possible for me to somehow change the permissions of the script (after the clone) so that it is only able to modify things inside the cloned repository?
Basically the answer to your question is the chroot command, which allows you to lock processes into a directory as if it were the root directory. chroot requires root privileges to set up, but there are alternative implementations such as schroot, fakechroot, or proot that don't. Because all file system access (including reads) is restricted, you will need to make anything the script needs to function available inside the chrooted environment. How to do that conveniently depends on your distribution.
That doesn't necessarily mean it is perfectly secure, because it provides only file system isolation.
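A minimal sketch of the idea using proot, which needs no root privileges. The repository URL, the script name, and the bind mounts are illustrative; in practice you would bind in whatever the script actually needs:
$ git clone https://github.com/example/repo.git
$ proot -r ./repo -b /bin -b /lib -b /usr /bin/sh /setup.sh
Here -r makes the cloned repository the root of the guest file system, and each -b makes a host directory visible inside it so the script can find a shell and its libraries. Note that every bind mount also widens what the script can touch, so bind as little as possible.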
Related
I have a Python project that is compiled using pyinstaller. The build usually chains a bunch of operations with && to not only compile, but perform several other things as well. In short, one big command.
I can use an alias with Git Bash to reduce it to a single word, fine. But I want to know if there is some way to distribute a bash file containing that alias with my project.
I mean having something like alias.sh in my repo that contains the aliases I want, existing only inside that repo, so that whenever I open a terminal inside that repo, those aliases are already present. And if someone forks or clones my project, they get those aliases too, thanks to that specific file.
No, it's not possible.
But you can create a script file alias.sh to set up the alias and instruct users of your repo via the README to source this file once in their terminal session when they need this alias: . ./alias.sh
The file alias.sh might contain:
#!/bin/sh
alias youralias='your | long | command'
After the file was sourced with . ./alias.sh, you can simply run youralias as a standalone command.
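For the pyinstaller use case in the question, the file might look something like this (the command chain is illustrative, not the asker's actual build):
#!/bin/sh
# Source this file; executing it would define the alias in a throwaway subshell.
alias buildapp='pyinstaller --onefile main.py && cp dist/main release/'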
In git, when I commit bash files I want those files to be executable, like this command does:
git update-index --chmod=+x <file>
But I want it to be set globally by default.
Is there any way of doing that by default?
No, there isn't any way of doing that. On Unix systems, Git honors the executable bit of the files that are in the repository. So on such a system, by the fact that you've set them executable in the working tree, they'll automatically be tracked as executable in Git.
If you're on Windows, there is no executable bit, so you have to update them by hand. You can try using Windows Subsystem for Linux, which has an executable bit, so you can check out the repository there and work with it.
This feature doesn't exist as a default because (a) it would be hard to implement since it would require intuiting which files are shell scripts (which is nontrivial) and (b) there are many shell scripts which wouldn't be safe to run by themselves. If you know that this is appropriate for your project, you can write a shell script to set all of your shell scripts executable by default.
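A minimal sketch of such a helper, assuming your shell scripts can be identified by a .sh suffix:
#!/bin/sh
# Mark every tracked *.sh file as executable in the Git index.
git ls-files '*.sh' | while IFS= read -r f; do
    git update-index --chmod=+x "$f"
done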
I feel like I'm missing something very basic so apologies if this question is obtuse. I've been struggling with this problem for as long as I've been using the bash shell.
Say I have a structure like this:
├──bin
│   └──command (executable)
This will execute:
$ bin/command
then I symlink bin/command to the project root
$ ln -s bin/command c
like so
├──c (symlink to bin/command)
├──bin
│   └──command (executable)
I can't do the following (errors with -bash: c: command not found)
$ c
I must instead do:
$ ./c
What's going on here? Is it possible to execute a command from the current directory without preceding it with ./ and without using a system-wide alias? It would be very convenient for distributed executables and utility scripts to give them one-letter, folder-specific shortcuts on a per-project basis.
It's not a matter of bash not allowing execution from the current directory, but rather, you haven't added the current directory to your list of directories to execute from.
$ export PATH=".:$PATH"
$ c
$
This can be a security risk, however, because if the directory contains files which you don't trust or know where they came from, a file in the current directory could be confused with a system command.
For example, say the current directory is called "foo" and your colleague asks you to go into "foo" and set the permissions of "bar" to 755. As root, you run "chmod 755 bar".
You assume chmod really is chmod, but if there is a file named chmod in the current directory and your colleague put it there, chmod is really a program he wrote and you are running it as root. Perhaps "chmod" resets the root password on the box or does something else dangerous.
Therefore, the standard is to limit command executions which don't specify a directory to a set of explicitly trusted directories.
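One way to see which file would actually run (this assumes a rogue chmod has been planted in the current directory, purely for illustration):
$ export PATH=".:$PATH"
$ command -v chmod
./chmod
command -v reports the first match along PATH, so with . at the front, the local file wins over /bin/chmod.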
Beware that the accepted answer introduces a serious vulnerability!
If you add the current directory to your PATH, do not put it at the beginning; that would be a very risky setting.
There are still possible vulnerabilities when the current directory is at the end, but far fewer, so this is what I would suggest:
PATH="$PATH":.
Here, the current directory is only searched after every directory already present in the PATH has been explored, so the risk of an existing command being shadowed by a hostile one is gone. There is still a risk of an uninstalled command or a typo being exploited, but it is much lower. Just make sure the dot is always at the end of the PATH when you add new directories to it.
You could add . to your PATH. (See kamituel's answer for details)
Also, there is ~/.local/bin for user-specific binaries on many distros.
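If you go the ~/.local/bin route, a per-project shortcut could be a symlink placed there (the name c and the project layout are just this question's example):
$ mkdir -p ~/.local/bin
$ ln -s "$PWD/bin/command" ~/.local/bin/c
Note this makes c available everywhere, not just inside the project, so it is a compromise rather than a true per-directory alias.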
What you can do is add the current dir (.) to the $PATH:
export PATH=.:$PATH
But this can pose a security issue, so be aware of that. See this Server Fault answer on why it's not such a good idea, especially for the root account.
I've always mentally regarded the current directory as something for users, not scripts, since it is dependent on the user's location and can be different each time the script is executed.
So when I came across the Java jar utility's -C option I was a little puzzled.
For those who don't know the -C option is used before specifying a file/folder to include in a jar. Since the path to the file/folder is replicated in the jar, the -C option changes directories before including the file:
in other words:
jar -C flower lily.class
will make a jar containing the lily.class file, whereas:
jar flower/lily.class
will make a flower folder in the jar which contains lily.class
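For reference, a real invocation would also carry the creation flags and an archive name, something like this (the archive name is illustrative):
$ jar cf flowers.jar -C flower lily.class
$ jar cf flowers.jar flower/lily.class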
For a jar-ing script I'm making I want to use Bourne wildcards (folder/*), but that would make using -C impossible, since it only applies to the next immediate argument.
So the only way to use wild-cards is run from the current directory; but I still feel uneasy towards changing and using the current directory in a script.
Is there any downside to using the current directory in scripts? Is it frowned upon for some reason perhaps?
I don't think there's anything inherently wrong with changing the current directory from a shell script. Certainly it won't cause anything bad to happen, if taken by itself.
In fact, I have a standard script that I use for starting up a Java-based server, and the very first line is:
cd "$(dirname "$0")"
This ensures that the rest of the commands in the script are executed in the directory that contains the script file itself (useful when a single machine is hosting multiple server instances), regardless of where the shell script was actually invoked from. Without changing the current directory in the script, it would only work correctly if the user remembered to manually cd into the corresponding directory before running the script.
In this case, performing the cd operation from within the script removes a manual step from the server startup/shutdown process, and makes things slightly less error-prone as a result.
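For example (the install path is hypothetical):
$ /opt/server1/bin/start.sh
# every command inside the script now runs relative to /opt/server1/bin,
# no matter which directory the user launched it from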
So as with most things, there are legitimate uses for this sort of thing. And I'm sure there are some questionable ones as well. It really depends upon what's most appropriate for your specific use case. Which is something I can't really comment on... I always just let maven build my JARs for me.
I'm sharing a git repository with a colleague, and because git does not propagate the full panoply of Unix file permissions, we have a "hook" that runs on update which sets the 'other' permissions as they need to be set. The problem? The hook uses chmod, and it turns out that when my colleague commits a file, he owns it, so I can't run chmod on it, and vice versa. The directories are all group writable, sticky, so I believe that either of us has the right to remove any file and replace it with one of the same name, same contents, but different ownership. Presumably then we could chmod it. But this seems like an awfully big hammer, and I'm a bit leery of screwing it up. So, two questions:
Can anybody think of another way to do it?
If not, what's the best design for a bulletproof shell script that implements "make this file belong to me"? No cross-filesystem moves, etc etc...
For those who may not have realized, write permission does not confer permission to chmod:
% ls -l value.c
-rw-rw---- 1 agallant ta105 133 Feb 10 13:37 value.c
% [ -w value.c ] && echo writeable
writeable
% chmod o+r value.c
chmod: changing permissions of `value.c': Operation not permitted
We are both in the ta105 group.
Notes:
We're using git not only to coordinate changes but to publish the repo as a course web site. Publishing the web site is the primary purpose of the repo. The permissions script runs at every update using a git hook, and it ensures that students do not have permission to read solutions that have not yet been unveiled.
Please do not suggest that I have the wrong umask. Not all files in the repo should have the same permissions, and whatever umask is chosen, permissions on some files will need to be changed. Not to mention that it would be discourteous for me to impose my umask preferences on my colleagues.
UPDATE: I've just learned that in our environment, root is quashed to nobody on all machines we have access to, so that a solution which relies on root privileges won't work.
There is at least one Unix in which I've seen a way to give someone chmod and chown permissions on all files owned by a particular group. This is sometimes called "group superuser" or something similar.
The only Unix I'm positive I've seen this on was the version of Unix that the Encore Multimax ran.
I've searched a bit, and while I remember a few vague references to an ability of this sort in Linux, I have been unable to find them. So this may not be an option. Luckily, it can be simulated, albeit the simulation is a bit dangerous.
The way to simulate this is to make a very specific suid program that will do the chmod as root after checking that you are a member of the same group as owns the file, and your username is listed as having that permission in a special /etc/chmod_group file that must be owned by root and readable and writeable only by root.
The most straightforward way to do this is to make your partner and you members of a new group (let's say "devel"), and have that as the group of the file. That way it can be owned by either of you, and as long as the group is right, you can both work with it.
If that doesn't work for you, sudo can be configured such that only those two users can run a chmod command on files in that specific directory as root with no password.
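A sketch of such a sudoers entry (the usernames and repository path are purely illustrative; always edit with visudo):
# /etc/sudoers.d/repo-chmod
alice, bob ALL = (root) NOPASSWD: /bin/chmod * /srv/course-repo/*
Be warned that sudoers argument wildcards are notoriously easy to get wrong (the * for the mode can swallow extra arguments), so treat this as a starting point rather than a hardened policy.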
If you set your umask correctly, the files could be created with the correct permissions in the first place:
$ umask 0022
$ touch foo
$ ls -l foo
-rw-r--r-- 1 sarnold sarnold 0 2011-02-20 21:17 foo
$ rm foo
$ umask 0002
$ touch foo
$ ls -l foo
-rw-rw-r-- 1 sarnold sarnold 0 2011-02-20 21:17 foo
I'm taking a step back here. Let me know if I'm violating some restriction of your system that I haven't read about.
From your question, I assume you're trying to share a git repository using file:// URLs and relying on the UNIX filesystem permissions to take care of authorisation etc. Why don't you consider another way to share your repositories that doesn't involve this hassle?
I can think of two ways.
You can create a bare repository on either of your machines, add that as a remote to your working repos, and use it to collaborate. Serving it can be done using the built-in git daemon command. Details are here. This will, however, not give you any access control.
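A sketch of that setup (the host name and paths are illustrative):
$ git init --bare /srv/git/course.git
$ git daemon --base-path=/srv/git --export-all --enable=receive-pack
# on each workstation:
$ git remote add shared git://server/course.git
--enable=receive-pack allows anonymous pushes, which is exactly why this gives no access control; only use it on a trusted network.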
You can install gitosis locally and use that to serve your repository. This allows a simple access control system so that you can restrict/allow certain users.
There was a related question that came up a while ago that might be relevant. git daemon worked for him - Administrating a git repo without root permissions
I also found something on server fault that might be relevant to your problem - https://serverfault.com/questions/21126/git-daemon-and-access-control-for-multiple-repos
Probably not the most elegant way, but it seems to work
$ umask 0002
$ mv value.c value.c.tmp
$ cat value.c.tmp > value.c   # value.c is recreated, owned by you, group-writable per the umask
$ rm value.c.tmp
one could argue that it could be made bulletproof, but then someone brings along an RPG...
If both of you need to chmod, I cannot think of another way. If it is OK that YOU can chmod but not the other guy, you can chmod 6770 . or chmod g+s,u+s . on the directory (i.e. set the SUID and SGID bits) so the one that owns the directory will always be the owner of the files. Unfortunately some file systems (if not most), namely EXT2/3/4, ignore the SUID bit on directories.
Of course, setting the umask to 0002 will avoid the problem in the first place by making the chmod unnecessary.
Assuming that your publishing hook actually deploys files, rather than just setting permissions in the working copy, you could deploy to a temporary location and then use rsync to ensure that the file contents and permissions are correct.
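A sketch of that rsync step (the staging and publishing paths are illustrative):
$ rsync -a --delete --chmod=D755,F644 /tmp/deploy-stage/ /srv/www/course/
--chmod rewrites the permissions of everything as it is copied (D for directories, F for files), so it no longer matters who owns the files in the working copy.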
Slightly nicer, but requiring some infrastructure which I'm guessing isn't in place, would be to ensure that the deploy script only runs under one user. You could do this using sudo, if your sysadmins allow, or by setting up a git server service, like gerrit, or even by having a cron job run every five minutes which checks for updates and deploys if necessary.
This might work:
# Make a copy that we own with the desired permissions,
# then swap it into place if the contents match.
touch "$name.tmp"
chmod 660 "$name.tmp"
cp "$name" "$name.tmp"
if cmp -s "$name" "$name.tmp"; then
    rm "$name" && \
    cp "$name.tmp" "$name" && \
    rm "$name.tmp"
fi
It's just a variation of your original idea.
Ok, a mixture of things that build on previous answers:
you can set the umask of a folder if you mount it via fstab. If you could get people to agree to work on that mount, you could enforce g+w
If you set the group-id bit of that folder (g+s), all files created in it will belong to the group the folder belongs to, so the group ownership of each file propagates (see the sketch below)
Is that doable? Of course, enforcing that mount point is no easy task. Any better ideas around that one, anyone?
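A sketch of the setgid part, using the ta105 group from the question (the repository path is illustrative):
$ chgrp -R ta105 /srv/course-repo
$ find /srv/course-repo -type d -exec chmod g+s {} +
$ umask 0002
With every directory setgid, new files inherit the ta105 group, and the 0002 umask leaves them group-writable, which together make the chmod hook largely unnecessary.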