I'm using Bash on Ubuntu on Windows 10. The SVN directory structure is as follows:
/student/2017/s1/EM/otherdirs
I recently restructured a committed directory (called EM) locally, and when I was trying to commit the changes to svn I made a mistake somewhere (lots of attempts at deleting/adding the directory), and now the parent directory of EM (s1) has no write permissions.
Running ls -l in the 2017 dir gives:
dr-xr-xr-x 2 root root 0 May 6 11:01 s1
drwxrwxrwx 2 root root 0 May 7 13:57 s2
I have tried running the chmod and chown commands on s1 as suggested by other questions on SE but they are not working.
This all happened yesterday and I can't remember exactly the error svn gave me but it had something to do with the WC db (not sure what that is). How can I make the dir s1 writable?
EDIT: Solved. The Bash on Ubuntu on Windows terminal did not work with the accepted answer, so I tried running Cygwin as administrator instead and was able to change the permissions.
chmod a+w s1 will give all users write access. Since it's a directory, you may need to apply write access to every file inside it as well. The following command will do it:
chmod -R a+w s1
To explain the chmod command in more depth: there are 9 permission bits, combining 3 types of permissions (read, write, execute) with 3 classes (owner, group, other). The owner is quite simply the owner of the file. group applies to all users who are members of the file's group; typically this is not used too much. other applies to everyone else.
The way chmod is usually used is like so:
chmod [-R] [u][g][o][a][+|-][r][w][x]
Use -R to apply the change recursively to everything inside a directory. The first part looks like this:
chmod [-R] [u][g][o][a]
u specifies that the changes will apply to the owner. g includes the group. o includes all others. a includes everyone (thus making it equivalent to ugo).
The second part looks like this:
chmod [-R] <target>[+|-][r][w][x]
+ adds the permissions to the target, and - removes them. r for read, w for write, and x for execute.
This means that for your case, depending on whether you want only yourself or everyone to have write access, use something along the lines of:
chmod [-R] a+w s1
or replacing a with the appropriate target.
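For example, here is a minimal illustration of the symbolic syntax using the s1 directory from the question (the exact modes you end up with depend on what you actually want):
chmod u+w s1        # write access for the owner only
chmod -R a+w s1     # write access for everyone, applied recursively
chmod -R o-w s1     # take write access away from "others" again
ls -ld s1           # check the resulting mode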
chmod and chown should work; there is no reason why they can't.
Try running them with sudo, because the owner may be root and your user may not have root permissions.
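For example, assuming your account is allowed to use sudo (s1 is the directory from the question):
sudo chmod -R a+w s1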
Is there a way to chmod 777 the contents of a tarfile upon creation (or shortly thereafter), before distributing it? The write permissions of the directory that's being tar'd are unknown at the time of tar'ing (but are typically 555). I would like the unrolled dir to be world-writable without the users who are unrolling the tar having to remember to run chmod -R 777 <untarred dir> before proceeding.
The clumsy way would be to make a copy of the directory and then chmod -R 777 <copydir>, but I was wondering if there is a better solution.
I'm on a Solaris 10 machine.
BACKGROUND:
The root directory is in our ClearCase vob with specific file permissions, recursively. A tarfile is created and distributed to multiple "customers" within our org. Most only need the read/execute permissions (and specifically DON'T want them writable), but one group in particular needs their copy to be recursively writable since they may edit these files, or even restore back to a "fresh" copy (i.e., in their original state as I gave them).
This group is somewhat technically challenged. Even though they have instructions on the "how-to's" of the tarfile, they always seem to forget (or get wrong) the setting of the files to be recursively writable once untarred. This leads to phone calls to me to diagnose a variety of problems where the root cause is that they forgot to do (or did incorrectly) the chmod'ing of the unrolled directory.
And before you ask, yes, I wrote them a script to untar/chmod (specific just for them), but... oh never mind.
So, I figured I'd create a separate, recursively-writable version of the tar to distribute just to them. As I said originally, I could always create a copy of the dir, make the copy recursively writable and then tar up the copy dir, but the dir is fairly large, and disk space is sometimes near full (it can vary greatly), so making a copy of the dir will not be feasible 100% of the time.
With GNU tar, use the --mode option when creating the archive, e.g.:
tar cf archive.tar --mode='a+rwX' *
But note that when the archive is extracted, the umask will be applied by default. So unless the user's umask is 000, the permissions will be modified at that point. However, the umask can be ignored by using the -p (--preserve-permissions) option, e.g.:
tar xpf archive.tar
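Putting both halves together, a minimal sketch assuming GNU tar, with a placeholder source directory srcdir, archive name dist.tar, and target directory:
# create the archive with everything readable and writable; the capital X
# adds execute only to directories and files that are already executable
tar cf dist.tar --mode='a+rwX' -C srcdir .
# extract while ignoring the local umask, so the recorded modes survive
tar xpf dist.tar -C /some/target/dir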
You can easily change the permissions on the files prior to your tar command, although I generally recommend that people NEVER use 777 for anything except /tmp on a unix system; it's more productive to use 755 or, in the worst case, 775 for directories. That way you're not letting the world write to your directories, which is generally advisable.
Most unix users don't like to set permissions recursively because it sets the execute bit on files that should not be executable (configuration files, for instance). To avoid this, chmod has a symbolic mode, where a capital X applies execute only to directories and to files that already have it. Reading the chmod man page should provide the details, but you could try this:
cd $targetdir; chmod -R u+rwX,a+rX .; tar zcvf $destTarFile .
Where your $targetdir is the directory you are tarring up and $destTarFile is the name of the tar file you're creating.
When you untar that tar file, tar attempts to retain the permissions. Certain rules govern that process, of course: the uid and gid of the owner will only be retained if root is doing the untarring; otherwise they are set to the effective uid and gid of the current process.
I have one file with multiple filenames in it (with their absolute paths). I need to copy those files from the server into my home directory, but the owners of the files are different. I can use "dzdo su - username" to switch to a user. How can I achieve this?
It may be simpler to just copy the files the way you already are and then run `chown -R' on them to change ownership:
cp -r /path/to/specialdirectory /home/someuser/
chown someuser -R /home/someuser/specialdirectory
Or I suppose you could also chown them one at a time to the correct user as you copy them in.
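Since the filenames come from a list file, a minimal sketch of the copy step might look like this (filelist.txt and someuser are placeholders; prefix the commands with dzdo/sudo if your own account can't read the source files):
# copy every file named in filelist.txt into the target home directory,
# then hand each copy over to the target user
while IFS= read -r f; do
    cp "$f" /home/someuser/ && chown someuser "/home/someuser/$(basename "$f")"
done < filelist.txt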
I'm trying to write a script that will let me add to an existing directory structure and copy a bunch of files into various places within it. However, using mkdir ... and cp ... commands alone won't work, since I do not have permission to do so. I understand that this can be changed manually in the 'Get Info' window, but this script will be run by others and its whole point is to save time and hassle.
Is there a way of adding to this script to give me permission to copy files to BASEDIR/SUBDIRS?
A bit more detail on what I'm doing:
I want to add to the directory BASEDIR with a bunch of SUBDIRS then copy files into these subdirectories. The problem is that I am receiving these 'permission denied' errors right after the mkdir BASEDIR/SUBDIR1/SUBDIR2 command.
Thanks
The command
sudo chmod -R ugo=rwx BASEDIR/
gives all users full read/write/execute permissions on BASEDIR and all of its subdirectories.
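As a rough sketch of how that could fit into the script (BASEDIR and the subdirectory names are taken from the question, the copied file is a placeholder, and this assumes whoever runs the script may use sudo):
sudo chmod -R ugo=rwx BASEDIR/        # open up the existing tree first
mkdir -p BASEDIR/SUBDIR1/SUBDIR2      # the mkdir now succeeds without elevated rights
cp somefile BASEDIR/SUBDIR1/SUBDIR2/  # ...and so does the copy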
I'm sharing a git repository with a colleague, and because git does not propagate the full panoply of Unix file permissions, we have a "hook" that runs on update which sets the 'other' permissions as they need to be set. The problem? The hook uses chmod, and it turns out that when my colleague commits a file, he owns it, so I can't run chmod on it, and vice versa.
The directories are all group writable, sticky, so I believe that either of us has the right to remove any file and replace it with one of the same name, same contents, but different ownership. Presumably then we could chmod it. But this seems like an awfully big hammer, and I'm a bit leery of screwing it up. So, two questions:
Can anybody think of another way to do it?
If not, what's the best design for a bulletproof shell script that implements "make this file belong to me"? No cross-filesystem moves, etc etc...
For those who may not have realized, write permission does not confer permission to chmod:
% ls -l value.c
-rw-rw---- 1 agallant ta105 133 Feb 10 13:37 value.c
% [ -w value.c ] && echo writeable
writeable
% chmod o+r value.c
chmod: changing permissions of `value.c': Operation not permitted
We are both in the ta105 group.
Notes:
We're using git not only to coordinate changes but to publish the repo as a course web site. Publishing the web site is the primary purpose of the repo. The permissions script runs at every update using a git hook, and it ensures that students do not have permission to read solutions that have not yet been unveiled.
Please do not suggest that I have the wrong umask. Not all files in the repo should have the same permissions, and whatever umask is chosen, permissions on some files will need to be changed. Not to mention that it would be discourteous for me to impose my umask preferences on my colleagues.
UPDATE: I've just learned that in our environment, root is squashed to nobody on all the machines we have access to, so a solution which relies on root privileges won't work.
There is at least one Unix in which I've seen a way to give someone chmod and chown permissions on all files owned by a particular group. This is sometimes called "group superuser" or something similar.
The only Unix I'm positive I've seen this on was the version of Unix that the Encore Multimax ran.
I've searched a bit, and while I remember a few vague references to an ability of this sort in Linux, I have been unable to find them. So this may not be an option. Luckily, it can be simulated, albeit the simulation is a bit dangerous.
The way to simulate this is to make a very specific suid program that will do the chmod as root, after checking that you are a member of the group that owns the file and that your username is listed as having that permission in a special /etc/chmod_group file, which must be owned by root and readable and writable only by root.
The most straightforward way to do this is to make your partner and you members of a new group (let's say "devel"), and have that as the group of the file. That way it can be owned by either of you, and as long as the group is right, you can both work with it.
If that doesn't work for you, sudo can be configured such that only those two users can run a chmod command on files in that specific directory as root, with no password.
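A minimal sketch of that sudo approach (the usernames and repository path are placeholders, and sudoers wildcards are fairly permissive, so a real rule would likely need to be tighter):
# hypothetical /etc/sudoers entry, added with visudo:
#   alice, bob ALL = (root) NOPASSWD: /bin/chmod * /srv/courserepo/*
# with that in place, either user can run, for example:
sudo chmod o+r /srv/courserepo/value.c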
If you set your umask correctly, the files could be created with the correct permissions in the first place:
$ umask 0022
$ touch foo
$ ls -l foo
-rw-r--r-- 1 sarnold sarnold 0 2011-02-20 21:17 foo
$ rm foo
$ umask 0002
$ touch foo
$ ls -l foo
-rw-rw-r-- 1 sarnold sarnold 0 2011-02-20 21:17 foo
I'm taking a step back. Let me know if I'm violating some restriction of your system that I haven't read about.
From your question, I assume you're trying to share a git repository using file:// URLs and relying on the UNIX filesystem permissions to take care of authorisation etc. Why don't you consider another way to share your repositories that doesn't involve this hassle?
I can think of two ways.
You can create a bare repository on either of your machines, add that as a remote to your working repos and use it to collaborate. Serving it can be done using the built-in git daemon command. Details are here. This will, however, not give you any access control (a minimal sketch follows after these options).
You can install gitosis locally and use that to serve your repository. This allows a simple access control system so that you can restrict/allow certain users.
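A minimal sketch of the first option, with placeholder host and paths (note that --enable=receive-pack permits anonymous pushes, which is exactly the "no access control" caveat above):
# on the machine hosting the shared repository
git init --bare /srv/git/course.git
git daemon --base-path=/srv/git --export-all --enable=receive-pack &

# on each contributor's machine
git remote add shared git://host.example.com/course.git
git push shared master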
There was a related question that came up a while ago that might be relevant. git daemon worked for him - Administrating a git repo without root permissions
I also found something on server fault that might be relevant to your problem - https://serverfault.com/questions/21126/git-daemon-and-access-control-for-multiple-repos
Probably not the most elegant way, but it seems to work
$ umask 0002
$ mv value.c value.c.tmp
$ cat value.c.tmp > value.c
$ rm value.c.tmp
one could argue that it could be made bulletproof, but then someone brings along an RPG...
If both of you need to chmod, I cannot think of another way. If it is OK that YOU can chmod but not the other guy, you can chmod 6770 . or chmod g+s,u+s . on the directory (i.e. set the SUID and SGID bits), so the one who owns the directory will always be the owner of the files. Unfortunately some filesystems (if not most), notably ext2/3/4, ignore the SUID bit on directories.
Of course, setting the umask to 0002 will solve the problem by making the chmod unnecessary in the first place.
Assuming that your publishing hook actually deploys files, rather than just setting permissions in the working copy, you could deploy to a temporary location and then use rsync to ensure that the file contents and permissions are correct.
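A rough sketch of that rsync step, with placeholder paths and illustrative --chmod values:
# stage the deploy somewhere private, then push it into the web root,
# normalising directory and file modes as part of the transfer
rsync -a --delete --chmod=D755,F644 /tmp/deploy-stage/ /var/www/course/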
Slightly nicer, but requiring some infrastructure which I'm guessing isn't in place, would be to ensure that the deploy script only runs as one user. You could do this using sudo, if your sysadmins allow it, or by setting up a git server service like gerrit, or even by having a cron job run every five minutes that checks for updates and deploys if necessary.
This might work:
# create the temp file first, so it starts out with the mode we want
touch "$name.tmp"
chmod 660 "$name.tmp"
# copy the contents in (cp into an existing file keeps that file's mode)
cp "$name" "$name.tmp"
# only if the copy matches the original, replace it with the file we now own
if cmp "$name" "$name.tmp" 2>/dev/null; then
    rm "$name" && \
    cp "$name.tmp" "$name" && \
    rm "$name.tmp"
fi
It's just a variation of your original idea
Ok, a mixture of things that build on previous answers:
You can set a umask for a mount point in fstab (on filesystems that support the umask mount option). If you could agree with people to work on that mount, you could enforce g+w.
If you set the group-id bit on that folder (g+s), all files created inside it will belong to the group the folder belongs to, so the group ownership propagates (a small sketch follows below).
Is that doable? Of course enforcing that mount point is no easy task. Any better ideas around that one anyone?
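A minimal sketch of the setgid part (the group name devel and the path are placeholders; the fstab umask option only exists on filesystems that support it, so it appears here only as a comment):
# optional fstab line, only for filesystems that honour a umask mount option:
#   /dev/sdb1  /srv/shared  vfat  umask=0002  0  0

# make the shared tree group-owned, group-writable, and setgid on directories
chgrp -R devel /srv/shared
chmod -R g+w /srv/shared
find /srv/shared -type d -exec chmod g+s {} +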