Make symlinked file non-writable - macOS

I have this situation: a real folder called "git" is symlinked to .githooks.
Is there a way to make the symlinked files non-writable, without affecting the original files?
I don't want users to accidentally modify the source in the git folder from the .githooks folder.
I tried:
chmod -R .githooks/* 555
and
chmod -R .githooks/* 544
and I got this both times:
chmod: Invalid file mode: .githooks/pre-commit
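As an aside, the "Invalid file mode" error is just argument order: chmod expects the mode before the file operands. Reordered as below, the command is accepted, but note that chmod resolves symlinks, so this would change the permissions of the original files in the git folder, which is exactly what the question wants to avoid (BSD/macOS chmod does take -h to act on a symlink itself rather than its target, but that only helps when the individual entries are links):
# Mode comes first, then the file list; note this follows the
# symlink and therefore also makes the originals read-only
chmod -R 555 .githooks/*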

Unable to Change Permission for File on External Disc on Mac

Problem
I have a corrupt file on a network NAS:
/bad/bad_file.file
which I want to remove but cannot. The file name is extremely long, so that may be part of the problem.
Finder
The file has the permissions user: Read & Write, everyone: No Access.
If I try to change them in the macOS File Info window, it essentially tells me that the file is not found:
An unexpected error occurred (error code -43).
Terminal
If I cd to the folder containing the bad file's folder and run
sudo chmod -R 777 bad
I get
chmod: Unable to change file mode on bad_file.file
If I want to make myself the owner of the file:
sudo chown -R $(whoami) bad
I get the error:
chown: bad_file.file: No such file or directory
Question
How can I remove this file, or, if it does not actually exist, make it stop appearing under that path?
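One approach worth trying, assuming the mount still exposes the file to the filesystem layer at all, is to address it by inode instead of by name, which sidesteps problems with an overlong or mangled file name (the 1234567 below is a placeholder for whatever ls -i actually prints):
# Look up the file's inode number
ls -i bad
# Delete the directory entry matching that inode
sudo find bad -maxdepth 1 -inum 1234567 -delete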

Redirection to other directories

My /home directory is running very low on space, but some of my programs running in production create dynamic files in the /home directory.
The problem is that if it reaches 100% usage, my program stops working, so I have to manually go in and delete or copy the files.
Rather than doing that, I want files written to /home to be redirected to the /tmp directory by default.
Please give me some thoughts.
You have at least two ways to do this:
If you can configure your program to write its files to another directory, do that.
If you cannot change the program at all, create a cron job that removes or copies those files automatically (a sketch follows below).
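A minimal sketch of the cron-job route, assuming GNU find and a hypothetical output directory /home/myapp/output; adjust the schedule and age threshold to taste:
# Add with: crontab -e
# Every 30 minutes, move files older than an hour out of /home into /tmp
*/30 * * * * find /home/myapp/output -type f -mmin +60 -exec mv {} /tmp/ \;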
If the program creates files under its own directory, you can create a symlink:
# Create directory in /tmp
mkdir /tmp/myprog
# Set permissions
chown "${USER}:${USER}" /tmp/myprog
chmod -R o-x /tmp/myprog
# Create symlink at /home/myprog
ln -s /tmp/myprog "${HOME}/myprog"
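A quick sanity check that the redirection took effect (readlink prints a symlink's target):
# Should print /tmp/myprog
readlink "${HOME}/myprog"
# Files written under ~/myprog now consume /tmp's space rather than /home's
df -h /tmp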

How to upload a file to S3 and make it readable by "everyone", in one Linux command

I want to upload test.zip to mybucket1/folder1/test.zip
and set the permission to "allow read" for "Everyone".
Can this be done with a single Linux command?
Sure it can be, though I don't really understand whether you want to upload or just move the file, so I am going with one of those.
mv test.zip /mybucket1/folder1/ && chmod 644 /mybucket1/folder1/test.zip
Using 644 sets these permissions on the file:
-rw-r--r--
If you want this instead: -rwxr--r--
use 744 rather than 644.
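If /mybucket1 is a real S3 bucket rather than a locally mounted path, the AWS CLI (assuming it is installed and configured with credentials) can upload and apply a public-read canned ACL in one command:
# Upload and mark readable by everyone in one step
aws s3 cp test.zip s3://mybucket1/folder1/test.zip --acl public-read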

Strange system permission warnings when running processes in bash

I am getting the following warnings when I run Homebrew or any of the Ruby web servers locally.
Does anyone know how to get them to go away?
larson:local larson$ brew doctor
/usr/local/Library/Homebrew/global.rb:65: warning: Insecure world writable dir /Users/larson in PATH, mode 040757
/usr/local/bin/brew:74: warning: Insecure world writable dir /Users/larson in PATH, mode 040757
/usr/local/Library/Homebrew/global.rb:65: warning: Insecure world writable dir /Users/larson in PATH, mode 040757
/usr/local/bin/brew:74: warning: Insecure world writable dir /Users/larson in PATH, mode 040757
Your system is raring to brew.
Just remove world write permissions from the directory. As it's your home directory, it really shouldn't have them:
chmod o-w /Users/larson
Change the permissions on the indicated directories to remove the world-writable bit.
As in:
chmod 750 ${directory}
Edit: This isn't just about your home directory; Ruby will gripe if any part of the path is world-writable, since the ability to write to a directory means you can delete any file in it, even one you don't own. You could then recreate the directory structure and put your own files in place, potentially substituting malicious code.
In other words, if the Users directory is also world-writable, it will also cause the warning to pop up.
To get them all in one whack:
sudo chmod -R 750 /Users
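Before reaching for a recursive chmod over all of /Users, it may help to see exactly which PATH components Ruby is complaining about. A small sketch, assuming a Bourne-style shell:
# Print every world-writable directory on the current PATH
echo "$PATH" | tr ':' '\n' | while read -r dir; do
  [ -d "$dir" ] && find "$dir" -maxdepth 0 -perm -0002
done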

Copying a Directory Tree File by File

My advisor frequently compiles a bunch of software into his /usr/local and then distributes his /usr/local as a tarball. I then extract the tarball onto our lab machines for our use. Until now, this has worked fine: we just replace the old /usr/local with the contents of the tarball.
But I have started installing my own software on these machines. If I delete /usr/local, obviously some of that software and config gets deleted as well.
Instead, how can I recursively copy files from the tarball into the corresponding directory in /usr/local?
For example:
tarball path filesystem path
------------ ---------------
local/bin/myprog -> /usr/local/bin/myprog
local/lib/mylib.so -> /usr/local/lib/mylib.so
etc.
Or is this a better question for ServerFault?
$ cd /usr
$ tar xvf f.tar
or
$ cd /tmp
$ tar xvf f.tar
$ cp -R local/. /usr/local/.
Although really, I think it should just go in some other directory, or in a subdir of /usr/local/. There isn't anything magical about /usr/local/ except perhaps a default PATH component.
The cp command has the -r flag for copying recursively:
$ cp -r local/* /usr/local/
Please look up your system's man page for cp for more information.
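One caveat with this approach: cp -r overwrites files that already exist under /usr/local, which is the very thing being avoided here. Both GNU and BSD cp accept -n to skip files that already exist at the destination:
$ cp -rn local/* /usr/local/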
Use the -k option, which keeps existing files from being overwritten, and specify which archive member to extract:
$ cd /usr
$ tar xvfk localtarball.tar local
local/
local/file
tar: local/file: Cannot open: File exists
local/bar2/
local/bar2/bar3
tar: local/file: Cannot open: File exists
local/bar2a/bar2aY/
tar: Error exit delayed from previous errors
