In KDE, I adjusted a macro to compile and install Python files, but I'm having a problem with it preserving the files' permissions.
To be more precise, the offending line in the macro is
install(FILES ${SOURCE_FILE} DESTINATION ${DESTINATION_DIR})
which works for 99% of the cases.
In one case, though, I have a Python file marked as executable (+x; I'm talking about Linux here) in the source directory, which is then symlinked into the installation's binary directory. Since install() does not preserve permissions, the execute bit is stripped from it, and this causes all sorts of problems later on.
Is it possible to keep the file's permissions, or to read them and set them accordingly? I would hate to use a manual chmod command since it's not portable.
EDIT: I do not want to make all files installed by this macro executable, as this would be pointless.
You can install files with the +x permission using the
install(PROGRAMS ...)
command.
Alternatively, you can install a whole directory while preserving file permissions:
install(DIRECTORY ... USE_SOURCE_PERMISSIONS)
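For example, a minimal sketch reusing the question's ${SOURCE_FILE} and ${DESTINATION_DIR} variables (only the script that actually needs the execute bit should go through the PROGRAMS path):
# PROGRAMS behaves like FILES but also sets the execute bits:
install(PROGRAMS ${SOURCE_FILE} DESTINATION ${DESTINATION_DIR})
# Or spell the permissions out explicitly with the PERMISSIONS option:
install(FILES ${SOURCE_FILE} DESTINATION ${DESTINATION_DIR}
        PERMISSIONS OWNER_READ OWNER_WRITE OWNER_EXECUTE
                    GROUP_READ GROUP_EXECUTE
                    WORLD_READ WORLD_EXECUTE)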
See the documentation for the install command for more information.
Related
I am new to Ubuntu. I need to set a path in my .bashrc file, but I am getting a permission denied error even though I am the admin of the system.
export TCFRAME_HOME=~/tcframe
alias tcframe=$TCFRAME_HOME/scripts/tcframe
Now when I type tcframe version I get
bash: /home/p46562/tcframe/scripts/tcframe: No such file or directory
How to fix this?
The error message is telling you that you are trying to execute a file which does not exist.
We can vaguely guess about what files do exist, but without access to your system, we can't know for sure what you have actually installed and where.
Perhaps you have a file named tcframe in a directory called scripts in your home directory?
alias tcframe=$HOME/scripts/tcframe
A common arrangement, to avoid littering your environment with one or more aliases for each random utility you have installed somewhere, is to create a dedicated directory for your PATH - a common convention is to call it bin - and populate it with symlinks to the things you want to be able to execute.
Just once,
mkdir $HOME/bin
and edit your .profile (or .bash_profile or .bashrc if you prefer) to include the line
PATH=$HOME/bin:$PATH
From now on, to make an executable script accessible from anywhere without an explicit path, create a symlink to it in bin:
ln -s $HOME/scripts/tcframe $HOME/bin
Notice that the syntax is like cp; the last argument is the destination (which can be a directory, or a new file name) and the first (and any subsequent arguments before the last, if the last is a directory) are the sources. When the destination is a directory, the file name of each source argument is used as the name of a new symlink within the destination directory.
Also notice that you generally want to use absolute paths; a relative symlink target is resolved relative to the directory containing the link, i.e. bin (so e.g.
ln -s ../scripts/tcframe $HOME/bin
works, even if you are currently in a directory where ../scripts does not exist).
Scripts, by definition, need to be executable. If they aren't, you get "permission denied" when you try to run them. This is controlled by permissions; each file has a set of permission bits which indicate whether you can read, write to (or overwrite), and execute this file. These permissions are also set separately for members of your group (so you can manage a crude form of team access) and everyone else. But for your personal scripts, you only really care that the x (executable) bit is set for yourself. If it isn't, you can change it - this is only required once.
chmod +x scripts/tcframe
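Putting the pieces together, the whole sequence looks like this (a sketch that assumes the script really does live at $HOME/scripts/tcframe, and that you have opened a new shell, or sourced your .profile, since editing it):
chmod +x $HOME/scripts/tcframe
mkdir -p $HOME/bin
ln -s $HOME/scripts/tcframe $HOME/bin
hash -r            # make bash forget any previously cached command lookup
tcframe version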
I maintain a private Git repository with all of my config and dotfiles (.bashrc, profile.ps1, .emacs etc.).
On Windows this repository is stored under C:\git\config. Most applications expect the files to be elsewhere, so I added hard links between the repository and the expected locations.
Example
On Linux .emacs is located in ~/git/config/.emacs but emacs expects it to be at ~/.emacs. I run:
$ sudo ln -s ~/git/config/.emacs ~/.emacs
On Windows my .emacs is located in C:\git\config\.emacs, but emacs expects it to be in C:\users\ayrton\.emacs. I run:
PS> cmd /c mklink /H C:\users\ayrton\.emacs C:\git\config\.emacs
Issue
On Linux this seems to work fine: when I update the original file, the contents of the link update and everything stays in sync.
On Windows, the links break after a period of time and the files become out of sync (the file contents are different).
Why do the links break on Windows? Is there an alternative solution?
I've seen this StackOverflow post: Can't Hard Link the gitconfig File
So I’ve finally found a solution that takes the best of both: put the repo in a subdirectory, and instead of symlinks, add a configuration option for “core.worktree” to be your home directory. Now when you’re in your home directory you’re not in a git repo (so the first problem is gone), and you don’t need to deal with fragile symlinks as in the second case. You still have the minor hassle of excluding paths that you don’t want versioned (eg, the “*” in “.git/info/exclude” trick), but that’s not new.
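Concretely, the trick described in that quote boils down to something like this (a sketch, run from inside the repo, assuming it lives at ~/git/config as in the example above):
cd ~/git/config
git config core.worktree "$HOME"    # track files directly in the home directory
echo '*' >> .git/info/exclude       # ignore everything by default
git add -f .emacs                   # force-add only the files you want versioned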
The problem here is that the expected locations are different on Windows vs. Linux. For example, VSCode expects the user settings to be in:
Linux: $HOME/.config/Code/User/settings.json
Windows: %APPDATA%\Code\User\settings.json
Ideally I would like my repository to be platform independent. If I take the core.worktree approach (e.g. make core.worktree be / or C:\, then exclude everything except specific files), I would have to maintain two copies of some configuration files when their absolute paths differ across operating systems.
Hard links can break if an editor opens/creates the file as a new blank file each time you save. It would not surprise me if Notepad did this, because it reads the entire file into memory and has no need for the original file after loading it.
You can try to create a file symlink instead of a hard link on Windows.
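For example, taking the mklink command from the question but dropping the /H flag creates a file symbolic link instead of a hard link (this usually requires an elevated prompt, or Developer Mode on newer Windows versions):
PS> cmd /c mklink C:\users\ayrton\.emacs C:\git\config\.emacs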
I feel like I'm missing something very basic so apologies if this question is obtuse. I've been struggling with this problem for as long as I've been using the bash shell.
Say I have a structure like this:
├──bin
├──command (executable)
This will execute:
$ bin/command
then I symlink bin/command to the project root
$ ln -s bin/command c
like so
├──c (symlink to bin/command)
├──bin
├──command (executable)
I can't do the following (errors with -bash: c: command not found)
$ c
Instead, I must do
$ ./c
What's going on here? Is it possible to execute a command from the current directory without preceding it with ./ and without using a system-wide alias? It would be very convenient to give distributed executables and utility scripts one-letter, folder-specific shortcuts on a per-project basis.
It's not a matter of bash not allowing execution from the current directory, but rather, you haven't added the current directory to your list of directories to execute from.
export PATH=".:$PATH"
$ c
$
This can be a security risk, however, because if the directory contains files which you don't trust or don't know the origin of, a file in the current directory could be confused with a system command.
For example, say the current directory is called "foo" and your colleague asks you to go into "foo" and set the permissions of "bar" to 755. As root, you run "chmod 755 bar".
You assume chmod really is chmod, but if there is a file named chmod in the current directory and your colleague put it there, chmod is really a program he wrote, and you are running it as root. Perhaps "chmod" resets the root password on the box, or does something else dangerous.
Therefore, the standard is to limit command executions which don't specify a directory to a set of explicitly trusted directories.
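You can see the shadowing for yourself with a harmless sketch (the fake ls below is a file created just for the demonstration):
cat > ls <<'EOF'
#!/bin/sh
echo "this is not the real ls"
EOF
chmod +x ls
hash -r            # forget the shell's cached location of the real ls
PATH=.:$PATH ls    # prints: this is not the real ls
rm ls              # clean up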
Beware that the accepted answer introduces a serious vulnerability!
If you do add the current directory to your PATH, do not put it at the beginning; that would be a very risky setting.
There are still possible vulnerabilities when the current directory is at the end, but far fewer, so this is what I would suggest:
PATH="$PATH":.
Here, the current directory is only searched after every directory already present in the PATH has been explored, so the risk of an existing command being shadowed by a hostile one is gone. There is still a risk of an uninstalled command or a typo being exploited, but it is much lower. Just make sure the dot is always at the end of the PATH when you add new directories to it.
You could add . to your PATH. (See kamituel's answer for details)
Also, there is ~/.local/bin for user-specific binaries on many distros.
What you can do is add the current dir (.) to the $PATH:
export PATH=.:$PATH
But this can pose a security issue, so be aware of that. See this ServerFault answer on why it's not such a good idea, especially for the root account.
I've finished a little useful script written in Bash, hosted on github. It's tested and documented. Now, I struggle with how to make it installable, i.e. where should I put it and how.
It seems other such projects use make and configure, but I couldn't really find any information on how to do this for bash scripts.
Also I'm unsure into which directory to put my script.
I know how to make it usable by myself but if a user downloads it, I want to provide the means for him to install it easily.
There is no standard for this, because most of the time a project isn't a single script file. Also, single-file scripts don't need a build step (the script is already in an executable form), and configuration usually comes from an external config file (so there's no need for a configure script, either).
But I suggest adding a comment near the top of the file which explains what it does and how to install it (i.e. chmod +x it and copy it to a suitable folder).
Alternatively, you could create an installer script which contains your original script plus a header which asks the user where she wants to install the real script, and which does everything else (mkdir, set permissions with sudo, etc.), but it really feels like overkill in your case.
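If you do go that route, the header can stay very small; a hypothetical sketch (install.sh, myscript.sh and the default directory are made up for the example):
#!/bin/bash
# install.sh - ask where to install, then copy the script there with +x set
read -r -p "Install directory [/usr/local/bin]: " dir
dir=${dir:-/usr/local/bin}
sudo mkdir -p "$dir"
sudo install -m 755 myscript.sh "$dir/myscript"
echo "Installed to $dir/myscript"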
If you want to make it installable so the package manager can easily install and remove (!) it, you need to look at the documentation for rpm or Debian packaging. These are the two most used package managers, but they can't install a script per-user (so it would probably end up in /usr/bin).
Instruct them to create a file named after the script in their home directory, chmod ug+x the file so it has execute permissions, then put the script inside the file; don't forget the #!/bin/bash at the top. This example is a script that copies a file, archives the copy, then removes the copy, leaving only the original file and the archive.
#!/bin/bash
#### The following copies the desired file
cp -r /home/wes/Documents/Hum430 /home/wes/docs
#### Next it archives the copy
tar -zcvf Hum430.tar.gz /home/wes/docs
#### and lastly it removes the un-archived copy, leaving only the original and the archive.
rm -r /home/wes/docs
### run the file with ./filename (whatever the file is named)
The heading says it all, really. Using Windows 7 and the latest stable gvim, whenever I save (:w) a file it's marked executable. I'm doing cross-platform development and it'd be nice if this didn't happen.
@sceptics: The files' flags are indeed set as executable. Do an ls -al before and after re-saving the file to observe the issue (install Cygwin, or maybe another *nix emulation).
@OP: the question has been raised several times in the past. I don't remember the conclusion on the subject. You should search the vim mailing-list archives (vim_use and vim_dev).
Maybe you can try to add a hook to your RCS (if it supports that) to run chmod -x on file extensions that do not correspond to an executable (*.h, *.cpp, *.vim, ...), or on files that do not contain a shebang (unlike Perl, I don't know whether Python source files may contain a shebang).
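For instance, if the RCS in question is Git, a pre-commit hook along these lines could do it (a hypothetical sketch: it strips the execute bit from staged files that do not start with a shebang):
#!/bin/bash
# .git/hooks/pre-commit - drop the execute bit from staged files
# whose first two bytes are not "#!"
for f in $(git diff --cached --name-only --diff-filter=ACM); do
    [ -f "$f" ] && [ -x "$f" ] || continue    # only regular, executable files
    if ! head -c 2 "$f" | grep -q '#!'; then
        chmod -x "$f"
        git add "$f"                          # restage with the fixed mode
    fi
done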