setfacl (bash) isn't working - bash

Well, I wanted to give a specific user (let's call him Tom) some specific rights to a folder that I own. So I created a folder in my home directory called test1.
Then I set the rights I wanted to give him. Well, it didn't work. These are the commands I used:
cd ~
mkdir test1
chmod 700 test1
setfacl -m u:Tom:rwx test1
The output of the getfacl command after that was:
getfacl test1
# file: test1
# owner: Alator
# group: users
user::rwx
user:Tom:rwx
group::---
mask::rwx
other::---
After all that, Tom still couldn't access folder test1. Any thoughts?
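For what it's worth, a quick way to check whether the ACL itself is the problem (assuming you can run commands as Tom via sudo, and assuming the home directory is /home/Alator; both are assumptions, not shown above) is to test access directly and to look at the parent directory, since Tom also needs execute (traverse) permission on every directory above test1:
# hypothetical diagnostic commands; Tom and Alator are taken from the getfacl output above
ls -ld /home/Alator                     # Tom needs at least --x here to reach test1
sudo -u Tom ls -ld /home/Alator/test1   # does Tom see the directory at all?
sudo -u Tom touch /home/Alator/test1/probe && echo "Tom can write"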

Related

Using GitHub cache action with multiple cache paths?

I'm trying to use the official GitHub cache action (https://github.com/actions/cache) to cache some binary files to speed up some of my workflows; however, I've been unable to get it working when specifying multiple cache paths.
Here's a simple, working test I've set up using a single cache path:
There is one action for writing the cache, and one for reading it (both executed in separate workflows, but on the same repository and branch).
The write-action is executed first, and creates a file "subdir/a.txt", and then caches it with the "actions/cache@v2" action:
# Test with single path
- name: Create file
  shell: bash
  run: |
    mkdir subdir
    cd subdir
    printf '%s' "Lorem ipsum" >> a.txt
- name: Write cache (Single path)
  uses: actions/cache@v2
  with:
    path: "D:/a/cache_test/cache_test/**/*.txt"
    key: test-cache-single-path
The read-action retrieves the cache, prints a list of all files in the directory recursively to confirm it has restored the file from the cache, and then prints the contents of the cached txt-file:
- name: Get cached file
  uses: actions/cache@v2
  id: get-cache
  with:
    path: "D:/a/cache_test/cache_test/**/*.txt"
    key: test-cache-single-path
- name: Print files
  shell: bash
  run: |
    echo "Cache hit: ${{steps.get-cache.outputs.cache-hit}}"
    cd "D:/a/cache_test/cache_test"
    ls -R
    cat "D:/a/cache_test/cache_test/subdir/a.txt"
This works without any issues.
Now, the description of the cache action contains an example for specifying multiple cache paths:
- uses: actions/cache@v2
  with:
    path: |
      path/to/dependencies
      some/other/dependencies
    key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
But when I try that for my example actions, it fails to work.
In the new write-action, I create two files, "subdir/a.txt" and "subdir/b.md", and then cache them by specifying two paths:
# Test with multiple paths
- name: Create files
  shell: bash
  run: |
    mkdir subdir
    cd subdir
    printf '%s' "Lorem ipsum" >> a.txt
    printf '%s' "dolor sit amet" >> b.md
- name: Write cache (Multi path)
  uses: actions/cache@v2
  with:
    path: |
      "D:/a/cache_test/cache_test/**/*.txt"
      "D:/a/cache_test/cache_test/**/*.md"
    key: test-cache-multi-path
The new read-action is the same as the old one, but also specifies both paths:
# Read cache
- name: Get cached file
  uses: actions/cache@v2
  id: get-cache
  with:
    path: |
      "D:/a/cache_test/cache_test/**/*.txt"
      "D:/a/cache_test/cache_test/**/*.md"
    key: test-cache-multi-path
- name: Print files
  shell: bash
  run: |
    echo "Cache hit: ${{steps.get-cache.outputs.cache-hit}}"
    cd "D:/a/cache_test/cache_test"
    ls -R
    cat "D:/a/cache_test/cache_test/subdir/a.txt"
    cat "D:/a/cache_test/cache_test/subdir/b.md"
This time I still get the confirmation that the cache has been read:
Cache restored successfully
Cache restored from key: test-cache-multi-path
Cache hit: true
However "ls -R" does not list the files, and the "cat" commands fail because the files do not exist.
Where is my error? What is the proper way of specifying multiple paths with the cache action?
I was able to make it work with a few modifications:
use relative paths instead of absolute
use a hash of the content for the key
It looks like, at least with bash, the absolute paths look like this:
/d/a/so-foobar-cache/so-foobar-cache/cache_test/cache_test/subdir
Where so-foobar-cache is the name of the repository.
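If you want to see that mapping yourself, a small diagnostic can help. The following is only a sketch (GITHUB_WORKSPACE is the standard variable the runner sets; cygpath is assumed to be available in the runner's Git Bash), to be run inside a step with shell: bash on a Windows runner:
echo "GITHUB_WORKSPACE=$GITHUB_WORKSPACE"   # Windows-style path, e.g. D:\a\<repo>\<repo>
pwd                                         # the same location as Git Bash sees it, e.g. /d/a/<repo>/<repo>
cygpath -u "$GITHUB_WORKSPACE"              # converts the Windows path to the /d/... form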
.github/workflows/foobar.yml
name: Store and Fetch cached files
on: [push]
jobs:
  store:
    runs-on: windows-2019
    steps:
      - name: Create files
        shell: bash
        id: store
        run: |
          mkdir -p 'cache_test/cache_test/subdir'
          cd 'cache_test/cache_test/subdir'
          echo pwd $(pwd)
          printf '%s' "Lorem ipsum" >> a.txt
          printf '%s' "dolor sit amet" >> b.md
          cat a.txt b.md
      - name: Store in cache
        uses: actions/cache@v2
        with:
          path: |
            cache_test/cache_test/**/*.txt
            cache_test/cache_test/**/*.md
          key: multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
      - name: Print files (A)
        shell: bash
        run: |
          echo "Cache hit: ${{steps.store.outputs.cache-hit}}"
          find cache_test/cache_test/subdir
          cat cache_test/cache_test/subdir/a.txt
          cat cache_test/cache_test/subdir/b.md
  fetch:
    runs-on: windows-2019
    needs: store
    steps:
      - name: Restore
        uses: actions/cache@v2
        with:
          path: |
            cache_test/cache_test/**/*.txt
            cache_test/cache_test/**/*.md
          key: multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
          restore-keys: |
            multiple-files-${{ hashFiles('cache_test/cache_test/**') }}
            multiple-files-
      - name: Print files (B)
        shell: bash
        run: |
          find cache_test -type f | xargs -t grep -e.
Log
$ gh run view 1446486801
✓ master Store and Fetch cached files · 1446486801
Triggered via push about 3 minutes ago
JOBS
✓ store in 5s (ID 4171907768)
✓ fetch in 10s (ID 4171909690)
First job
$ gh run view 1446486801 --log --job=4171907768 | grep -e Create -e Store -e Print
store Create files 2021-11-10T22:59:32.1396931Z ##[group]Run mkdir -p 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1398025Z mkdir -p 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1398695Z cd 'cache_test/cache_test/subdir'
store Create files 2021-11-10T22:59:32.1399360Z echo pwd $(pwd)
store Create files 2021-11-10T22:59:32.1399936Z printf '%s' "Lorem ipsum" >> a.txt
store Create files 2021-11-10T22:59:32.1400672Z printf '%s' "dolor sit amet" >> b.md
store Create files 2021-11-10T22:59:32.1401231Z cat a.txt b.md
store Create files 2021-11-10T22:59:32.1623649Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
store Create files 2021-11-10T22:59:32.1626211Z ##[endgroup]
store Create files 2021-11-10T22:59:32.9569082Z pwd /d/a/so-foobar-cache/so-foobar-cache/cache_test/cache_test/subdir
store Create files 2021-11-10T22:59:32.9607728Z Lorem ipsumdolor sit amet
store Store in cache 2021-11-10T22:59:33.9705422Z ##[group]Run actions/cache@v2
store Store in cache 2021-11-10T22:59:33.9706196Z with:
store Store in cache 2021-11-10T22:59:33.9706815Z path: cache_test/cache_test/**/*.txt
store Store in cache cache_test/cache_test/**/*.md
store Store in cache
store Store in cache 2021-11-10T22:59:33.9708499Z key: multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55
store Store in cache 2021-11-10T22:59:33.9709961Z ##[endgroup]
store Store in cache 2021-11-10T22:59:35.1757943Z Received 260 of 260 (100.0%), 0.0 MBs/sec
store Store in cache 2021-11-10T22:59:35.1761565Z Cache Size: ~0 MB (260 B)
store Store in cache 2021-11-10T22:59:35.1781110Z [command]C:\Windows\System32\tar.exe -z -xf D:/a/_temp/653f7664-e139-4930-9710-e56942f9fa47/cache.tgz -P -C D:/a/so-foobar-cache/so-foobar-cache
store Store in cache 2021-11-10T22:59:35.2069751Z Cache restored successfully
store Store in cache 2021-11-10T22:59:35.2737840Z Cache restored from key: multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55
store Print files (A) 2021-11-10T22:59:35.3087596Z ##[group]Run echo "Cache hit: "
store Print files (A) 2021-11-10T22:59:35.3088324Z echo "Cache hit: "
store Print files (A) 2021-11-10T22:59:35.3088983Z find cache_test/cache_test/subdir
store Print files (A) 2021-11-10T22:59:35.3089571Z cat cache_test/cache_test/subdir/a.txt
store Print files (A) 2021-11-10T22:59:35.3090176Z cat cache_test/cache_test/subdir/b.md
store Print files (A) 2021-11-10T22:59:35.3104465Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
store Print files (A) 2021-11-10T22:59:35.3106449Z ##[endgroup]
store Print files (A) 2021-11-10T22:59:35.3494703Z Cache hit:
store Print files (A) 2021-11-10T22:59:35.4456032Z cache_test/cache_test/subdir
store Print files (A) 2021-11-10T22:59:35.4456852Z cache_test/cache_test/subdir/a.txt
store Print files (A) 2021-11-10T22:59:35.4459226Z cache_test/cache_test/subdir/b.md
store Print files (A) 2021-11-10T22:59:35.4875011Z Lorem ipsumdolor sit amet
store Post Store in cache 2021-11-10T22:59:35.6109511Z Post job cleanup.
store Post Store in cache 2021-11-10T22:59:35.7899690Z Cache hit occurred on the primary key multiple-files-25c0e6413e23766a3681413625169cee1ca3a7cd2186cc1b1df5370fb43bce55, not saving cache.
Second job
$ gh run view 1446486801 --log --job=4171909690 | grep -e Restore -e Print
fetch Restore 2021-11-10T22:59:50.8498516Z ##[group]Run actions/cache@v2
fetch Restore 2021-11-10T22:59:50.8499346Z with:
fetch Restore 2021-11-10T22:59:50.8499883Z path: cache_test/cache_test/**/*.txt
fetch Restore cache_test/cache_test/**/*.md
fetch Restore
fetch Restore 2021-11-10T22:59:50.8500449Z key: multiple-files-
fetch Restore 2021-11-10T22:59:50.8501079Z restore-keys: multiple-files-
fetch Restore multiple-files-
fetch Restore
fetch Restore 2021-11-10T22:59:50.8501644Z ##[endgroup]
fetch Restore 2021-11-10T22:59:53.1143793Z Received 257 of 257 (100.0%), 0.0 MBs/sec
fetch Restore 2021-11-10T22:59:53.1145450Z Cache Size: ~0 MB (257 B)
fetch Restore 2021-11-10T22:59:53.1163664Z [command]C:\Windows\System32\tar.exe -z -xf D:/a/_temp/30b0dc24-b25f-4713-b3d3-cecee7116785/cache.tgz -P -C D:/a/so-foobar-cache/so-foobar-cache
fetch Restore 2021-11-10T22:59:53.1784328Z Cache restored successfully
fetch Restore 2021-11-10T22:59:53.5197756Z Cache restored from key: multiple-files-
fetch Print files (B) 2021-11-10T22:59:53.5483939Z ##[group]Run find cache_test -type f | xargs -t grep -e.
fetch Print files (B) 2021-11-10T22:59:53.5484730Z find cache_test -type f | xargs -t grep -e.
fetch Print files (B) 2021-11-10T22:59:53.5498140Z shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
fetch Print files (B) 2021-11-10T22:59:53.5498674Z ##[endgroup]
fetch Print files (B) 2021-11-10T22:59:55.8119800Z grep -e. cache_test/cache_test/subdir/a.txt cache_test/cache_test/subdir/b.md
fetch Print files (B) 2021-11-10T22:59:56.1777887Z cache_test/cache_test/subdir/a.txt:Lorem ipsum
fetch Print files (B) 2021-11-10T22:59:56.1784138Z cache_test/cache_test/subdir/b.md:dolor sit amet
fetch Post Restore 2021-11-10T22:59:56.3890391Z Post job cleanup.
fetch Post Restore 2021-11-10T22:59:56.5481739Z Cache hit occurred on the primary key multiple-files-, not saving cache.
Came here to see if I can cache multiple binary files. I see there is a separate workflow for pushing the cache and another one for retrieving it. We had a different use case, where we need to install certain dependencies; sharing it here.
Use case
Your workflow needs gcc and python3 to run (the dependencies can be any others as well).
You have a script to install the dependencies, ./install-dependencies.sh, and you pass appropriate environment variables to the script, like ENV_INSTALL_PYTHON=true or ENV_INSTALL_GCC=true.
Points to be noted
./install-dependencies.sh takes care of installing the dependencies under the path ~/bin and produces the executable binaries in the same path. It also ensures that the $PATH environment variable is updated with the new binary paths (a sketch of such a script is shown after this list).
Instead of duplicating the "check cache and install" steps twice (as we have two binaries now), we are able to do it only once. So even if we need to install 50 binaries, we can still do it in only two steps like this.
The cache key name python-gcc-cache-key can be anything, but ensure that it is unique.
The third step, - name: install python, gcc, takes care of creating the cache entry under the key python-gcc-cache-key if it was not found, even though we have not mentioned this key name anywhere in that step.
The first step is where you check out the repository containing your ./install-dependencies.sh script.
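The script itself is not shown here, so the following is only a hypothetical sketch of what install-dependencies.sh could look like, assuming ~/bin as the install prefix and the two env flags mentioned above; the actual install commands depend entirely on your project:
#!/usr/bin/env bash
# Hypothetical sketch of install-dependencies.sh (not the author's actual script).
set -euo pipefail

BIN_DIR="$HOME/bin"
mkdir -p "$BIN_DIR"

if [ "${ENV_INSTALL_PYTHON:-false}" = "true" ] && [ ! -x "$BIN_DIR/python" ]; then
  echo "Installing python into $BIN_DIR ..."
  # download or build a python distribution and place the binary at "$BIN_DIR/python"
fi

if [ "${ENV_INSTALL_GCC:-false}" = "true" ] && [ ! -x "$BIN_DIR/gcc" ]; then
  echo "Installing gcc into $BIN_DIR ..."
  # download or build a gcc toolchain and place the binary at "$BIN_DIR/gcc"
fi

# Make the binaries visible to later steps; in GitHub Actions this is done by
# appending the directory to $GITHUB_PATH (the export below only affects this script).
export PATH="$BIN_DIR:$PATH"
echo "$BIN_DIR" >> "${GITHUB_PATH:-/dev/null}"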
Workflow
name: Install dependencies
on: [push]
jobs:
  install_dependencies:
    runs-on: ubuntu-latest
    name: Install python, gcc
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      ## python, gcc installation
      # Check if python, gcc are present in the worker cache
      - name: python, gcc cache
        id: python-gcc-cache
        uses: actions/cache@v2
        with:
          path: |
            ~/bin/python
            ~/bin/gcc
          key: python-gcc-cache-key
      # Install python, gcc if they were not found in the cache
      - name: install python, gcc
        if: steps.python-gcc-cache.outputs.cache-hit != 'true'
        working-directory: .github/workflows
        env:
          ENV_INSTALL_PYTHON: true
          ENV_INSTALL_GCC: true
        run: |
          ./install-dependencies.sh
      - name: validate python, gcc
        working-directory: .github/workflows
        run: |
          ENV_INSTALL_BINARY_DIRECTORY_LINUX="$HOME/bin"
          export PATH="$ENV_INSTALL_BINARY_DIRECTORY_LINUX:$PATH"
          python3 --version
          gcc --version
Benefits
It will depend on what binaries you are trying to install.
For us, the time saved was nearly 50 seconds every time there was a cache hit.

Backing up files before change

Since the Ansible backup feature is a bit questionable, with its lack of configuration options, I'm looking into some other solution.
Normally in a script I would have a backup function that you can call with a file name, and it would copy the file to a separate location with a changed name, for example bkp_location = /tmp/backup//
Let's say I want to back up /etc/systemconf/network: I pass it to the function and it would copy it to the backup directory as etc_systemconf_network (it replaces / with _ so we can tell where it came from).
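Just to illustrate the naming scheme described above, the transformation itself is a simple character substitution; in plain bash it could look like this (purely illustrative, not part of any Ansible solution):
path_of_file=/etc/systemconf/network
bkp_name=${path_of_file#/}       # drop the leading slash
bkp_name=${bkp_name//\//_}       # replace every remaining / with _
echo "$bkp_name"                 # -> etc_systemconf_network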
What would be the best solution in Ansible for something like that, something I could call in every role, etc.?
Maybe one backup.yml in the root directory that gets included and has a variable (the file name) passed to it; would that work?
Edit:
The backup feature I speak of:
there is an option backup: yes for some modules (this is a shared function between them, as far as I know), but it does not offer any way to modify what it does,
like what the backup file name would be, or where it would be located. So I have to handle that externally, as a kind of mid-step in between, but it seems like including backup.yml and passing a variable to it will do the trick.
cat backup.yml
- name: creating backup
  copy: src="{{ path_of_file }}" dest="{{ bkp_location }}/backup{{ path_of_file }}{{ contenttoaddwhilebackingup }}" remote_src=true
In the running playbook:
include: backup.yml
So if you run a playbook like this
ansible-playbook random.yml -e 'bkp_location=/tmp/backup/ path_of_file=/etc/systemconf/network contenttoaddwhilebackingup=26march2021'
It will create a backup like this:
ls -lrt /etc/systemconf/
-rw-r--r-- 1 root root 2 Mar 25 15:22 network
ls -lrt /tmp/backup/
-rw-r--r-- 1 root root 2 Mar 25 15:22 backupnetwork26march2021

Bash: set name of directory as a variable while looping

I have a directory containing a large number of sub-directories.
I need to loop over all subdirectories and save their names (without a path!) as a distinct variable:
for d in ${output}/*/
do
  dir_name=${d%*/}
  echo ${dir_name}
done
The problem with the current version is that it gives me the full path of the directory instead. Here is the result of the echo:
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig992
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig993
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig994
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig995
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig996
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig997
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig998
/Users/gleb/Desktop/DOcking/clusterizator/sub_folders_to_analyse/7000_CNE_lig999
With dir_name=${d%*/}, you remove the trailing / only. You will want to remove everything up to the last / as well. Or try basename, which is perhaps a better option.
As in:
for d in /var/*/ ; do
  dir_name=${d%/}
  base=$(basename "$d")
  echo "$d $dir_name ${dir_name##*/} $base"
done
which produces:
/var/adm/ /var/adm adm adm
/var/cache/ /var/cache cache cache
/var/db/ /var/db db db
/var/empty/ /var/empty empty empty
/var/games/ /var/games games games
/var/heimdal/ /var/heimdal heimdal heimdal
/var/kerberos/ /var/kerberos kerberos kerberos
/var/lib/ /var/lib lib lib
/var/lock/ /var/lock lock lock
/var/log/ /var/log log log
/var/mail/ /var/mail mail mail
/var/man/ /var/man man man
/var/named/ /var/named named named
/var/netatalk/ /var/netatalk netatalk netatalk
/var/run/ /var/run run run
/var/slapt-get/ /var/slapt-get slapt-get slapt-get
/var/spool/ /var/spool spool spool
/var/state/ /var/state state state
/var/tmp/ /var/tmp tmp tmp
/var/www/ /var/www www www
/var/yp/ /var/yp yp yp
(on my system).
Can you cd to that parent directory?
cd ${output}/
lst=( */ )
for d in "${lst[@]}"; do echo "${d%/}"; done
If that's not an option, then you can strip it each time.
lst=( ${output}/*/ )
for d in "${lst[@]}"; do dir="${d%/}"; echo "${dir##*/}"; done
As a hybrid, you can sometimes use a trick of changing directory inside a subshell, as the cd is local to the subshell and "goes away" when it ends, but so do any assignments.
cd /tmp
( cd ${output}/; lst=( */ ); for d in "${lst[@]}"; do echo "${d%/}"; done )
# in /tmp here, lst array does not exist any more...
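Putting the two answers together, a minimal sketch that collects just the directory names (no path) into an array could look like this, assuming $output is set as in the question:
names=()
for d in "${output}"/*/ ; do
  d=${d%/}                       # drop the trailing slash
  names+=( "$(basename "$d")" )  # keep only the last path component
done
printf '%s\n' "${names[@]}"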

Cannot create staging directory on HDFS in a folder that has permissions

There are a couple of folders in the root dir of HDFS:
dir1
  subdir1
    table1
    table2
  subdir2
dir2
  subdir1
    table1
    table2
dir3
They all have subfolders that contain different Parquet files that are queried with Hive.
I can't load one of the subfolders (for example table1 inside dir2), even though the permissions look OK to me; I get an EXECUTE error when trying to load it.
The code is running in a Jupyter notebook.
Users are organized in groups.
I've added rwx permissions for the directory in question to the group by using the following command:
hdfs dfs -setfacl -R -m group:user_group:rwx /dir2/subdir2
The error I'm getting looks like this:
Cannot create staging directory 'hdfs://server:8020/dir2/subdir1/table1/.hive-staging_hive_2019-08-01_13-04-22': Permission denied: user=username, access=EXECUTE, inode="/dir2":hdfs:supergroup:drwxrwx---
I've added read and execute permissions on dir2 for the user group, but the error persists. It looks to me from this error that somehow the default permissions are applied, and they are ---.
So, to summarize:
the group has read and execute privileges on the root dir, and read, write and execute privileges on the table directories, but it keeps failing with a permissions error for the root directory.
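For reference, listings like the ones below can be obtained by inspecting each component of the path with standard HDFS shell commands, something along these lines (paths as in the question):
hdfs dfs -getfacl /dir2
hdfs dfs -getfacl /dir2/subdir1
hdfs dfs -getfacl /dir2/subdir1/table1
hdfs dfs -ls -d /dir2 /dir2/subdir1 /dir2/subdir1/table1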
This is how the permissions look:
# file: /dir2
# owner: hdfs
# group: supergroup
user::rwx
user:some_group1:r-x
group::---
group:some_group2:rwx
group:user_group:r-x
group:hive:rwx
group:some_group3:r-x
group:some_group4:r-x
mask::rwx
other::---
default:user::rwx
default:user:some_group1:r-x
default:group::---
default:group:some_group2:rwx
default:group:hive:rwx
default:group:some_group3:r-x
default:group:some_group4:r-x
default:mask::rwx
default:other::---
# file: /dir2/subdir1/table1
# owner: some_user
# group: supergroup
user::rwx
user:some_group1:r-x
group::---
group:some_group2:rwx
group:user_group:rwx
group:hive:rwx
group:some_group3:r-x
group:some_group4:rwx
mask::rwx
other::---
default:user::rwx
default:user:some_group1:r-x
default:group::---
default:group:some_group2:rwx
default:group:user_group:rwx
default:group:hive:rwx
default:group:some_group3:r-x
default:group:some_group4:rwx
default:mask::rwx
default:other::---
The problem was eventually solved by creating new directories that replaced the old ones. The new directories were created with the correct user and credentials.
For example, I created subdir1_new, moved the data there, renamed subdir1 to subdir1_old and renamed subdir1_new to subdir1. Not a lot of folders were affected by this issue so it didn't take a long time.
I know it's not the actual solution, but I couldn't figure out what exactly was happening and this workaround did the trick.
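As a rough sketch, the swap described above maps to standard HDFS shell commands roughly like this (directory names as in the answer; run as a user with sufficient rights):
hdfs dfs -mkdir /dir2/subdir1_new            # created with the correct user this time
hdfs dfs -mv '/dir2/subdir1/*' /dir2/subdir1_new/
hdfs dfs -mv /dir2/subdir1 /dir2/subdir1_old
hdfs dfs -mv /dir2/subdir1_new /dir2/subdir1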

Files created through Cygwin (calling a shell script) don't have correct Windows permissions

I am currently running Cygwin on a target Windows Server 2003 machine to fire off a shell script that, among other things, creates a bunch of files on disc. However after the files are created I no longer have permissions to manipulate them through Windows.
When the files are created the owner is getting set to 'SYSTEM' and the permissions for Administrators/Creator Group/Creator Owner/system are set to only 'special permissions' and nothing else.
The permissions for Everyone and Users have Read & Execute, List folder contents and Read.
My problem is that I cannot delete/modify the files now through Windows. I would prefer to have something built into my scripts (either the shell script or something to call in Cygwin) that would allow Administrators full control on the folder and all contents.
My current workaround has been to do file modifications through Cygwin, but this is not preferable. I have also used setfacl -r -m default:other:rwx to add write permissions for the 'Users' group, but it doesn't appear to have a recursive option and still doesn't give 'full control'.
Is there a better way to use setfacl? Can I call the shell script using different/elevated permissions?
Results of getfacl on a newly created directory:
$ getfacl Directory/
# file: Directory/
# owner: SYSTEM
# group: root
user::rwx
group::r-x
group:Users:rwx
mask:rwx
other:r-x
default:user::rwx
default:group::r-x
default:group:Users:rwx
default:mask:rwx
default:other:r-x
You can try setting umask:
umask u=rwx,g=rwx,o=rwx
That should give user, group, and other read/write/execute on any newly created dirs.
If you want the modified umask permanently, you can add it to your .bash_profile.
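For example (a minimal sketch; pick whatever mask you actually want as your default):
# append the umask setting to ~/.bash_profile so future login shells pick it up
echo 'umask u=rwx,g=rwx,o=rwx' >> ~/.bash_profile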
Edit - Added example of mkdir before/after umask.
Here's the output of getfacl on a directory created before I set umask:
[/cygdrive/c/Documents and Settings/NOYB/Desktop]
==> getfacl test_wo_umask/
# file: test_wo_umask/
# owner: NOYB
# group: Domain Users
user::rwx
group::r-x
group:root:rwx
group:SYSTEM:rwx
mask:rwx
other:r-x
default:user::rwx
default:user:NOYB:rwx
default:group::r-x
default:group:root:rwx
default:group:SYSTEM:rwx
default:mask:rwx
default:other:r-x
Here's the output of getfacl on a directory created after I set umask:
[/cygdrive/c/Documents and Settings/NOYB/Desktop]
==> getfacl test_w_umask/
# file: test_w_umask/
# owner: NOYB
# group: Domain Users
user::rwx
group::rwx
group:root:rwx
group:SYSTEM:rwx
mask:rwx
other:rwx
default:user::rwx
default:user:NOYB:rwx
default:group::rwx
default:group:root:rwx
default:group:SYSTEM:rwx
default:mask:rwx
default:other:rwx
