I'm using code like the following to monitor the whole file system:
fanotify_mark(fd,
              FAN_MARK_ADD | FAN_MARK_MOUNT,
              FAN_OPEN | FAN_EVENT_ON_CHILD,
              AT_FDCWD, "/");
But I need to write some tests, so I want to monitor just a specific dir, let's say "/tmp/test_dir". The problem is that when I change the code this way:
fanotify_mark(fd,
              FAN_MARK_ADD,
              FAN_OPEN | FAN_EVENT_ON_CHILD,
              AT_FDCWD, "/tmp/test_dir");
fanotify only watches events on "/tmp/test_dir", ignoring whatever happens in deeper folders.
For instance, if I open "/tmp/test_dir/aa/bb/cc/test_file.txt", fanotify detects nothing.
Am I missing some flag?
Problem solved.
fanotify isn't recursive. It only behaves that way when you mark a whole mount. I did the following test:
mkdir /tmp/parent
mkdir -p /tmp/other/aa/bb/cc/dd
touch /tmp/other/aa/bb/cc/dd/test.txt
mount --bind /tmp/other /tmp/parent
then in code:
fanotify_mark(fd,
              FAN_MARK_ADD | FAN_MARK_MOUNT,
              FAN_OPEN | FAN_EVENT_ON_CHILD,
              AT_FDCWD, "/tmp/parent");
and it's done. Now fanotify fires events for the test.txt file.
With fanotify you can either monitor the entire mount point containing the specified path (using FAN_MARK_MOUNT), or monitor the files directly inside a directory but not its sub-directories (without FAN_MARK_MOUNT). You can set separate marks on each sub-directory to achieve this; see https://stackoverflow.com/a/20965660/2706918
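If a bind mount is not an option (for example in a quick test setup), another common workaround is to walk the directory tree and put a separate mark on each directory. A minimal sketch, assuming the descriptor returned by fanotify_init() is stored in a hypothetical g_fan_fd (note that directories created after the walk are not covered):

#define _GNU_SOURCE          /* for nftw() */
#include <ftw.h>
#include <fcntl.h>           /* AT_FDCWD */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/fanotify.h>

static int g_fan_fd;         /* filled in from fanotify_init() elsewhere */

static int mark_dir(const char *path, const struct stat *sb,
                    int typeflag, struct FTW *ftwbuf)
{
    (void)sb; (void)ftwbuf;
    /* put a normal (non-mount) mark on every directory we visit */
    if (typeflag == FTW_D &&
        fanotify_mark(g_fan_fd, FAN_MARK_ADD,
                      FAN_OPEN | FAN_EVENT_ON_CHILD,
                      AT_FDCWD, path) == -1)
        perror(path);
    return 0;                /* keep walking even if one mark fails */
}

/* after fanotify_init():  nftw("/tmp/test_dir", mark_dir, 20, FTW_PHYS); */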
How can I upload a selection of files with a single shortcut in PhpStorm?
Ideally it would use the PhpStorm deployment mechanism, but all answers are welcome, such as making a bash file that scp's the files over (which is then executed from PhpStorm).
I'm looking for a way, where I can simply press something like: CMD + OPT + CTRL + J - and then it uploads all these marked files.
My project has the structure shown below. I've marked the files I would like to be able to upload with an (x):
project
|- subfolder
|- subsubfolder
|- assets
| |- css (x)
| |- js (x)
| |- admin (x)
| |- img
|
|- foo.php
|- bar.php
|- style.css (x)
|- bundle.js (x)
|- other.php
|- other-1.php
Attempt 1
I've already tried setting "Upload changed files automatically to the default server" to "Always" / "On explicit save...", and it's quite magical. But if the setup isn't right, it can mess up badly.
Alright. This is how I did it. FINALLY!!! I've been looking for this for years (no exaggeration). I finally got annoyed enough to dive into the suggestion @LazyOne came with ( <3 ).
I'm using PhpStorm v. 2020 and I'm on a Mac.
Step 1 - Install SSHPASS
This was the first hurdle. There are a couple of different taps out there. This one worked for me:
brew install hudochenkov/sshpass/sshpass
This is necessary to avoid getting prompted for the password whenever the scp-command is executed.
Step 2 - Get SCP to work from a terminal
It's annoying to debug from inside PhpStorm, so I would suggest starting in the terminal and getting an SCP command to work. It depends on soooo many things, so it might differ from one host to another.
Please note that SCP uses SSH to copy files, so SSH should be enabled for this to work.
Here's a command that works for me:
sshpass -p 'mypassword' scp style.css app.js SERVER_USER@SERVER_IP:public_html/wp-content/themes/my_theme_name/
This copies two files to the given destination.
Why am I not using an SSH key instead?
I'm on a server where there is a master user that I can add my SSH key to, so that I can access the server. But I can't do that for the individual instances.
Step 3 - Make a shell-script
I made a script called uploader.sh and added this content:
#!/bin/bash
# copy the listed files to the theme folder on the server; sshpass supplies the password
sshpass -p 'mypassword' scp myfile.css anotherfile.js athirdfile.php SERVER_USER@SERVER_IP:public_html/wp-content/themes/my_theme_name/
Then I went to 'Run' >> 'Edit Configurations' and added a new Shell Script configuration.
Note!! Remember to uncheck 'Execute in terminal'. The reason is that it's nice to be able to just keep working wherever you are; if you execute it in the terminal, the cursor ends up in the terminal. If it's unchecked, that doesn't happen.
Here you can see my configuration:
Step 4 - Run and test
Now go to 'Run' >> 'Run' and choose the one you just added. Then you should see a window like this:
And to run the latest 'Run' again, you can simply press CMD + r.
BAM!
Step 5 - Add uploader.sh to .gitignore
Now the password to your server is stored in clear text in a file on your machine. That's bad for security purposes, so if you're coding a backend for nuclear launches, then you probably shouldn't do this.
But remember to add the uploader.sh file to your .gitignore file to avoid pushing it to the repo.
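The entry itself can be as simple as:

# local deployment helper that contains the server password
uploader.sh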
Useful resources
How to use sshpass, to pass a password to scp.
Function npp {
Param([String]$filepath)
start 'D:\Program Files (x86)\Notepad++\notepad++.exe' &($filepath)
}
Function nteract {
$file = $args[0]
start 'D:\Program Files\nteract\nteract.exe' &($file)
}
I wrote two beginner functions to recreate the much easier aliases from bash and fish. I have tried two ways of capturing a file argument, as shown above. Neither of them works. Instead I receive the following:
Say I am opening the file '.\01 Getting started.ipynb' in nteract.
Id Name PSJobTypeName State HasMoreData Location Command
-- ---- ------------- ----- ----------- -------- -------
7 Job7 BackgroundJob Running True localhost Microsoft.PowerShell.Man…
.\01. Getting started.ipynb
This displays in my console, and opens a default instance of nteract with an empty notebook. The same happens with Notepad++ with other files.
Insert a complaint here about how confusing it is to get this functionality working compared to Linux shells. What am I doing wrong?
EDIT: This question has been answered sufficiently, but I did notice strange behavior the first time I commented out the functions I wrote in my profile.
PS D:\julitory\JuliaBoxTutorials\introductory-tutorials\intro-to-julia> . $profile
PS D:\julitory\JuliaBoxTutorials\introductory-tutorials\intro-to-julia> nteract '.\02. Strings.ipynb'
Id Name PSJobTypeName State HasMoreData Location
-- ---- ------------- ----- ----------- --------
5 Job5 BackgroundJob Running True localhost
.\02. Strings.ipynb
For some reason, this occurred the first time I commented them out. So I uncommented them, and it began to work again... then, the next time I commented them out, they continued to work. I think I'm just too tired for this, but thanks all.
PowerShell has Set-Alias for that purpose. Define, for example:
Set-Alias -Name nteract -Value "D:\Program Files\nteract\nteract.exe"
Then use the alias as:
nteract ".\01 Getting started.ipynb"
Aliases defined this way are only available during the current PowerShell session. For ways to make them persist, see How to create permanent PowerShell Aliases.
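As an aside, the reason the original functions opened an empty instance is that the trailing & ran the start command as a background job with no file argument, and ($filepath) was then evaluated as a separate expression and printed. If you prefer to keep a function (so you can add more parameters later), the call operator works too; a rough sketch, assuming the same install path:

Function nteract {
    Param([String]$file)
    # invoke the exe with the call operator; quotes are only needed because of the spaces in the path
    & 'D:\Program Files\nteract\nteract.exe' $file
}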
I'm trying to use a Slurm-operated cluster to run LS-Dyna (a finite-element simulation program with a limited number of licenses available on my cluster). I am trying to write my batch scripts so that I do not waste processing time due to this license limit (as well as to improve legibility when running 'squeue' commands) by using job arrays, but I'm having trouble making that work.
I want to run identical Bash scripts on a variety of FEM meshes, each of which I have organized into a different subfolder.
Given this folder structure on my cluster...
cluster root
|
...
|
|-+ my scratch space's root
|
|-+ this project
|
|--+ lat_-5mm
| |- runCurrentLine.bash
| |- other files
|
|--+ lat_-4.75mm
| |- runCurrentLine.bash
| |- other files
|
|--+ lat_-4.5mm
| |- runCurrentLine.bash
| |- other files
|
...
|
|--+ lat_5mm
| |- runCurrentLine.bash
| |- other files
|
|
|-sendDynaRuns.bash
|-other dependencies
...I'm trying to submit "runCurrentLine.bash" in each folder by running the following script on my login node.
#!/bin/bash
iter=0
for foldernow in */; do
# change to subdirectory for current line iteration
cd "./${foldernow}";
# make Slurm and user happy
echo "sending LS Dyna simulation for ${pos}mm line..."
sleep 1
# first line only: send batch, and get job ID
if [ "${iter}" == 0 ];then
# send the batch...
jobID=$(sbatch -J "Dyna" --array="${iter}"%15 runCurrentLine.bash)
# ...ensure that Slurm's output shows on console (which includes the job ID)...
echo "${jobID}"
# ...and extract the job ID and save as a variable
jobID=$(echo "${jobID}" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?')
# subsequent lines: add current line to job array
else
scontrol update --jobid="${jobID}" --array="${iter}"%15 runCurrentLine.bash
fi
# prepare to move onto next position
iter=$((iter+1))
cd ../
done
This setup properly sends the batch job for the first line, at -0.25mm*. However, for the second line onwards, it doesn't seem to do the same thing... This is what I end up getting on my console:
*: I intended the "lat_xmm" folders to be numerically ordered, but Unix doesn't seem to recognize that
$ ./sendDynaRuns.bash
sending LS Dyna simulation for -0.25mm line...
Submitted batch job 1081040
sending LS Dyna simulation for 0.25mm line...
sbatch: error: Batch job submission failed: Invalid job id specified
sending LS Dyna simulation for -0.5mm line...
sbatch: error: Batch job submission failed: Invalid job id specified
I know that runCurrentLine.bash runs just fine if I manually send it as a batch (and it runs to completion within the time limit I specified in-file, mainly since it doesn't have to compete with other lines for open licenses). What should I do to be able to get my code to work?
Thank you in advance!
As stated by @Poshi, you cannot add jobs to an existing array.
I would create a submission script like this one:
#!/bin/bash
#SBATCH --array=1-<nb of folders>%15
# ALL OTHER SLURM SBATCH DIRECTIVES HERE
folders=(lat_*)
foldernow=${folders[$SLURM_ARRAY_TASK_ID - 1]}   # the array indices start at 1, bash arrays at 0
cd "$foldernow" && ./runCurrentLine.bash
The only drawback is that you need to explicitly set the number of jobs in the array based on the number of folders.
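If you would rather not hard-code that number, you can also compute it at submission time, since command-line options override the #SBATCH directives (assuming the script above is saved as, say, runArray.bash in the project folder):

# one array task per lat_* folder, at most 15 running at a time
sbatch --array=1-$(ls -d lat_*/ | wc -l)%15 runArray.bash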
Is it possible to create aliases when I enter a certain folder?
What I want:
I use composer a lot (a PHP package manager), which installs binaries in ./vendor/bin. I would like to run those binaries directly from the project root.
For example:
/path/to/project
| - composer.json // dictates dependencies for the project
| - vendor // libs folder for composer, is created by composer
| | - bin // if lib has bin, composer creates this folder
| | | phpunit // binary
| | | phinx // binary
| | - somelib1 // downloaded by composer
| | - somelib2 // downloaded by composer
Is it possible to get this to work:
> cd /path/to/project
> phpunit
And get phpunit to execute?
Something like "sensing" the composer.json file and dynamically find the binaries in ./vendor/bin and then do something like alias="./vendor/bin/<binary-name> $#" automatically?
I use OS X 10.9 and the boxed in Terminal app.
You can override cd, trap my_function DEBUG to run something on every command, or add a command into PS1 or PROMPT_COMMAND.
These have different behaviour and caveats, and I can't recommend doing any of them for this use case (after having used each of them at some point). They are bad solutions to X-Y problems.
An alternative which is much less likely to break things horribly is to create a custom function to do both things:
cdp() {
cd "$#" && phpunit
}
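If you do want to experiment with the PROMPT_COMMAND route despite the caveats above, a rough sketch (for bash; the helper name is made up) could look like this in your ~/.bash_profile:

_add_composer_bin() {
    # when the current directory looks like a composer project, prepend its vendor/bin to PATH once
    if [ -f composer.json ] && [ -d vendor/bin ]; then
        case ":$PATH:" in
            *":$PWD/vendor/bin:"*) ;;              # already on PATH
            *) PATH="$PWD/vendor/bin:$PATH" ;;
        esac
    fi
}
PROMPT_COMMAND="_add_composer_bin${PROMPT_COMMAND:+;$PROMPT_COMMAND}"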
I want to monitor the whole system for FAN_OPEN_PERM | FAN_CLOSE_WRITE events with a multi-threaded program, and ignore some directories (say /home/mydir). I used fanotify_init() and fanotify_mark() in main() as:
//Is there any way to use FAN_GLOBAL_LISTENER?
fd = fanotify_init(FAN_CLOEXEC| FAN_NONBLOCK | FAN_CLASS_CONTENT | FAN_UNLIMITED_QUEUE | FAN_UNLIMITED_MARKS, O_RDONLY | O_LARGEFILE)
...
//Marking "/" (doesn't work as multi-threaded program) or "/home" (works fine)
fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT, FAN_OPEN_PERM | FAN_CLOSE_WRITE | FAN_EVENT_ON_CHILD, AT_FDCWD, "/")
....
//Now, to ignore directory
fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_ONLYDIR | FAN_MARK_IGNORED_MASK | FAN_MARK_IGNORED_SURV_MODIFY, FAN_OPEN_PERM | FAN_CLOSE_WRITE | FAN_EVENT_ON_CHILD, AT_FDCWD, "/home/mydir")
In my program, main() reads events and passes them to multiple threads for further processing.
Problems: 1) The system hangs for this multi-threaded program when monitoring "/", but it works fine for "/home". 2) I am still getting notifications for "/home/mydir" (I marked "/home" & ignored "/home/mydir").
How do I mark the entire system without any problem in a multi-threaded program?
How do I use the ignore mask to ignore an entire directory (recursively)?
(Kernel 2.6.38-8-generic)
Read the man page.
The FAN_OPEN_PERM flag generates a permission event: it asks your program to decide whether the open should be allowed, and the open() blocks until your program writes an allow/deny response back to the fanotify descriptor. If you only want to be notified when a file is opened, say in /tmp, you should use FAN_OPEN instead.
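For reference, if FAN_OPEN_PERM is really needed, every permission event has to be answered, otherwise the open that triggered it stays blocked. A rough sketch of a per-event handler (assuming metadata has been read from the fanotify descriptor; the function name is just for illustration):

#include <sys/fanotify.h>
#include <unistd.h>

static void answer_perm_event(int fan_fd, const struct fanotify_event_metadata *metadata)
{
    if (metadata->mask & FAN_OPEN_PERM) {
        struct fanotify_response resp;
        resp.fd = metadata->fd;      /* fd delivered with the event */
        resp.response = FAN_ALLOW;   /* or FAN_DENY to block the open */
        write(fan_fd, &resp, sizeof(resp));
    }
    close(metadata->fd);             /* always close the event's fd to avoid leaks */
}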