I'm looking for a way to monitor a folder for new file creation and then execute a shell command on the created file. The scenario: I have a host machine that runs a virtual machine, and they share a folder. When I create or copy a new file into that shared folder on my host machine, the VM should detect the change. I have tried incron and inotify, but they only work when I do the copy or create as a user inside the VM. Thanks
Method 1 in this answer may help: Bash script, watch folder, execute command
Just run that script in your VM, and you should be able to detect changes made by the host.
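If the inotify events never fire for host-side writes (common with hypervisor shared folders), a plain polling loop still catches them. A minimal sketch, assuming the share is mounted at /mnt/shared; the path, interval and the command to run are placeholders:

#!/bin/bash
# Poll the shared folder and run a command once for every file not seen before.
WATCH_DIR=/mnt/shared        # your shared-folder mount point
SEEN=/tmp/seen_files.txt
touch "$SEEN"
while true; do
    for f in "$WATCH_DIR"/*; do
        [ -f "$f" ] || continue
        if ! grep -qxF "$f" "$SEEN"; then
            echo "$f" >> "$SEEN"
            echo "New file: $f"   # replace with your command, e.g. ./process.sh "$f"
        fi
    done
    sleep 5
done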
I am wondering if it is possible to RUN a remote file stored on an NFS share when building an image from a Dockerfile.
Currently I am using the COPY command and then the RUN command to execute the files; however, many of the files I need to build the image are extremely large.
Is it possible to execute files stored on an NFS share directly from the Dockerfile, without having to copy them all over?
You can only RUN files inside your container, so they need to be copied into it first.
What you can do is move the COPY commands to the beginning of your Dockerfile so that those layers are cached and the files don't need to be copied again every time you change a command later in the Dockerfile.
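A sketch of that ordering, with placeholder file names; the large COPY layer stays cached and is only rebuilt when the file itself changes:

FROM ubuntu:20.04
# Large artifact first: this layer stays cached as long as big-installer.bin is unchanged.
COPY big-installer.bin /tmp/big-installer.bin
RUN chmod +x /tmp/big-installer.bin && /tmp/big-installer.bin --install
# Frequently edited steps go below, so changing them doesn't re-copy the big file.
RUN echo "build step that changes often"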
You can RUN curl ... to grab the remote file, then execute it, sure.
But this will only run at image build time, not during the lifecycle of the container.
You could also mount the NFS volume on your host, then COPY the files.
Otherwise, remote execution is a pretty basic security flaw and shouldn't be possible under any circumstances.
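For the curl route, a sketch (the URL and script name are placeholders; this assumes the files are reachable over HTTP from the build environment):

FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl
# Fetch the installer from a reachable server instead of COPYing it into the build context.
RUN curl -fsSL http://fileserver.example/installer.sh -o /tmp/installer.sh \
    && sh /tmp/installer.sh \
    && rm /tmp/installer.sh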
I have Windows with the Linux subsystem and I am trying to run Druid. I am getting the message CANNOT CREATE FIFO. What should I do to avoid it?
I faced the same issue myself while trying to run it through WSL Ubuntu. It seems a FIFO file can't be created on a mounted drive, i.e. under /mnt/c/.
As a workaround, you'd have to copy the entire Druid installation folder over to an internal folder, e.g. /usr/share/, and launch it from there.
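Something along these lines; the version number and target folder are illustrative, and the start script name depends on your Druid version:

# Copy Druid off the Windows mount so FIFOs can be created, then launch from there.
cp -r /mnt/c/apache-druid-0.22.1 /usr/share/apache-druid-0.22.1
cd /usr/share/apache-druid-0.22.1
./bin/start-micro-quickstart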
I am running a Jenkins slave in a restricted environment. This environment will only allow me to execute files in a specific directory.
The problem I have is running simple batch commands.
The slave's java.io.tmpdir being AppData/Local/Temp, Jenkins will copy my command into a temporary .bat file and attempt to run it, like so:
cmd /c call D:\Users\TastyWithPasta\AppData\Local\Temp\hudson8090039221524722157.bat
Here the issue becomes obvious: the command cannot be run due to the restriction, and the build fails.
Anybody working in a restricted environment and facing the same issues? What would be a good workaround?
Unfortunately, -Djava.io.tmpdir=newpath is not an option since this taps into the Java installation. Maybe there is a way to override it locally?
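For illustration, the kind of local override I mean would look like this when starting the agent by hand (the agent.jar path, URL, secret and directory are placeholders):

rem Point java.io.tmpdir at a directory where execution is allowed, for this process only.
java -Djava.io.tmpdir=D:\Jenkins\tmp -jar agent.jar -jnlpUrl http://your-jenkins/computer/your-node/slave-agent.jnlp -secret YOUR_SECRET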
I have a simple Django app which I want to keep running on a specific GCE instance. Sometimes the instance gets restarted for reasons outside my control. I created a batch script which I tried to put in the Startup folder, both the user's and the common one. It didn't work. I tried putting the script in with sysprep-specialize-script-url (using Cloud Storage), sysprep-specialize-script-cmd and sysprep-specialize-script-bat. It didn't work. Here's the content of the batch script -
cd C:\Users\kartik_domadiya\Desktop\happierMiscGoogleCloud
manage.py runserver 0.0.0.0:80
pause
I tried running C:\Program Files\Google\Compute Engine\metadata_scripts\run_startup_scripts.cmd manually and it worked (with any metadata key). So I can see that there's no problem with the script itself.
I even tried putting the batch script in Task Scheduler, which didn't work either.
So is there any way I can debug the problem and find out why the batch script isn't working? I am using Windows 2012 R2, if that matters.
PS: I know that's a development server and should not be used in production.
I moved the code to C:/code (basically out of any particular user's folder), gave all users access to it (Right Click > Properties > Security), updated the batch file, and put it into the Startup folder (Run > shell:startup).
It started working after that. I suppose the issue was due to access permissions.
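For reference, the updated batch file ended up along these lines (calling python explicitly is a guess at what "updated" involved, since the Startup context may not resolve the .py file association):

cd /d C:\code\happierMiscGoogleCloud
python manage.py runserver 0.0.0.0:80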
I have a Jenkins job that calls a batch file on a ClearCase drive (V:).
My Jenkins slave agent is running as a service using a local admin account.
The Jenkins job does the following:
cleartool startview MY_VIEW
cd /d "V:\MY_VIEW\Build"
call PrepareBuild.bat
When I run the Jenkins job, I keep getting "Access is denied." in the Console Output when it tries to call the batch file. However, if I run the same commands manually in a command prompt, they complete successfully.
I did not have this problem under Windows XP. Does anybody know why this is happening on Windows 7 (32-bit)?
Thanks.
V:\ is a virtual drive created with the Windows command subst.
It is a shortcut between the root directory of your dynamic view (M:\yourView) and the virtual drive.
(I.e., V:\ is not specifically linked to ClearCase; it is just a drive letter the user chose to associate with a certain ClearCase view root directory.)
However, ClearCase registers that association in the registry under HKCU/software/atria/....
This means the ClearCase session running under Jenkins' local admin account won't know about that association, nor about the need to restore the virtual drive.
A workaround would be to make that drive mapping permanent, using psubst.
That registers the drive path under [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices], and HKLM is accessible from all accounts.
See " How to make SUBST mapping persistent across reboots? "
I had the same problem, and a simpler solution.
Jenkins doesn't have access to folders that only the user has access to (even though it is run by that user). So for the folder that gets "Access is denied", you need to set the folder permissions to Everyone, not just the user.
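For example, from an elevated prompt (the path is a placeholder for the folder that is denied):

rem Grant Everyone read/execute on the folder and everything under it.
icacls "D:\Jenkins\workspace\MyJob" /grant Everyone:(OI)(CI)RX /T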