I want to write a shell script which checks all the logic files placed at /var/www/html/ and, if a user makes any change in those files, sends an alert to the administrator informing them that "User x has made a change in file f." I do not have much experience writing shell scripts, and I do not know how to start. Any help will be highly appreciated.
This is answered in superuser: https://superuser.com/questions/181517/how-to-execute-a-command-whenever-a-file-changes
Basically you use inotifywait
Simple, using inotifywait:
while inotifywait -e close_write myfile.py; do ./myfile.py; done
This has a big limitation: if some program replaces myfile.py with a different file, rather than writing to the existing myfile.py, inotifywait will die. Most editors work that way.
To overcome this limitation, use inotifywait on the directory:
while true; do
    change=$(inotifywait -e close_write,moved_to,create .)   # wait for the next change in the directory
    change=${change#./ * }                                    # strip the "./ EVENTS " prefix, leaving just the file name
    if [ "$change" = "myfile.py" ]; then ./myfile.py; fi
done
The asker of that superuser question has even put a script online for this purpose, which can be called like this:
while sleep_until_modified.sh derivation.tex ; do latexmk -pdf derivation.tex ; done
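Applied to the original question, a minimal watcher for /var/www/html might look something like the sketch below. It assumes inotify-tools and a working mail command; the admin address is a placeholder, and note that inotify alone does not tell you which user made the change (you would need auditd or similar for that), so the alert only names the file and the event.

#!/bin/bash
# Sketch: watch /var/www/html recursively and mail an admin on every change.
# Assumes inotify-tools (inotifywait) and a configured `mail` command.
WATCH_DIR=/var/www/html
ADMIN=admin@example.com   # placeholder address

inotifywait -m -r -e close_write,create,delete,move \
            --format '%e %w%f' "$WATCH_DIR" |
while read -r events file; do
    echo "Change detected: $events on $file" |
        mail -s "File change under $WATCH_DIR" "$ADMIN"
done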
To answer your question directly, you should probably take a look at the Advanced Bash Scripting Guide. It's about Bash specifically, but you may find it helpful even if you're not using Bash. As far as watching for changes in files, try inotify. There are also tools available to make it usable directly from the command line, which have worked quite nicely in my experience.
Now, there are a few other ways you might approach this problem. You might look at md5deep and ssdeep. They are tools designed for digital forensics which can create a list of cryptographic hashes (in the case of md5deep) or fuzzy hashes (in ssdeep's case) and later scan against that list and tell you which files have appeared, disappeared, changed, moved, etc. Or, if you want to detect potentially unauthorized changes, you might look at a host-based intrusion detection system such as OSSEC. Apparently, these can both scan for file changes and watch for other signs of unauthorized activity.
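For example (a sketch with the flags recalled from memory; check md5deep's man page), you could snapshot the tree once and later list anything new or changed:

md5deep -r /var/www/html > baseline.md5        # record a hash for every file
md5deep -r -x baseline.md5 /var/www/html       # later: show files whose hash is not in the baseline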
I've got an automated package configuration test that should also step through a EULA and some other shell user interactions, all y/n-style stuff.
I've googled this quite a bit now, but I haven't found anything useful yet. A parameterized build doesn't seem to be the solution either (from what little I understand of Jenkins). So far I've used a freestyle project with several "Execute shell" steps.
I've now tried it like this:
#!/bin/bash
/path/to/my/script.sh < input.txt
with input.txt containing a few " \n" lines (to scroll down through the EULA) and a "y" (to accept it).
The problem is that the script in question calls a second script, which should be the recipient of the input. But the whole of input.txt is output before the EULA starts, and therefore it is not handled.
Is there a better way of handling this sort of user-input situation? My Jenkins experience is rather limited.
Hi, I would like to create a small program that listens for copy commands and stores the copied content for later retrieval in bash. Is it possible to listen to keystrokes while still keeping the shell interactive? And how can this be done architecturally? I don't need the whole program, just a hint at how it can be done. I have no preference when it comes to language, except that it should be implemented in a scripting language or maybe C++.
Perhaps this needs to be written as a shell extension or something. Just a hint would be fine.
Consider the way the script program works (see man script). I haven't done this in a while, but basically you write your pseudo-terminal handling in C, push that into the stream, and then launch the shell.
See tcgetattr/tcsetattr, grantpt, unlockpt, and ptsname, with ptem, ldterm and possibly ttcompat to be pushed using ioctl.
A simpler, though less efficient, approach is to run script into a pipe and capture the output. You will probably need script -f to flush the buffer (I think the -f option is only in the GNU version).
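One concrete shape of that idea, assuming the util-linux/GNU script with -f and GNU grep (a sketch, not a drop-in solution):

# Terminal 1: record the interactive session, flushing after every write
script -f /tmp/session.log

# Terminal 2: watch the log as it grows and react to interesting lines
tail -f /tmp/session.log | grep --line-buffered 'pattern-you-care-about'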
This is so wrong.
I want to perform a large copy operation; moving 250 GB from my laptop hard drive to an external drive.
OS X Lion claims this will take about five hours.
After a couple of hours of chugging, it reports that one particular file could not be copied (for whatever reason; I cannot remember and I don't have the patience to repeat the experiment at the moment).
And on that note it bails.
I am frankly left aghast.
That this problem persists in this day and age is to me scarcely believable. I remember hitting up against the same scenario 20 years back with Windows 3.1.
How hard would it be for the folks at Apple (or Microsoft, for that matter) to implement file copying in such a way that it skips over failures, writing a list of failed operations on the fly to stderr? And how much more useful would that implementation be? (Both of these questions are rhetorical, by the way; they are simply an expression of my utter bewilderment. Please don't answer them except by way of comments or supplements to an answer to the actual question, which follows.)
More to the point (and this is my actual question), how can I implement this myself in OS X?
PS I'm open to all solutions here: programmatic / scripting / third-party software
I hear and understand your rant, but this is bordering on being a SuperUser-type question and not a programming question (saved only by the fact you said you would like to implement this yourself).
From the description, it sounds like the Finder bailed when it couldn't copy one particular file (my guess is that it was looking for admin and/or root permission for some privileged folder).
For massive copies like this, you can use the Terminal command line:
e.g.
cp
or
sudo cp
with options like "-R" (which continues copying even if errors are detected -- unless you're using "legacy" mode) or "-n" (don't copy if the file already exists at the destination). You can see all the possible options by typing in "man cp" at the Terminal command line.
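For example, a sketch of the kind of invocation meant here (the paths are placeholders): per the -R behaviour described above, individual failures are reported but the copy carries on, and redirecting stderr gives you the list of failed files the question asked for.

# -R: recurse, -n: don't overwrite existing files, -v: show each file as it is copied.
# Per-file errors go to copy_errors.log instead of stopping the whole operation.
sudo cp -Rnv "/Users/me/BigFolder" "/Volumes/ExternalDrive/BigFolder" 2> copy_errors.log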
If you really wanted to do this programmatically, there are options in NSWorkspace (the performFileOperation:source:destination:files:tag: method; documentation linked for you, look at the NSWorkspaceCopyOperation constant). You can also do more low-level stuff via NSFileManager and its copyItemAtPath:toPath:error: method, but that's really getting into brute-force territory.
I want to upgrade my file-management productivity by replacing a two-panel file manager with the command line (bash or Cygwin). Can the command line give the same speed? Please advise on the guru way to do, e.g., a copy of some file in directory A to directory B. Is it heavy use of pushd/popd? Or creating links to the most often used directories? What are the best practices and day-to-day routines of a command-line master for managing files?
Can the command line give the same speed?
My experience is that command-line copying is significantly faster (especially in the Windows environment). Of course the basic laws of physics still apply: a file that is 1000 times bigger than a file that copies in 1 second will still take 1000 seconds to copy.
.. (how to) copy a file from directory A to directory B.
Because I often have 5-10 projects that use similar directory structures, I set up variables for each subdirectory using a naming convention:
project=NewMatch
NM_scripts=${project}/scripts
NM_data=${project}/data
NM_logs=${project}/logs
NM_cfg=${project}/cfg
proj2=AlternateMatch
altM_scripts=${proj2}/scripts
altM_data=${proj2}/data
altM_logs=${proj2}/logs
altM_cfg=${proj2}/cfg
You can make this sort of thing as spartan or baroque as needed to match your theory of living/programming.
Then you can easily copy the cfg files from one project to another:
cp -p $NM_cfg/*.cfg ${altM_cfg}
Is it heavy use of pushd/popd?
Some people seem to really like that. You can try it and see what you think.
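For what it's worth, the basic rhythm (using the variables set up above) is just:

pushd "$NM_scripts"    # jump to the scripts directory, remembering where you came from
# ... do some work there ...
popd                   # jump straight back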
Or creating links to the most often used directories?
Links to directories are, in my experience, used more in software development, where source code expects a certain set of directory names and your installation has different ones. Making links to supply the expected paths is helpful there. For production data, a link is just one more thing that can get messed up or blow up. That's not always true; maybe you'll have a really good reason to have links, but I wouldn't start out that way just because it is possible to do.
What are the best practices and day-to-day routines of a command-line master for managing files?
(Per the above, use a standardized directory structure for all projects. Have scripts save any small files to a directory your department keeps under /tmp, e.g. /tmp/MyDeptsTmpFile, named to fit your local conventions.)
It depends. If you're talking about data and log files, dated file names can save you a lot of time. I recommend date formats like YYYYMMDD, or YYYYMMDD_HHMMSS if you need the extra resolution.
Dated log files are very handy: when a current process seems to be taking a long time, you can look at the log file from a week, a month, or six months ago (up to however much space you can afford) and quantify exactly how long the process took then. Log files should also capture all STDERR messages, so you never have to re-run a bombed program just to see what the error message was.
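For instance (the script name here is just an example), a dated log that also captures STDERR is one line of redirection:

./nightly_load.sh > logs/nightly_load_$(date +%Y%m%d_%H%M%S).log 2>&1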
This is Linux/Unix you're using, right? Read the man page for the cp command installed on your machine. I recommend using an alias like alias CP='/bin/cp -pi' so you always copy a file with the same permissions and with the original file's time stamp. Then it is easy to use /bin/ls -ltr to see a sorted list of files, with the most recent files showing up at the bottom of the list. (No need to scroll back to the top when you sort by time, reversed.) Also, the '-i' option will warn you when you are about to overwrite a file, and this has saved me more than a couple of times.
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
I'm looking for a shell script idea to work on for practice with shell scripting. Can you please suggest some intermediate-level ideas?
I'm a developer and I prefer working on an idea that deals with files.
For shell scripting, think of a task that you do frequently - and think how you would automate that task.
You can start off with a basic script that just about does what you need. Then you realize that there are small variations on the task, and you start to allow the script to handle those. And it gently becomes more complex.
Almost all of the scripts I have (some hundreds of them) started off as "I've done that before; how can I avoid having to do it again?".
Can you give an example?
No - because I don't know what tasks you do sufficiently often to be (minor) irritants that could be salved by writing a script.
Yes - because I've got scripts that I wrote incrementally, in an attempt to work around some issue or other in my environment.
One task that I'm working on - still a work in progress - is:
Identify duplicate files
Starting at some nominated directory (default, $HOME), find all the files, and for each file, establish a checksum (MD5, SHA1, SHA256 - it is not critical which) for the file; record the file name and checksum (and maybe device number and inode number).
Establish which checksums are repeated - hence identifying identical files.
Eliminate the unique checksums.
Group the duplicate files together with appropriate identifying information.
This much is fairly easy - it requires some medium-grade shell scripting and you might have to find a command to generate the checksum (but you might be OK with sum or cksum, though neither of those reaches even the level of MD5). I've done this in both shell and Perl.
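As a rough sketch of that first, easy part in shell (it assumes GNU md5sum, sort and uniq; file names containing newlines would still confuse it):

#!/bin/bash
# List groups of files under $1 (default $HOME) that share an MD5 checksum.
dir=${1:-$HOME}
find "$dir" -type f -print0 |
    xargs -0 md5sum |                     # one "checksum  filename" line per file
    sort |                                # identical checksums become adjacent
    uniq -w32 --all-repeated=separate     # keep only repeated checksums, blank line between groups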
The hard part - where I've not yet gotten a good solution - is then dealing with the duplicates. I have some 8,500 duplicated hashes, with about 27,000 file names in total. Some of the duplicates are images like smileys used in chat transcripts - there are a lot of copies of that particular image. Others are duplicate PDF files collected from various machines at various times; I need to organize them so I have one copy of the file on disk, with perhaps links in the other locations. But some of the other locations should go - they were convenient ways to get the material from retired machines onto my current machine.
I have not yet got a good solution to the second part.
Here are two scripts from my personal library. They are simple enough not to require a full-blown programming language, but they aren't trivial, particularly if you aim to get all the details right (support all the flags, return the same exit code, etc.).
cvsadd
Write a script to perform a recursive cvs add so you don't have to manually add each sub-directory and its files. Make it so it detects the file types and adds the -kb flag for binary files as needed.
For bonus points: Allow the user to optionally specify a list of directories or files to restrict the search to. Handle file names with spaces correctly. If you can't figure out if a file is text or binary, ask the user.
#!/bin/bash
#
# Usage: cvsadd [FILE]...
#
# Mass `cvs add' script. Adds files and directories recursively, automatically
# figuring out if they are text or binary. If no file names are specified, looks
# for unversioned files and directories in the current directory.
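The body of the script isn't reproduced here. Purely as an illustration of the idea (not the original implementation, and ignoring the bonus points), a bare-bones version might look like the sketch below; it assumes cvs and a file(1) that understands --mime-encoding, and it silences the errors cvs prints for things that are already added.

# Add directories first (cvs needs the parents before their contents).
find . -name CVS -prune -o -type d -print | while read -r dir; do
    cvs add "$dir" 2>/dev/null
done

# Then add files, guessing binary vs. text with file(1).
find . -name CVS -prune -o -type f -print | while read -r f; do
    if file -b --mime-encoding "$f" | grep -q '^binary$'; then
        cvs add -kb "$f"
    else
        cvs add "$f"
    fi
done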
svnfind
Write a wrapper around find which performs the same job, recursively finding files matching arbitrary criteria, but ignores .svn directories.
For bonus points: Allow other actions besides the default -print. Support the -H, -L, and -P options. Don't erroneously filter out files which simply happen to contain the substring .svn. Make usage identical to the regular find command.
#!/bin/bash
#
# Usage: svnfind [-H] [-L] [-P] [path...] [expression]
#
# Attempts to behave identically to a plain `find' command while ignoring .svn
# directories. Usage is identical to `find'.
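Again, the actual script isn't shown above. The core trick (minus the bonus points, and not the original implementation) can be illustrated in a couple of lines; this sketch only accepts plain path arguments:

#!/bin/bash
# Like `find <paths>`, but prunes .svn directories. Options and user-supplied
# expressions (the bonus points above) are not handled here.
exec find "${@:-.}" -name .svn -type d -prune -o -print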
You could try some simple CGI scripting. It can be done in shell and involves a lot of here documents, parsing and extracting of form values, a bit of escaping and whatever you want to do as payload. (I do not recommend exposing such a script to the hostile internet, though.)
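To give a taste of what that looks like, here is a toy sketch: headers, a crude (non-URL-decoded) pull of one form value from QUERY_STRING, and a here document as the payload. Again, don't put anything like this on the open internet.

#!/bin/bash
# Toy CGI script in shell. Deliberately naive: no URL-decoding, no escaping.
case $QUERY_STRING in
    name=*) name=${QUERY_STRING#name=} ;;
    *)      name= ;;
esac

echo "Content-Type: text/html"
echo
cat <<EOF
<html><body>
<p>Hello, ${name:-world}!</p>
</body></html>
EOF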