I have a problem detecting folder access on the Mac from a Python program.
I use pyinotify, watchdog, and fsevents to monitor file changes, and that works well, but now I need to detect when someone enters a folder. I want to know when someone opens Finder at a folder so that I can check for changes to that folder only.
For example: I'm currently in folder/, and when I go to folder/folder_children, my Python program should know that.
Does anyone know how to detect this?
You may want to use Watchman. We provide a (not currently very well documented) Python client, and it works on Linux and Mac (as well as Solaris and FreeBSD):
https://facebook.github.io/watchman/
For your use case, the following aspects of Watchman are pertinent:
Watchman builds a time ordered index of file changes
Each logical change has an associated "clock" value
You can query Watchman for the list of files that changed since a clock value
For convenience you can ask Watchman to track a clock value with a symbolic name; we call these named cursors
For example:
When I run watchman since /path/to/dir n:myclient the first time, I get a complete list of files in /path/to/dir. When I run it the second and subsequent times it returns the list of files that changed since the last time it was run.
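As a minimal sketch of that flow (assuming the watchman CLI is on your PATH, and with /path/to/dir standing in for the folder you care about):

# start watching the tree (watches are always recursive)
watchman watch /path/to/dir

# first run: lists every file; later runs: only files changed
# since the last query made with this cursor name
watchman since /path/to/dir n:myclient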
You can construct more sophisticated queries than since to match certain files; pertinent docs:
https://facebook.github.io/watchman/docs/cmd/since.html
https://facebook.github.io/watchman/docs/cmd/query.html
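For instance, a structured query can combine a cursor with an expression (a hedged example; the psd suffix and the field list are just placeholders for whatever you actually need):

watchman -j <<EOT
["query", "/path/to/dir", {
  "since": "n:myclient",
  "expression": ["suffix", "psd"],
  "fields": ["name", "exists"]
}]
EOT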
Architecturally:
On the remote side, run the Watchman service and ask it to monitor the root of the filesystem tree that you're syncing. Watchman always watches recursively.
On the client side, you can periodically (or via whatever heuristic you use to figure out when is appropriate) call up to the server
When polled by the client, the server issues a since query to Watchman using a cursor name
Your server only needs to re-examine the files in that list
You can use a separate cursor name per discrete client if you have multiple clients to synchronize.
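A rough sketch of that poll step on the server side, one named cursor per client (the cursor names here are illustrative):

# each client gets its own cursor, so each sees only its own delta
watchman since /path/to/tree n:client-alice   # when client "alice" polls
watchman since /path/to/tree n:client-bob     # when client "bob" polls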
Related
I'm using Node.js to start Watchman on Windows Server 2016 with a number of file-type filters on a specific directory. This directory is used for staging; uploaded files are routed to other folders depending on the filename.
The problem I'm having is that Watchman picks up files that are still being uploaded, which causes the move to fail because the file is still locked. I'm thinking about using the @ronomon/opened package to check the file status before marking a file as a candidate for moving. Is there a better way to do it?
Thanks,
Paul
Please take a look at this issue that sounds almost identical to your question; it has some other alternatives and details beyond what I've put below: https://github.com/facebook/watchman/issues/562#issuecomment-355450096
Summarizing that issue here: you need to allow for the filesystem to settle. There is a settle option you can set in your .watchmanconfig to control this:
{"settle": 60000}
You'd place that file in the upload directory (and make sure that you don't mistake it for an uploaded file and move it out) and then re-create your watch.
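Roughly, that looks like the following (shown unix-style; adapt the commands for Windows, and substitute your actual staging path):

# tell watchman to wait for 60s of quiescence before reporting changes
echo '{"settle": 60000}' > /path/to/staging/.watchmanconfig

# re-create the watch so the new config is picked up
watchman watch-del /path/to/staging
watchman watch /path/to/staging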
I waste a lot of time when running Xcode bots and I just want to see if I have it configured correctly. My test suite takes 5 minutes to run, so having to wait that amount of time each time I tweak a setting until I can see the results is not ideal. Is there any way I can see the logs as the bot is running?
An alternative approach would be some way to run just a single test, if that's possible. Obviously I could remove/comment all other tests, but I'm looking for a faster way.
This is a bit tricky to do, but possible.
Xcode Server stores bot log information in /Library/XcodeServer/IntegrationAssets/<bot_name_here>/.
Within this directory you will find numbered folders, one per integration (1/, 2/, 3/, etc.), and within each of those folders you will find the following files (not necessarily limited to these, but this is what I see):
buildService.log
sourceControl.log
trigger-before-0.log
...etc
However, this directory is only accessible if you are the root user. If you really want to take a look at logs while bots are running, you can assume root on your server machine with the following command (server password required):
sudo su -
then you can navigate to the above directory and observe the log files as they are being written.
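For example, to follow the build log of integration 3 as it is written (the bot name is whatever yours is called):

sudo su -
cd "/Library/XcodeServer/IntegrationAssets/<bot_name_here>/3"
tail -f buildService.log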
I was reading the thread "ubuntu/linux bash: traverse directory and subdirectories to work with files" and I thought maybe it could be tweaked a little bit.
Can this be set to:
be given a base folder
scan the folder and its subfolders
collect all the files it finds (only images)
pick one randomly
write a symbolic link to it in the /usr/share/backgrounds directory (copying the image itself over the existing one might work as well); something like the rough sketch below
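A rough, untested sketch of what I have in mind (the target filename under /usr/share/backgrounds is just a guess):

#!/bin/bash
# pick a random image under a base folder and link it as the background
base="${1:-$HOME/Pictures}"                       # base folder to scan
target="/usr/share/backgrounds/random-wall.jpg"   # guessed target name

# collect all images in the folder and its subfolders, pick one at random
pick=$(find "$base" -type f \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \) | shuf -n 1)

# write a symbolic link into /usr/share/backgrounds (needs root)
[ -n "$pick" ] && sudo ln -sfn "$pick" "$target"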
What I intend is to execute the script on system shutdown, or at a set interval, so that it changes the GDM background image.
This is based on a manual method using this line:
sudo ln -s /usr/share/applications/gnome-appearance-properties.desktop /usr/share/gdm/autostart/LoginWindow/
which brings up the Appearance dialog at the login screen, where the background can be changed.
Ideally it would have a GUI to do it at will, plus an option to "change it automagically upon restart" that performs the process I described above and adds itself to the system start, reboot, or shutdown sequence.
Since there's no working utility for this at the moment, it might come in handy for some people. =)
Thanks for your help.
Use Wallpapoz. It can change wallpapers randomly across workspaces and over time.
The title may not be so clear but the issue I am facing is this:
Our designers are working on large Photoshop files across the network, which causes a number of network-traffic and file-corruption issues that I am trying to overcome.
The way I want to do this is to have the designers copy the files to their machines (Mac OS X) and work on them locally. But then the problem remains that they may forget to copy them back up, or that another designer may start work on the version stored on the network.
What I need is a system where a designer checks out files or folders from the server, which locks those files so no other user can copy them until they are checked back in. We do not need to store revisions of the files.
My initial idea was to use SVN, or preferably Git, and somehow force a lock on checkout. Does this sound feasible, or is there a better system?
How big are the files on average? I'm not sure about Git as I haven't used it, but SVN should be OK. If you go with SVN, I would trial checking out over HTTP/HTTPS versus a network path to the repo, as you may get a speed advantage from one or the other. When we VPN to our repo at work, it is literally 100 times faster over HTTP than checking out using a \\network\path to the repo.
SVN is a good option, but you will have revisions (that's the whole point of SVN). SVN doesn't lock files by default, but you can configure it so that it does. See http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.advanced.locking
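As a sketch of the locking workflow (the file name is illustrative; svn:needs-lock makes working copies of the file read-only until someone explicitly takes the lock):

# mark the file so working copies get it read-only until locked
svn propset svn:needs-lock yes design.psd
svn commit -m "require a lock on design.psd"

# a designer takes the lock before editing; others can't commit changes
svn lock design.psd -m "editing the header artwork"

# ...edit locally, then commit (committing releases the lock by default)
svn commit -m "new header artwork"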
I don't know git very well, but since it's not a centralized VCS, I'm pretty sure it isn't the right tool for your situation.
I recently moved my whole local web development area over to using MacPorts stuff, rather than using MAMP on my Mac. I've been getting into Python/Django and didn't really need MAMP any more.
The thing is, I have uninstalled MAMP from the Applications folder, along with the preferences file, but when I run the 'locate MAMP' command in the Terminal, why does it still show all my /Applications/MAMP/ stuff as if it were still there? And when I 'cd' into /Applications/MAMP/, it doesn't exist.
Something to do with locate being a kind of index-searching system, hence these old file paths are cached? Please explain why this happens, and how to fix it so they don't show up anymore.
You've got the right idea: locate uses a database called 'locatedb'. It's normally updated by system cron jobs (not sure which on OS X); you can force an update with the updatedb command. See http://linux-sxs.org/utilities/updatedb.html among others.
Also, if you don't find files which you expect to, note this important caveat from the BUGS section of OS X's locate(1) man page:
The locate database is typically built by user "nobody" and the locate.updatedb(8) utility skips directories which are not readable for user "nobody", group "nobody", or world. For example, if your HOME directory is not world-readable, none of your files are in the database.
The other answers are correct about needing to update the locate database. I've got this alias to update my locate DB:
alias update_locate='sudo /usr/libexec/locate.updatedb'
I actually don't use locate all that much anymore now that I've found mdfind. It uses the Spotlight file index, which OS X is much better at keeping up to date than the locate db, and it has quite a bit more power in what it can search for from the command line.
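For example (the -onlyin flag scopes the search to one tree; MAMP here is just the string from the question):

# search the Spotlight index instead of the locate db
mdfind -onlyin /Applications MAMP

# or match on the file name only
mdfind -name MAMP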
Indeed, the locate command searches through an index; that's why it's pretty fast. The index is generated by the updatedb command, which is usually run as a nightly or weekly job.
So to update it manually, just run updatedb.
According to the man page, its database is updated once a week:
NAME
locate.updatedb -- update locate database
SYNOPSIS
/usr/libexec/locate.updatedb
DESCRIPTION
The locate.updatedb utility updates the database used by locate(1). It is typically run once a week by
the /etc/periodic/weekly/310.locate script.
Take a look at the locate man page
http://unixhelp.ed.ac.uk/CGI/man-cgi?locate+1
You'll see that locate searches a database, not your actual filesystem.
You can update that database by using the updatedb command.
Also, since it's a database, unless you update it regularly, locate won't find files that are on your filesystem but aren't yet in the database.
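To see the effect in your case (a sketch, using the updatedb path from the alias above):

locate MAMP                          # still lists /Applications/MAMP/... from the stale index
sudo /usr/libexec/locate.updatedb    # rebuild the database (OS X path)
locate MAMP                          # now comes up empty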