Here is what I am trying to do: I need to know whenever a file is read or used by a tool (e.g. a compiler). I use ls to get the last access time with the following command:
ls -l --time=access -u --sort=time --time-style=+%H:%M:%S
or
stat "filename"
But my files' access times are not getting updated. I figured it's because of caching (please correct me if I am wrong), so my next step was to clear the cache. Researching this, I came across some variations of the following command:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
The thing is, even after I execute this command my file access time is not updated! My way of testing access time is to open the file in gedit or to call gcc on my source file.
My setup: Ubuntu 12.04 running on VMware, which is running on Windows 7.
Question: what am I missing or doing wrong that prevents my access time from being updated?
What you're observing is a change in the default mount options, starting with kernel 2.6.30, made to improve filesystem performance.
Quoting from man mount:
relatime
Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time. (Similar to noatime, but doesn't break mutt or other applications that need to know if a file has been read since the last time it was modified.)
Since Linux 2.6.30, the kernel defaults to the behavior provided by this option (unless noatime was specified), and the strictatime option is required to obtain traditional semantics. In addition, since Linux 2.6.30, the file's last access time is always updated if it is more than 1 day old.
You might be looking for the following mount option:
strictatime
Allows to explicitly request full atime updates. This makes it possible for the kernel to default to relatime or noatime but still allow userspace to override it. For more details about the default system mount options see /proc/mounts.
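If you want the traditional behavior back, you can remount the filesystem with strictatime and check that reads update the access time again. A minimal sketch, assuming the files in question live on the root filesystem (test.c is a placeholder):
sudo mount -o remount,strictatime /
stat -c %x test.c
cat test.c > /dev/null
stat -c %x test.c
The second stat should now report a newer access time than the first.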
I want to change a file's crtime in bash.
First, I tried to check the crtime with the following command:
stat test
Next, I changed the timestamp:
touch -t '200001010101.11' test
But I realized that if the crtime is already earlier than the date I specified, it can't be changed this way.
So I want to know how to set the crtime even when the date I want is in the past.
Edit:
According to this answer to a similar question, you may be able to use debugfs -w -R 'set_inode_field ...' to change inode fields, though this does require unmounting the filesystem.
man debugfs shows us the following available command:
set_inode_field filespec field value
Modify the inode specified by filespec so that the inode field field has value value. The list of valid inode fields which can be set via this command can be displayed by using the command set_inode_field -l. Also available as sif.
You can try the following to verify the inode number and name of the crtime field:
stat -c %i test
debugfs -R 'stat <your-inode-number>' /dev/sdb1
and additionally run df -Th to find the /dev path of your filesystem (e.g. /dev/sdb1).
Followed by:
umount /dev/sdb1
debugfs -w -R 'set_inode_field <your-inode-number> crtime 200001010101.11' /dev/sdb1
Note: In the above commands, inode numbers must be wrapped in <> brackets as shown. Additionally, it may be necessary to flush the inode cache with echo 2 > /proc/sys/vm/drop_caches.
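After setting the field, you can flush the inode cache, remount, and verify the change. A sketch, assuming the same /dev/sdb1 and a mount point of /mnt:
echo 2 > /proc/sys/vm/drop_caches
mount /dev/sdb1 /mnt
debugfs -R 'stat <your-inode-number>' /dev/sdb1
The crtime field in the debugfs output should now show the new value.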
Original answer:
You might try birthtime_touch:
birthtime_touch is a simple command line tool that works similar to
touch, but changes a file's creation time (its "birth time") instead
of its access and modification times.
From the birthtime_touch GitHub page, which also notes why this is not a trivial thing to accomplish:
birthtime_touch currently only runs on Mac OS X. The minimum required
version is Mac OS X 10.6. birthtime_touch is known to work for files
that are stored on HFS+ and MS-DOS filesystems.
The main problem why birthtime_touch does not work on all systems and
for all filesystems, is that not all filesystems store a file's
creation time, and for those that actually do store the creation time
there is no standardized API to access/change that information.
This page has more details about the reasons why we haven't yet seen support for this feature.
Beyond this tool, it might be worth looking at the source on GitHub to see how it's accomplished and whether it might be portable to Unix/Linux. Beyond that, I imagine it would be necessary to write low-level code to expose the parts of the filesystem where crtime is stored.
I did some searching, and people say that I must use
sudo sh -c 'echo 1 > /proc/sys/kernel/randomize_va_space'
to edit this file. Can someone explain to me why?
When I use vim as root to edit this file and save it, it shows an error: "/proc/sys/kernel/randomize_va_space" E667: Fsync failed
The files in /proc are not regular files. They are in fact kernel variables exposed through the filesystem for easier access. Because of that, they don't support all the functions of "normal" files, namely fsync.
The text editor doesn't know that it's dealing with a special file and tries to use an unsupported function. On the other hand, echo ... > file works because it does not use the fsync function.
fsync is a function that tells the OS to write any pending changes for a file to disk.
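In practice, you can either write the value without involving fsync, or use the interface intended for these variables. Both of the following are standard:
sudo sh -c 'echo 1 > /proc/sys/kernel/randomize_va_space'
sudo sysctl -w kernel.randomize_va_space=1
sysctl is generally preferred for entries under /proc/sys, since it exists precisely to read and write these kernel variables.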
I have a .jar file that is compiled on a server and later copied down to a local machine. Doing ls -l on the local machine just gives me the time it was copied onto the local machine, which could be much later than when it was created on the server. Is there a way to find that time on the command line?
UNIX-like systems do not record file creation time.
Each file has 3 timestamps, all of which can be shown by running the stat command or by providing options to ls -l:
Last modification time (ls -l)
Last access time (ls -lu)
Last status (inode) change time (ls -lc)
For example, if you create a file, wait a few minutes, then update it, read it, and do a chmod to change its permissions, there will be no record in the file system of the time you created it.
If you're careful about how you copy the file to the local machine (for example, using scp -p rather than just scp), you might be able to avoid updating the modification time. I presume that a .jar file probably won't be modified after it's first created, so the modification time might be good enough.
Or, as Etan Reisner suggests in a comment, there might be useful information in the .jar file itself (which is basically a zip file). I don't know enough about .jar files to comment further on that.
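Since a .jar is essentially a zip file, you can list the timestamps stored inside it with unzip (the filename here is just a placeholder):
unzip -l myfile.jar
Each entry is shown with the date and time it was added to the archive, which normally reflects when the jar was built on the server.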
wget and curl have options that allow you to preserve the file's modification timestamp. This is close enough to what I was looking for.
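For example (the URL is a placeholder):
curl -R -O http://example.com/myfile.jar
wget http://example.com/myfile.jar
curl's -R/--remote-time option sets the downloaded file's modification time from the server, and wget applies the server's Last-Modified time by default.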
I set the HOME variable in /etc/launchd.conf using the following line:
setenv HOME /Users/student
Now the machine won't boot. I tried holding Shift at startup, but safe mode doesn't seem to be working. I tried holding Cmd+S at startup and got into single-user mode. I was able to bring up the /etc/launchd.conf file, but I cannot save/overwrite the existing file due to permission issues.
Is there some way to reset this file from single-user mode (or otherwise)? I'm open to other approaches as well. I am not a unix/linux power user by any means, FYI :)
Thank you in advance.
I'll give you two options, but first a warning: both of these involve using the command line to undo the damage, and if you do it wrong there's a possibility you'll make it even worse. A backup would've been a good idea, but it's a little late for that (well, actually, there are still ways to do it, but they also involve a risk of getting it wrong...). So whatever you do, be careful.
Option 1: use single-user mode (Command-S at startup). This will leave you running as root, which means you are not subject to normal file permissions; after remounting the startup disk for write access (mount -uw /) you should not get permission errors. You said this didn't work; the most likely explanation is that you mistyped the command (I see this happen a lot -- people either leave out the "/", or the space between "-uw" and "/"). Hint: if the mount command prints anything (other than the prompt for the next command), you typed it wrong. If that still doesn't do it, check the file's flags and metadata with ls -leO@ /etc/launchd.conf and report the results.
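For reference, the full single-user-mode sequence would look something like this (the -disabled suffix is just a convention):
mount -uw /
mv /etc/launchd.conf /etc/launchd.conf-disabled
reboot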
Option 2: use recovery mode (Command-R at startup). This boots from a small hidden partition with a minimal copy of OS X. In recovery mode, pull down the Utilities menu and choose Terminal. This is actually a fair bit like single-user mode, except that the normal startup disk won't be /, it'll be /Volumes/Macintosh HD (or whatever it's named), and it'll already be mounted for write access. Since there's (probably) a space in the volume name, you'll have to quote or escape it:
$ cd "/Volumes/Macintosh HD/etc"
$ mv launchd.conf launchd.conf-disabled
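Before rebooting, you can confirm the rename took effect (volume name assumed):
$ ls -l "/Volumes/Macintosh HD/etc" | grep launchd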
I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to back up the dir and to compare the files against the last backup.
The way I am doing this is:
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and upload only as necessary. It will then delete the Current symlink and re-create it pointing to the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is using an HFS filesystem, which supports hard linking. I am trying to figure out whether something else is not supporting this, or whether I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when mounted via afp. Even if both files exist on the server.
I am guessing this is a limitation of afp.
Just in case your command line is only an example: be sure to always specify the link-dest directory with an absolute pathname! That's something which took me quite some time to figure out.
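To illustrate the difference: per the rsync man page, a relative --link-dest path is resolved against the destination directory, not your current directory, which is rarely what you want.
rsync -aP --link-dest=../Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
The first form depends on where the destination lives; the second is unambiguous.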
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or one that mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option).
and
Note that rsync versions prior to 2.6.1 had a bug that could
prevent --link-dest from working properly for a non-super-user
when -o was specified (or implied by -a). You can work-around
this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
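For example, using one of the files from your error message (paths taken from the question):
ln /Backup/Current/dgs1200series_manual_310.pdf /Backup/test-hardlink
ls -l /Backup/test-hardlink
rm /Backup/test-hardlink
If the ln fails with the same "Operation not supported" error, the problem is the mount (e.g. afp), not rsync.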
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
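With that change, the script from the question becomes (everything else unchanged):
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current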