I use UBIFS for rootfs on NAND.
When I edited a file like /etc/rc.local with the nano command and saved it,
"cat /etc/rc.local" shows the edited content, of course.
However, after removing the power supply (without running reboot or poweroff) and powering the board on again, the content of /etc/rc.local becomes empty.
I found that in UBIFS, written data is not committed to NAND straight away; it is first kept in a write-back cache (refer: http://www.linux-mtd.infradead.org/faq/ubifs.html#L_empty_file).
I want the data synced to NAND immediately after editing.
The only solution I found is fsync, but that has to be called from a C program and requires a file descriptor. Editors such as nano do not give us a file descriptor, so I cannot solve this sync problem that way.
How can I solve this problem of data not being synced to NAND?
Is there any command to sync?
Do I have to edit files from a C program and use fsync whenever I want to edit and save a file on UBIFS?
You can use the sync command; it tells the system to flush all cached writes to the disk.
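For example, a minimal sequence after editing (assuming /etc/rc.local is the file in question):

nano /etc/rc.local
sync    # flush dirty data, including UBIFS write-back buffers, to NAND

With a recent coreutils (8.24 or later), sync also accepts a file argument, so sync /etc/rc.local should flush just that file.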
I have done some searching and people say that I must use
sudo sh -c 'echo 1 > /proc/sys/kernel/randomize_va_space'
to edit this file. Can someone explain to me why?
When I use vim as root to edit this file, saving it shows an error: "/proc/sys/kernel/randomize_va_space" E667: Fsync failed
The files in /proc are not regular files. They are in fact kernel variables exposed through the filesystem for easier access. Because of that, they don't support all the operations of "normal" files; in particular, fsync.
The text editor doesn't know that it's dealing with a special file and tries to use that unsupported operation. On the other hand, echo ... > file works because it does not call fsync.
fsync is a function that tells the OS to write any pending changes for a file out to disk.
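So writing the value from the shell, without an editor (and without fsync), works fine; the value 1 below is just the one from the question:

sudo sh -c 'echo 1 > /proc/sys/kernel/randomize_va_space'
cat /proc/sys/kernel/randomize_va_space    # read it back to confirm the change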
I need to mount a VHD file at grub2 command prompt.
I tried using the "loopback" command as shown below:
grub > insmod ntfs
grub > insmod ntldr
grub > loopback loop (hd0,1)/test.vhd
grub > ls (loop)/
error: unknown filesystem
I tried both "static" (fixed) and "dynamic" VHDs, and both VHD files contained NTFS-partitioned data.
I guess VHD files have some header data which makes the filesystem unrecognizable after a "loopback" mount. I am able to mount and access "iso" files using the same set of commands.
Is my guess correct? If so, is there a way to overcome this issue?
Well, your guess is half right:
Whilst VHD supports a linear "fixed" storage model, which really is just the raw data as it would be stored on a "real" hard drive, followed by a VHD footer, that is usually not what you have: VHD also supports dynamically growing images, which are not linear internally, so you can't simply loop-mount and boot into one of those.
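If your image is a dynamic VHD, one possible workaround (a sketch, assuming qemu-img is available; the output file name is just an example) is to convert it to a fixed VHD first, which GRUB can then treat as a linear image:

qemu-img convert -O vpc -o subformat=fixed test.vhd test-fixed.vhd    # vpc is qemu's name for the VHD format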
I was finally able to access the data from the loopback mount with the following change to the GRUB commands pasted above.
grub > insmod ntfs
grub > loopback loop (hd0,1)/test.vhd
grub > ls (loop,1)/
The file "test.vhd" was a single partitioned VHD file.
NOTE: Only "fixed" or "static" model VHDs work. I could not get it working with "dynamic" VHD (as suggested by #Marcus Müller )
Thanks for the help. Hope this helps somebody.
To use VHD disks in GRUB2 you need:
insmod part_msdos
insmod ntfs
loopback loop /point/where/disk.vhd tdisk=VHD
ls (loop,msdos1)/
Here is what I am trying to do: I need to know whenever a file is read or used by a tool (e.g. a compiler). I use ls to get the last accessed time using the following command:
ls -l --time=access -u --sort=time --time-style=+%H:%M:%S
or
stat "filename"
But my files' access times are not getting updated. I figured it's because of caching (please correct me if I am wrong), so my next step was to clear the cache. Researching it, I came across some variations of the following command:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
The thing is, even after I execute this command, my file access time is not updated! My way of testing the access time is by opening the file in gEdit or calling gcc on my source file.
My setup: Ubuntu 12.04 running on VMware, which is running on Windows 7.
Question: what am I missing or doing wrong that my access time is not being updated?
What you're observing is a change in the default mount options, introduced in Linux 2.6.30 to improve filesystem performance.
Quoting from man mount:
relatime
       Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time. (Similar to noatime, but doesn't break mutt or other applications that need to know if a file has been read since the last time it was modified.)
       Since Linux 2.6.30, the kernel defaults to the behavior provided by this option (unless noatime was specified), and the strictatime option is required to obtain traditional semantics. In addition, since Linux 2.6.30, the file's last access time is always updated if it is more than 1 day old.
You might be looking for the following mount option:
strictatime
       Allows to explicitly request full atime updates. This makes it possible for the kernel to default to relatime or noatime but still allow userspace to override it. For more details about the default system mount options see /proc/mounts.
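To get the traditional behaviour back you can remount the filesystem with that option (a sketch; adjust the mount point to wherever the files you are watching live):

sudo mount -o remount,strictatime /

After that, stat filename should show the access time changing on every read again.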
I set the HOME variable in /etc/launchd.conf using the following line: setenv HOME /Users/student
Now the machine won't boot at startup. I tried holding Shift at startup, but safe mode doesn't seem to be working. I tried holding Cmd+S at startup and got into single-user mode. I was able to bring up the /etc/launchd.conf file, but I cannot save/overwrite the existing file due to permission issues.
Is there some way to reset this file from single-user mode (or otherwise) to fix this? I'm open to other approaches as well. I am not a unix/linux power user by any means, fyi :)
Thank you in advance.
I'll give you two options, but first a warning: both of these involve using the command line to undo the damage, and if you do it wrong there's a possibility you'll make it even worse. A backup would've been a good idea, but it's a little late for that (well, actually, there are still ways to do it, but they also involve a risk of getting it wrong...). So whatever you do, be careful.
Option 1: use single-user mode (Command-S at startup). This will leave you running as root, which means you are not subject to normal file permissions; after remounting the startup disk for write access (mount -uw /) you should not get permission errors. You said this didn't work; the most likely explanation is that you mistyped the command (I see this happen a lot -- people either leave out the "/", or the space between "-uw" and "/"). Hint: if the mount command prints anything (other than the prompt for the next command), you typed it wrong. If that still doesn't do it, check the file's flags and metadata with ls -leO /etc/launchd.conf and report the results.
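A sketch of the full single-user-mode sequence (assuming the offending file really is /etc/launchd.conf; renaming it is easily reversible, unlike deleting it):

mount -uw /
mv /etc/launchd.conf /etc/launchd.conf-disabled
reboot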
Option 2: use recovery mode (Command-R at startup). This boots from a small hidden partition with a minimal copy of OS X. In recovery mode, pull down the Utilities menu and choose Terminal. This is actually a fair bit like single-user mode, except that the normal startup disk won't be /, it'll be /Volumes/Macintosh HD (or whatever it's named), and it'll already be mounted for write access. Since there's (probably) a space in the volume name, you'll have to quote or escape it:
$ cd "/Volumes/Macintosh HD"
$ mv launchd.conf launchd.conf-disabled
I would like to have a synchronized copy of one folder with all its subtree.
It should work automatically in this way: whenever I create, modify, or delete something in the original folder, those changes should be automatically applied to the sync folder.
What is the best approach to this task?
BTW: I'm on Ubuntu 12.04
The final goal is to have a separate, real-time backup copy, without the use of symlinks or mounts.
I used Ubuntu One to synchronize data between my computers, and after a while something went wrong and all my data was lost during a synchronization.
So I thought I'd add a further step to keep a backup copy of my data:
I keep my data stored on a "folder A"
I need the answer to my current question to create a one-way sync of "folder A" to "folder B" (a cron job running an rsync script, perhaps?). It must be one-way only, from A to B; any changes to B must not be applied to A.
Then I simply keep "folder B" synchronized with Ubuntu One.
In this manner any change in A will be applied to B, which will be detected by U1 and synchronized to the cloud. If anything goes wrong and U1 deletes my data on B, I still have it on A.
Inspired by lanzz's comments, another idea could be to run rsync at startup to back up the content of a folder under Ubuntu One, and start Ubuntu One only after rsync has completed.
What do you think about that?
How do I know when rsync ends?
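Something like the following sketch is what I have in mind (the paths are placeholders, and u1sdtool is assumed to be the Ubuntu One sync daemon control tool on 12.04; since rsync blocks until it is finished, the next command only runs when the copy is done):

#!/bin/sh
rsync -a --delete "/home/user/folder A/" "/home/user/folder B/"
u1sdtool --start    # start Ubuntu One syncing only after rsync has finished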
You can use inotifywait (with the modify,create,delete,move flags enabled) and rsync.
while inotifywait -r -e modify,create,delete,move /directory; do
rsync -avz /directory /target
done
If you don't have inotifywait on your system, run sudo apt-get install inotify-tools
You need something like this:
https://github.com/axkibe/lsyncd
It is a tool which combines rsync and inotify: the former is a tool that, with the correct options set, mirrors a directory down to the last bit; the latter tells the kernel to notify a program of changes to a directory or file.
It says:
It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes.
But, according to Digital Ocean at https://www.digitalocean.com/community/tutorials/how-to-mirror-local-and-remote-directories-on-a-vps-with-lsyncd, it ought to be in the Ubuntu repositories!
I have similar requirements, and this tool, which I have yet to try, seems suitable for the task.
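A minimal sketch of how it might be invoked (assuming the package is available in the Ubuntu repositories, and reusing /directory and /target from the answers above):

sudo apt-get install lsyncd
lsyncd -rsync /directory /target    # one-way mirror of /directory into /target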
Just a simple modification of @silgon's answer:
while true; do
inotifywait -r -e modify,create,delete /directory
rsync -avz /directory /target
done
(@silgon's version sometimes crashes on Ubuntu 16 if you run it from cron.)
Using the cross-platform fswatch and rsync:
fswatch -o /src | xargs -n1 -I{} rsync -a /src /dest
You can take advantage of fschange. It's a Linux filesystem change notification mechanism. The source code is downloadable from the above link; you can compile it yourself. fschange can be used to keep track of file changes by reading data from a proc file (/proc/fschange). When data is written to a file, fschange reports the exact interval that has been modified instead of just saying that the file has been changed.
If you are looking for a more advanced solution, I would suggest checking out Resilio Connect.
It is cross-platform and provides extended options for use and monitoring. Since it's BitTorrent-based, it can be faster than other existing sync tools. (Disclosure: this was written on their behalf.)
I use this free program to synchronize local files and directories: https://github.com/Fitus/Zaloha.sh. The repository contains a simple demo as well.
The good points: it is a bash shell script (one file only), not a black box like other programs, and documentation is there as well. Also, with some technical talent, you can "bend" and "integrate" it to create the final solution you like.
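A sketch of how it might be run (the --sourceDir and --backupDir option names are taken from the repository's documentation; the folders are the ones from this question):

bash Zaloha.sh --sourceDir="/home/user/folder A" --backupDir="/home/user/folder B"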