I just set up an NVMeOF/RDMA environment to play around with. I have a target node whose NVMe SSD is accessed by several client nodes. However, when I delete a file, say test, on one client node, the other nodes cannot see this operation and can still read the contents of test as normal. I know that RDMA bypasses the kernel, so I guess this is because of caching? I then tried to clear the caches using these commands:
sudo sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
sudo sync; echo 1 | sudo tee /proc/sys/vm/drop_caches
sudo sync; echo 2 | sudo tee /proc/sys/vm/drop_caches
Unfortunately, the other nodes still see the file.
So actually I have two questions:
Is this really due to the cache? How does it work?
What is the correct way to clear the cache so that the other nodes can see the deletion without re-mounting?
Any help will be greatly appreciated!
Relatively short answer:
Like Boris said, you don't want to do that (distributed consistency on storage is a hard problem), and you need something else to do what you want. Flushing caches may not work because you have multiple distinct views of the system, each with its own caching behavior.
Longer answer:
As Boris mentioned, NVMeoF is a block protocol. This means that, at a broad level (with some hand-waving), all it can do is read and write blocks at particular addresses. In practice, we usually have layers above the NVMe/NVMeoF communication layer, such as file systems, that handle this abstraction.
I can't tell right now if you're using a file system or if you are reading/writing the device directly, but in either case you are at least partially correct that the page cache may be getting in the way, even with RDMA.
Now, if you are using local file systems on your client nodes, you quickly get inconsistent views. The filesystem (and consequently the overall operating system, with its view of the state of the page cache and block storage) has no idea anyone else wrote anything. So even if you write and sync on one client, you may have to bypass the page cache on another (e.g. use O_DIRECT reads, which have their own set of complexities; see the sketch below) and make sure you target something that eventually refers to the same block addresses that were written on the NVMe target from your other client.
In theory, this will let you read data written by another client if everything lines up correctly; in practice, though, it can cause confusion, especially if a file system or application on one client writes something at a location and another client unknowingly reads or writes that same location. Now you have a consistency problem.
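For illustration, here is a minimal sketch of what a page-cache-bypassing read could look like from Python on a Linux client. The device path and block size are assumptions for illustration; O_DIRECT requires the buffer, offset, and length to be aligned to the device's logical block size:

    import os, mmap

    BLOCK = 4096                 # assumed logical block size; check with blockdev --getbsz
    DEVICE = "/dev/nvme1n1"      # hypothetical NVMeoF namespace as seen on the client

    fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = mmap.mmap(-1, BLOCK)      # anonymous mmap gives a page-aligned buffer
        os.preadv(fd, [buf], 0)         # read the first block, bypassing the page cache
        data = bytes(buf[:BLOCK])
    finally:
        os.close(fd)

Note that this only bypasses the reader's page cache; it does nothing about what the writer's filesystem has or has not flushed to the target.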
NVMeoF (with RDMA or any other transport) is a block-level storage protocol, not a file-level storage protocol. Thus, there is no guarantee of atomicity for file operations across nodes in NVMeoF systems. Even if one node deletes a file, there is no guarantee that:
The delete operation was actually translated into block-level updates and sent to the storage target (for a local filesystem, a delete usually just rewrites metadata and may never touch the data blocks at all);
Even if the storage target did clear those blocks, there is no guarantee that other clients that have cached this data will not continue to read it. Moreover, another client can overwrite the deleted file.
Overall, I think that to have any guarantees at the file level, you should consider a distributed file system rather than raw NVMeoF.
What is the correct way to clear the cache so that the other nodes can see the deletion without re-mounting?
There is no good way to do it. Flushing the cache on all nodes and only then reading may work, but it depends on the file system.
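If you want to experiment anyway, a per-file alternative to the global drop_caches is posix_fadvise with DONTNEED. This is only a sketch with a made-up path, and it only drops the cached data pages for that one file; it does nothing about cached directory entries or the filesystem's in-memory metadata, which is part of why the deleted file can still appear on other nodes:

    import os

    path = "/mnt/nvme/test"    # hypothetical mount point and file on the reading client

    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)                                         # flush any dirty pages first
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)   # drop this file's cached pages
    finally:
        os.close(fd)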
I want to write a backup application that will:
read an NTFS partition without using the Windows API (done)
write a bootable NTFS partition from the data it saved (planned)
I have a problem with $LogFile. I'm worried that if I simply copy it, it could make Windows believe the partition is in a bad state and try to fix it (which would probably corrupt things, since this scenario isn't supposed to happen). I currently have very little understanding of how $LogFile works, except that it is some kind of transaction journal and that it uses entry sequence numbers.
My questions are the following:
What happens if MFT entries don't match the $LogFile content (e.g. the sequence numbers)?
Can I bypass this by not copying $LogFile, or at least by removing part of its content? (I guess Windows wouldn't try to fix anything, but I can't be sure.)
If that doesn't work, what could I do to make $LogFile safe to copy?
I accidentally ran this command while trying to remove an errant directory named \\ from my project directory. Quite a mistake, I know. It pretty quickly began hitting permission-protected files, at which point I realized my mistake, so I Ctrl-C'd out of there. I have all my important projects backed up, but the command killed my development environment. Opening vim anywhere crashes with a segfault like so:
Vim: Caught deadly signal SEGV
Error detected while processing function <SNR>130_PollServerReady[7]..<SNR>130_Pyeval:Vim: Finished.
line 4:
Exception MemoryError: MemoryError() in <module 'threading' from '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.pyc'> ignored
[1] 6921 segmentation fault vim ~/dotfiles/.vimrc
My primary question, for myself and anyone who commits a similar gaffe, is:
What, precisely, does the double slash // point to? What would be deleted first? Is there a logical first place to begin replacing utilities, configs, $PATH stuff, etc.?
Hopefully, this is clear and specific enough for SO.
cd // will take you to the root directory /.
rm performs a depth-first search, walking the results of the xfts_open call. find also traverses filesystems in this manner.
find / will list the files that still exist. You can then use your knowledge of the expected structure to work out which files are missing.
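For example, if you have (or can generate on a healthy machine with the same setup) a manifest of the paths you expect, a rough sketch of the comparison could look like this; manifest.txt is a hypothetical file, and find should be run as root so it can descend everywhere:

    import subprocess

    # Paths you expect to exist, one per line (e.g. generated on an identical healthy box).
    with open("manifest.txt") as f:
        expected = set(line.rstrip("\n") for line in f)

    # List what is actually still on the root filesystem.
    result = subprocess.run(["find", "/", "-xdev"], capture_output=True, text=True)
    present = set(result.stdout.splitlines())

    for path in sorted(expected - present):
        print("missing:", path)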
Alternatively, you can use debugfs to help you get at the files.
This assumes that these commands will actually work. Realistically, your system is probably hosed. Deleting things in / will break your computer. Restoring from backup is probably the easiest way to return to a functional system. You can also try various utilities to recover recently erased files from your hard drive; if you plan on doing this, you should stop using the computer, because the drive now treats many areas that recently held files in / as free space (since you told it to), and it could start writing over them at any time.
In Linux, and I believe other *nix flavours, an extra slash in a path is simply ignored. Thus, a//b is the same as a/b and // is the same as /. I hope you didn't run this as a superuser...
I was working on my website as root and I committed the worst mistake a Linux user can make: rm -R /* instead of rm -R ./*.
I stopped the process when I saw that it was taking too long...
I managed to reinstall Lubuntu with a USB key. Is this a good idea, or are there other ways to reverse this big mistake?
Thanks for any answers.
Short answer: no.
Long answer: it depends on the filesystem and on how rm is implemented. It's possible that rm merely unlinks the file; the inode (marked "deleted") and the data may still remain. And even if the inode is hard-deleted, the data may remain. But in either case, there is a risk that your activity since then has already written data over your old data or over the location of the soft-deleted inode. This can happen even through temporary files, file descriptors (such as for sockets or processes), or the swap file [well, unless that has its own partition].
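As a small illustration of the "merely unlinks" point, here is a toy sketch (the scratch file name is made up). Removing the name does not immediately destroy the data; the data only becomes reclaimable once the last reference is gone:

    import os

    with open("scratch.txt", "w") as f:
        f.write("still here\n")

    fd = os.open("scratch.txt", os.O_RDONLY)
    os.unlink("scratch.txt")        # the name is gone; rm does essentially this
    print(os.read(fd, 64))          # ...but the data is still readable via the open fd
    os.close(fd)                    # now the inode and its blocks are free for reuse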
I wouldn't recommend trying to relink soft-deleted inodes, or infer from your data how to reconstruct hard-deleted inodes. Sure, maybe for irreplaceable memories this would be worth it (take the drive to a data forensics specialist), but there's near-guaranteed corruption somewhere on the disk. I would certainly not attempt to run a production system off a disk recovered like that.
I recommend one of the following:
restoring from your regularly-scheduled backup
wiping everything and starting again (you have all your website files stored under source control and stored remotely, right?)
redeploying your Docker image (this was an immutable deployment, right?)
To restrict the scope, let's assume we are in the Windows world only.
Also assume we don't want to play with permission policies.
Is it possible for us to create a file that cannot be copied?
Thank you in advance.
"Trying to make digital files uncopyable is like trying to make water not wet." ~ Bruce Schneier
No. You can't create a file that a SYSADMIN can't copy. You could encrypt it, though.
Well, how about creating a file that uses up more than 50% of the total space on that machine and that is not compressible?
For instance, let us assume that you want to save a boolean (true or false) in such a fashion.
Depending on its value, you could then write a bit stream of ones or zeroes and encrypt said stream using some kind of encryption algorithm, such as AES in CBC mode. This gives you the added advantage of a degree of error tolerance: even in case of massive data corruption, you should be able to recover your boolean by checking whether ones or zeroes are prevalent in the decrypted stream.
In that case you cannot copy it around (completely) on the machine...
Of course, any type of external memory that can be added to the system would pose a problem in this scenario. But the file would be already encrypted, so don't worry about it too much...
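For what it's worth, here is a toy sketch of the scheme described above, using Python's cryptography package. The sizes and the key handling are placeholders, and this is obviously not a serious copy-protection mechanism:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)      # must be kept somewhere the copier cannot reach
    iv = os.urandom(16)
    value = True              # the boolean we want to store

    # Encode the boolean as a long run of identical bits, then encrypt it with AES-CBC.
    plaintext = (b"\xff" if value else b"\x00") * (1024 * 1024)   # scale this up to "fill" the disk
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(plaintext) + enc.finalize()

    # Recovery: decrypt and take a majority vote over the bits, tolerating some corruption.
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    recovered = dec.update(ciphertext) + dec.finalize()
    ones = sum(bin(b).count("1") for b in recovered)
    print(ones > len(recovered) * 4)    # True if set bits outnumber clear bits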
Any file that can be read can have its contents written to another location (such as another file, i.e. copied).
The only thing you can do is limit who/what can read the file.
What is the motivation behind this? If it is a read-only file, you could include it as an embedded resource within your assembly.
Nice try, RIAA.
But seriously, no, you cannot. It is always possible to copy a file; you can just make it more difficult for people to make sense of it, or try to hide it, for example with encryption. Spotify does this.
If you really try hard, though, you could make a rootkit for Windows and use it to prevent Windows from even knowing about the file, and also to prevent copies. The file would still be there, though, and copyable by other tools or by Linux accessing the NTFS volume.
If a running process opens a file and holds an exclusive lock on it, other processes cannot read the file until you close the handle or your process terminates. However, as admin you could still forcibly close the lock handle.
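A sketch of what that could look like with the pywin32 bindings (the path is hypothetical). While this handle is open with a share mode of 0, other processes get a sharing violation when they try to open the file:

    import win32con, win32file   # pywin32, assumed to be installed

    handle = win32file.CreateFile(
        r"C:\data\secret.bin",            # hypothetical file to lock
        win32con.GENERIC_READ,
        0,                                # dwShareMode = 0 -> no sharing at all
        None,                             # default security attributes
        win32con.OPEN_EXISTING,
        win32con.FILE_ATTRIBUTE_NORMAL,
        None,
    )

    input("File is locked; press Enter to release it...")
    handle.Close()                        # the lock also goes away if the process dies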
Short answer: No.
You can, of course, use security settings to limit who can read the file. But if someone can read it, then they can copy it. Even if you found some operating system trick to disable "ordinary" copying, if someone can read the file, they can extract the contents, store it in memory, and then write it somewhere else.
You can encrypt the contents so it's only useful to your own program, that knows how to decrypt it.
That's about it.
When using Windows 7 to copy some files from a hard drive, certain files popped up a message saying they could not be copied in their entirety; certain data would be omitted from the copy. I suspect that had something to do with slack space at the end of the files, though I thought the message was curious. I would have expected the copy operation to just ignore the slack space.
If you are running old (OLD) versions of Windows, there are certain characters you can put in a filename that make it invalid, not listed in folders, etc. They were used a lot in the old pub FTP days of file sharing ;)
In the old DOS days, you used to be able to flag disk sectors as bad and still read from them. This meant the OS ignored the sector in question but your application would know where to look and be able to get the data. Not sure this would work these days.
Another old MS-DOS trick was to put a space character in the middle of the filename (yes, spaces were valid characters for filenames). Since there was no method on the command line to escape a space, the file couldn't be copied using the DOS commands.
This answer goes outside Windows, so take it for what it's worth.
Don't know if it's already been said, but what about a file that is an inseparable part of the firmware, so that it is always on AND running? Perhaps the firmware generates a sequence that is required by the rest of the system. An incidental effect of its running is to prevent 80% or more of its code from being replicated. Let's say it's on an entirely different board, protected by surge protectors, heavy EM-proof shielding, and anything else required to make it completely unerasable.
If it's possible to make a program that is ALWAYS on and running whenever the copying software is running, then yes.
I have another way, and this IS with Windows: I will come to your house and give you a disk, and I will then proceed to destroy every single computer you put the disk into. This doesn't work on XP.
Well technically you could create and write to a write-only network share.