Ruby: Reading Unsaved Text File - ruby

I have an RFID scanner program in Windows writing its results to a sample text file. However, the program does NOT save the file. Is there any way in Ruby that I can read in these changes made to the text file even though they have not been committed?

No, you cannot.

Can the RFID program be set up to send its data to any arbitrary program? Because if so, then I'd say your best bet is to start by writing a program that it can send to that would do something sensible with the data (like write it to a file), instead of using Notepad as part of your process.
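If the scanner can be pointed at an arbitrary command, the receiving program does not need to be complicated. Here is a minimal sketch (in Go; the output file name is made up, and it assumes the scanner can pipe its readings to another process's stdin) that just appends whatever it receives to a file on disk:

    package main

    import (
        "bufio"
        "log"
        "os"
    )

    func main() {
        // Hypothetical receiver: assumes the RFID program pipes its
        // readings to this process's stdin (e.g. rfid.exe | receiver.exe).
        out, err := os.OpenFile("scans.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            // Write each reading to disk immediately so another
            // program (Ruby or otherwise) can read it right away.
            if _, err := out.WriteString(sc.Text() + "\n"); err != nil {
                log.Fatal(err)
            }
        }
    }

Your Ruby program could then simply poll or tail scans.log instead of fighting Notepad's unsaved buffer.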

Related

How to make program to overwrite itself during execution in go

I tried to write a program that opens itself, reads itself, and looks for a certain address or bytes to substitute with another value.
My objective is to make a program that understands whether it's the first time it's running or not by modifying some bytes the first time it runs (and I don't really want to create a file outside of my program).
The executable can read itself, but when it tries to self-overwrite it throws an error (file used by another process... as expected).
Is there a way for the program to overwrite itself? If not, maybe I can modify just a part of the program that contains just data?
Is there another simple solution I am not aware of?
(I'm using both Linux and Windows as OS.)
From what I understand, your objective is to find out whether the program has been run before. Instead of the approach you presented, why not create a marker file (it could be any file) and, on startup, check whether it exists? If it's there, the program has run before; if not, this is the first run.
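A minimal sketch of that marker-file idea (in Go, since that's the language of the question; the marker file name is made up):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    const markerFile = ".has_run" // hypothetical marker file name

    func main() {
        if _, err := os.Stat(markerFile); errors.Is(err, os.ErrNotExist) {
            fmt.Println("first run")
            // Create the marker so the next run takes the other branch.
            if err := os.WriteFile(markerFile, nil, 0o644); err != nil {
                panic(err)
            }
        } else {
            fmt.Println("has run before")
        }
    }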
A workaround can be (because it doesn't overwrite itself, it just creates another file):
copy all content of the original executable
modify what I need
rename the original executable to a fixed name, "old version"
write the modified bytes to "original name" (the modified executable)
launch the new executable just created
either have the original executable self delete or delete it from the modified executable just created
I think this gets the job done, even if not in the cleanest way (the program has to start from the beginning, but I guess this is unavoidable)...
If someone still knows a better way, you are more than welcome to write your idea.
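For what it's worth, a rough sketch of that rename-and-replace sequence in Go (error handling is minimal, and the ".old" suffix and the --cleanup flag are invented for illustration):

    package selfupdate

    import (
        "os"
        "os/exec"
    )

    // replaceSelf writes the modified bytes under the running executable's
    // name and launches the result. Renaming (rather than deleting or
    // overwriting) a running executable is generally permitted on both
    // Linux and Windows, which is what makes this workaround possible.
    func replaceSelf(modified []byte) error {
        self, err := os.Executable()
        if err != nil {
            return err
        }
        oldCopy := self + ".old" // the fixed "old version" name

        // 1) rename the original executable out of the way
        if err := os.Rename(self, oldCopy); err != nil {
            return err
        }
        // 2) write the modified bytes under the original name
        if err := os.WriteFile(self, modified, 0o755); err != nil {
            return err
        }
        // 3) launch the new executable; "--cleanup" is a hypothetical flag
        //    telling it to delete the old copy once it has started
        return exec.Command(self, "--cleanup", oldCopy).Start()
    }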

How to get all open file handles in kernel space code?

I want to write code in kernel space to find all open file handles in the system and the process ID which holds those handles.
In user space we can do it using the utility "lsof". Similarly, I want the same in kernel space.
What's so great about the Linux kernel is that it's open source. If you want to understand how to implement something similar to lsof, why not inspect its source code (I suggest the implementation from the Android 4.2.2 source tree, as it is simplified and easier to understand) or strace it to understand how the magic happens?
If you do so, at some point you'll encounter the following line:
openat(AT_FDCWD, "/proc/<PID>/fd", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC)
This hints that, for each running PID, procfs can expose information about all open file descriptors that the process holds. Therefore, this is where I would start my research and journey through the code.
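To make that concrete, here is a small user-space sketch (ordinary Go, not kernel code) that walks procfs the way lsof does; a kernel-space implementation would instead iterate over tasks and their fd tables, but this shows the information procfs already exposes:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strconv"
    )

    func main() {
        entries, err := os.ReadDir("/proc")
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            pid, err := strconv.Atoi(e.Name())
            if err != nil {
                continue // not a process directory
            }
            fdDir := filepath.Join("/proc", e.Name(), "fd")
            fds, err := os.ReadDir(fdDir)
            if err != nil {
                continue // permission denied or process already gone
            }
            for _, fd := range fds {
                target, err := os.Readlink(filepath.Join(fdDir, fd.Name()))
                if err != nil {
                    continue
                }
                fmt.Printf("pid %d: fd %s -> %s\n", pid, fd.Name(), target)
            }
        }
    }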

two programs accessing one file

New to this forum - looks great!
I have some Processing code that periodically reads data wirelessly from remote devices and writes that data as bytes to a file, e.g. data.dat. I want to write an Objective-C program on my Mac Mini using Xcode to read this file, parse the data, and act on the data if data values indicate a problem. My question is: can my two different programs access the same file asynchronously without a problem? If this is a problem, can you suggest a technique that will allow these operations?
Thanks,
Kevin H.
Multiple processes can read from the same file at a time without any problem. A process can also read from a file while another writes without problem, although you'll have to take care to ensure that you read in any new data that was written. Multiple processes should not write to the same file at the same time, though. The OS will let you do it, but the ordering of data will be undefined, and you'll likely overwrite data; in general, you're gonna have a bad time if you do that. So you should take care to ensure that only one process writes to a file at a time.
The simplest way to protect a file so that only one process can write to it at a time is with the C function flock(), although that function is admittedly a bit rudimentary and may or may not suit your use case.
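As an illustration, a hedged sketch of the flock() approach (written in Go here; the lock is advisory, so every process that writes the file must take it for the protection to mean anything):

    package filelock

    import (
        "os"
        "syscall"
    )

    // appendWithLock takes an exclusive advisory lock on the file before
    // writing, so only one cooperating process writes at a time. Readers
    // could take syscall.LOCK_SH instead if they want a consistent view.
    func appendWithLock(path string, data []byte) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()

        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

        _, err = f.Write(data)
        return err
    }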

In Haskell, in Windows 7, can I read a file that is already write-locked by another program?

I have a 3rd party program that is running continuously, and is logging events in a text file. I want to write a small Haskell program that reads the text file while the other program is running and warns me when certain events are logged.
I looked around and it seems as if, on Windows, readFile allows either a single writer OR multiple readers - it does not allow a single writer and multiple readers at the same time. As I understand it, this is to avoid side effects like the write changing the file after/during reads.
Is there some way for me to work around this constraint on locks? The log file is only appended, and I am only looking for specific rows in the file, so I really don't mind if I don't get the most recent write, as I am interested in eventual consistency and will keep checking the file.

Programmatically empty out large text file when in use by another process

I am running a batch job that has been going for many, many hours; the log file it is generating is increasing in size very fast and I am worried about disk space.
Is there any way through the command line, or otherwise, that I could hollow out that text file (set its contents back to nothing) with the utility still having a handle on the file?
I do not wish to stop the job and am only looking to free up disk space via this file.
I'm on Vista, 64-bit.
Thanks for the help,
Well, it depends on how the job actually works. If it's a good little boy and it pipes its log info out to stdout or stderr, you could redirect the output to a program that you write, which could then write the contents out to disk and manage the sizes.
If you have access to the job's code, you could essentially tell it to close the file after each write (hopefully it's an append) operation, and then you would have a timeslice in which you could actually wipe the file.
If you don't have either one, it's going to be a bit tough. If someone has an open handle to the file, there's not much you can do, IMO, without asking the developer of the application to find you a better solution, or just plain clearing out disk space.
Depends on how it is writing the log file. You cannot just delete the start of the file, because the file handle has an offset of where to write next. It will still be writing at 100 MB into the file even though you just deleted the first 50 MB.
You could try renaming the file and hoping it just creates a new one. This is usually how rolling logs work.
You can use a rolling log class, which will wrap the regular file class but silently seek back to the beginning of the file when the file reaches a maximum designated size.
It is a very simple wrapper; either write it yourself or try finding an implementation online.
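A minimal sketch of such a wrapper (this only helps if you control the job's code and can hand it this writer instead of a plain file; the type and names are made up):

    package rollinglog

    import (
        "io"
        "os"
    )

    // rollingWriter writes to a file and silently seeks back to the start
    // once the designated maximum size is reached, overwriting old data
    // in place instead of letting the file grow.
    type rollingWriter struct {
        f       *os.File
        maxSize int64
        written int64
    }

    func newRollingWriter(path string, maxSize int64) (*rollingWriter, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            return nil, err
        }
        return &rollingWriter{f: f, maxSize: maxSize}, nil
    }

    func (w *rollingWriter) Write(p []byte) (int, error) {
        if w.written+int64(len(p)) > w.maxSize {
            // Wrap around to the beginning instead of growing the file.
            if _, err := w.f.Seek(0, io.SeekStart); err != nil {
                return 0, err
            }
            w.written = 0
        }
        n, err := w.f.Write(p)
        w.written += int64(n)
        return n, err
    }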
