I have recently discovered "fs_usage" which seems extremely useful.
I was wondering, along the same lines: is it possible to intercept the interaction of a process with the operating system? (To run it in a "sandbox" mode, for example: if the program says "write to address X", I write to address Y instead; if the program says "read address X", I read address Y and return that; and so on. Basically, I want full control over what a process I run can do to my computer, or sees of my computer.)
Of course it's possible, that's what debuggers do! If you're talking about Linux, that's ptrace(2). On the Mac, that's DTrace.
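To give a flavour of the ptrace approach, here is a minimal sketch in Go (Linux/amd64 only; the traced command /bin/ls is just a placeholder): it starts a child process under tracing and prints the number of every system call the child makes. A real sandbox would inspect and rewrite the call's registers at each stop instead of merely printing them.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "runtime"
        "syscall"
    )

    func main() {
        // ptrace requires that all calls come from the same OS thread.
        runtime.LockOSThread()

        // /bin/ls is only a placeholder for the program to be "sandboxed".
        cmd := exec.Command("/bin/ls")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{Ptrace: true}
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        _ = cmd.Wait() // returns once the child stops with SIGTRAP at exec

        pid := cmd.Process.Pid
        for {
            // Resume the child until its next system-call entry or exit
            // (so every call is reported twice: once going in, once coming out).
            if err := syscall.PtraceSyscall(pid, 0); err != nil {
                break
            }
            var ws syscall.WaitStatus
            if _, err := syscall.Wait4(pid, &ws, 0, nil); err != nil || ws.Exited() {
                break
            }
            var regs syscall.PtraceRegs
            if err := syscall.PtraceGetRegs(pid, &regs); err != nil {
                break
            }
            // On Linux/amd64 the syscall number is in ORIG_RAX; a sandbox would
            // inspect or rewrite the argument registers here and write them back
            // with syscall.PtraceSetRegs before resuming the child.
            fmt.Printf("syscall %d\n", regs.Orig_rax)
        }
    }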
Related
I've seen sites that say that when a command is entered in a terminal's shell, the shell hands it to the OS, which is where the actual work is done; others say the shell allows users to communicate with the kernel.
However, images on the internet, like the first one on this Wikipedia page, show the kernel sitting between the OS and applications.
So is the shell actually sending the commands to the kernel, which then sends them to the OS, or does the shell sit at the same level as the kernel and just send the commands straight to the OS?
How would I go about detecting whether my Go CLI program is in focus or minimized?
The current program is based on https://github.com/jroimartin/gocui
I need this functionality because it is a chat program, and I would like to send OS notifications, but only when the program is not in focus or is minimized.
Your help/direction is much appreciated; as of right now I'm unsure where to start.
This is not possible from the library itself. A command-line program does not have focus; the terminal program it is running in does.
Implementing this (if it is possible at all) would depend on the OS, the window manager, etc.
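Just to illustrate how environment-specific such a check is, here is a hedged Go sketch of one fragile heuristic: it only works under X11 with a terminal emulator (such as xterm) that exports its window id in the WINDOWID environment variable, it shells out to the external xdotool utility, and it breaks under tmux, ssh, Wayland and most other setups.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // terminalHasFocus is a fragile, X11-only heuristic: xterm (and a few other
    // emulators) export their window id in $WINDOWID, and the external xdotool
    // utility can report which window currently has input focus.
    func terminalHasFocus() (bool, error) {
        ownID := os.Getenv("WINDOWID")
        if ownID == "" {
            return false, fmt.Errorf("this terminal emulator does not set WINDOWID")
        }
        out, err := exec.Command("xdotool", "getactivewindow").Output()
        if err != nil {
            return false, err // xdotool missing, or no X display
        }
        return strings.TrimSpace(string(out)) == ownID, nil
    }

    func main() {
        focused, err := terminalHasFocus()
        if err != nil {
            fmt.Println("cannot tell:", err)
            return
        }
        fmt.Println("terminal focused:", focused)
    }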
To refine the answer provided by #mbuechmann, I suggest the OP not try to resort to APIs and the like.
The reasoning is simple.
"Contemporary" users are used to running programs in terminal emulators which are typically presented as separate windows, and so the users naturally think of these programs as not really different from GUI apps.
But the reality is different: a terminal emulator—whether graphical or not (for instance, so-called "virtual consoles" provided by the Linux kernel running on an x86/amd64 hardware are terminal emulators as well)—really emulates a typical work session on a real hardware terminal, and there, a program would work in foreground solely, and the only means of "switching" to another program was using the shell's job control (those jobs, bg and fg commands).
In other words, the whole concept of a program working in a terminal has an inbuilt assumption that the terminal is always "foreground"—since at the time the concept was developed, a terminal was a physical device.
Now please also consider that "terminal emulation" may be more pervasive on a contemporary system than you might think: screen and tmux on a Unix-like OS are multiplexing terminal emulators—which may themselves be run in a terminal emulator, and a console window on Windows™ may be considered to be a terminal emulator of sorts as well.
So, "resorting to APIs" have several technical problems:
Terminal emulation tries to actually decouple the program which uses this facility from being aware of how the facility is actually provided.
To put it simply, there is, say, no easy way on the X Window System to know which window is used by the terminal emulator running your program.
You'd need to cover a diverse set of APIs for your program to still be useful: the X Window System on Unix-like systems, Mac OS, Windows™. And contemporary GUI stacks running on Linux tend to be switching to Wayland instead of X.
In certain cases, like running a program in "nested" terminal emulation sessions (for example, a pane in a "window" of a tmux session running in xterm), figuring out such facts about the environment might be next to impossible.
And still, the crucial problem is that if your program really needs to know whether it's focused or not, it wants to be aware of concepts that are currently hardly accessible to it. In other words, it wants to be a GUI program. And if so, just make it a GUI program.
In fact, it may be simpler than you think. The core of your program might still be a CLI app with a thin GUI wrapper around it which uses any sort of IPC to talk with the app (which might be two-way, if needed).
The simplest approach is to write some (usually line-wise) data to the program's standard input.
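As a rough sketch of such a wrapper (the binary name ./chatcore and the "focus off"/"core said" protocol are invented for this example), the GUI side could start the CLI core and talk to it over its standard input and output like this:

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
    )

    func main() {
        // ./chatcore is a made-up name for the CLI core of the chat program.
        core := exec.Command("./chatcore")
        stdin, err := core.StdinPipe()
        if err != nil {
            panic(err)
        }
        stdout, err := core.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := core.Start(); err != nil {
            panic(err)
        }

        // The wrapper knows whether its own window has focus and tells the core
        // line-wise; the "focus off" message is invented for this sketch.
        fmt.Fprintln(stdin, "focus off")

        // Read line-wise replies from the core, e.g. requests to show a
        // desktop notification.
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            fmt.Println("core said:", scanner.Text())
        }
        _ = core.Wait()
    }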
I would like to know what goes on during the sleep and wakeup process in the OS X kernel.
Does a kernel extension receive a new address space and start its initialization process all over again, or does the kernel simply put the extension back in the same address space?
Do internal kernel extensions (IOKit drivers, for example) also behave the same? Perhaps they are loaded into a different location in memory?
Basically the question is: will my driver, which obtained an interface to an IOService, be able to use its address after sleep without a problem?
On sleep, memory is "frozen", and on resume, it's restored to its original state. So unless you actively participate in power management, your kext won't notice anything has changed. If you're dealing directly with hardware, you will HAVE to care about power management, though, as your device will have power-cycled and will need to be reinitialised.
I've got a customer who told me that my program (a simple user-land program, not a driver) is crashing his system with a Blue Screen of Death (BSOD). He says he has never encountered that with any other program and that he can reproduce it easily with mine.
The BSOD is of type CRITICAL_OBJECT_TERMINATION (0x000000F4) with object type 0x3 (process): "A process or thread crucial to system operation has unexpectedly exited or been terminated."
Can a simple program be responsible for a BSOD (even on Vista...) or should he check the hardware or OS installation?
Just because your program isn't a driver doesn't mean it won't use a driver.
In theory, your code shouldn't be able to BSOD the computer. It's up to the OS to make sure that doesn't happen. By definition, that means there's a problem somewhere either in hardware or in code other than your program. That doesn't preclude there being a bug in your code as well though.
The easiest way to cause a BSOD from a user-space program is (as far as I know) to kill the Windows subsystem process (csrss.exe). This doesn't need faulty hardware or a bug in the kernel or a driver; it only needs administrator privileges¹.
What is your code doing, exactly? The error message ("A process or thread crucial to system operation has unexpectedly exited or been terminated.") sounds like one of the essential system processes terminated. Maybe you are killing a process and unintentionally get hold of the wrong one?
If at all possible, try to get a memory dump from that customer. Using the Debugging Tools for Windows, you can then analyze that dump further as described here.
¹ Windows doesn't prevent you from doing so because it "keeps administrators in control of their computer". So this is by design, not a bug. Read Raymond's articles and you will see why.
The short answer is yes. The long answer depends on what your program is supposed to do and how it does it.
Normally, it shouldn't. If it does, there must be either
A bug in the Windows kernel (possible but very unlikely)
A bug in a device driver (not necessarily in a device your program uses, this could get quite complicated)
A fault in the hardware
I would bet on option number two (device driver) but it would be interesting if you could get us a more detailed dump.
Well, yes it can - but for many different reasons.
That's why we test on different machines, operating systems, hardware etc..
Have you set some requirements for your program and is your user following them?
If you can't duplicate it yourself, and your program doesn't need admin rights to run, I'd be a bit suspicious of:
The stability of that system's hardware
The virus/malware status of that system.
If you can get physical access to the client box, it might be worth running a full virus scan with an up-to-date scanner, and running a full memtest on it.
I had a system once that seemed stable, except that a certain few programs wouldn't run on it (and would sometimes crash the box). Memtest showed my RAM had some bad bits, but they were in the higher SIMMs, so they only got accessed if a program tried to use a lot of RAM.
No, and that is pretty much by definition. The worst thing that you can say is that a user-land application may have "triggered" a Windows bug or a driver bug. But a modern desktop Operating System is fully responsible for its own integrity; a BSOD is a failure of that integrity. Therefore the OS is responsible, and only the OS.
(Example of a BSOD bug that your application alone could expose: a virus scanner implemented as a driver that crashes when executing a file from sector 0xFFFFFFFF, a sector that on this one machine just happens to contain one DLL of your application.)
I had problems when exiting my app without stopping all the processes and DB connections when the program ends (I crashed the entire IDE). I placed the "stopping and disconnecting" code in the "Terminate" or "Form_Closed" event of my main form and the problem was solved. I don't know if this is your situation.
Another problem can arise if the user is trying to access the same resources your app is using (databases, hardware, sockets, etc.). Ask him/her what apps he/she is using when the BSOD happens.
A virus can't be ruled out either.
How does one programmatically cause the OS to switch off, go away and stop doing anything at all so that a program may have complete control of a PC system?
I'm interested in doing this in both MS Windows and Linux environments. Any language or API will be considered.
I want the OS to stop preempting my program, stop its virtual memory management, stop its device drivers and interrupt service routines from running and basically just go away. Then, when my program has had its evil way with the bare metal, I want the OS to come back again without a reboot.
Is this even possible?
With Linux, you could use kexec jump to transfer control completely to another kernel (i.e., your program). Of course, with great power comes great responsibility - it is entirely up to you to service interrupts and avoid corrupting the old kernel's memory. You'll end up having to write your own OS kernel to do this. Also, the transfer of control takes quite some time, as the kernel has to de-initialize all hardware, then reinitialize it when it's time to resume. Since kexec jump was originally designed for hibernation support, this isn't a problem in its original context, but depending on what you're doing, it might be a problem.
You may want to consider instead working within the framework given to you by the OS - just write a normal driver for whatever you're doing.
Finally, one more option would be using the Linux Real-Time patchset. This lets you assign static priorities to everything, even interrupt handlers; by running a process with a higher priority than anything else, you could suspend /nearly/ everything - the system will still service a small stub for interrupts, as well as certain interrupts that can't be deferred, like timing interrupts, but for the most part the heavy work will be deferred until you relinquish control of the CPU.
Note that the RT patchset won't stop virtual memory and the like - mlockall will prevent page faults on valid pages though, if that's enough for you.
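For illustration, here is a minimal Go sketch of that combination on Linux (it needs root or CAP_SYS_NICE, and the SCHED_FIFO priority of 80 is an arbitrary value chosen for the sketch):

    package main

    import (
        "fmt"
        "runtime"
        "syscall"
        "unsafe"
    )

    // schedParam mirrors the kernel's struct sched_param: a single int priority.
    type schedParam struct {
        priority int32
    }

    func main() {
        // sched_setscheduler(0, ...) affects the calling OS thread, so pin this
        // goroutine to it.
        runtime.LockOSThread()

        // Lock all current and future pages into RAM so page faults on
        // already-mapped memory cannot stall the process.
        if err := syscall.Mlockall(syscall.MCL_CURRENT | syscall.MCL_FUTURE); err != nil {
            fmt.Println("mlockall:", err)
        }

        // Give this thread a static real-time priority. SCHED_FIFO is 1 on Linux;
        // the priority 80 is an arbitrary value for this sketch.
        const schedFIFO = 1
        param := schedParam{priority: 80}
        if _, _, errno := syscall.Syscall(syscall.SYS_SCHED_SETSCHEDULER,
            0, schedFIFO, uintptr(unsafe.Pointer(&param))); errno != 0 {
            fmt.Println("sched_setscheduler:", errno)
        }

        // ...time-critical work goes here; on a PREEMPT_RT kernel little besides
        // non-deferrable interrupts will preempt this thread.
    }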
Also, keep in mind that whatever you do, the system BIOS can still cause SMM traps, which cannot be disabled, except by motherboard-model-specific methods.
There are lots of really ugly ways to do this. You could modify the running kernel by writing some trampoline code to /dev/kmem that passes control to your application. But I wouldn't recommend attempting something like that!
Basically, you would need to have your application act as its own operating system. If you want to read data from a file, you would have to figure out where the data lives on disk, and generate your own SCSI requests to talk to the disk drive. You would have to implement your own interrupt handler to get notified when the data is ready. Likewise you would have to handle page faults, memory allocation, etc. Most users feel that this isn't worth the effort...
Why do you want to do this?
Is there something that your application needs to do that the OS won't let it do? Are you concerned with the OS impact on performance? Something else?
If you don't mind shelling out some cash, you could use IntervalZero's RTX to do this for a Windows system. It's a hard realtime subsystem that gets installed on a Windows box as sort of a hack into the HAL and takes over the machine, letting Windows have whatever CPU cycles are left over.
It has its own scheduler and device drivers, but if you run your program at the top RTX priority, don't install any RTX device drivers (or disable interrupts for the duration), then nothing will interrupt it.
It also supports a small amount of interaction with programs on the Windows side.
We use it as a nice way to get a hard realtime box that runs Windows.
coLinux loads CoLinuxDriver into the NT kernel or a colinux.ko into the Linux kernel. It does exactly what you asked – it "unschedules" the host OS, and runs its own code, with its own memory management, interrupts, etc. Then, when it's done, it "reschedules" the host OS, allowing it to continue from where it left off. coLinux uses this to run a modified Linux kernel parallel to the host OS.
Unlike more common virtualization techniques, there are no barriers between coLinux and the bare metal hardware at all. However, hardware and the host OS tend to get confused if the coLinux guest touches anything without restoring it before returning to the host OS.
Not really. Operating Systems are a foundation, and your program runs on top of them. The OS handles memory access, disk writing operations, communications, etc. when your application makes requests, and asking the OS to move out of the way would mean that your program would have to do the OS's job instead.
Not as such, no.
What you want is basically an application that becomes an OS; a severely stripped down Linux kernel coupled with some highly customized and minimized tools might be the way to go for this.
If you were devious and wanted to avoid a lot of the operating system's housekeeping, you could probably hook yourself into a driver routine. Thinking out loud here, verging on hacking: google how to write rootkits.
Yeah dude, you can totally do that, you can also write a program to tell my bank to give you all my money and send you a hot Russian.