I want to access my system resources, such as the CPU, without the use of OS system calls.
Is there any way to make this possible?
The only way to access the hardware directly on most modern operating systems, Linux and Windows included, is via kernel code. Linux Device Drivers is an excellent starting point for writing such code on Linux, even if it is a bit dated.
Otherwise, the OS provides various I/O facilities and controls the allocation of resources to the user applications, using the system call interface. The system call interface is omnipresent in its basic concept among all operating systems that actually have some sort of separation between kernel and user code. The use of software interrupts is the standard way to implement system calls on current hardware.
You need a system call to allocate the slightest amount of memory and even to read or write a single character. Not to mention that even a program that does absolutely nothing generally needs a few system calls just to be loaded.
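To make that concrete, here is a minimal sketch (my own example, assuming x86-64 Linux and GCC) of what a single write() boils down to once the C library is stripped away: the syscall number goes into rax, the arguments into rdi/rsi/rdx, and a trap instruction hands control to the kernel. On 32-bit x86 that trap was the int 0x80 software interrupt; x86-64 uses the dedicated syscall instruction for the same job.

/* Sketch: invoking write(2) directly via the syscall instruction
 * on x86-64 Linux, bypassing libc. __NR_write is 1. */
#include <unistd.h>  /* only for STDOUT_FILENO */

static long raw_write(int fd, const void *buf, unsigned long count)
{
    long ret;
    __asm__ volatile("syscall"
                     : "=a"(ret)                        /* rax: return value   */
                     : "a"(1L),                         /* rax: __NR_write = 1 */
                       "D"((long)fd), "S"(buf), "d"(count)
                     : "rcx", "r11", "memory");         /* clobbered by syscall */
    return ret;
}

int main(void)
{
    raw_write(STDOUT_FILENO, "hi\n", 3);  /* still a system call! */
    return 0;
}

Note that even this "direct" version still ends up in the kernel; there is no way around the trap itself.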
You could gain more direct access to the hardware if you used DOS or an exokernel design.
But why would you want to do that anyway? Modern hardware is far from trivial to work with directly.
I know that system calls are used to communicate between user level and kernel level.
So, does that mean I can write to kernel memory with a system call?
E.g., could write() be used to write to kernel memory?
But if that were possible, wouldn't it also be a big security problem?
And if it isn't possible, why not?
Let's break your questions down one by one.
Yes, you can write to kernel memory via a system call if you implement one that allows writing to arbitrary locations in memory. Whether or why you would want something like this is another question: it would pose a huge security risk. For example, any process could easily elevate its privileges or install rootkits in the kernel using your system call.
However, Linux does have an interface for reading and writing to kernel memory. Have a look at /dev/kmem. It allows you to open kernel memory as a 'file', seek around and read and write to it.
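As an illustration, here is a sketch of how that interface was traditionally used. It needs root, the address below is a placeholder rather than a real kernel symbol, and note that mainline Linux removed /dev/kmem entirely in 5.13, so on a modern system this will simply fail at open().

/* Sketch: reading a few bytes of kernel memory through /dev/kmem. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/kmem", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/kmem");       /* expected on recent kernels */
        return 1;
    }
    off_t kaddr = (off_t)0xc0100000UL;  /* placeholder kernel virtual address */
    unsigned char buf[16];
    if (lseek(fd, kaddr, SEEK_SET) != (off_t)-1 &&
        read(fd, buf, sizeof buf) == (ssize_t)sizeof buf) {
        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x ", buf[i]);
        putchar('\n');
    }
    close(fd);
    return 0;
}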
In the fork system call on ARM, the
swi #0
instruction is used. What exactly does it do?
Thank you.
If you google 'swi arm instruction', the first hit, for example, is:...
SWI: SoftWare Interrupt
This is a simple facility, but possibly the most used. Many Operating System facilities are provided by SWIs. It is impossible to imagine RISC OS without SWIs.
Nava Whiteford explains how SWIs work (originally in Frobnicate issue 12½)...
In this article I will attempt to delve into the working of SWIs (SoftWare Interrupts).
What is a SWI?
SWI stands for Software Interrupt. In RISC OS SWIs are used to access Operating System routines or modules produced by a 3rd party. Many applications use modules to provide low level external access for other applications.
Examples of SWIs are:
The Filer SWIs, which aid reading from and writing to disc, setting attributes, etc.
The Printer Driver SWIs, which aid the use of the parallel port for printing.
The FreeNet/Acorn TCP/IP stack SWIs, used to transmit and receive data using the TCP/IP protocol, commonly used for sending data over the Internet.
When used in this way, SWIs allow the Operating System to have a modular structure, meaning that the code required to create a complete operating system can be split up into a number of small parts (modules) and a module handler.
When the SWI handler gets a request for a particular routine number it finds the position of the routine and executes it, passing any data.
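On Linux/ARM the same mechanism is what implements system calls: swi #0 is the trap. As a hedged sketch (assuming GCC and the ARM EABI convention, where the syscall number goes in r7, arguments in r0-r3 and the result comes back in r0), this is roughly what the C library does for you behind fork(), getpid() and friends:

/* Sketch: issuing a Linux system call on ARM EABI with swi #0.
 * Here we invoke getpid, whose syscall number is 20 on ARM EABI. */
#include <stdio.h>

static long arm_syscall0(long number)
{
    register long r7 __asm__("r7") = number;  /* syscall number */
    register long r0 __asm__("r0");           /* result         */
    __asm__ volatile("swi #0"
                     : "=r"(r0)
                     : "r"(r7)
                     : "memory");
    return r0;
}

int main(void)
{
    printf("getpid() via swi #0: %ld\n", arm_syscall0(20));
    return 0;
}

The instruction itself just raises a software interrupt; the kernel's handler then reads the syscall number and dispatches to the right routine, exactly as described above.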
Maybe I am missing something, but aren't sandboxing and virtualisation exactly the same concept, i.e., separating the memory space for applications running in parallel? So I am wondering why they have different names; are there maybe differences in the way they are employed?
Many thanks,
Simon
These concepts address different problems: When we virtualize, we are hiding physical limitations of the machine. Sandboxing, on the other hand, sets artificial limits on access across a machine. Consider memory as a representative analogy.
Virtualization of memory is to allow every program to access every address in a 32- or 64-bit space, even when there isn't that much physical RAM.
Sandboxing of memory is to prevent one program from seeing another's data, even though they might occupy neighboring cells in memory.
The two concepts are certainly related in the common implementation of virtual memory. However, this is a convenient artifact of the implementation, since the hardware page table is only accessible by the kernel.
Consider how to implement them separately, on an x86 machine: You could isolate programs' memory using page tables without ever swapping to disk (sandboxing without virtualization). Alternatively, you could implement full virtual memory, but also give application-level access to the hardware page table so they could see whatever they wanted (virtualization without sandboxing).
There are actually three concepts that you are muddling up here. The first and foremost is provided by the OS: it separates the memory spaces of applications running in parallel, and it is called virtual memory.
In virtual memory systems, the OS maps the memory addresses seen by applications onto real physical memory. Thus the memory spaces of applications can be separated so that they never collide.
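You can see this separation with a tiny experiment (my own demo, using plain POSIX fork()): parent and child print the same virtual address for a local variable, yet their writes do not affect each other, because the OS maps that address to different physical pages in each process.

/* Demo: same virtual address, different physical memory after fork(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int x = 1;
    pid_t pid = fork();
    if (pid == 0) {                /* child: overwrite x */
        x = 42;
        printf("child:  &x = %p, x = %d\n", (void *)&x, x);
        exit(0);
    }
    wait(NULL);                    /* parent: x is untouched */
    printf("parent: &x = %p, x = %d\n", (void *)&x, x);
    return 0;
}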
The second is sandboxing. It is any technique you, the programmer, use to run untrusted code. If you, the programmer, are writing the OS, then from your point of view the virtual memory system you are writing is a sandboxing mechanism. If you, the programmer, are writing a web browser, then the virtual memory system in itself is not a sandboxing mechanism (different perspectives, you see). Instead it is a potential mechanism for you to implement your sandbox for browser plug-ins. Google Chrome is an example of a program that uses the OS's virtual memory mechanism to implement its sandboxing mechanism.
But virtual memory is not the only way to implement sandboxing. The Tcl programming language, for example, allows you to instantiate slave interpreters via the interp command. A slave interpreter is often used to implement a sandbox, since it runs in a separate global space. From the OS's point of view the two interpreters run in the same memory space in a single process. But because, at the C level, the two interpreters never share data structures (unless explicitly programmed to), they are effectively separated.
Now, the third concept is virtualization, which is again separate from both virtual memory and sandboxing. Whereas virtual memory is a mechanism that, from the OS's perspective, sandboxes processes from each other, virtualisation is a mechanism that sandboxes operating systems from each other. Examples of software that do this include VMware, Parallels Desktop, Xen and the Kernel Virtual Machine (KVM).
Sandboxing means isolation only, while virtualization usually means simulating some sort of hardware (a virtual machine). Virtualization can happen with or without sandboxing.
Sandboxing is limiting access by a particular program. Virtualization is a mechanism that can be used to help do this, but sandboxing is achieved with other mechanisms as well, and likewise virtualization has uses besides sandboxing. Sandboxing is a "what"; virtualization is a "how".
I am very curious about messing with HW. But my top-level "messing" so far has been linked or inline assembler in a C program. If my understanding of the CPU and ring modes is right, I cannot access some low-level CPU features directly from a user-mode app, like disabling interrupts or changing protected-mode segments, so I must use system calls to do everything I want.
But, if I am right, drivers can run in ring 0. I actually don't know much about drivers, but this is what I am asking about. I just want to know: is learning how to write my own drivers, and then calling them, the way to go to do what I described?
I know I could write a whole new OS (at least up to some point), but what I actually want is to access some low-level features of the HW from a standard Windows application. So, is a driver the way to go?
Short answer: yes.
Long answer: Managing access to low-level hardware features is exactly the job of the OS kernel, and if you only want access to a single feature there's no need to start your own OS from scratch. Most modern OSes, such as Windows, Linux, or the BSDs, allow you to add code to the kernel through kernel modules.
When writing a kernel module (or device driver), you write code that is going to be executed inside the OS kernel and will thus be running in CPU ring 0. Great power comes with great responsibility, which in this case means that you should really know what you're doing as there will be no pre-configured OS interface to prevent you from doing the wrong things. You should therefore study the manuals of your hardware (e.g., Intel's x86 software developer's manuals, device specs, ...) as well as standard operating systems development literature (where you're also going to find plenty on the web -- OSDev, OSDever, OSR, Linux Device Drivers).
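To give a feel for it, here is a minimal sketch of a Linux kernel module, the smallest piece of code you can run in ring 0 this way. Build it against your kernel headers with the usual kbuild makefile, load it with insmod and watch the output with dmesg.

/* Sketch: a do-nothing Linux kernel module running in ring 0. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    pr_info("hello: running in kernel mode\n");
    return 0;            /* nonzero would abort loading */
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloading\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");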
If you want to play with HW, write some programs for 16-bit real mode (or even with your own transition to protected mode). There you have to deal with ASM, BIOS interrupts, segments, video memory and a lot of other low-level stuff.
I was going through some general material about operating systems and got stuck on a question: how does a developer debug while developing an operating system, i.e., debug the OS itself? What tools are available to the OS developer for this?
Debugging a kernel is hard, because you probably can't rely on the crashing machine to communicate what's going on. Furthermore, the code that is wrong is probably in scary places like interrupt handlers.
There are four primary methods of debugging an operating system of which I'm aware:
Sanity checks, together with output to the screen.
Kernel panics on Linux (known as "Oopses") are a great example of this. The Linux folks wrote a function that prints out whatever it can find (including a stack trace) and then stops everything.
Even warnings are useful. Linux has guards set up for situations where you might accidentally go to sleep in an interrupt handler. The mutex_lock function, for instance, will check (in might_sleep) whether you're in an unsafe context and print a stack trace if you are.
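As a sketch of what such guards look like in practice (WARN_ON and BUG_ON are the real Linux macros; the queue type here is purely hypothetical, for illustration):

/* Sketch: kernel-style sanity checks. WARN_ON logs a stack trace and
 * lets execution continue; BUG_ON triggers an Oops on purpose. */
#include <linux/kernel.h>
#include <linux/bug.h>
#include <linux/errno.h>

struct queue {          /* hypothetical type, just for the example */
    int items[64];
    int len;
};

static int queue_push(struct queue *q, int value)
{
    if (WARN_ON(q == NULL))   /* recoverable: warn and bail out */
        return -EINVAL;
    BUG_ON(q->len >= 64);     /* "impossible" state: stop everything */
    q->items[q->len++] = value;
    return 0;
}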
Debuggers
Traditionally, when debugging a kernel, everything the machine under test does is output over a serial line to a stable second machine. With the advent of virtual machines, you can now wire one VM's serial line to another program on the same physical machine, which is super convenient. Naturally, however, this requires that your operating system publish what it is doing and wait for a debugger connection. KGDB (Linux) and WinDBG (Windows) are some such in-OS debuggers. VMware supports this story explicitly.
More recently, the VM developers out there have figured out how to debug a kernel without either a serial line or kernel extensions. VMware has implemented this in their recent releases.
The problem with debugging in an operating system is (in my mind) related to the Uncertainty principle. Interrupts (where most of your hard errors are sure to be) are asynchronous, frequent and nondeterministic. If your bug relates to the overlapping of two interrupts in a particular way, you will not expose it with a debugger; the bug probably won't even happen. That said, it might, and then a debugger might be useful.
Deterministic Replay
When you get a bug that only seems to appear in production, you wish you could record what happened and replay it, like a security camera. Thanks to a professor I knew at Illinois, you can now do this in a VMware virtual machine. VMware and related folks describe it all better than I can, and they provide what looks like good documentation.
Deterministic replay is brand new on the scene, so thus far I'm unaware of any particularly idiomatic uses. They say it should be particularly useful for security bugs, too.
Moving everything to User Space
In the end, things are still more brittle in the kernel, so there's a tremendous development advantage to following the Nucleus (or Microkernel) design, where you shave the kernel-mode components down to their bare minimum. For everything else, you can use the myriad of user-space dev tools out there, and you'll be much happier. FUSE, a user-space filesystem extension, is the canonical example of this.
I like this last idea, because it's like you wrote the program to be writeable. Cyclic, no?
In a bootstrap scenario (OS from scratch), you'd probably have to introduce remote debugging capabilities (memory dumping, logging, etc.) in the OS kernel early on, and use a separate machine. Or you could use a virtual machine/hypervisor.
Windows CE has a component called KITL - Kernel Independent Transport Layer. I guess the name speaks for itself.
You can use a VM: e.g., debug ring-0 code with Bochs/gdb, or debug the NetBSD kernel with QEMU, or use a serial line with something like KDB.
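For instance, a typical QEMU-plus-gdb session looks something like this (assuming an x86-64 guest and an unstripped vmlinux with debug symbols):

# Boot the kernel under QEMU with a gdb stub on tcp::1234 (-s)
# and the CPU halted at startup (-S):
qemu-system-x86_64 -kernel bzImage -s -S

# In another terminal, attach gdb using the unstripped kernel image:
gdb vmlinux
(gdb) target remote localhost:1234
(gdb) break start_kernel
(gdb) continue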
printf logging
attach to process
serious unit tests
etc.
Remote debugging with kernel debuggers, which can also be done via virtualization.
Debugging an operating system is not for the faint of heart. Because it is the kernel being debugged, your options are quite limited. A copious amount of printf statements is one trick. Furthermore, it depends on what part of the 'operating system' is really being debugged; we could be talking about:
Filesystem
Drivers
Memory management
Raw Disk input/output
Screen input/output
Kernel
Again, it is a widely varying exercise, as all of the above interact with one another. Even more complicated is the question of how, supposing you were to debug the kernel, you would do so if the runtime environment is not properly set up (by that, I mean the kernel's responsibility for loading binary executables).
Some kernels (not all of them) incorporate a simple debug monitor. In fact, if I rightly recall, in the book titled 'Developing Your Own 32-Bit Operating System' by Richard A. Burgess (Sams Publishing), the author incorporated a debug monitor which displays various states of the CPU, registers and so on.
Again, take into account that binary executables require a certain loading mechanism, for example for a gdb equivalent; if the environment for loading binaries is not set up, then your options are quite limited.
Using a copious amount of printf statements to display errors, logs etc. on a separate terminal or in a file is the best line of debugging. It does sound like a nightmare, but it is worth the effort.
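To show what that looks like in practice, here is a sketch of the classic trick of pushing log output out over the serial port on x86, assuming ring-0 access to I/O ports (i.e., this runs inside the kernel being debugged). A terminal on a second machine, or a VM's serial console, then captures everything.

/* Sketch: blocking byte output on COM1 (0x3F8), the usual backend
 * for printf-style kernel debugging on x86. */
#include <stdint.h>

#define COM1_DATA 0x3F8
#define COM1_LSR  (COM1_DATA + 5)     /* line status register */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ volatile("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

static void serial_putc(char c)
{
    while ((inb(COM1_LSR) & 0x20) == 0)   /* wait: transmit buffer empty? */
        ;
    outb(COM1_DATA, (uint8_t)c);
}

static void serial_puts(const char *s)
{
    while (*s)
        serial_putc(*s++);
}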
Hope this helps,
Best regards,
Tom.