What's the relationship between a Linux OS and a kernel? - linux-kernel

I've been using Linux for several years, but never stepped beyond installing it from a CD/DVD. If the application manager didn't have the software I was looking for, I was a lost cause.
But right now I'm trying to get a grip around what "Linux" is.
The first word that pops into my head is "kernel". After reading on Wikipedia, I understand that a kernel is software that gives other software (OS + apps) access to the hardware (CPU, RAM, and so on). It also handles memory, but isn't that what the OS is supposed to do (at least, that's what I remember from my OS class)?
Is the Linux distro just a packed list of software?
Take my favorite distro: Fedora. It's now in version 14 and ships with kernel 2.6.35.
Does the kernel come from somewhere central, and is it the core of every Linux distro? If so, is a Linux distro just a way of making a computer running the kernel more user-friendly? In that case, the distro + kernel together are the OS, because one without the other is not usable (a bare kernel might be, but who runs that?).

Pretty much correct. To me, "Linux" is just the kernel, but it is pretty common to refer to entire distributions as Linux. That is what annoys RMS so much: he maintains it should be called GNU/Linux, as he sees distributions as the Linux kernel plus the additional software from the GNU project. This makes sense too, but I never use the term GNU/Linux; I am either talking about the kernel Linux, "Linux distributions", or a specific distribution.
So yes: a distribution is just the kernel (which may include distribution-specific patches) plus all the extra programs that make it usable.
The kernel is a central project, and is nominally the same in each distro, but most distros customize it a bit.
And the extra software doesn't just make the kernel more user-friendly; it makes it usable at all. A kernel is just interrupt handlers, device drivers, and system calls. It basically virtualizes the hardware and provides a standard environment for programs to run in.
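To make that concrete, here is a minimal sketch in C (my illustration, not from the original answer) of a program talking to the kernel through the raw system-call interface, assuming a Linux system with glibc:

/* hello_syscall.c - a minimal sketch: ask the kernel to write to
 * stdout via the raw system-call interface. Assumes Linux + glibc. */
#include <sys/syscall.h>   /* SYS_write */
#include <unistd.h>        /* syscall() */

int main(void) {
    const char msg[] = "hello from a system call\n";
    /* The program cannot touch the terminal hardware itself; it asks
     * the kernel to copy these bytes to file descriptor 1 (stdout). */
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    return 0;
}

Compile it with cc hello_syscall.c and run it: everything the program "does" to the outside world crosses that system-call boundary.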
As far as the phrase "operating system" goes, it can be confusing. Some people may say the kernel IS the operating system, and everything else is either a utility or an application or something else. Other people may say the kernel plus some other packages make up the operating system, but most of the software is not part of the operating system. Others may say all the software in the distro forms part of the operating system.

Linux is the kernel. That's what Linus wrote and that's what the kernel developers continue to work on today. It controls the hardware.
An operating system is something that includes a kernel plus quite a few lower-level "applications" to allow you as a user to do useful stuff with your computer (think file manager, control panel and so on).
A distro (distribution) is an operating system packaged with an absolutely massive number of higher-level applications (a) like DVD authoring tools, web browsers, office suites and so on, ad-near-infinitum (b).
Now there are grey areas between kernel/OS and even OS/distro but I think that's a fair starting point for understanding how it hangs together.
(a) Even Windows does this to some extent, with the inclusion of Wordpad, Calculator and Paint, though not to the insanely prolific level that Linux distros extend to - do we really need 472 different file managers? Choice is good, yes, but only up to a point :-)
(b) Yes, I'm aware that "ad-near-infinitum" makes no sense since any finite amount subtracted from the infinite is still infinite. But, if you want mathematical accuracy, you should probably be over at https://math.stackexchange.com :-)

An OS is just a kernel and shell working hand in hand.
A distro is a combination of customized shell(s) running on a kernel. This means, for example, that Kali, Ubuntu, Fedora, Mint, etc. are different distros that all run on the Linux kernel.
The shell acts as an interface between the user and the kernel. A shell can be a command-line interface or a graphical user interface; Bash, sh and the Windows GUI are some examples of shells.
The kernel is the hub of the operating system. It allocates time and memory to programs and handles the filestore, among other things.
To further explain shell and kernel, suppose you type rm myfile. The shell searches the filestore for the file containing the program rm, and then requests the kernel, through system calls, to execute rm on myfile.
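To illustrate that hand-off (a sketch of my own, not part of the original answer): the heart of every Unix shell is a fork/exec cycle, in which the shell asks the kernel, via system calls, to create a child process and replace its image with the requested program:

/* tiny_shell.c - a sketch of the fork/exec cycle a shell performs.
 * Not a real shell: one hard-coded command, minimal error handling. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *argv[] = { "rm", "myfile", NULL };  /* the "typed" command */

    pid_t pid = fork();            /* system call: duplicate this process */
    if (pid == 0) {
        execvp(argv[0], argv);     /* system call: find rm on $PATH and
                                      replace the child's image with it */
        perror("execvp");          /* only reached if exec failed */
        exit(1);
    }
    waitpid(pid, NULL, 0);         /* system call: wait for rm to finish */
    return 0;
}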
To take a simple example: the Windows GUI is a shell, and the Windows OS is a distribution by Microsoft.
Similarly, Ubuntu, Fedora, etc. are distros, each working through various shells on top of the Linux kernel.
A shell or a distro does not merely make the kernel more user-friendly; it makes it usable for the user at all.
So now, simply put, you can say Linux is a kernel.
Linux + shell(s) (Bash, GNOME, etc.) makes a Linux distro, say Ubuntu, Mint or Kali, and each of them is an OS.

"kernel" and "shell" are the original terms, as in let's say "core" and "shell". "Shell" is the command interpreter. "Distro" is a term that means a customized shell(s) + specific programs included in that distribution. One distribution might several shells though. From a user perspective this is close to the concept of human language. Is the language that you have to talk to the terminal which will talk to shell. Shell will read it and look for a file within the filestore (still inside the shell/ distro). Once the file (executable) is found, shell sends this to the kernel which does the job (process). Think of a car which will have the same basically unmodified engine over many years but will change its frame/ body. I think I need to stop here...

Related

How to install a bare Linux kernel without any distribution to study it?

I want to study the Linux kernel without any distribution.
I found the LoadLin bootloader for MS-DOS, but I think it only works with older versions of Windows (Windows 95, 98, ME).
So I need to install only the kernel on my PC, if possible.
How can I install it?
The kernel alone is not that useful to you; you'll probably need some shell and a working compiler if you want to test things first-hand, and these are not part of the kernel.
There's a project called Linux From Scratch which basically allows you to install the kernel and then whatever other stuff you want, literally from scratch (as in, by compiling everything yourself and adding only what YOU want).
I am wondering though, what is it exactly you want to study and how does having a distribution affect your studying of the kernel? (Yes, some distributions ship custom kernels but the major features are almost always the same)
Minimal Linux Live is a small script that:
downloads the sources for the kernel and BusyBox
compiles them
generates a bootable 8 MB ISO with them
The ISO then leaves you in a minimal shell with BusyBox.
With QEMU you can then easily boot into the system, which might be a more convenient way to study the kernel.
Or you can just use the Live ISO as a regular distribution and install it on metal.
Usage:
git clone https://github.com/ivandavidov/minimal
cd minimal/src
./build_minimal_linux_live.sh
# Wait.
# Install QEMU.
# minimal_linux_live.iso was generated
./qemu64.sh
and you will be left inside a QEMU window with your new minimal system. Awesome.
See also:
https://unix.stackexchange.com/questions/17122/is-it-possible-to-install-the-linux-kernel-alone
https://superuser.com/questions/307087/linux-distro-with-just-busybox-and-bash
Why not use a distribution? Just get some free VM (e.g. VirtualBox) and install an arbitrary Linux distribution. You have all the build tools there that you need to compile the kernel, without actually touching your system.

Use Cygwin or VM with UNIX for library that requires UNIX?

Forgive my ignorance: I need to use a library that requires a UNIX system (LIBSHORTTEXT). Do I need to install a virtual machine with UNIX, or is Cygwin enough? (I've read quite a few articles about the difference between them, but I don't really understand the practical difference for this specific use.) Thanks!
Edit: The documentation that says the library needs UNIX is here
That really depends on what makes the library "require UNIX". Looking at it briefly, it appears to be ANSI C and Python, both of which should either compile or be fairly easy to port on a Windows development system. In your case I'd go with Cygwin if you don't already have a development suite running, as it is likely to allow you to just get things running.
A Virtual Machine is a bit more compartmentalized, so there is much less connection between Windows and the running software. Unless you are planning to use the operating system in the Virtual Machine as a target for your program, it is a bit of overkill in this case, IMHO.
Hope this helps.
Normally I would say Cygwin will do, but it depends on how you use the library. And when you say that the library requires a UNIX system, what do you mean? Are you building a Python or C++ program?
The main difference between working in Cygwin and in a VM is that Cygwin still works in a Windows environment, with Windows directories and hardware drivers, whereas a VM emulates all of this as if it actually were a UNIX machine.

Do I need two machines to develop IOKit Mac drivers?

I'm building an IOKit CFPlugin driver for OS X. I'll be working with incoming network data that will be translated to MIDI data. No hardware is involved other than the built-in AirPort. I have experience with drivers on Windows machines and firmware, but this is my first dip into doing it on the Mac. So far things are going pretty well, but the Apple documentation says: "For safety reasons, you should not load your driver on your development machine."
I only have one Mac. I really don't want two Macs- sorry, Apple. Should I take this warning seriously? Are there things I need to know?
Thanks, Tom Jeffries
You could also consider running OS X inside a VM as your testbed. It would surely be much more convenient than having a separate boot volume.
The warning is rather poorly worded; what you should consider doing is using a separate boot volume (partition) for trying out your driver, since it's possible to arbitrarily hose your system with your driver.
If you're doing kernel development on any OS that isn't isolated from your main system (via a VM, alternate boot disk, etc.), you're crazy!
What may be a bigger issue is that you can't do any kernel debugging, because the only option for that is to use GDB on a remote OS X system. For this, you may want to consider running OS X in virtualization.
You DEFINITELY want to have some way to recover from a fubar kext installation: a bootable external drive or something you can quickly restore from. This is the main reason for Apple's warning against running in-development kernel extensions on your production machine.
Nicholas is right that in order to debug using gdb (the only way in kernel space) you do need two machines. I've never tried using a VM as Coxy suggests, but I guess it's feasible (assuming that you run your kext on the virtual machine and use the real host machine to run gdb).
My preferred method for tracing and debugging in the kernel is kprintf() routed to FireWire (aka FireWire kprintf; see man fwkpfv). For this you do need two machines with FireWire ports.
Finally, being an old computer musician myself, I wonder why you want to program a MIDI synthesizer (or transformer) at the network-stack level. My guess is that you would have a much more gratifying experience working in userland (where you can use floating-point math...).
If you need some hints or tips, feel free to get in touch...
From the ADC Kernel Programming Guide:
"Kernel programming is a black art that should be avoided if at all possible. Fortunately, kernel programming is usually unnecessary. You can write most software entirely in user space. Even most device drivers (FireWire and USB, for example) can be written as applications, rather than as kernel code. A few low-level drivers must be resident in the kernel's address space, however, and this document might be marginally useful if you are writing drivers that fall into this category."

Can anyone please tell me how kernel programming is done in Linux, like with the Windows DDK on Windows

I am aware of the Windows kernel but new to the Linux kernel. I just need to know how it's done in Linux, i.e. how the program development works.
You can check free-electrons.com; it's a good source of information for kernel development (specialized in embedded Linux, but most of the docs apply to standard development too).
You also have the classic Linux Device Drivers, which is very complete and detailed.
And last but not least, the Linux kernel documentation.
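To give a concrete feel for how development typically starts, here is the classic hello-world module in the style of Linux Device Drivers (a minimal sketch; exact headers and build details vary between kernel versions):

/* hello.c - the classic hello-world kernel module, as in LDD.
 * Loading and unloading it prints to the kernel log (see dmesg). */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    printk(KERN_INFO "Hello, kernel\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "Goodbye, kernel\n");
}

module_init(hello_init);
module_exit(hello_exit);

You build it out of tree against the headers of your running kernel with a one-line kbuild Makefile (obj-m += hello.o), then load it with insmod and remove it with rmmod. That build/load/printk/unload cycle is the rough Linux counterpart of a DDK build-and-deploy workflow.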
Linux does not have a stable kernel API. This is by design, so you should generally avoid writing kernel code if you can; it is unlikely to remain source-compatible indefinitely, and will definitely NOT be binary-compatible, even between minor releases.
This is less true for vendor kernels; Red Hat etc. DO maintain source and binary kernel compatibility between major revisions.
More work is gradually being done in the kernel to reduce the amount of kernel code required to carry out various tasks, such as driver development (for example, USB drivers can typically be written in userspace with libusb), filesystem development (FUSE) and network filtering (NFQUEUE). However, there are still some cases where you have to write kernel code; in particular, block devices still need to be in the kernel to be usable as boot devices and for swap.
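As an illustration of the userspace route (a sketch of mine using libusb-1.0, assuming the library and its header are installed), this program enumerates attached USB devices without a single line of kernel code:

/* lsusb_mini.c - sketch of userspace USB access with libusb-1.0.
 * Prints vendor:product IDs of attached devices; no kernel code.
 * Build: cc lsusb_mini.c -lusb-1.0 */
#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void) {
    libusb_context *ctx = NULL;
    libusb_device **devs;

    if (libusb_init(&ctx) < 0)
        return 1;

    ssize_t n = libusb_get_device_list(ctx, &devs);
    for (ssize_t i = 0; i < n; i++) {
        struct libusb_device_descriptor desc;
        if (libusb_get_device_descriptor(devs[i], &desc) == 0)
            printf("%04x:%04x\n", desc.idVendor, desc.idProduct);
    }

    libusb_free_device_list(devs, 1);
    libusb_exit(ctx);
    return 0;
}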

Doing coding in Linux through a virtual machine on Windows VS partitioning

I already have experience with setting up virtual machines, running them, and other minor tasks. I'm a gamer, so I won't get rid of Windows (for now at least...), but I do want to be a great programmer and to be involved with the open-source community.
I'd like to know whether it's a good idea to do my programming in Linux through a virtual machine, versus giving it a partitioned section of the HDD. I'd like to know about performance pros and cons, and functionality.
All responses are appreciated, thanks in advance.
The type of programming I intend to dive into:
Android dev, web dev, desktop dev... more Android and web right now, though.
So I'm looking at C#, C, C++, Java, PHP, HTML, MySQL... off the top of the dome.
I do web design as well, so Dreamweaver is added as an "essential". But I'm sure I can create Dreamweaver files and upload them to the server after programming in Linux... right?
And any info on IDEs in Linux for the above-mentioned is appreciated, but I would prefer going the coding route and understanding the essence of what's happening "under the covers".
Thanks to all for reading, I appreciate it.
Hope this isn't confusing :S
There is an easier solution.
I still have to use Windows for Symbian programming, so I use Wubi and Ubuntu for my dual-boot into Linux. Wubi uses a single large file as its disk, so there's no need to worry about or mess with creating a partition.
I have used it for 18 months with no data loss and no worries.
There is also another tool called andLinux:
http://www.andlinux.org/
It uses coLinux to run Linux as a program inside Windows.
A couple things:
If you're using an IDE, there's no point to coding on Linux. Linux is nice for programming because the command-line tools are awesome; NetBeans and Eclipse both work fine on Windows. All you'd be missing is makefiles (which IDEs don't use anyway).
Using a virtual machine would be annoying (working inside the VM window and so on) and slow. Try AndLinux if you want to have Linux running in Windows. It sets up X and PulseAudio for you, so all of your programs will appear to be native. It's basically a way to run Ubuntu as a Windows service (all Ubuntu packages for your architecture are installable).
If you just want the fun of Linux command-line programs without access to all of Ubuntu, Cygwin is smaller and might be faster.
If by "Dreamweaver files" you mean HTML/PHP/CSS, then yes, you can just upload them to the server. As far as I know, the only ASP- or ASP.NET-compatible server is Microsoft's, but why use that anyway?
EDIT: SO didn't give me enough space in the comments to answer your question..
AndLinux and Cygwin are basically just better ways to do your "virtual machine" idea.
Cygwin adds a POSIX layer to Windows (basically everything you need to compile Unix/Linux/BSD programs). This means that you can generally take a Linux program and just compile it on Windows and have it work. They also have repositories, but in my experience the Cygwin installer is slow and hard to use.
AndLinux runs the Linux kernel as a Windows service, giving you a similar experience to running it in VirtualBox or other virtualization programs. However, it also sets up X (the graphics layer for Linux) and PulseAudio (a sound system that lets you send sound over a network), so when you run Linux programs they act and sound like native programs. I also like AndLinux better because you have access to all of Ubuntu's programs, and apt-get is easier to use than Cygwin's installer. Also, if you use AndLinux and later decide to go 100% Linux, you're basically already using it that way.
What I'm getting at is: If you want to run Linux in a virtual machine, don't. Just install AndLinux. It will be faster and it's much easier to work with (since everything is just a normal window).
Here's an example of the difference:
Screenshot of AndLinux: the program in the bottom-right corner is running in AndLinux. Notice how it just looks like a badly themed Windows program? Compare that to something like this, where you have another desktop in a window.
And still... there's no reason to virtualize NetBeans. It's a native Windows program, and you gain nothing and lose a lot of speed.
If you're interested in Android development and you want to use Linux, then I would recommend you do your development in Eclipse. Eclipse is available for Linux and if you get Ubuntu then Eclipse is amazingly easy to install. I used VirtualBox + Ubuntu + Eclipse for several projects I worked on. If you decide that Linux is not for you and your project was in Eclipse then you will have no problem switching back to Windows since Eclipse is available for both operating systems.
The ONLY problem I had was the screen size on the virtual machine... if you have a big screen and you use a virtual machine then you might get limited to a fraction of your actual screen resolution. It's very easy to install Linux on a second partition, so I would just recommend you go with a second partition if you want to fully utilize the size of your monitor.
My setup is sort of the opposite: I run Linux as my main OS, both at work and at home, and I have Windows in a virtual machine. On a modern computer with adequate memory the performance of development tools is not a problem. I work with Visual Studio in the virtual machine, and I have seen few performance issues. (But note that this is on a fast computer, and that you may need more memory than otherwise, since you are running two OSes at the same time. On an old computer with less memory it can become unbearable.)
Dual-boot, where you have to restart the computer to switch OS, doesn't work well for me. It takes way too much time to switch, and I really need to switch back and forth. Having Windows in a window works much better for me, and you can maximize that "Windows window" so it looks like you're just running Windows.
One thing you may want to look at is to have Linux running in a VM, then configuring Samba to allow the host to network-mount pieces of the Linux filesystem so that you can operate using Windows tools, and have Linux running the server processes (e.g., httpd). Alternatively, I'm sure that there are shell extensions for using FTP, NFS, or SSH/SFTP servers from within Explorer, but I've not looked at any for a long time.
If you should happen to need graphical Linux tools, you can use the X server found in Cygwin for that.
The downside of this plan is that Samba can be a bit tricky to configure, but you get to use the Windows tools you're already familiar with.
I had no issues running Ubuntu via VMware. You can easily switch to full-screen mode anytime. Strongly recommended. One shortcoming is that Linux will not be exposed to the full potential of your hardware; Compiz Fusion failed to work as a result.
Given that you're a gamer, I'm thinking your machine should be fast enough to run Linux in a VM. Best to try out the VM before messing with disk partitions.
I use physically separate machines to run Linux and Windows (and MacOS X). This means that I don't have to reboot to do something different, and each system gets the full power of the hardware.
Disadvantages: more desk space used, more time and money spent maintaining hardware (though if you do a rolling upgrade, this is mitigated - Linux runs most happily on not-quite-new machines). Doesn't work so well if you like carrying laptops around.
Be aware that VMs universally don't give you full graphics acceleration. This can be a non-issue (many programs must cope with Intel GMA anyway), or it can be a showstopper. Your choice.
