Linux kernel on virtual machine - compilation

I am studying Linux driver programming, and it is recommended there that I work on self-compiled Linux kernels rather than any distribution's kernel. I have tried compiling Linux 2.6.9 on Ubuntu, but the process fails with errors at the 'make menuconfig' stage.
I would prefer to work with Linux in a virtual environment so that I can fearlessly experiment with the kernel. So, is there any way I can compile and run Linux in a virtual machine (say VMware installed on Windows)? I can use live CDs for the purpose of compiling the kernel.
So, in short, please suggest: how can I compile, install, and run a Linux kernel in a virtual machine in an error-free way?
I searched and read this. But after following those steps, when I restarted my computer there was no separate Linux 3.2.17 OS; instead, my Ubuntu 12.04 was now running the 3.2.17 kernel. Although this is the first time I have compiled a whole kernel on Ubuntu without any error, I want to load that kernel on another partition and use it as an independent OS. So, can anyone tell me what to do in addition to the steps in the tutorial to achieve this?

The simplest thing to do is probably to install some Linux distribution in a VM (VMware or VirtualBox, say) and continue from there. You could try using a live CD, but I'm guessing that the lack of persistent storage would get irritating. There are, of course, ways around that, but installing some distribution is probably simpler, and you don't really need that much disk space for it if all you want to do is compile a kernel.
If all you want to do is compile a kernel module, and you already have some pre-installed Linux environment, note also that modern Linux installations let you compile modules without recompiling the entire kernel. You will need the kernel source and headers, though. See, for example, this document.
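As a rough sketch of how that goes on an Ubuntu-style system (the module name 'hello' is a placeholder, and your module directory still needs the usual one-line kbuild Makefile, obj-m += hello.o):
# Install a compiler and the headers matching the running kernel:
sudo apt-get install build-essential linux-headers-$(uname -r)
# Build the module out-of-tree against those headers:
make -C /lib/modules/$(uname -r)/build M=$PWD modules
# Load it, check the kernel log, unload it:
sudo insmod ./hello.ko
dmesg | tail
sudo rmmod hello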
And BTW, speaking of modern kernels: why did you choose 2.6.9? It's almost 8 years old by now. Newer kernels might actually be easier to develop for, and there's no guarantee that modules developed against such an old kernel will still work with current ones.

I suggest you read this page. That document shows you how to boot your own kernel on QEMU and how to use the debugger on it.
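In outline, that workflow is something like the following sketch (assuming an x86-64 kernel you have already built with debug info enabled; -s opens a gdb stub on TCP port 1234 and -S freezes the CPU until a debugger attaches):
# Boot the freshly built kernel under QEMU, halted, with a gdb stub:
qemu-system-x86_64 -kernel arch/x86/boot/bzImage \
  -append "console=ttyS0" -nographic -s -S
# In another terminal, attach gdb using the unstripped vmlinux:
gdb vmlinux -ex "target remote :1234"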

Kernelnewbies is the right place to start kernel hacking. The website contains a rich set of tutorials about kernel hacking and tweaking, aimed at newbie Linux developers. You can also join the community and start contributing to some tiny Linux projects.
For a quick start, follow the instructions in the "kernel first patch" tutorial. Since you clone the "origin" remote repository in that tutorial, you'll be working on the latest branches of the Linux kernel, so there's no need to worry about working on an old version of Linux. Meanwhile, if you're not comfortable working with git trees, you can always download the latest version of Linux from the front page of kernel.org.
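Concretely, getting the source boils down to something like this (a sketch; the tarball URL is an example version):
# Clone the mainline tree (this downloads the full history):
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
# Or, without git, fetch a release tarball from kernel.org, e.g.:
# wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.4.tar.xz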

Related

Setting up a development environment for Linux device drivers

I am trying to read the LDD book by Jonathan Corbet, Greg Kroah-Hartman, and Alessandro Rubini and implement the sample modules. So, to begin with, I tried setting up a development system: I installed Ubuntu 16.04 Xenial, then just created a directory and wrote the hello_world module with a Makefile, built it, ran it, and verified the dmesg logs.
Is that all there is to the development setup? I searched online and found articles asking you to download and compile the kernel and to use a VM to boot the kernel. What is the reason? What am I missing?
Is there any better article which clarifies this?
Thanks
You can try one more way:
If you have native Windows, install virtual machine software such as VirtualBox, get your favourite Linux distribution (no bias, just an example: Ubuntu), and install it through VirtualBox.
Get the latest kernel (or one of your choice) from kernel.org.
Choose the platform you want to build this kernel for, e.g., arm64 or x86.
In case you do not have real boards (e.g., a Raspberry Pi for the ARM variant), you can use QEMU (qemu-system-aarch64 or qemu-system-x86_64) to run your compiled kernel; a sketch follows below. This is also a good option when users do not have the boards.
Another good reason for newbie kernel developers to use QEMU: if a module they write crashes, only the QEMU instance crashes, so no harm is done.
I think QEMU is a good option for people who are starting to learn kernel programming, want to try writing some modules of their own, and do not intend to purchase hardware at this point in time.
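For the arm64 case, a minimal sketch of that flow (assuming the aarch64-linux-gnu- cross-toolchain and qemu-system-aarch64 are installed; with no root filesystem attached the kernel will panic at the end of boot, which is still enough to see your build start up):
# Configure and cross-compile the kernel for arm64:
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j$(nproc) Image
# Boot the result on QEMU's generic "virt" machine:
qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic \
  -kernel arch/arm64/boot/Image -append "console=ttyAMA0"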
It depends on your target. In your case, you have made a kernel driver for your own computer (which runs a Linux kernel).
But if you want to develop a kernel driver for another target, like a Raspberry Pi or another ARM or x86-64 board, you must learn to compile the kernel, edit the kernel config, boot the kernel image, and so on, because each target has a different kernel version.
You can refer to this training for more detail: https://bootlin.com/training/embedded-linux/

How to install a bare Linux kernel without any distribution to study it?

I want to study the Linux kernel without any distribution.
I found the LoadLin bootloader for MS-DOS, but I think it works only on older versions of Windows (Windows 95, 98, ME).
So I need to install only the kernel on my PC, if possible.
How can I install it?
The kernel alone is not that useful to you; you'll probably need some shell and a working compiler if you want to test things first-hand, and these are not part of the kernel.
There's a distribution called Linux From Scratch which basically lets you install the kernel and then whatever other stuff you want, literally from scratch (as in, by compiling everything yourself and adding only what YOU want).
I am wondering, though: what exactly do you want to study, and how does having a distribution affect your studying of the kernel? (Yes, some distributions ship custom kernels, but the major features are almost always the same.)
Minimal Linux Live is a small script that:
downloads the source for the kernel and busybox
compiles them
generates a bootable 8 MB ISO with them
The ISO then leaves you in a minimal shell with busybox.
With QEMU you can then easily boot into the system, which might be a more convenient way to study the kernel.
Or you can just use the Live ISO as a regular distribution and install it on metal.
Usage:
git clone https://github.com/ivandavidov/minimal
cd minimal/src
./build_minimal_linux_live.sh
# Wait.
# Install QEMU.
# minimal_linux_live.iso was generated
./qemu64.sh
and you will be left inside a QEMU window with your new minimal system. Awesome.
See also:
https://unix.stackexchange.com/questions/17122/is-it-possible-to-install-the-linux-kernel-alone
https://superuser.com/questions/307087/linux-distro-with-just-busybox-and-bash
Why not use a distribution? Just get some free VM software (e.g., VirtualBox) and install an arbitrary Linux distribution. You'll have all the build tools you need to compile the kernel there, without actually touching your system.

Use Cygwin or VM with UNIX for library that requires UNIX?

Forgive my ignorance: I need to use a library that requires a UNIX system (LIBSHORTTEXT). Do I need to install a virtual machine with UNIX, or is Cygwin enough? (I've read quite a few articles about the difference between them, but I don't really understand the practical difference for this specific use.) Thanks!
Edit: The documentation that said that the library needs UNIX is here
That really depends on what makes the library "require UNIX". Looking at it briefly, it appears to be ANSI C and Python, both of which should either compile as-is or be fairly easy to port on a Windows development system. In your case I'd go with Cygwin if you don't already have a development suite running, as it is likely to let you just get things working.
A Virtual Machine is a bit more compartmentalized, so much less connection between Windows and the running software. Unless you are planning to use the operating system in the Virtual Machine as a target for your program, it is a bit of overkill in this case, IMHO.
Hope this helps.
Normally I would say Cygwin will do, but it depends on how you use the library. And when you say that the library requires a UNIX system, what do you mean? Are you building a Python or C++ program?
The main difference between working in Cygwin and in a VM is that Cygwin still works in a Windows environment, with Windows directories and hardware drivers, whereas a VM has all of this emulated as if it actually were a UNIX machine.

Playing/Learning -- QEMU (for ARM), Angstrom Linux (or Debian)

My ultimate goal is to do some programming for Angstrom Linux (or Debian or other Linux distros) on QEMU emulating an ARM processor board such as the Versatile board. I am happy to experiment, but if someone has attempted something similar and can give a little guidance, it might hasten progress.
My understanding of the steps needed is:
1. Build QEMU from source (although I am not sure a prebuilt binary won't do). I found QEMU Manager for Windows (XP being the desktop OS on which I intend to run QEMU).
2. Install an ARM toolchain (e.g. Yagarto / GNU-ARM for Cygwin?)
3. Download an Angstrom Linux tarball and build it
4. Create a QEMU image with Angstrom Linux.
However, I am missing the details, as I believe there are choices to be made at each of those steps.
IMHO you should use a Linux distribution as the host machine for your QEMU instead of trying to compile/install all the QEMU stuff in a Cygwin-based system; it will prevent some future headaches. You can use VMware Player with an Ubuntu image.
I used to play with this tutorial for Debian on QEMU.
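The QEMU invocation in that kind of tutorial ends up looking roughly like this (the kernel, initrd, and disk image names below are placeholders for the files the tutorial has you download):
qemu-system-arm -M versatilepb \
  -kernel vmlinuz-versatile -initrd initrd.img-versatile \
  -hda debian_arm.qcow2 -append "root=/dev/sda1"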
The BeagleBoard, Hawkboard, and Open-RD sites all tend to walk you through building their distros on QEMU (ARM), and from there, there is no reason why you cannot just keep running on the simulation instead of heading for hardware.
This is an example of how to do it with ubuntu.
https://wiki.edubuntu.org/ARM/RootfsFromScratch
Yes, it is also possible to cross-compile everything yourself; I would start with wiki pages that hand-hold you through all of the steps. Or, as with the Hawkboard or BeagleBoard, get a pre-built binary (kernel and root file system), just boot it, and run in that environment rather than messing with building everything.

Doing coding in Linux through a virtual machine on Windows VS partitioning

I already have experience with setting up virtual machines, running them, and other minor tasks. I'm a gamer, so I won't get rid of Windows (for now at least...), but I do want to be a great programmer and to be involved with the open-source community.
I'd like to know whether it's a good idea to do my programming in Linux through a virtual machine, versus giving it a partitioned section of the HDD. I'd like to know about performance pros and cons, and functionality.
All responses are appreciated, thanks in advance.
The type of programming I intend to dive into:
Android dev, web dev, desktop dev... more Android and web right now, though.
So I'm looking at C#, C, C++, Java, PHP, HTML, MySQL... off the top of the dome.
I do web design as well, so Dreamweaver is added as an "essential". But I'm sure I can do Dreamweaver files and upload them to the server after programming in Linux... right?
And any info on IDEs in Linux for the above-mentioned languages is appreciated, but I would prefer going the coding route and understanding the essence of what's happening "under the covers".
Thanks to all for reading, I appreciate it.
Hope this isn't confusing :S
There is an easier solution:
I still have to use Windows for Symbian programming, so I use Wubi and Ubuntu to get my dual boot into Linux. When you deploy Wubi, it uses a large file as its disk, so there is no need to worry about or mess with creating a partition.
I have used it for 18 months with no data loss and no worries.
There is also another tool called andlinux:
http://www.andlinux.org/
It uses coLinux to run Linux as a program inside Windows.
A couple things:
If you're using an IDE, there's no point in coding on Linux. Linux is nice for programming because the command-line tools are awesome. NetBeans and Eclipse both work fine on Windows. All you'd be missing is makefiles (which IDEs don't use anyway).
Using a virtual machine would be annoying (working inside the window and so on) and slow. Try AndLinux if you want to have Linux running in Windows. It sets up X and PulseAudio for you, so all of your programs will appear to be native. It's basically a way to run Ubuntu as a Windows service (all Ubuntu packages for your architecture are installable).
If you just want the fun of Linux command-line programs without access to all of Ubuntu, Cygwin is smaller and might be faster.
If by "Dreamweaver files", you mean HTML/PHP/CSS, then yes, you can just upload them to the server. As far as I know, the only ASP or ASP.net compatible server is Microsoft's, but why use that anyway?
EDIT: SO didn't give me enough space in the comments to answer your question..
AndLinux and Cygwin are basically just better ways to do your "virtual machine" idea.
Cygwin adds a POSIX layer to Windows (basically everything you need to compile Unix/Linux/BSD programs). This means you can generally take a Linux program and just compile it on Windows and have it work. They also have repositories, but in my experience the Cygwin installer is slow and hard to use.
AndLinux runs the Linux kernel as a Windows service, giving you a similar experience to running it in VirtualBox or other virtualization programs. However, it also sets up X (the graphics layer for Linux) and PulseAudio (a sound system that lets you send sound over a network), so that when you run Linux programs they act and sound like native programs. I also like AndLinux better because you have access to all of Ubuntu's programs, and apt-get is easier to use than Cygwin's installer. Also, if you use AndLinux and later decide to go 100% Linux, you're basically already using it that way.
What I'm getting at is: If you want to run Linux in a virtual machine, don't. Just install AndLinux. It will be faster and it's much easier to work with (since everything is just a normal window).
Here's an example of the difference:
Screenshot of AndLinux: the program in the bottom-right corner is running in AndLinux. Notice how it just looks like a badly themed Windows program? Compare that to something like this, where you have another desktop in a window.
And still... there's no reason to virtualize NetBeans. It's a native Windows program, and you would gain nothing and lose a lot of speed.
If you're interested in Android development and you want to use Linux, then I would recommend you do your development in Eclipse. Eclipse is available for Linux and if you get Ubuntu then Eclipse is amazingly easy to install. I used VirtualBox + Ubuntu + Eclipse for several projects I worked on. If you decide that Linux is not for you and your project was in Eclipse then you will have no problem switching back to Windows since Eclipse is available for both operating systems.
The ONLY problem I had was the screen size on the virtual machine: if you have a big screen and use a virtual machine, you might be limited to a fraction of your actual screen resolution. It's very easy to install Linux on a second partition, so I would recommend you go with a second partition if you want to fully utilize the size of your monitor.
My setup is sort of the opposite: I run Linux as my main OS, both at work and at home, and I have Windows in a virtual machine. On a modern computer with adequate memory, the performance of development tools is not a problem. I work with Visual Studio in the virtual machine and have seen few performance issues. (But note that this is on a fast computer, and that you may need more memory than otherwise, since you are running two OSes at the same time. On an old computer with less memory it can become unbearable.)
Dual-boot, where you have to restart the computer to switch OS, doesn't work well for me. It takes way too much time to switch, and I really need to switch back and forth. Having Windows in a window works much better for me, and you can maximize that "Windows window" so it looks like you're just running Windows.
One thing you may want to look at is having Linux run in a VM, then configuring Samba to let the host network-mount pieces of the Linux filesystem, so that you can operate using Windows tools while Linux runs the server processes (e.g., httpd). Alternatively, I'm sure there are shell extensions for using FTP, NFS, or SSH/SFTP servers from within Explorer, but I've not looked at any for a long time.
If you happen to need graphical Linux tools, you can use the X server found in Cygwin for that.
The downside of this plan is that Samba can be a bit tricky to configure, but you get to use the Windows tools you're already familiar with.
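For what it's worth, the Samba side can be as small as one share definition; a sketch, where the share name, path, and user are examples and the service name varies by distro:
# Append a share to the Samba config:
sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'
[src]
   path = /home/dev/src
   read only = no
   valid users = dev
EOF
# Give the user a Samba password, then restart the daemon:
sudo smbpasswd -a dev
sudo systemctl restart smbd   # or: sudo service smbd restart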
I had no issues running Ubuntu via VMware. You can easily switch to full-screen mode at any time. Strongly recommended. One shortcoming is that Linux will not be exposed to the full potential of your hardware; Compiz Fusion failed to work as a result.
Given that you're a gamer, I'm thinking your machine should be fast enough to run Linux in a VM. Best to try out the VM before messing with disk partitions.
I use physically separate machines to run Linux and Windows (and Mac OS X). This means I don't have to reboot to do something different, and each system gets the full power of its hardware.
Disadvantages: more desk space used, and more time and money spent maintaining hardware (though if you do a rolling upgrade this is mitigated; Linux runs most happily on not-quite-new machines). It doesn't work so well if you like carrying laptops around.
Be aware that VMs universally don't give you full graphics acceleration. This can be a non-issue (many programs must cope with Intel GMA anyway), or it can be a showstopper. Your choice.
