Background: On machines faster than about 2.1 GHz, WFW 3.11 cannot load the protected-mode TCP/IP drivers ("Wolverine") because of a timing problem in IOS.386. There is a similar problem with Win95 and the first release of 98. MS released a patch for 95/98, but never for WFW.
I found an old post here by Rob Cowell about this problem. He said that he used a different stack, and I was curious which one. I have made it work with the 16-bit DOS-mode stack from MS, but it is a real memory hog below 640 KB. (Also, there is no native DHCP, but NBT does work, so you can map network drives. You also need to get rid of IPX and NetBEUI support for similar reasons.)
And just to head off questions of the type "why would anyone want to ....": I'm trying to put together a WFW live CD for modern hardware, just for fun. I did cobble together DHCP support using WATTCP modules and some complicated scripting before Windows starts, but I need to be very circumspect about running any DOS programs or it runs out of low memory. (WFW actually sees 256 MB of high memory; I was quite surprised.)
I voted to close, because it's not programming-related. But it is an interesting project to see if you can pull it off. I'd be keen to see a link or two showing how you get on.
Well, a strange problem has occurred in my work project, which is written in Delphi. When I try to compile it, it takes 8 hours to compile about 770,000 lines (and that is not even the whole project), while my colleague needs only 15-20 seconds.
I've tried everything suggested in "Why does Delphi's compilation speed degrade the longer it's open, and what can I do about it?":
Shorten the path to the project
Defragment the disk with MyDefrag
Use Clear Unit Cache (not sure whether it helped at all)
I have also turned off optimization, and I build in debug mode. My PC is pretty fast (i5-2310 3.1 GHz, 16 GB RAM, ordinary SATA HDDs); the bottleneck could be the HDD, but my colleague has an ordinary one too. So it is a complete mystery why compilation is so slow.
Edit: I apologize for the lack of information. Here is some additional info:
I use debug mode; the release build behaves the same.
We use Delphi XE.
I originally copied the project folder from my colleague.
I do not use a network drive, and I have tried moving the project to another HDD.
Additional system info: I use Windows 7 Enterprise N 64-bit, while my colleague uses Windows 7 32-bit. Also, Delphi XE is 32-bit (I don't know whether it can be 64-bit). Maybe that is the reason in some way?
Edit 2: I found a solution! The problem was that I had installed Delphi on my 64-bit Windows system. Installing it in a virtual Windows 7 x86 machine made it work: it now compiles in seconds. I have no idea why the performance gap is so big.
Are you sure this is not some hardware problem, e.g. your hard disk having a bad sector? Try to put the source code on a different disk and see if the problem goes away. Or maybe the search path points to a network drive that is very slow or not even available?
I'm currently learning about the different modes the Windows operating system runs in (kernel mode vs. user mode), device drivers, their respective advantages and disadvantages and computer security in general.
I would like to create a practical example of what a faulty device driver that runs in kernel mode can do to the system, by for example corrupting memory used for critical OS-processes.
How can I execute my code in kernel mode instead of user mode, directly?
Do I have to write a dummy device driver and install it to do this?
Where can I read more about kernel and user mode in Windows?
I know the dangers of this and will do all of the experiments only on a virtual machine running Windows XP.
The "Windows Internals" book is rather shallow on the topic at question.
First I should note that any program also runs in kernel mode (KM) at times. This is because - just as in unixoid systems - for system calls the calling thread transitions into KM, where the kernel itself or one of the drivers services the request, and then returns to user mode (UM).
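To make that concrete, here is a small, purely illustrative C program (my own sketch, not from any of the books below). An ordinary Win32 call such as CreateFile goes through ntdll, which issues the actual system call; the thread transitions into KM, the I/O manager and the file-system driver service the request, and the thread returns to UM with a handle.

    #include <windows.h>

    /* Illustrative sketch only: the file name is hypothetical. */
    int main(void)
    {
        /* CreateFileA ends up in ntdll!NtCreateFile, which traps into
           kernel mode; the request is serviced there and control returns
           to user mode with the result. */
        HANDLE h = CreateFileA("example.txt",
                               GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE)
            CloseHandle(h);  /* another user-mode -> kernel-mode transition */
        return 0;
    }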
A first step would be to download the latest Windows Driver Kit (WDK) and start reading the documentation. If you want a more digestible book, go for one of these:
Windows NT Device Driver Development - though an old title, many of the basics still apply.
Programming the Windows Driver Model (by Oney) - covers WDM programming in particular, also covers the basics, and has some errors (as do most books).
Undocumented Windows 2000 Secrets (by Schreiber) - contains plenty of information about all kinds of internals at a more technical level than the book mentioned before.
Undocumented Windows NT - contains a more generic part about internals on a technical level followed by a reference of some native API functions.
Windows NT/2000 Native API - the classic, but it's more of a reference. Nevertheless there are several gems (and examples) in it.
Since you want to use Windows XP, many of the techniques described over at rootkit.com (even those from some years ago) should work. They also have plenty of samples.
And as you can tell from the name of the referenced website, you are in fact in what I'd call a gray area with that question ;)
It's a simple answer, and as you suspect, you do need to write a device driver in order to run in kernel mode (see the minimal skeleton sketched after these links). I'm afraid I don't know of a particularly good reference for kernel-mode programming, but a quick web search reveals:
en.wikibooks.org/wiki/Windows_Programming/User_Mode_vs_Kernel_Mode
http://www.netomatix.com/Development/Kernelmode.aspx
http://technet.microsoft.com/en-us/library/cc750820.aspx
http://msdn.microsoft.com/en-us/library/ff553208(VS.85).aspx
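To give a feel for what the "dummy driver" route looks like, here is a minimal sketch of a kernel-mode driver built against the WDK; the names and messages are illustrative, not taken from any of the links above.

    #include <ntddk.h>

    /* Minimal illustrative driver: DriverEntry and everything it calls
       run in kernel mode. Output is visible in a debugger or DebugView. */
    static VOID SampleUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
        DbgPrint("sample: unloading\n");
    }

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);
        DbgPrint("sample: running in kernel mode\n");
        DriverObject->DriverUnload = SampleUnload;  /* allow clean unload */
        return STATUS_SUCCESS;
    }

Once built, a driver like this is typically registered and started as a kernel service on the XP test VM (for example via the Service Control Manager), at which point DriverEntry runs in kernel mode and any bug in it can take the whole system down - which is exactly the kind of experiment you describe.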
You will need a good understanding of Windows Internals:
http://technet.microsoft.com/en-us/sysinternals
and yes, they have a book: Windows Internals
http://technet.microsoft.com/en-us/sysinternals/bb963901
http://www.amazon.com/Windows%C2%AE-Internals-Including-Windows-PRO-Developer/dp/0735625301
Basically your questions are all answered in this book (and it even comes with samples and hands-on labs).
I already have experience with setting up virtual machines, running them, and other minor tasks. I'm a gamer, so I won't get rid of Windows (for now at least...), but I do want to be a great programmer and to be involved with the open-source community.
I'd like to know whether it's a good idea to do my programming in Linux through a virtual machine, versus giving it its own partition on the HDD. I'd like to know about performance pros and cons, and functionality.
All responses are appreciated, thanks in advance.
The type of programming I intend to dive into:
Android dev, web dev, desktop dev... more Android and web right now, though.
So I'm looking at C#, C, C++, Java, PHP, HTML, MySQL... off the top of my head.
I do web design as well, so Dreamweaver is added as an "essential". But I'm sure I can create the Dreamweaver files and upload them to the server after programming in Linux... right?
Any info on IDEs in Linux for the above is appreciated, but I would prefer going the plain-coding route and understanding the essence of what's happening "under the covers".
Thanks to all for reading, I appreciate it.
Hope this isn't confusing :S
There is an easier solution.
I still have to use Windows for Symbian programming, so I use Wubi and Ubuntu to provide my dual boot into Linux. When you deploy it, Wubi uses a single large file, so there is no need to worry about or mess with creating a partition.
I have used it for 18 months with no data loss and no worries.
There is also another tool called andlinux:
http://www.andlinux.org/
It uses coLinux to run Linux as a program inside Windows.
A couple things:
If you're using an IDE, there's no point in coding on Linux. Linux is nice for programming because the command-line tools are awesome. NetBeans and Eclipse both work fine on Windows. All you'd be missing is makefiles (which IDEs don't use anyway).
Using a virtual machine would be annoying (working with the window and stuff) and slow. Try AndLinux if you want to have Linux running in Windows. It sets up X and Pulseaudio for you, so all of your programs will appear to be native. It's basically a way to run Ubuntu as a Windows service (all Ubuntu packages for your architecture are installable).
If you just want the fun of Linux command line programs without access to all of Ubuntu, cygwin is smaller and might be faster.
If by "Dreamweaver files", you mean HTML/PHP/CSS, then yes, you can just upload them to the server. As far as I know, the only ASP or ASP.net compatible server is Microsoft's, but why use that anyway?
EDIT: SO didn't give me enough space in the comments to answer your question.
AndLinux and Cygwin are basically just better ways to do your "virtual machine" idea.
Cygwin adds a POSIX layer to Windows (basically everything you need to compile Unix/Linux/BSD programs). This means that you can generally take a Linux program and just compile it on Windows and have it work. They also have repositories, but in my experience the Cygwin installer is slow and hard to use.
AndLinux runs the Linux kernel as a Windows service, giving you a similar experience to running it in VirtualBox or another virtualization program. However, it also sets up X (the graphics layer for Linux) and PulseAudio (a sound system that lets you send sound over a network), so that when you run Linux programs they act and sound like native programs. I also like AndLinux better because you have access to all of Ubuntu's programs, and apt-get is easier to use than Cygwin's installer. Also, if you use AndLinux and later decide to go 100% Linux, you're basically already using it that way.
What I'm getting at is: If you want to run Linux in a virtual machine, don't. Just install AndLinux. It will be faster and it's much easier to work with (since everything is just a normal window).
Here's an example of the difference:
Screenshot of AndLinux: The program in the bottom-right corner is running in AndLinux. Notice how it just looks like a badly themed Windows program? Compare that to something like this, where you have another desktop in a window.
And still, there's no reason to virtualize NetBeans. It's a native Windows program; you gain nothing and lose a lot of speed.
If you're interested in Android development and you want to use Linux, then I would recommend you do your development in Eclipse. Eclipse is available for Linux and if you get Ubuntu then Eclipse is amazingly easy to install. I used VirtualBox + Ubuntu + Eclipse for several projects I worked on. If you decide that Linux is not for you and your project was in Eclipse then you will have no problem switching back to Windows since Eclipse is available for both operating systems.
The ONLY problem I had was the screen size on the virtual machine... if you have a big screen and you use a virtual machine then you might get limited to a fraction of your actual screen resolution. It's very easy to install Linux on a second partition, so I would just recommend you go with a second partition if you want to fully utilize the size of your monitor.
My setup is sort of the opposite: I run Linux as my main OS, both at work and at home, and I have Windows in a virtual machine. On a modern computer with adequate memory the performance of development tools is not a problem. I work with Visual Studio in the virtual machine, and I have seen few performance issues. (But note that this is on a fast computer, and that you may need more memory than otherwise, since you are running two OSes at the same time. On an old computer with less memory it can become unbearable.)
Dual-boot, where you have to restart the computer to switch OS, doesn't work well for me. It takes way too much time to switch, and I really need to switch back and forth. Having Windows in a window works much better for me, and you can maximize that "Windows window", so it looks like you're just running Windows.
One thing you may want to look at is to have Linux running in a VM, then configuring Samba to allow the host to network-mount pieces of the Linux filesystem so that you can operate using Windows tools, and have Linux running the server processes (e.g., httpd). Alternatively, I'm sure that there are shell extensions for using FTP, NFS, or SSH/SFTP servers from within Explorer, but I've not looked at any for a long time.
If you should happen to need to use graphical Linux tools then you can use the X server found in cygwin for that.
The downside of this plan is that Samba can be a bit tricky to configure, but you get to use the Windows tools you're already familiar with.
I had no issues running Ubuntu via VMware. You can easily switch to full-screen mode anytime. Strongly recommended. One shortcoming is that Linux will not be exposed to the full potential of your hardware; Compiz Fusion failed to work as a result.
Given that you're a gamer, I'm thinking your machine should be fast enough to run Linux in a VM. Best to try out the VM before messing with disk partitions.
I use physically separate machines to run Linux and Windows (and MacOS X). This means that I don't have to reboot to do something different, and each system gets the full power of the hardware.
Disadvantages: more desk space used, more time and money spent maintaining hardware (though if you do a rolling upgrade, this is mitigated - Linux runs most happily on not-quite-new machines). Doesn't work so well if you like carrying laptops around.
Be aware that VMs universally don't give you full graphics acceleration. This can be a non-issue (many programs must cope with Intel GMA anyway), or it can be a showstopper. Your choice.
I am using Turbo C 3.0 and Turbo C 2.0 for programming, and I am running Windows XP. Under Windows 98 these programs worked fine, but after installing XP they really slow down my system. They use a lot of CPU even when idle (idle meaning "no interaction between the program and the user").
Has anybody solved this issue before? Please post here.
Also, I want to know what is causing the slowdown!
Those are 16-bit DOS programs, so they do not run natively on XP; they are most likely running inside the NT Virtual DOS Machine. Use Task Manager, or better yet Process Explorer, to check this. You will probably not see your programs listed; look for instances of ntvdm.exe instead.
I have noticed that several antivirus programs (Checkpoint, Proventia Desktop) seem to have a problem with ntvdm: they eat up quite a bit of CPU whenever an ntvdm instance is running.
Also, wasn't Turbo C finicky about its extended-memory settings? If you still have your Autoexec.bat and Config.sys files from the Win98 system, you could try changing XP's settings to match. The XP equivalents of these files are autoexec.nt and config.nt; they are in the Windows\System32 directory.
I suspect Adrian's comment is the correct answer: old DOS programs did not account for multitasking and so tended to put themselves in tight loops when "idle". Back in the day, it didn't matter as nothing else was running at the same time and the operating system would interrupt the running program to handle hardware, well, interrupts.
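For illustration (this is a hypothetical snippet, not anything from the question), a typical DOS-era "idle" loop looks something like this; under plain DOS it did no harm, but inside ntvdm.exe on XP it pins the CPU:

    #include <conio.h>   /* Turbo C console I/O: kbhit(), getch() */

    int main(void)
    {
        /* Busy-wait until a key is pressed: no HLT, no yield to the OS,
           just polling at full speed - which shows up as high CPU use
           under a multitasking system. */
        while (!kbhit())
            ;
        getch();
        return 0;
    }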
I would highly recommend avoiding such tools on modern hardware, because the programs they generate are likewise not multitasking-friendly. They are also optimized for ancient processors and have limited memory addressing. If you have some old hardware and want to goof around with it, then knock yourself out. But there are plenty of modern compilers that are free (either free-as-in-beer, like Visual C++ Express, to get you hooked, or open source).
This can be partially avoided by setting the process priority.
Start the app, e.g. Turbo C++ 3.0
Minimize it and go to Task Manager
Find ntvdm.exe
Right-click > Set Priority > Low > Yes
Then it runs at less annoying speeds.
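If you prefer to do the same thing programmatically, here is a rough sketch (my own, not part of the steps above) that finds ntvdm.exe instances with the Toolhelp API and drops them to idle priority; an ANSI (non-Unicode) build is assumed.

    #include <windows.h>
    #include <tlhelp32.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        /* Snapshot all processes and walk the list looking for ntvdm.exe. */
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        PROCESSENTRY32 pe;
        if (snap == INVALID_HANDLE_VALUE)
            return 1;
        pe.dwSize = sizeof(pe);
        if (Process32First(snap, &pe)) {
            do {
                if (_stricmp(pe.szExeFile, "ntvdm.exe") == 0) {
                    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE,
                                           pe.th32ProcessID);
                    if (h) {
                        /* Equivalent of Set Priority > Low in Task Manager. */
                        SetPriorityClass(h, IDLE_PRIORITY_CLASS);
                        printf("Lowered priority of PID %lu\n",
                               (unsigned long)pe.th32ProcessID);
                        CloseHandle(h);
                    }
                }
            } while (Process32Next(snap, &pe));
        }
        CloseHandle(snap);
        return 0;
    }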
I'm currently setting up VMware Server 2.0 for kernel debugging with gdb (see this setup guide), and someone asked me why not use KVM?
So I ask: KVM vs. VMware for kernel debugging / USB driver development -
what are the pros and cons of each?
Driver development? Are you working on a driver for a particular piece of hardware? If so, then you probably won't be able to use virtualization, because the virtualized instance won't have access to the new hardware.
For this you will need two machines, one running a remote debugger against the other.
Edit: Apparently you're developing a driver for a USB device? This is one area in particular where a VM actually can help. These days most VMs have the ability to delegate specific USB devices to a guest OS.
That said, this situation doesn't really offer any benefits over the remote-debugger option, because you still need a way to inspect the state of the running or crashed OS, and VMs offer very little assistance in this regard. You might be able to replay saved states from just before a crash.
You might be able to get a bit of traction using UML, which would allow you to do local debugging as with a regular user process, which is a little less trouble.
Instead of answering the direct question, I'll add another option... Depending on whether the kernel in question is a Linux kernel, and which part(s) of it you are working on, you might find that User-Mode Linux (included in the 2.6.x source, and available as patch sets for 2.4 and 2.2) trumps both of those options.
As it runs the kernel as a userland process under the host kernel, it is easier to attach common debugging tools to. I believe it is very commonly used in the early stages of updates/additions to file-system-related code. If you are developing/debugging modules that interact directly with hardware, it may be of much less use to you, though.
Reference links: home, other
I recently started building GNU Mach/Hurd and found the combination of QEMU/KVM to work really quite well, for the following reasons:
QEMU presents quite a clean environment
Networking has a lot of options
I can easily mount the filesystem using a raw device file / loopback
The bottom line is, for kernel work I just want the minimum functionality needed to boot and see the result. VMware is aimed much more at usable virtualization than at down-and-dirty work.
There is, however, no comparison to booting on a real machine with real hardware. The VM environment can seem like a safety blanket sometimes... because even my toaster would know what a Realtek RTL8139C was.
If it is a "real hardware" device, of course, vmware will not emulate it, so you won't be able to debug the driver under it (nor will any other virtualisation software, unless you extend one to do so).
Device-driver debugging can be done to some extent on a real hardware machine with a normal kernel, although there are obviously things you can't do, like setting breakpoints.
It is still possible to attach a debugger to the kernel and inspect things. Moreover, traditional printf()-style debugging is quite possible (printk, anyone?), and there are various features in the kernel that make debugging easier. It is possible to build the kernel with various debug options to try to detect pointer problems, memory leaks, etc.
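As a tiny illustration of the printk route (the module name is mine, not from this answer), a throwaway module like this logs to the kernel ring buffer, which you can read with dmesg:

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    /* Illustrative printk-debugging stub: build out-of-tree with the usual
       obj-m Makefile, load with insmod, and read the output with dmesg. */
    static int __init dbgdemo_init(void)
    {
        printk(KERN_DEBUG "dbgdemo: loaded; state I want to inspect goes here\n");
        return 0;
    }

    static void __exit dbgdemo_exit(void)
    {
        printk(KERN_DEBUG "dbgdemo: unloading\n");
    }

    module_init(dbgdemo_init);
    module_exit(dbgdemo_exit);
    MODULE_LICENSE("GPL");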
By default, the kernel even gives a nice-ish stack trace in the log when it encounters an oops or BUG condition (obviously this does not necessarily get written anywhere if the system hangs or crashes). Of course a pointer-out-of-range condition happening inside an interrupt is a recipe for disaster, but you could still get a stack trace on the screen immediately before the panic :)