I have a little knowledge of operating systems (really very little).
I would like to learn a lot about the Windows OS specifically (e.g. Windows 7).
I know it's the most dominant OS out there, and there is an enormous amount of work I'll have to do.
Where do I start? What are beginner/intermediate books/articles/websites that I should read?
The first thing I wonder about is that the compiler turns my C programs into binary code, yet when I open the resulting (.exe) files, I find something other than 0s and 1s.
I can't point you in a direction as far as books go, but I can clarify this:
The first thing I wonder about is that the compiler turns my C programs into binary code, yet when I open the resulting (.exe) files, I find something other than 0s and 1s.
Your programs are in fact compiled to binary. Everything on your computer is stored in binary.
The reason you do not see ones and zeroes is the way character encodings work. It takes eight bits, each of which can have the value 0 or 1, to store one byte. Many programs and character encodings represent one byte as one character (with the caveat of non-ASCII Unicode characters, but that's not terribly important in this discussion).
So what's going on is that the program you are using to open the file is interpreting each sequence of eight bits and turning it into one character. Each character you see when you open the file is, in fact, eight ones and zeros. The most basic mapping between bytes and characters is ASCII. The character "A", for example, is represented in binary as 01000001, so when the program you use to open the file sees that bit sequence, it will display "A" in its place.
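To make that concrete, here's a small C sketch (my own illustration, not from the original answer) that prints each byte of a string as the eight bits it is actually stored as:

    #include <stdio.h>

    /* Print each byte of a string as the eight bits it is stored as.
       For "A" this prints 01000001, which is the ASCII code 65. */
    static void print_bits(const char *s)
    {
        for (; *s; s++) {
            for (int bit = 7; bit >= 0; bit--)
                putchar(((unsigned char)*s >> bit) & 1 ? '1' : '0');
            putchar(' ');
        }
        putchar('\n');
    }

    int main(void)
    {
        print_bits("A");    /* 01000001 */
        print_bits("Hi");   /* 01001000 01101001 */
        return 0;
    }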
A nice book to read if you are interested in the Microsoft Windows operating system is The Old New Thing by Microsoft legend Raymond Chen. It is very easy reading if you are a Win32 programmer, and even if you are not (even if you are not a programmer at all!) many of the chapters are still readily accessible.
Otherwise, to understand the Microsoft Windows OS, you need to understand the Windows API. You learn this by writing programs for the Windows (native) platform, and the official documentation, which is very good, is at MSDN.
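If it helps to see a starting point, here is roughly the smallest native Windows program you can write against that API (a sketch of my own; a real Petzold-style program would register a window class and run a message loop):

    #include <windows.h>

    /* Smallest useful native Windows program: a single call into the
       Windows API. Build with any Windows C compiler, e.g. MSVC. */
    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        MessageBoxW(NULL, L"Hello from the Windows API", L"Hello", MB_OK);
        return 0;
    }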
There are a series of books titled "Windows Internals" that could probably keep you busy for the better part of a couple years. Also, Microsoft has been known to release source code to universities to study...
Well, if you study the Win32 API, you will learn a lot about the OS at a high level
(Petzold is the king, and his book is about Win32 in general, not Windows 7 specifically).
If you want to study the low level, learn the processor's assembly language.
There are a ton of resources out there for learning operating systems in general, many of which don't really focus on Windows because, as John pointed out, it's very closed and not very useful academically. You may want to look into something like Minix, which is very useful academically. It's small, light, and made pretty much for the sole purpose of education.
From there you can branch out into other OSes (even Windows, as far as not being able to look under the hood can take you) armed with a greater knowledge of what an OS is and does, as well as more knowledge of the inner workings of the computer itself. (For example, opening executable code in what I assume was a text editor, such as Notepad, to try to see the 1s and 0s, which, as cdhowie pointed out eloquently, is not doing what you think it's doing.)
I would personally look into the ReactOS project, a working Windows clone.
The code can give some ideas of how Windows is implemented...
Here is the site:
www.reactos.org
First off, this is a genuine question and not poking fun at anyone. I am trying to learn C++ after many years of not touching it, and I found a very old article (last updated July '96) while trying to remember how to implement BCD addition. Even though the article is old, seeing that the person who wrote it is a professor, I am in a WTF state after reading the first few lines. I am learning and don't want to disregard something so easily, so please excuse my naivety.
The BCD system was chosen for the internal number system in these machines because it is easy to convert it to alphanumeric representations for printouts and displays. The compelling advantages of BCD have waned over time, and these digits are supported by more modern hardware simply to provide backward compatibility with earlier generations of machines.
Is the above statement true?! If so, can someone explain how modern CPUs perform addition if not using binary? Or is the author trying to say something else and I misunderstood? I am concerned that the author might be hinting at something at the hardware level that is different from the software abstraction. Or it might be some sort of translation issue.
I don't see what purpose is served by processors giving an outer appearance of being binary ("for backward compatibility") when internally they are decimal and don't need the BCD system.
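For what it's worth, my current understanding (a sketch of my own, so please correct me) is that modern CPUs add in plain binary, and that BCD support, where it exists at all, is just binary addition followed by a decimal correction, something like:

    #include <stdio.h>
    #include <stdint.h>

    /* Packed BCD stores one decimal digit per nibble: 47 is 0x47.
       BCD addition is ordinary binary addition followed by a decimal
       correction (+6) of any nibble that passed 9, which is what
       legacy instructions like x86's DAA did. */
    static uint8_t bcd_add(uint8_t a, uint8_t b)
    {
        unsigned low = (a & 0x0F) + (b & 0x0F);
        if (low > 9)
            low += 6;                   /* correct the low digit  */
        unsigned sum = (a & 0xF0) + (b & 0xF0) + low;
        if (sum > 0x99)
            sum += 0x60;                /* correct the high digit */
        return (uint8_t)sum;            /* carry out is discarded */
    }

    int main(void)
    {
        printf("%02X\n", bcd_add(0x19, 0x28));   /* prints 47 */
        return 0;
    }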
Do the WinAPI wide-string functions support characters that consist of more than one code unit (so-called surrogate pairs)?
Is there anything about it in the documentation?
The MSDN article, Surrogates and Supplementary Characters says:
Note: Windows 2000 introduces support for basic input, output, and simple sorting of supplementary characters. However, not all system components are compatible with supplementary characters.
Obviously, we're a bit beyond Windows 2000.
My experience has been that Windows does in fact handle surrogate pairs quite well. I know that there were some bugs here and there, but it's been a while since I kept up with the issue.
Short answer: Windows supports surrogate pairs, but there are likely some bugs in odd corners.
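If you want to poke at the mechanics yourself, here's a little C sketch of mine (the IS_HIGH_SURROGATE/IS_LOW_SURROGATE macros come from the Windows SDK headers) showing how a supplementary character occupies two WCHAR code units:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* U+1F600 lies outside the Basic Multilingual Plane, so UTF-16
           stores it as a surrogate pair: 0xD83D (high), 0xDE00 (low). */
        WCHAR s[] = { 0xD83D, 0xDE00, 0 };

        /* lstrlenW counts WCHAR code units, not characters: prints 2. */
        printf("code units: %d\n", lstrlenW(s));

        /* Walking the string character by character means pairing the
           surrogates back up. */
        if (IS_HIGH_SURROGATE(s[0]) && IS_LOW_SURROGATE(s[1]))
            printf("s[0] and s[1] form one supplementary character\n");
        return 0;
    }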
I have a question concerning computer programming. Let's say I have only one computer with no OS running, and I would like to start to "develop" an OS. Basically, what I have is a blank sheet and a pen to do so, and a couple of electronic devices. How do I put my instructions into that computer?
Today we use interpreters or compilers that "turn" a programming language into what they call "machine code". So my question is really: how do you generate machine code from nothing?
Thank you for your replies; a link to learn how to do that would be most welcome.
The first computers were programmed by writing the "machine code" directly, punching ones and zeros into cards (well, in fact, they punched octal digits).
This was done that way until somebody thought it would be a nice idea to have an assembler, which translated mnemonic instructions into those ones and zeros.
After that, another guy thought it would be a very nice idea to have a programming language that would translate "top-level" instructions into machine code.
And after that, or probably at the same time, some "internal procedures" were created to ease the programming: to open or close a file, the only thing you have to do is call an internal subroutine in the machine instead of programming all the open-file and close-file subroutines yourself. The seed for operating systems was planted.
The cross-compiling approach mentioned here is how an operating system for a new computer is created nowadays: you use a working computer as a "lever" to create an operating system for the new one.
It depends on how far back you want to go. On the earliest machines, "programming" was moving wires from one essentially analog ALU to another.
The women doing this work at that point were called "computers" and used pencil and paper.
Later you would use a pencil and paper and the datasheet/documentation for the instruction set, and assemble by hand, basically. There were no compilers, or even the concept of a programming language, at this point; all of that still had to evolve. You wrote down the ones and zeros in whatever form you preferred (binary or octal).
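For a flavour of what hand assembly looks like, here's a sketch of mine (x86 opcodes chosen purely for familiarity; the machines of that era predate x86):

    #include <stdio.h>

    /* Hand-assembled from the x86 opcode tables, the way early
       programmers worked from a datasheet:
           B8 2A 00 00 00    mov eax, 42
           C3                ret
    */
    static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    int main(void)
    {
        /* "Write down the ones and zeros": dump the program in octal,
           the form many early programmers actually used. */
        for (unsigned i = 0; i < sizeof code; i++)
            printf("%03o ", code[i]);
        putchar('\n');
        return 0;
    }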
One way to enter code at this point was with switches. Computers certainly predate them, but look for a picture of the front panel of a PDP-8 or the Altair, etc. You set the switches for the data value and address, and you manually strobe a write. You load the bootstrap in this way and/or the whole program, set the start address, and switch to run mode.
Over time they developed card and tape readers, for which you loaded the bootstrap in by hand (switches); then you could use a reader to load larger programs more easily. Cards could be punched on a typewriter-like thing, literally a keyboard, but instead of striking through a ribbon onto paper, it cut slots in a card.
OSes and programming languages started to evolve at this point. Until you had bootstrapped your compiler, you had to write the first compiler for a new language in some other language (no different than today). So the first assembler had to be written in machine code; then from assembler you could create some other language, and so on.
If you wanted to repeat something like this today, you would have to build a computer with some sort of manual input. You certainly could, but you would have to design it that way (and you would need to debounce the switches). You could, for example, take a processor with an external flash, be it parallel or serial, mux the lines to switches (with one switch controlling the mux), and either address/data/write your program by hand, or, for fun, use a SPI flash and serially load the program into it. It is much better to just use one of the PDP or Altair, etc., online simulators to get a feel for the experience.
There is no magic here; there is no chicken-and-egg problem at all. Humans had to do it by hand before the computer could do it. A smaller/simpler program had to generate more complicated programs, and so on. This long, slow evolution is well documented all over the internet and in books in libraries everywhere.
Computers are based on a physical processor that was designed to accept instructions (e.g., in assembly code) that only allowed primitive operations like shift, move, copy, and add. The processor's design decided how it spoke (e.g., how big the words were (8-bit) and other specs (speed, standards, etc.)). Using some type of storage, we could store the instructions (punch cards, disk) and execute huge streams of them.
If instructions were repeated over and over, you could move to an address and execute what was at that location, creating loops and other constructs (branches, context switches, recursion).
Since you would have peripherals, you would have some kind of way to interface with them (draw, print dots), and you could create routines on top of that to build letters, fonts, boxes, and lines. Then you could run a subroutine to print the letter 'a' on screen.
An OS is basically a high-level implementation of all those lower level instructions. It is really a collection of all the instructions to interface with different areas (i/o, computations etc). Unix is a perfect example of different folks working on different areas and plugging them all into a single OS.
I have an AT89S52, and I want to read the program burned on it.
Is there a way to do it with the programming interface?
(I am well aware it will be assembly code, but I think I can handle it, since I'm looking for a specific string in that code)
Thanks
You may not be able to (at all easily anyway) -- that chip has three protection bits that are intended to prevent you from doing so. If you're dealing with some sort of commercial product, chances are pretty good that those bits will be set.
Reference: Datasheet, page 20, section 17.
VB6 had a reputation for being too forgiving (and thereby allowing bad practices) and hiding complexities that perhaps developers would be better off knowing about. But I found that, say, 90% of applications could be done in VB6.
But I'd like to see more examples of pushing the envelope to work around VB6's limitations. For example, I once found some code for using pointers in VB6 by making calls to the Windows OS. The result was that some string manipulation on largish documents (about 2 MB) was brought down from 30 minutes to just over 3 seconds. Does anyone have other examples of going past the limits of VB6?
N.B. not VB.Net.
One nasty trick was to abuse CallWindowProc to call arbitrary code by passing a pointer to it. This is technically breaking that function's contract, since it's only supposed to be used with handles (not direct code pointers) obtained via GetWindowLong; but in practice so few people actually know this that the implementation is forced to allow arbitrary code pointers. This lets you call any function pointer, so long as it's stdcall, and takes 4 arguments of the same sizes as WndProc arguments.
One even nastier trick that is a consequence of the above is that you can dynamically generate code that way - just stick it in a byte array, and use CallWindowProc to jump to it. This way you can embed non-VB6-produced native code into a VB6 application without any external DLLs. Of course, in this age of NX bit enabled by default, it's probably not such a good idea anymore (if it ever was, that is)...
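For the curious, the same trick looks like this in C terms (a sketch of mine, with VirtualAlloc and a plain function pointer standing in for VB6's byte array and the CallWindowProc jump):

    #include <windows.h>
    #include <string.h>
    #include <stdio.h>

    /* The byte-array trick, sketched in C: put hand-made machine code
       in a buffer and jump to it. VB6 had no function pointers, hence
       the CallWindowProc detour; C can cast directly. It needs
       executable memory, which is exactly what the NX bit otherwise
       forbids. These bytes are 32-bit x86, so build as 32-bit. */
    int main(void)
    {
        /* mov eax, 7 ; ret */
        unsigned char code[] = { 0xB8, 0x07, 0x00, 0x00, 0x00, 0xC3 };

        void *mem = VirtualAlloc(NULL, sizeof code,
                                 MEM_COMMIT | MEM_RESERVE,
                                 PAGE_EXECUTE_READWRITE);
        if (!mem)
            return 1;
        memcpy(mem, code, sizeof code);

        int (*fn)(void) = (int (*)(void))mem;
        printf("%d\n", fn());            /* prints 7 */

        VirtualFree(mem, 0, MEM_RELEASE);
        return 0;
    }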
Joel said some good stuff about VB6 back in 2001.
Many VB6 programs are spaghetti, either because they're done as quick and dirty one-offs, or because they're written by hack programmers without training in object oriented programming, or even structured programming.
What I wondered was, what happens if you take top-notch C++ programmers who dream in pointers, and let them code in VB6. What I discovered at Fog Creek was that they become super-efficient coding machines. The code looks pretty good, it's object-oriented and robust, but you don't waste time using tools that are at a level lower than you need. I've spent years writing code for C++/MFC and years writing code in Visual Basic, and let me tell you, VB6 is just much, much more productive...
One of the things about Visual Basic 6 is that it doesn't always give you access to the full repertoire of Windows goodies that you need to make a polished application. But what it does do, better than almost any other programming environment, is let you drop into C++ code (or call C APIs) when you're desperate or when you need that extra speed.
That was written in 2001: when creating a new Windows program today, IMHO the obvious choice for best productivity is VB.Net or C#. (JOKE: C# is just Visual Basic with semicolons.)
Getting back to VB6: there are many good examples of how to call C APIs to do something special or just to run faster. Here are some of my favourite links:
Karl E Peterson's One Stop VB Shop - his StringBuilder sounds like your example, although it doesn't use API calls
Steve McMahon's VBAccelerator
And I give +10 to OneDayWhen for listing Matthew Curland's Advanced Visual Basic 6. Probably pushed the envelope further than anyone (and didn't quite burst it).
I'm not sure what he puts in his sandwiches but pretty much everything found in Matthew Curland's Advanced Visual Basic 6 is push-the-envelope programming usage of VB6. Truly great stuff.
Can't let this question go by without an answer mentioning Bruce McKinney's Hardcore Visual Basic, which is now (wonderfully) available online:
http://vb.mvps.org/hcvb.asp
It's a great read by an author who clearly loves the spirit of Basic.
Realizing that most of the Gang of Four design patterns rely on implementing an interface, not inheritance, and thus can easily be used in Visual Basic 6.
Being able to do so greatly improved the design of my CAD/CAM application.
How about a full XML parser in VB6?
Pixel-based, automatically-drawn irregular forms.
Paul Caton's subclassing, continued by LaVolpe (http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=68737), allows you to do whatever you need, hooking into Windows events, with no IDE crashing.
With this you can implement whatever is necessary in Windows.
The samples do crazy things that you might never have thought possible.