GUI paradigm history

I know that Apple and Microsoft were inspired by Xerox PARC in building the GUI, but my question is: from the hardware point of view, at what point did the switch to GUIs become possible? I remember reading somewhere about an OS of that era running in 80 KB. Can you explain (or give some links) what was necessary from the hardware point of view (memory, speed) to make the GUI possible?
I'm interested in the history of this paradigm shift.
Thanks.
PS: do you know how those 80 KB were used?

Here is an excellent article from Ars Technica on the history of UIs. Wikipedia also has an article on it.
To answer your question about when the switch became possible: I think a GUI of some sort was always possible, even in the days of analogue machines. You didn't have a windowing UI back then, but even in the '60s there were machines that displayed primitive user interfaces.
Machines that ran user interfaces in 64 KB of memory did so by using special hardware modes to display primitives without using a lot of RAM. For example, hardware sprites were crucial back then because they were the only way to get good graphics with little memory. Also, don't forget vector graphics and the various other innovations that can create user interfaces without much RAM.
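To make that concrete, here is a rough back-of-the-envelope comparison (my own illustrative numbers, not any specific machine's) of what a bitmapped framebuffer costs versus a character-cell display at the same screen size:

    #include <cstdio>

    int main() {
        // 320x200 screen: compare a 1-bit-per-pixel framebuffer against a
        // character-cell mode built from 8x8 glyphs.
        const int w = 320, h = 200;
        const int framebuffer = w * h / 8;    // 1 bpp bitmap: 8000 bytes
        const int cells = (w / 8) * (h / 8);  // 40x25 grid: 1000 bytes of screen RAM
        const int glyphs = 256 * 8;           // 256 glyphs x 8 bytes: 2048 bytes, often in ROM
        std::printf("1bpp framebuffer: %d bytes\n", framebuffer);
        std::printf("character mode:   %d bytes (+%d bytes of glyph data)\n",
                    cells, glyphs);
    }

Sprites work on the same principle: the hardware composites a few small, reusable bitmaps over the background, so the full screen never has to exist in RAM as one large image.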


What does the kernel do once put into memory?

I'm a first-year grad student trying to write an operating system from scratch as a side project. I've read The Linux Programming Interface, Modern Operating Systems (4th edition), a bunch of articles on the OSDev wiki, and anything I can find on Google, but I'm having a tough time figuring out what to do next after writing a simple bootloader and a kernel that can take user input and display it back on the screen.
I have a feeling that I need to create some drivers that interact with the file system and memory, but I'm not entirely sure. I'm trying to work my way up with just physical memory and a single "kernel" process for now; I'll worry about virtual memory (paging) and multiple processes later. If anyone can give me some direction, or a better understanding of what happens once the kernel is in memory, that would be great.
Thanks.
I would like to point you to a resource that will be of great help in understanding this in real detail: an excellent, evolving resource maintained on GitHub.
https://github.com/0xAX/linux-insides/tree/master/Booting
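To give a feel for what "next" usually looks like once the bootloader jumps into the kernel, here is a minimal sketch of a typical entry point. Every init_* helper here is hypothetical, a placeholder for a subsystem you would write yourself (and that the chapters above walk through for Linux):

    // Hypothetical freestanding C++; the helpers are placeholders, not a real API.
    void init_gdt();          // CPU descriptor tables (x86)
    void init_idt();          // interrupt/exception handlers
    void init_physical_mm();  // allocator over the bootloader-provided memory map
    void init_timer(int hz);  // periodic tick, needed later for a scheduler

    extern "C" void kernel_main() {
        init_gdt();           // 1. put the CPU into a known state
        init_idt();           // 2. make faults survivable before touching hardware
        init_physical_mm();   // 3. now you can hand out page frames
        init_timer(100);      // 4. tick source for the future scheduler
        // 5. from here: the console/keyboard you already have, then a file
        //    system driver, then loading and scheduling a first process.
        for (;;) asm volatile("hlt");   // idle until an interrupt arrives
    }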

Assessing feasibility of an OS X port

I need to figure out whether porting an application to Mac OS X (not iOS) is feasible. I wrote some code for the Mac around 20 years ago, but what I'm looking at now is completely different and may require a complete rewrite, which I cannot afford. After googling for some time, I found a variety of APIs that appear and get deprecated so often that I feel completely lost.
The application draws by copying small fragments of bitmaps to the window. This is accomplished with BitBlt() on Windows or XCopyArea() on X11. In both cases, the source is stored in video memory, so copying is really fast: 500K copies per second on a decent card, possibly more. On the Mac, there used to be a CopyBits() function which did the same, but it is now deprecated. I found CGContextDrawImage(), which looks like it's getting deprecated too, but it copies from user memory, and can only copy the whole image (not fragments). Is there any way to accomplish bitmap copying at decent speed?
I see everything is 64-bit. I would want to keep it 32-bit for a number of reasons. 32-bit applications still seem to be supported, but given the fast pace of deprecation, Apple may drop support at any time. Is this a correct assessment?
Software distribution: I cannot find any information on this. It looks like you need to be a member of the Apple Developer Program to be able to install your software on users' computers. Is this true? In some other places, I have read that any software must undergo Apple approval. Is this correct?
Thank you for your help.
So much has changed in the past twenty years that it may indeed be quite difficult to port your app directly to modern OS X. You may be better served by taking the general design concept and application objectives and creating a fresh implementation using up-to-date software technology.
Your drawing system might be much easier to do with modern APIs, but the first step is deciding which framework to use. Invest some time in reading the documentation and watching the many videos available on the Developer website. A logical place to start is Getting Started with Graphics & Animation, but you may also wish to explore Metal Programming Guide and SpriteKit.
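On the specific point about copying fragments: if you stay with Core Graphics rather than Metal, one approach (my assumption about your drawing model, and not necessarily the fastest path) is to crop the source with CGImageCreateWithImageInRect, which references the original pixels rather than copying them, and then draw the crop:

    // A minimal sketch in C++ against the Core Graphics C API.
    #include <CoreGraphics/CoreGraphics.h>

    void drawFragment(CGContextRef ctx, CGImageRef source,
                      CGRect srcRect, CGPoint dest) {
        // Crop without copying pixel data, then blit the crop.
        CGImageRef fragment = CGImageCreateWithImageInRect(source, srcRect);
        if (!fragment) return;
        CGRect target = CGRectMake(dest.x, dest.y,
                                   srcRect.size.width, srcRect.size.height);
        CGContextDrawImage(ctx, target, fragment);
        CGImageRelease(fragment);
    }

Whether this reaches BitBlt-like throughput is exactly the kind of thing to benchmark before committing to the port.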
The notion of 64-bit vs 32-bit is irrelevant. All Mac computers run 64-bit code.
If you don't purchase a Developer program membership, you can still create an unsigned application with Xcode. It can be installed on another user's computer, but they'll need to specifically change the setting in System Preferences -> Security to "Allow apps downloaded from: Anywhere".
The WWDC videos are very useful in understanding the concepts and benefits of advancements in these frameworks made over the past few years.
After some investigation, it appears that OS X graphics is completely different from the others: the screen is treated as a target for vector graphics, not bitmaps.
When the user changes the screen resolution, the resolution doesn't really change (as it would on Linux or Windows); it remains native, but the scale at which the vector graphics are rendered changes. Consequently, it's perfectly possible to set the screen "resolution" higher than the native one: you just see everything rendered smaller.
When you take a screenshot, the system simply renders everything to an off-screen bitmap (which can be any size), so you can get nice smooth screenshots at any size.
Since everything is vector-based, applications that use bitmap graphics are at a huge disadvantage. It is very hard to get at native pixels without a lot of overhead, and worse, an application that uses native pixels will behave strangely because it won't scale when the user changes the screen resolution. It will also have problems when screenshots are taken. Is it possible to make it work? I guess I won't find out until I fork over $2K for a MacBook Pro and try it.
32-bit apps seem to be supported, and I don't think there's any intent to drop that support.
As to code distribution, my Thawte Authenticode certificate is supposed to work on OS X as well, so I probably don't need to become a member of the Apple Developer Program to distribute software, but again, there's no definitive answer to that until I try.

Guidance: I want to work at the process-information level

I couldn't find a suitable title for this. I'm going to express my query with examples.
Consider the following software:
Process Explorer from Sysinternals (an advanced task manager)
Resource Monitor, resmon.exe (lists every fine detail of each process's resource usage)
To me, these programs seem like miracles. I wonder how they're even made. How can a user process know such fine details about other processes? Who tells this software which processes are running and which resources they use? Which DLLs are loaded? And so on.
Does the Windows operating system give these programs that information? I mean through (obviously the lowest-level API) the Win32 API. Are there functions which, when called, return these values?
Abstractly speaking, something like:
GetAllRunningProcesses()
GetMemoryUsedByProcess(Process* proc)
etc.
Other similar applications:
Network packet-capture software: how does it get information about all those packets? It seemingly sits right in front of the NIC. How is that possible?
Anti-virus: it scans memory for viruses, intercepts other processes, and acts like a sandbox for user application space. How? How??
If it's the Win32 API, I swear I'm going to master it.
I don't want to create a multi-threaded application; I want to get information about other multi-threaded applications.
I don't want to create a program which communicates using sockets; I want to learn how to capture all communication packets.
I actually want to work at that lower level, but I don't know what I should learn. Please point me in the right direction.
This is really a pretty open-ended question. For things like a list of running processes, look up "PSAPI" or "Toolhelp32". For memory information about a particular process, you can use VirtualQuery.
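For example, a minimal Toolhelp32 sketch that lists every running process (error handling trimmed; this assumes a non-Unicode build, so szExeFile is a plain char string):

    #include <windows.h>
    #include <tlhelp32.h>
    #include <cstdio>

    int main() {
        // Take a snapshot of all processes, then walk it.
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE) return 1;

        PROCESSENTRY32 pe;
        pe.dwSize = sizeof(pe);   // must be set before the first call
        if (Process32First(snap, &pe)) {
            do {
                std::printf("%6lu  %s\n", pe.th32ProcessID, pe.szExeFile);
            } while (Process32Next(snap, &pe));
        }
        CloseHandle(snap);
    }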
Capturing network packets is normally done by installing a device driver. If you look, you should be able to find a fair amount about how to write device drivers, though don't expect to create wonders overnight, and do expect to crash your machine a few times in the process (device drivers run in kernel mode, so it's easy for a mistake to crash the machine hard).
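In practice, most capture tools don't write that driver themselves: they sit on top of an existing capture driver through libpcap (WinPcap on Windows). A minimal sketch, where "eth0" is a placeholder device name:

    #include <pcap/pcap.h>
    #include <cstdio>

    // Called once per captured packet.
    static void onPacket(u_char *, const pcap_pkthdr *hdr, const u_char *) {
        std::printf("captured %u bytes\n", hdr->len);
    }

    int main() {
        char errbuf[PCAP_ERRBUF_SIZE];
        // device, snap length, promiscuous mode, read timeout (ms), error buffer
        pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (!handle) { std::fprintf(stderr, "%s\n", errbuf); return 1; }
        pcap_loop(handle, 10, onPacket, nullptr);   // capture 10 packets, then stop
        pcap_close(handle);
    }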
I can't say as much with any certainty about anti-virus, because I've never tried to write one. My immediate guess would be that their primary technique is API hooking. There's probably more to it than that, but offhand I've never spent enough time looking at them to know what.
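If the API-hooking guess is right, Microsoft's Detours library is one well-known way to do it from user mode. A minimal sketch of the idea (to be clear, I'm not claiming any particular AV product works exactly this way):

    #include <windows.h>
    #include <detours.h>

    // Keep a pointer to the real function; Detours rewrites it to a trampoline.
    static int (WINAPI *TrueMessageBoxW)(HWND, LPCWSTR, LPCWSTR, UINT) = MessageBoxW;

    static int WINAPI HookedMessageBoxW(HWND h, LPCWSTR text, LPCWSTR, UINT type) {
        // A real interceptor would inspect or log the call here.
        return TrueMessageBoxW(h, text, L"[intercepted]", type);
    }

    int main() {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueMessageBoxW, HookedMessageBoxW);
        DetourTransactionCommit();

        MessageBoxW(nullptr, L"hello", L"original title", MB_OK); // title is replaced

        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueMessageBoxW, HookedMessageBoxW);
        DetourTransactionCommit();
    }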
Mark Russinovich's classic, Windows Internals, is the go-to book if you want to get deep into this kind of stuff. I notice that the just-released 5th edition covers Vista. Here's a sample chapter to peek at.
If you like Process Explorer, this is the guy who wrote that, and there are lots of examples using it in the book.
Plus, at 1232 hardcover pages, you can use it to press your clothes.

How can I learn about cross platform game development?

How do companies like Valve manage to release games on all three major gaming platforms? I am interested in best practices regarding code sharing, specifically between Windows, Xbox 360, and PS3, since the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform.
It's not any different than writing platform-independent code in other contexts. Hide platform-specific details (input, window interaction, the main event loop, threading, etc) behind generic interfaces, and test regularly on all the platforms you intend to support.
Note that the Cell's threading model is unusual enough that doing threading "generically" takes some care. I am not a Valve employee and I know none of their secrets, but it's my understanding that most game developers who want to target the PS3 use a job queue that the individual Cell processors grab tasks from as needed. This isn't necessarily the best way to use the Cell, but it generalizes nicely to more conventional threading models (like, for example, the one that the PC and the 360 both use).
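A minimal sketch of that job-queue pattern, using standard C++ threads to stand in for whatever the platform actually provides (SPU task lists on the Cell, plain worker threads elsewhere):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class JobQueue {
    public:
        void push(std::function<void()> job) {
            { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
            cv_.notify_one();
        }
        void workerLoop() {                        // each worker runs this
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [&] { return done_ || !jobs_.empty(); });
                    if (done_ && jobs_.empty()) return;   // drain, then exit
                    job = std::move(jobs_.front());
                    jobs_.pop();
                }
                job();                             // run outside the lock
            }
        }
        void shutdown() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> jobs_;
        bool done_ = false;
    };

    int main() {
        JobQueue q;
        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i) workers.emplace_back([&] { q.workerLoop(); });
        for (int i = 0; i < 16; ++i) q.push([] { /* a unit of game work */ });
        q.shutdown();
        for (auto &t : workers) t.join();
    }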
There are a bunch of Game Developer Magazine articles and GDC talks on the subject. In fact, since you mentioned Valve, they delivered a talk describing their approach at GDC08.
This is really a huge subject that I could talk about (and have) for hours upon hours, but the elevator summary is:
Determine which parts of the engine are completely platform-specific and put them behind an abstraction. File and asset loading, for example, needs to be rewritten for each console, but you can hide that behind an IFileSystem interface which provides a uniform API that the game code talks to (see the sketch after this list).
The PS3 makes this hard because its abstraction point has to be someplace completely different from the other platforms. Even game features like collision and nav will have to be written differently for the Cell.
Try to keep leaf game code (entities, AI, sim) as platform-agnostic as possible...
But accept that even the leafiest of game code will sometimes need some platform-specific #ifdefs for perf or memory or TCR reasons. A lot of UI will have to be rewritten because the manufacturers have conflicting certification requirements.
Anyone who says the words "I'm not worried about performance" or "memory isn't an issue" shouldn't be on the payroll.
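Here is the IFileSystem idea from the first point above as a minimal sketch; the names and the single ReadWholeFile method are illustrative, not anyone's real engine API:

    #include <cstddef>
    #include <cstdio>

    // Game code only ever sees this interface.
    class IFileSystem {
    public:
        virtual ~IFileSystem() = default;
        virtual bool ReadWholeFile(const char *path, void *buf,
                                   std::size_t cap, std::size_t *outSize) = 0;
    };

    // One implementation per platform, selected at build (or startup) time.
    class StdioFileSystem : public IFileSystem {   // the PC build
    public:
        bool ReadWholeFile(const char *path, void *buf,
                           std::size_t cap, std::size_t *outSize) override {
            std::FILE *f = std::fopen(path, "rb");
            if (!f) return false;
            *outSize = std::fread(buf, 1, cap, f);
            std::fclose(f);
            return true;
        }
    };
    // class Ps3FileSystem : public IFileSystem { ... };   // console builds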
This question can be divided up into two separate questions. "How can I write portable code?" and "What are the divergent requirements of mainstream gaming platforms?".
The first question is relatively easy to answer. Best practices for abstracting your non-portable code are covered in Write Portable Code:
http://books.google.ca/books?id=4VOKcEAPPO0C&printsec=frontcover
Turning theory into practice, the Quake 3 source code does a pretty good job of dividing different platforms into separate areas for a C codebase, available at http://www.idsoftware.com/business/techdownloads/. However, it does not demonstrate C++ patterns such as abstract interfaces implemented once per platform.
The second part of your question, "What are the divergent requirements of mainstream gaming platforms?" is tougher. However, it is notable that your largest areas of change are still your renderer, your audio subsystem and your networking.
Each console platform has a series of certification requirements, available under an agreement with the respective console owners. The requirements drive consistency in user experience and are not focused on gameplay or qualitative, high level issues. For instance, your game may need to display a reasonably interesting animating loading screen, and black screens are unacceptable.
Getting your hands on this documentation as soon as possible is key to making the right choices in developing for a specific console platform.
Finally, if you can't get your hands on a console devkit, I suggest you port your code from Windows to the Mac. The Mac gets you an OS port, ensuring you are not tied to Windows, as well as a processor port if you support universal binaries, which ensures your code is endian-agnostic.
If you support both PC and Mac, you will be well positioned to support a third platform, should you gain access to it in the future.
Addendum: you wrote:

    the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform
In many game porting scenarios, the ideal solution is not to reuse as much code as possible, but to write the optimal code for each platform. Code can be reused between projects and is relatively inexpensive as compared to the content that the engine takes in. A more reasonable goal is to aim for lowest common denominator content that runs on all platforms without modification (a build phase that packs the content for media is okay).
It's great to do simultaneous development. You find all kinds of bugs you wouldn't find doing just one platform.
I remember that DOS programmers had null pointer bugs all the time, because writing to low memory didn't immediately crash anything. When you ported to an Amiga, Atari ST, or Macintosh: boom! I remember telling a DOS programmer that he had a couple of null pointers in an already-shipped game. He thought for a couple of seconds and grinned, "That explains a few things."
Now that games have such large budgets, it's important to ship them all at the same time so you don't waste marketing and ad budgets.
My advice on simultaneous development is to pick one lead platform, but never let the other platform(s) get more than a week behind. It will become obvious as you program which parts of the code are common to all platforms and which are different. Pull out the differences into one or more platform-specific areas.
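One common C/C++ way to "pull out the differences", complementary to abstract interfaces: a single shared declaration with one implementation per platform, chosen by the build system. Shown in one file with #ifdefs for brevity; the names are hypothetical:

    // Shared declaration that game code sees on every platform.
    double SecondsNow();   // monotonic timestamp in seconds

    #if defined(_WIN32)
    #include <windows.h>
    double SecondsNow() {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&now);
        return double(now.QuadPart) / double(freq.QuadPart);
    }
    #else
    #include <ctime>
    double SecondsNow() {
        // Crude portable fallback; each console build would supply its own.
        return double(std::clock()) / CLOCKS_PER_SEC;
    }
    #endif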
My experience is in C/C++. It's a bigger problem if you have to port across different languages (say, Java and Objective-C).
A few years ago, the Opera CEO said in an interview that the key to developing for multiple platforms is to move away from any single OS/platform's libraries. He went on to say that they developed their own libraries, which improved performance across OSes.
My assumption is that big companies have separate teams for each platform: Xbox, PS, Windows, FooOS. Each platform needs to be tweaked differently and requires different implementation methods. I don't think they use one source for all platforms; rather, they build one per OS, thereby improving efficiency. I remember EA used to release some console games earlier than the PC versions, and vice versa.
Another issue is that different consoles have different hardware, requiring different programming techniques.
There are two extremes: build one source that fits all (Java, for instance) at the risk of inefficiency, or write 40 versions, one optimized for each platform.
Back when I had a friend into educational computer games (before The Learning Company gutted the field), he was a great fan of creating cross-platform libraries for doing everything.
This is easier for games than other apps. If you have a word processing app to run on the Mac and Windows, for example, it really does need to look and behave like a Mac app on the Mac, and a Windows app on Windows. Write a game, and it doesn't have to conform to the native behavior, look, and feel.
If you want open-source examples, you could look at the source code of the Quake 1, 2, and 3 engines. They are structured quite portably. (Of course, there's no PS3 or Xbox 360 support, but the same principles apply.)
http://www.idsoftware.com/business/techdownloads/

Control multiple PCs with single Mouse and Keyboard [closed]

As a programmer I find it very hard to use my laptop and workstation with two different sets of input devices. Can anyone suggest a good solution for using a single mouse and keyboard to control both machines?
I am not looking for a virtual machine or RDP solution that shows both machines on a single monitor.
Synergy.
Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk, since each system uses its own monitor(s). Redirecting the mouse and keyboard is as simple as moving the mouse off the edge of your screen. Synergy also merges the clipboards of all the systems into one, allowing cut-and-paste between systems. Furthermore, it synchronizes screen savers so they all start and stop together and, if screen locking is enabled, only one screen requires a password to unlock them all.
P.S. See also how to fix Synergy problems on Vista.
What you want is a small gadget called a KVM switch (keyboard, video and mouse switch). Googling for that term will hook you up with plenty of suppliers.
There is also a neat software solution called Synergy that lets you use your cursor and keyboard input over multiple computers connected by a network.
Yet another vote for Synergy as a software KVM solution. I'm not sure about the others, but it's unique in that it works even when your computers run different operating systems. It worked very well when I had a W2k/Linux setup across 3 computers.
Synergy is great, but also give something like VNC a try: it consolidates not only the keyboard and mouse but also the screen. In my case, my desktop monitor is much larger than my laptop's, and I'm more comfortable facing forward anyway (not looking off to the side where the laptop is).
There is a lag compared to using a KVM switch, but no loss in video quality.
In my experience Synergy is the best way to share one mouse and keyboard across multiple machines.
Others include:
- x2vnc
- x2x
- win2vnc
- osx2x
- win2x
... pretty much just take the OS/platform you're on and the one you want to connect to, and put a '2' in the middle. Type that into Google and you're good2go.
For my Linux machine I use QuickSynergy, since it provides a GUI for easier configuration. It also has a Mac OS version.
The best...
Synergy
I'll put in another vote for Synergy, but with a caveat - setup can be a little tricky. The first time I tried it, I could move my cursor over to another PC but I couldn't move it back. Spend some time with the documentation before you proceed.
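For what it's worth, the classic cause of the one-way cursor problem is a links section that only declares one direction: Synergy's links are directional, and each side must be listed explicitly. A minimal synergy.conf sketch with placeholder hostnames:

    section: screens
        desktop:
        laptop:
    end

    section: links
        desktop:
            right = laptop
        laptop:
            left = desktop    # without this line you can't move back
    end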
InputDirector is better than Synergy. Here's why...
It has built-in AES encryption functionality (without requiring you to install OpenSSH) for secure transfer of input between machines.
It allows cut & paste of text and files between machines (by automatically translating to C$ and D$ shares)
Based on extensive use with a laptop, it is far more reliable and stable than Synergy when reconnecting after undocking & docking. Synergy would frequently just stop working after docking and undocking, requiring me to kill it, restart it, and reconnect. InputDirector rarely has any issues.
The configuration UI is easier to use, and has more options, than Synergy.
Lots of little things, like matching of cursor location between machines during screen-edge transitions, and overriding mouse settings of "Slave" machines with those of the "Master" machine.
Beyond that, as far as I can tell, it does everything Synergy does. There's only a Windows version, but apparently it's Vista-compatible as well.
I've used both tools extensively, first Synergy and then InputDirector. InputDirector is just a more robust application: it has all the features of Synergy and then some, plus the key ones listed above. Its website isn't as attractive, and while it isn't GPL'd like Synergy, it's free nonetheless, and an outstandingly well-functioning tool.
I used to use a KVM switch, but lately I've started running all my computers as virtual machines on a single hardware platform. Each "system" is a window on my desktop!
I have a triple-monitor display, and I just use Remote Desktop to reach my other machines. I have 2-3 laptops on my desk at any given time, and 3 servers to administer. Over a 1 Gbps connection I have very little latency to worry about, and I can work on three computers at once without much trouble. This may or may not help you, but I thought I would throw it in there for you.
If you mean two machines on your desktop: a lot of places use KVM-style switches.
They come in legacy PC-style and also USB. The USB version works with Macs and PCs.
My experience is that the small desktop switches are a bargain, and if you learn the keyboard shortcuts, you'll jump back and forth without much problem.
The machine-room, 3-level-tree KVMs are also pretty useful. They flake out more often, but when you have 60 machines, you simply can't have 60 pairs of input devices.
I'll second Zarkonnen's comment about KVM switches, as I use one for this purpose all the time. However, let me share some rather frustrating experiences with them:
I have found that PS/2 interfaces tend to be somewhat more reliable on KVM switches than USB; I have had very bad experiences with some supposedly upmarket DVI-USB KVM kit from Gefen and Avocent. A quirk of my ViewSonic monitor, which would drop back to analog most of the time, exacerbated these problems to the point of making the system nearly unusable.
DVI and USB are finicky. DVI monitors will often time out and sleep if they get no signal; the KVM switch will then assume there is no monitor, and that status is passed back to the video card. USB interfaces also get put to sleep randomly.
The net effect was that it was very difficult to get two machines to boot up and work on the KVM switch, and the switch would lose keyboard or mouse input on one or both machines every few days. This was followed by an hour or more of trying to get all of the hardware to come up and play nicely. I had the same issue with the Avocent and Gefen switches on several different machines.
My older Belkin VGA/PS2 kit worked fine with the ViewSonic monitors over VGA, but I spent nearly £1000 on switches and cabling trying to get a working DVI-USB KVM setup.
In the end I got two HP LP2065 screens that didn't have the bug the ViewSonics exhibited. These have two DVI inputs, and I used one of my older Belkin PS/2 switches to switch the keyboard and mouse: the computers are plugged directly into the monitors, the monitors' input selectors are used to pick the computer, and the keyboard and mouse are switched through the KVM switch. This is the setup I'm using today.
The monitors and the KVM have to be switched individually, but this is much more reliable than the DVI-USB KVM switches, which really did not work at all. Caveat emptor.
You should also check out Multiplicity from Stardock.
