I have a question about a family of software products, of which one example is INtime, that let you run a real-time operating system in parallel with Windows.
I have a reasonable grasp of how Windows works, including kernel/driver/application security rings, etc. Similarly, I know how an RTOS runs on a dedicated system.
The Simple Question:
How do these go about existing together without fighting over hardware or other similar problems? How is the allocation of resources made, and how is this integrated with Windows?
Slightly more complicated:
What are the steps I would have to take if I wanted to develop something similar myself? Are there any open-source embodiments of this paradigm I can inspect to glean some more understanding?
Here is my question:
Is there any service or technology for running parallel algorithms on multiple computers without knowing them?
For example: I write a parallel algorithm. My friends install a simple client app, and if they have an internet connection, they can help my calculation with their spare processor capacity. I would like to see them as additional cores in my CPU.
If there is no technology like that, are there any unsolvable problems with developing one? (I know there must be a lot of problems with code transfer, operating systems, and compatibility.)
I believe that you can use BOINC to set up your own volunteer computing project. But I have no experience of this to report.
Recently I got a test task from a company, and one question was:
Suppose you are given a task to write a simple debugger (for a proprietary operating system) that is capable of setting a break-point in an application and running it. What would be key design decisions you make in such a task?
I feel like I'm missing something, but I have absolutely no idea what the expected answer is. I understand how debuggers work (INT 3, and you need access to the debuggee's virtual address space), but I suspect the answer hinges on the phrase "proprietary operating system".
This question, as is usual in an interview, should prompt you to ask more questions about the system and its requirements.
Does operating system already provide some kind of primitive tracing tool you can use?
What language are tested applications written in?
Some inspection tools, for example Valgrind, run the inspected programs in their own environment, which seems a good way to go in your case. The other approach is to instrument the binary with tracing instructions that communicate with your debugger - this is probably more suitable, and easier to do, when your applications run under a VM.
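To make the INT 3 mechanics from the question concrete, here is a minimal, self-contained sketch. All names are illustrative, and the "debuggee" is simulated by a local byte buffer, since on a real (and especially a proprietary) OS the reads, writes and trap notifications would go through whatever debug primitives that system exposes:

    #include <cstdint>
    #include <iostream>
    #include <map>

    // Classic x86 software breakpoint: patch the target instruction with INT 3
    // (0xCC), and undo the patch when it fires.
    struct Breakpoint { uint64_t addr; uint8_t original_byte; };

    class Debugger {
    public:
        explicit Debugger(uint8_t* image) : image_(image) {}

        void set_breakpoint(uint64_t addr) {
            Breakpoint bp{addr, image_[addr]};   // remember the byte we overwrite
            image_[addr] = 0xCC;                 // INT 3 opcode
            breakpoints_[addr] = bp;
        }

        // Called when the debuggee traps: the reported address is one past the
        // INT 3, so restore the saved byte and roll the instruction pointer
        // back by one before resuming.
        void on_trap(uint64_t& instruction_pointer) {
            uint64_t bp_addr = instruction_pointer - 1;
            auto it = breakpoints_.find(bp_addr);
            if (it == breakpoints_.end()) return;        // not one of ours
            image_[bp_addr] = it->second.original_byte;  // un-patch
            instruction_pointer = bp_addr;               // re-execute the original
            std::cout << "hit breakpoint at 0x" << std::hex << bp_addr << "\n";
        }

    private:
        uint8_t* image_;
        std::map<uint64_t, Breakpoint> breakpoints_;
    };

    int main() {
        uint8_t fake_code[16] = {0x90, 0x90, 0x90, 0x90};  // stand-in for machine code
        Debugger dbg(fake_code);
        dbg.set_breakpoint(2);

        uint64_t ip = 3;    // pretend the CPU trapped right after address 2
        dbg.on_trap(ip);    // restores the byte and rewinds ip to 2
        return 0;
    }

The key design decisions the interviewer is after are exactly the parts this sketch glosses over: how you read and write the debuggee's memory, how you get notified of the trap, and how you resume it on that particular OS.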
I don't mean for this question to be flame bait, but I'll be using Microsoft and their Win32 API as an example of a legacy API.
What I am wondering here is that Microsoft spends a lot of money and energy maintaining their legacy API, including all of the "glitches/bugs/workarounds" that are needed to keep the API functioning the same. I'm aware that in Windows 7 they provide a way for the customer to run their applications in a "Windows XP" VM, which would be one way for them to start cleaning up their Win32 API, because they could then push all of those applications into the "Windows XP" VM.
So what I am wondering is: is it possible to virtualize a legacy API in such a way that a customer/program can still access and use it, yet at the same time take advantage of the newer version/API? Because as far as I understand it, if the application is run in the "Windows XP" VM, it won't be able to access any of the newer APIs/features of Windows 7.
The thing that puzzles me about this question when it comes up is that Windows has been doing this since NT came out in the mid-nineties. This is how NT runs DOS and Win16 programs, and how it always has. The NTVDM virtualization layer runs 16-bit apps under Win32 with very little special support from the core OS. This is just one example - another is WINE, which as I understand it does a pretty reasonable job of running Windows apps on top of an API set which is very different from that of Windows. So it is definitely possible.
The more pertinent question would be why Microsoft would consider it. In order to think it is necessary, you have to believe two things: 1) there is something better to replace the Win32 API with, and 2) maintaining the Win32 API is a burden.
Both of these are questionable. In the case of kernel duties, such as accessing hardware, synchronizing, and managing threads, processes and memory, the Win32 API does a pretty good job and is ultimately quite close to what the kernel really does. If you think there is a better API, then that must mean there is also a better kernel. I personally don't think that NT needs replacing right now. For graphics and windowing, admittedly gdi32 is a bit long in the tooth, but Microsoft solved that problem by building WPF right alongside it.

This then brings in the burden question. Sure, there are two APIs to maintain, but if you virtualized GDI on top of WPF you'd still have to maintain both anyway, so there is no benefit there. The advantage of running both in parallel is that GDI already exists and is already tested. All you have to do is fix the occasional bug, whereas a new virtualization layer would have to be written and tested all over again, which takes time away from making WPF better.
In terms of maintaining backward compatibility, that isn't as much of a burden as it sounds. It is mainly a testing question - you have to test that the API behaviour doesn't change, but again, those tests have already been written, so it isn't really any extra work.
So, to answer a question with a question, why would they bother?
This is an interesting question, at least to me; here are some of my thoughts.
Your understanding is correct: an application running in the XP VM only has access to the Win32 APIs provided by XP in the VM. One of the ways I have seen Microsoft approach enhancing specific APIs is to create new functions with the enhanced/fixed functionality and name the new function by appending Ex (or even ExEx) to the original name, for example:
GetVersion
GetVersionEx
For functions that accept pointers to structures, the structures are 'versioned' by using the size of the structure to determine the functionality required, so older code passes the previous, smaller size of the structure while newer code passes in the newer, larger structure, and the API behaves accordingly.
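GetVersionEx is a concrete example of both conventions: the Ex suffix and the size-versioned structure. A minimal sketch of how a caller uses it, with the real OSVERSIONINFOEX structure and its dwOSVersionInfoSize field:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // The caller declares which "version" of the structure it is passing by
        // filling in the size field before the call; the API checks that size
        // and only fills in the fields that version of the structure has.
        OSVERSIONINFOEX info = {};
        info.dwOSVersionInfoSize = sizeof(info);   // newer, larger structure

        if (GetVersionEx(reinterpret_cast<OSVERSIONINFO*>(&info))) {
            std::printf("Windows %lu.%lu, service pack %u\n",
                        info.dwMajorVersion, info.dwMinorVersion,
                        info.wServicePackMajor);
        }
        return 0;
    }

Older code that passes a plain OSVERSIONINFO (with its smaller size) keeps working against the very same function, which is the whole point of the convention.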
I guess the problem has become that it is no longer just a matter of differences in how an API works; the changes are more integral to the functioning of the operating system, and the internal structures have changed significantly enough that arguably badly written code is effectively broken.
As to your actual question, I guess it would be quite tough. Even if the OS adjusted how it executes code based on a target OS version in the PE header of the executable, what would happen if a newer DLL targeting the latest OS were loaded into the process? How should the OS handle that code when it executes? IMHO this would be very challenging, an approach fraught with pitfalls that would ultimately fail.
Of course, those are just my raw thoughts on the topic, so I might be 100% wrong and there may be some simple approach that just did not come to mind.
I want to create an embedded system using Linux, similar to an e-book reader, using an ARM9 processor. I am not an electronics expert but I would love to learn. I know the basics of electronics, like transistors, flip-flops, multiplexers, etc. I love software and would like to create something like an e-book reader. Is it possible for a software engineer to create an embedded system? I do not want to buy a single-board computer available in the market; I want to create it myself.
Where do I get some kind of tutorial?
Is my knowledge of operating systems enough to create such a system?
Building such a system requires knowledge from multiple engineering disciplines. Realistically you can only achieve it by buying off-the-shelf modular components and assembling them, and in the case of an e-book reader, even putting the modular components together won't be pleasant.
Also, learning any single one of the disciplines needed will take a long and concentrated effort.
To (loosely) indicate the problem areas:
You need a computing platform of the right form factor with all the right chipsets (Apple has, as of recently, integrated their own single CPU, using hardware designs from multiple companies). You will not find a suitable computing platform of the right form factor. (Electronic Engineer: Digital Designer, Analog Designer)
You need to attach an LCD to that platform, along with other peripherals such as USB, a charging port, WiFi, etc.
(Electronic Engineer, Product Designer)
You need to build a case for the platform.
(Product Designer)
You need to get an embedded operating system (potentially real-time) working on your platform that fits your needs.
(Embedded programmer, Kernel Programmer)
You need to extend said operating system to behave the way you want.
(Application Programmer, Graphics Programmer)
The most important part is the platform, and getting a suitable one is very hard and very expensive. The original iPhone had a platform created by a third party, which Apple bought and used to apply points 2-5 -- and it still took their best engineers a long time to make a prototype.
Not really; hardware engineering is a degree-level subject in its own right, and you need at least three different specialities to do that job. Not to mention that CAD software and CNC machines cost a heck of a lot more than gcc, so hardware engineers' overheads are huge.
However, you can hire that done, for a substantial fee. Or you can use embedded boards and get the case design done for you.
For example, a beagleboard with these accessories in a custom case.
Or, a Gumstix overo with one of these and one of these in a custom case.
In either case, running some embedded Linux.
Development boards save a lot of time and money, but in both cases, if you have the capital you can get those boards boiled down into a custom board that will do just what you need for your application, and cost less in large numbers.
Do not underestimate the case design; you're looking at the thick end of a hundred thousand dollars just for the tooling to manufacture a plastic, die-cast metal or stamped metal case, without paying for the design work.
Creating embedded hardware from scratch requires a lot of expertise and resources. It would be better to start off with a low-cost evaluation board in order to learn the basics of embedded programming and interfacing first. That should keep you busy for a few months. Beyond that, embedded CPU suppliers typically have reference designs that you can incorporate into your own embedded product, but at this point you will need to start investing a lot of time, effort and money into tooling up for hardware design and development.
There is basically no need to create (I mean solder) the embedded system yourself. A good approach may be to buy a controller board like this or this. You need to be careful with the board, but there is nothing about it a software engineer could not manage; it has the familiar serial, USB and RJ45 ports and normally already boots Linux. Finding an enclosure and connecting peripherals (including analog/digital converters, or adding some relays to the output ports) is fully within the capabilities of someone who also wants some hands-on work with the hardware. Expect to develop in C.
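As a taste of what that development looks like, here is a minimal sketch that blinks an output pin through the legacy sysfs GPIO interface many such Linux boards expose. The pin number is hypothetical and board-specific, and the sketch is in C++ purely for illustration:

    #include <chrono>
    #include <fstream>
    #include <string>
    #include <thread>

    // Toggle a GPIO line through /sys/class/gpio, the legacy sysfs interface
    // found on many embedded Linux boards. Pin 168 is purely illustrative.
    int main() {
        const std::string pin = "168";  // hypothetical pin driving a relay/LED

        // Ask the kernel to export the pin so it appears under /sys/class/gpio.
        std::ofstream("/sys/class/gpio/export") << pin;

        const std::string base = "/sys/class/gpio/gpio" + pin;
        std::ofstream(base + "/direction") << "out";   // configure as output

        for (int i = 0; i < 10; ++i) {
            std::ofstream(base + "/value") << (i % 2); // blink: 0,1,0,1,...
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
        }

        std::ofstream("/sys/class/gpio/unexport") << pin;  // release the pin
        return 0;
    }

Most of the work on such a board is of this flavour: driving the board's peripherals from user space, or dropping down into kernel drivers when that is not enough.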
You can buy off-the-shelf hardware for embedded software development.
PC 104 Boards
How do companies like Valve manage to release games for all three major gaming platforms? I am interested in the best practices regarding code sharing, specifically between Windows, Xbox 360 and PS3, since the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform.
It's not any different than writing platform-independent code in other contexts. Hide platform-specific details (input, window interaction, the main event loop, threading, etc) behind generic interfaces, and test regularly on all the platforms you intend to support.
Note that the Cell's threading model is unusual enough that doing threading "generically" takes some care. I am not a Valve employee and I know none of their secrets, but it's my understanding that most game developers who want to target the PS3 use a job queue that the individual Cell processors grab tasks off of as needed. This isn't necessarily the best way to use the Cell, but it generalizes nicely to more conventional threading models (like, for example, the one that the PC and the 360 both use).
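Here is a minimal sketch of that generic job-queue pattern using standard C++ threads - nothing Cell- or engine-specific, and all names are illustrative - just to show workers grabbing tasks off a shared queue as they become free:

    #include <condition_variable>
    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // A tiny job queue: workers pull jobs as they become free, which maps onto
    // conventional thread pools and, conceptually, onto SPE-style workers.
    class JobQueue {
    public:
        void push(std::function<void()> job) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                jobs_.push(std::move(job));
            }
            cv_.notify_one();
        }

        void shutdown() {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                done_ = true;
            }
            cv_.notify_all();
        }

        // Workers call this in a loop; returns false once the queue is drained
        // and shut down.
        bool pop(std::function<void()>& job) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
            if (jobs_.empty()) return false;
            job = std::move(jobs_.front());
            jobs_.pop();
            return true;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> jobs_;
        bool done_ = false;
    };

    int main() {
        JobQueue queue;
        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i) {               // worker count is illustrative
            workers.emplace_back([&queue] {
                std::function<void()> job;
                while (queue.pop(job)) job();       // grab tasks as needed
            });
        }

        for (int i = 0; i < 16; ++i)
            queue.push([i] { std::cout << "job " << i << " done\n"; });

        queue.shutdown();
        for (auto& w : workers) w.join();
    }

The same game code that pushes jobs here can run unchanged whether the workers are PC threads, 360 hardware threads, or (with a suitable back end) SPE tasks.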
There's a bunch of Game Developer Magazine articles and GDC talks on the subject. In fact, since you mentioned Valve, they delivered a talk describing their approach at GDC08.
This is really a huge subject that I could talk about for hours upon hours (and have), but the elevator summary is:
Determine which parts of the engine are completely platform-specific and put them behind an abstraction. File and asset loading, for example, need to be rewritten for each console, but you can hide that behind an IFileSystem interface which provides a uniform API that the game code talks to (see the sketch at the end of this answer).
The PS3 makes this hard because its abstraction point has to be someplace completely different from the other platforms. Even game features like collision and nav will have to be written differently for the Cell.
Try to keep leaf game code (entities, AI, sim) as platform-agnostic as possible...
But accept that even the leafiest of game code will sometimes need some platform-specific #ifdefs for performance, memory, or TCR (technical certification requirement) reasons. A lot of UI will have to be rewritten because the manufacturers have conflicting certification requirements.
Anyone who says the words "I'm not worried about performance" or "memory isn't an issue" shouldn't be on the payroll.
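As a concrete (if much simplified) illustration of the first point above, here is a sketch of an IFileSystem-style abstraction. The interface name comes from the answer; the method names and the standard-library implementation are illustrative, and a console build would supply its own class written against the vendor's SDK:

    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>

    // Generic interface the game code talks to; one implementation per platform.
    class IFileSystem {
    public:
        virtual ~IFileSystem() = default;
        virtual std::vector<uint8_t> ReadWholeFile(const std::string& path) = 0;
    };

    // Desktop implementation in terms of the standard library. A console build
    // would define its own class here, against the vendor's file/asset APIs,
    // exposing exactly the same interface.
    class StdioFileSystem : public IFileSystem {
    public:
        std::vector<uint8_t> ReadWholeFile(const std::string& path) override {
            std::ifstream in(path, std::ios::binary);
            return std::vector<uint8_t>(std::istreambuf_iterator<char>(in),
                                        std::istreambuf_iterator<char>());
        }
    };

    // Leaf game code stays platform-agnostic: it only ever sees IFileSystem.
    std::vector<uint8_t> LoadLevel(IFileSystem& fs, const std::string& name) {
        return fs.ReadWholeFile("levels/" + name);
    }

    int main() {
        StdioFileSystem fs;
        auto data = LoadLevel(fs, "e1m1.lvl");  // file name is illustrative
        return data.empty() ? 1 : 0;
    }

The point is not the file I/O itself but where the seam sits: everything above the interface compiles and behaves identically on every platform, and everything below it is allowed to be completely different.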
This question can be divided up into two separate questions. "How can I write portable code?" and "What are the divergent requirements of mainstream gaming platforms?".
The first question is relatively easy to answer. Best practices for abstracting your non-portable code are covered in Write Portable Code:
http://books.google.ca/books?id=4VOKcEAPPO0C&printsec=frontcover
Turning theory into practice, the Quake 3 source code (available at http://www.idsoftware.com/business/techdownloads/) does a pretty good job of dividing the different platforms into separate areas of a C codebase. However, it does not demonstrate C++ patterns such as abstract interfaces implemented once per platform.
The second part of your question, "What are the divergent requirements of mainstream gaming platforms?" is tougher. However, it is notable that your largest areas of change are still your renderer, your audio subsystem and your networking.
Each console platform has a series of certification requirements, available under an agreement with the respective console owners. The requirements drive consistency in user experience and are not focused on gameplay or qualitative, high level issues. For instance, your game may need to display a reasonably interesting animating loading screen, and black screens are unacceptable.
Getting your hands on this documentation as soon as possible is key to making the right choices in developing for a specific console platform.
Finally, if you can't get your hands on a console devkit, I suggest you port your code to the Mac from Windows. The Mac gets you an OS port, ensuring you are not tied to Windows, as well as a processor port if you support universal binaries. This ensures your code is endian agnostic.
If you support both PC and Mac, you will be well positioned to support a third platform, should you gain access to it in the future.
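To make the endianness point concrete, here is a small illustration (the header bytes are made up): decoding a little-endian field byte by byte gives the same answer on every host, while a raw memcpy of the bytes depends on host byte order - exactly the kind of bug a PowerPC/x86 port flushes out:

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Decode a 32-bit little-endian value byte by byte. This is correct on both
    // little- and big-endian hosts.
    uint32_t read_le32(const uint8_t* p) {
        return  static_cast<uint32_t>(p[0])
             | (static_cast<uint32_t>(p[1]) << 8)
             | (static_cast<uint32_t>(p[2]) << 16)
             | (static_cast<uint32_t>(p[3]) << 24);
    }

    int main() {
        const uint8_t header[4] = {0x78, 0x56, 0x34, 0x12};  // made-up file header

        uint32_t portable = read_le32(header);       // always 0x12345678

        uint32_t naive;
        std::memcpy(&naive, header, sizeof(naive));  // value depends on host endianness

        std::cout << std::hex << portable << " vs " << naive << "\n";
        return 0;
    }

On an x86 machine both values agree, which is why the bug hides until you build for a big-endian target.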
Addendum: You wrote:
the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform
In many game porting scenarios, the ideal solution is not to reuse as much code as possible, but to write the optimal code for each platform. Code can be reused between projects and is relatively inexpensive as compared to the content that the engine takes in. A more reasonable goal is to aim for lowest common denominator content that runs on all platforms without modification (a build phase that packs the content for media is okay).
It's great to do simultaneous development. You find all kinds of bugs you wouldn't find doing just one platform.
I remember that programmers in DOS had null pointers all the time because writing to low memory didn't immediately crash them. When you ported to an Amiga, Atari ST, or Macintosh, boom! I remember telling a DOS programmer that he had a couple of null pointers in an already-shipped game. He thought for a couple of seconds and grinned, "That explains a few things."
Now that games have such large budgets, it's important to ship them all at the same time so you don't waste marketing and ad budgets.
My advice on simultaneous development is to pick one lead platform, but never let the other platform(s) get more than a week behind. It will become obvious as you program which parts of the code are common to all platforms and which are different. Pull out the differences into one or more platform-specific areas.
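For the small differences, one common shape for such a platform-specific area is a single function with per-platform implementations selected at compile time. A minimal sketch - the function name is illustrative, while OutputDebugStringA is the real Win32 call:

    #include <cstdio>

    // Shared game code calls platform_debug_print(); each platform-specific area
    // supplies its own implementation, chosen at compile time.
    #if defined(_WIN32)
    #include <windows.h>
    static void platform_debug_print(const char* msg) {
        OutputDebugStringA(msg);   // shows up in the debugger's output window
        std::fputs(msg, stderr);
    }
    #else
    static void platform_debug_print(const char* msg) {
        std::fputs(msg, stderr);   // other targets: plain stderr (or a vendor TTY API)
    }
    #endif

    int main() {
        platform_debug_print("common game code, platform-specific plumbing\n");
        return 0;
    }

Whether you use compile-time selection like this or runtime interfaces matters less than keeping the #ifdefs corralled into those few platform-specific files rather than scattered through the game code.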
My experience is in C/C++. It's a bigger problem if you have to port between different languages (say, Java and Objective-C).
A few years ago the Opera CEO said in an interview that the key to developing for independent platforms is to move away from any single OS/platform's libraries. He went on to say that they developed their own libraries that improve on OS performance.
My assumption is that big companies have separate teams - a common one, plus Xbox, PS, Windows, FooOS teams. Each platform needs to be tweaked differently and requires different implementation methods. I don't think they do one source for all platforms; rather, they build one for each OS, thereby improving efficiency. I remember EA used to release some console games earlier than the PC versions and vice versa.
Another issue is that different consoles have different hardware thus requiring different programming techniques.
There are two extremes: build one source that fits all (Java, for instance), at the risk of inefficiency, or write 40 versions, one optimized for each platform.
Back when I had a friend into educational computer games (before The Learning Company gutted the field), he was a great fan of creating cross-platform libraries for doing everything.
This is easier for games than other apps. If you have a word processing app to run on the Mac and Windows, for example, it really does need to look and behave like a Mac app on the Mac, and a Windows app on Windows. Write a game, and it doesn't have to conform to the native behavior, look, and feel.
If you want open-source examples, you could look at the source code of the Quake 1, 2 and 3 engines. They are structured quite portably. (Of course, there is no PS3 or Xbox 360 support, but the same principles apply.)
http://www.idsoftware.com/business/techdownloads/