I've got a few old mobos and I was wondering whether it might be possible to create a pair of PCI header cards with interconnect wires, and write some software to drive the interconnect cards, to allow one of the mobos to access the CPU and RAM on the other. I'm sure it would be an arduous undertaking, involving writing a device driver for the header boards and then an application to make use of the interconnect: perhaps a simple demo with a thread running on each processor and both sets of RAM in use, or perhaps a mini virtual machine that maps 2x3 GB of RAM on 32-bit mobos into a single 6 GB address space. A microcontroller may be needed on each PCI header card to act as a translator.
Given that mobos almost always have multiple PCI slots, I wonder if these interconnected card pairs could be used to daisy-chain mobos in a sort of high-speed Beowulf cluster.
I would use Debian on each mobo, and probably just an ATmega128 on each card with a couple of ribbon cables for the interconnect.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
Does anyone have any advice, or has this sort of thing been done before?
Update:
Thanks Martin. What you say makes sense, and it also seems that if it were possible, it would already have been done.
Instead, would it be possible to indirectly control the slave CPU by booting it from a "pretend" bootable storage device (hard disk, USB stick, etc.)? As long as the slave mobo thinks it's being operated by an operating system on a real device, it should work.
This could potentially extend to any interface (SATA, IDE, USB, etc.): if you connected two PCs together with a SATA/IDE/USB cable (plug one end of an IDE ribbon into one mobo and the other end into another mobo), that would be all the hardware you need. The key is in creating a new driver for that interface on the master PC, so that instead of the master PC treating the interface as having a storage device on it, the interface would be driven as a dummy bootable hard disk for the slave computer. This would still be a pretty difficult job for me, because I've never written device drivers before, but at least I wouldn't need a soldering iron (which would be much further beyond me).
I might be able to take an open-source IDE driver for Linux, study it, and then butcher it to create something that kind of acts in reverse (instead of getting data off it, an application puts data onto it for the slave machine to access like a hard disk). I could then take a basic Linux kernel and try booting the slave computer from an application on the master computer (via the butchered master-PC IDE/SATA/USB device driver). For safety, I would probably try to isolate my customised driver as much as possible by targeting an interface not being used for anything else on the master PC (the master PC might use all SATA hard disks, with the IDE bus normally unused, so a custom IDE driver might cause fewer problems with the host system, because the host is SATA-driven).
Does anyone know if anything like this (faking a bootable hard disk from another PC) has ever been tried before? It would make a pretty cool Hackaday video on YouTube, but more seriously, it could add a new dimension to parallel computing if it proved promising.
The PCI bus can't take over the other board's CPU.
You could make an interconnect that can transfer data from a program on one machine to another. An Ethernet card is the most common implementation, but for high-performance clusters there are faster direct connections like InfiniBand.
Unfortunately, PCI is more difficult to build cards for than the old ISA bus: you need surface-mount controller chips and specific track layouts to meet PCI's impedance requirements.
Going faster than a few megabits per second involves understanding things like transmission lines and the characteristics of the connection cable.
I would use Debian on each mobo, and probably just an ATmega128 on each card with a couple of ribbon cables for the interconnect.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
LOL. PCI is a 32-bit, 33 MHz bus at minimum. So it's simply out of reach for an ATmega.
But your idea of:
a pair of PCI header cards with interconnect wires, and write some software to drive the interconnect cards, to allow one of the mobos to access the CPU and RAM on the other [...]
This is cheaply possible with just a pair of PCI FireWire (IEEE 1394) cards (and a FireWire cable). There is even a Linux driver that allows remote debugging over FireWire.
Related
I have data arriving in a buffer on my serial port, and the link is designed to transfer as much data as possible: the controller can produce around ~11,500 samples/sec when no computer is attached. If I attach the controller (which sends the data) to a Windows 10 machine and try to read the data (with Java), the rate drops to around 950-1000 samples/sec. If I attach the controller to the same machine, with the same software, but under Ubuntu, it reaches around 6000-7000 samples/sec. So, is there a way to improve serial port performance under Windows?
If you are using a cheap USB-serial adapter, it's possible it only works at USB 1 speeds, and/or the driver is basically a rebadged device-driver demo from the Windows SDK (which is very poor). More expensive adapters generally have their own drivers, which are much better at higher data rates.
It is also possible that there is some sort of hardware flow control that isn't quite as efficient on Windows as it is on Linux.
Without knowing exactly which USB-serial adapter you have, it's difficult to make suggestions.
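A side note on the reading side: on Windows, reading byte-by-byte with default timeouts adds a lot of per-call overhead, so pulling data in large chunks with explicit buffer sizes and timeouts often helps regardless of the driver. Here's a rough Win32 C sketch; "COM3" and 115200 baud are placeholders for your setup, not values from the question:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the port; "COM3" is a placeholder for your device. */
        HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ, 0, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* Line settings: 115200 8N1 is just an example. */
        DCB dcb = { 0 };
        dcb.DCBlength = sizeof(dcb);
        GetCommState(h, &dcb);
        dcb.BaudRate = CBR_115200;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(h, &dcb);

        /* Bigger driver-side buffers, and timeouts that return whatever is
         * already buffered instead of blocking per byte. */
        SetupComm(h, 1 << 16, 1 << 16);
        COMMTIMEOUTS to = { 0 };
        to.ReadIntervalTimeout        = MAXDWORD;
        to.ReadTotalTimeoutMultiplier = MAXDWORD;
        to.ReadTotalTimeoutConstant   = 50;   /* ms */
        SetCommTimeouts(h, &to);

        /* Read in large chunks rather than one byte at a time. */
        unsigned char buf[4096];
        DWORD n;
        while (ReadFile(h, buf, sizeof(buf), &n, NULL)) {
            /* process n bytes here */
        }

        CloseHandle(h);
        return 0;
    }

If the Java side is reading one byte per call, batching the reads like this (or enlarging the Java library's receive buffer) is usually the first thing worth trying.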
I have read that USB HDDs are hot-pluggable but not removable, whereas USB flash drives are both removable and hot-pluggable. Internally, the Windows DEVICE_OBJECT structure has a Characteristics flag that can have the value FILE_REMOVABLE_MEDIA for removable media (not a removable device). Also, the STORAGE_HOTPLUG_INFO structure has a DeviceHotplug boolean member that says whether the device is hot-pluggable or not. Can you please justify your answer with a little detail?
David Zeuthen explains it best:
[...] "removable" means that the media of the device is removable. For
example, CD-ROM drives or Nin1 card readers for flash media. [...]
ATA disks connected via eSATA aren't removable, you can't remove the
platters.
Yet of course, you can intuitively understand that even non-removable devices can be hotpluggable (i.e. you can plug and unplug the entire device as a whole, as opposed to inserting/removing the media it contains).
Now, all (modern) buses in use in current systems are hotpluggable - most new systems allow you to add/remove SATA disks while the system is running.
Indeed, you shouldn't have to care much about whether something is hot-pluggable or not anymore: virtually all storage devices are. (In the past, you had to shut down the machine to manipulate storage devices.)
So it should follow that external USB drives (either HDDs or flash sticks), for example, should report as non-removable and, hopefully, always hot-pluggable.
Unfortunately:
Of course, hardware sucks so virtually all USB keyfobs reports "removable==1" probably because the maker of the device wanted to be "helpful" and make things work better on windows.
I have no sources regarding the real reasons, but it turns out that many USB drives report themselves as removable too. David's suggestion that it might be because of certain operating systems which didn't use to support hotplugging but did support removable devices (CD-ROMs, etc) sounds reasonable: the manufacturers reused the same technique to trick the OS into letting the user "eject" USB drives.
Nowadays I would guess all modern operating systems make the distinction clear, and this has many advantages from a management standpoint (e.g. you might have a hotpluggable DVD drive with removable DVDs and you would thus need to be more clear about which you want to interact with). Still, older drives and old habits die hard, so you'll still find some "removable" USB drives even if they're really not.
Note: The bug report linked is about udisks which is more often found in the free software world. But again, I'm sure all systems make the distinction now even if the terminology is not exactly the same. Also note that the terminology is really rather arbitrary, though whichever terms you use for these two concepts best be well understood.
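If you want to check what a particular device of yours actually reports, the STORAGE_HOTPLUG_INFO structure mentioned in the question can be queried from user mode with IOCTL_STORAGE_GET_HOTPLUG_INFO. A minimal C sketch; \\.\PhysicalDrive1 is a placeholder for whichever disk you want to inspect, and you'll likely need to run it elevated:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the whole disk with query-only access (placeholder path). */
        HANDLE h = CreateFileA("\\\\.\\PhysicalDrive1", 0,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        STORAGE_HOTPLUG_INFO info = { 0 };
        DWORD bytes = 0;
        if (!DeviceIoControl(h, IOCTL_STORAGE_GET_HOTPLUG_INFO,
                             NULL, 0, &info, sizeof(info), &bytes, NULL)) {
            fprintf(stderr, "DeviceIoControl failed: %lu\n", GetLastError());
            CloseHandle(h);
            return 1;
        }

        /* MediaRemovable is the CD-ROM-style "media can be removed" flag;
         * DeviceHotplug means the whole device can be unplugged. */
        printf("MediaRemovable: %u\n", (unsigned)info.MediaRemovable);
        printf("MediaHotplug:   %u\n", (unsigned)info.MediaHotplug);
        printf("DeviceHotplug:  %u\n", (unsigned)info.DeviceHotplug);

        CloseHandle(h);
        return 0;
    }

Running it against a typical USB stick versus an internal SATA disk should make the removable/hot-pluggable distinction (and the "helpful" lies some devices tell) very visible.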
A simple Google search could have answered your question...
Hot plugging is the ability to replace or install a device without shutting down the attached computer. Hot plugging is implemented when a peripheral device is added or removed; a device or working system requires reconfiguration; a defective component requires replacement or a device and computer require data synchronization. Also known as hot swapping. Hot swapping allows easy accessibility to equipment and the convenience of uninterrupted systems.
Removable media are data storage devices capable of computer system removal without powering off the system. Removable media devices are used for backup, storage or transportation of data.
Source: techopedia.com
A friend of mine asked me this question in class and I could not answer it. He asked:
We know the kernel controls the physical hardware via device drivers. What if all this functionality were kept inside the device controller itself, rather than the kernel managing it? What would be the consequences of such a scenario? Good or bad?
I searched online for this question but could not find information about this scenario. Maybe I'm not googling with the right keywords.
Your insight into this will help me clear up my concepts.
Please answer.
Thanks.
Your question seems to propose the elimination of the "device driver" by "keeping" "control (of) the physical hardware ... inside the device controller". The premise for this seems to be:
the kernel controls the physical hardware via device drivers.
That description of a device driver is similar to something I've seen written for end-user comprehension rather than from a developer's perspective. The end-user is aware of the device, and it is the device driver that takes that abstraction and can control that device down to the specific control bits of each device port.
But a device driver is responsible for mundane housekeeping tasks such as:
maintaining device status and availability;
configuring the device for operation;
managing data flow, setting-up/tearing-down data transfers, copying data between user space and kernel space;
handling interrupts and exceptions.
These tasks are integral to a device driver and cannot be transferred out of the kernel driver's purview to a peripheral device.
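To make that concrete, here is a very rough Linux character-driver skeleton (all the mydev_* names and MYDEV_IRQ are hypothetical placeholders, not a real device) showing the kernel-side housekeeping that stays with the driver no matter how smart the peripheral is:

    /* Rough sketch only: a skeletal Linux character driver. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/fs.h>
    #include <linux/interrupt.h>
    #include <linux/uaccess.h>

    #define MYDEV_IRQ 11                   /* placeholder IRQ line */

    static int  mydev_major;
    static int  mydev_irq_ok;
    static int  mydev_ready;               /* device status bookkeeping */
    static char mydev_buf[256];            /* kernel-side staging buffer */

    /* Interrupt handling: only kernel code can acknowledge the IRQ and
     * update the driver's idea of the device state. */
    static irqreturn_t mydev_isr(int irq, void *dev_id)
    {
        mydev_ready = 1;
        return IRQ_HANDLED;
    }

    /* Data flow: copying between kernel space and user space is something
     * the peripheral itself can never do for you. */
    static ssize_t mydev_read(struct file *f, char __user *buf,
                              size_t len, loff_t *off)
    {
        size_t n = min(len, sizeof(mydev_buf));

        if (!mydev_ready)
            return -EAGAIN;
        if (copy_to_user(buf, mydev_buf, n))
            return -EFAULT;
        return n;
    }

    static const struct file_operations mydev_fops = {
        .owner = THIS_MODULE,
        .read  = mydev_read,
    };

    /* Configuration and availability: registering with the kernel so
     * user space can reach the device at all. */
    static int __init mydev_init(void)
    {
        mydev_major = register_chrdev(0, "mydev", &mydev_fops);
        if (mydev_major < 0)
            return mydev_major;
        if (!request_irq(MYDEV_IRQ, mydev_isr, IRQF_SHARED,
                         "mydev", &mydev_major))
            mydev_irq_ok = 1;
        return 0;
    }

    static void __exit mydev_exit(void)
    {
        if (mydev_irq_ok)
            free_irq(MYDEV_IRQ, &mydev_major);
        unregister_chrdev(mydev_major, "mydev");
    }

    module_init(mydev_init);
    module_exit(mydev_exit);
    MODULE_LICENSE("GPL");

Even in this stripped-down form, the registration, interrupt acknowledgement and user/kernel copying all have to live on the CPU side of the bus.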
Sometimes the device driver can only try to manage the device rather than fully control it; for example, a NIC driver during a packet flood.
There is simply no possibility that you can eliminate a device driver no matter how much of "all this functionality is kept inside the device controller itself". And there would still be control directives/commands issued from the device driver to the peripheral.
The hardware device in question should be a computer peripheral device, not an autonomous robot device. The device should be designed to operate with a computer. Whatever interface there is between processor and device should be suitable for the task. If the peripheral is made more "intelligent", then perhaps the CPU can be unburdened and a high-level command interface can replace low-level sub-operation directives. But only "some" functionality can be transferred to the peripheral, not "all".
I'm looking to fuzz virtual drivers. I've read the other questions about this, but they don't really go anywhere. Basically I'm looking to see if there's an obvious tool I've missed, and I want to know whether fuzzing IOCTLs from a Windows guest would work, or whether I need to write something low-level, e.g. using IN/OUT.
Are there any tools out there for fuzzing drivers in a Windows guest to hit the hypervisor, either Hyper-V or VMware?
There are a number of ways to exercise virtualization code.
First, of course, if you're on Windows, is the IOCTL interface.
Then you should remember that all virtual devices are emulated in some way by some code in the guest OS and in the host OS. So, accessing input devices (keyboard and mouse), video device, storage (disks), network card, communication ports (serial, parallel), standard PC devices (PIC, PIT, RTC, DMA), CPU APIC, etc etc will also exercise virtualization code.
It's also very important to remember that virtualization of the various PC devices (unless we're talking about synthetic devices working over the VMBUS in Windows) is done by intercepting, parsing and emulating/executing instructions that access device memory-mapped buffers and registers and I/O ports. This gives you yet another "interface" to pound on.
By using it you might uncover not only device-related bugs but also instruction-related bugs. If you're interested in the latter, you need to have a good understanding of how the x86 CPU works at the instruction level in various modes (real, virtual 8086, protected, 64-bit), how it handles interrupts and exceptions and you'll also need to know how to access those PC devices (how and at what memory addresses and I/O port numbers).
Btw, Windows won't let you directly access these things unless your code is running in the kernel. You may want to have a non-Windows guest VM for things like this, just to avoid Windows' overprotective functionality. Look for edge cases, unusual instruction encodings (including invalid encodings) or unusual instructions for usual tasks (e.g. using FPU/MMX/SSE/etc. or special protected-mode instructions (like SIDT) to access devices). Think and be naughty.
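For example, from a Linux guest you can already hammer the emulated legacy port I/O from user space. A crude sketch that throws random index/value pairs at the RTC/CMOS ports (0x70/0x71); run it as root, and only inside a throwaway VM, since it can scramble the VM's CMOS contents:

    /* Crude port-I/O pounding sketch for a Linux guest VM.
     * Note: glibc's outb() takes (value, port) in that order. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>

    int main(void)
    {
        /* Ask the kernel for access to I/O ports 0x70 and 0x71. */
        if (ioperm(0x70, 2, 1)) {
            perror("ioperm (are you root, inside a VM?)");
            return 1;
        }

        srand(12345);                       /* deterministic, repeatable runs */

        for (unsigned long i = 0; i < 1000000UL; i++) {
            unsigned char idx = (unsigned char)rand();
            unsigned char val = (unsigned char)rand();

            outb(idx, 0x70);                /* select a (possibly bogus) CMOS index */
            outb(val, 0x71);                /* write a random value to it           */
            (void)inb(0x71);                /* read back, exercising both paths     */
        }

        return 0;
    }

The same pattern applies to any other emulated legacy device (PIT at 0x40-0x43, and so on); the interesting crashes are the ones in the hypervisor's device model rather than in the guest.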
Another thing to consider is race conditions and computational or I/O load. You may have some luck exploring in that direction too.
I am doing an OS experiment. Until now, all my code has used the real-mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enables Protected Mode, the real-mode BIOS interrupt service routines are no longer available.
I have a feeling that I need to do some hardware drivers now. Am I right? Is this why an OS is so difficult to develop?
I know that hardware is controlled by reading from and writing to certain control or data registers. For example, I know that the "Command Block Registers" of a hard disk range from 0x1F0 to 0x1F7. I am wondering whether the register addresses of so many different hardware devices are consistent on different platforms? Or do I have to detect that before using them? How would I do that?
Since I am not sure how to read/write a floppy or hard disk in Protected Mode, I have to use BIOS interrupts to load all my necessary kernel files from the floppy before entering Protected Mode. What could I do if my kernel file exceeds the real-mode 1 MB limit?
How do I read/write a hard disk when the CPU is in Protected Mode?
I have a feeling that I need to do some hardware drivers now. Am I right?
Strictly speaking (and depending on your requirements), "need" may be too strong: in theory you can switch back to real mode to use BIOS functions, use a virtual-8086 monitor, or write an interpreter that interprets the firmware's instructions instead of executing them directly.
However, the BIOS is awful (designed for an "only one thing can happen at a time" environment that is completely unsuitable for modern systems, where it's expected that all devices can do useful work at the same time), the BIOS is deprecated (replaced by UEFI), and it's hard to call something an OS when it doesn't have control over the hardware (because the firmware still has control of the hardware).
Note that if you do continue using BIOS functions; the state of various pieces of hardware (interrupt controller, PCI configuration space for various devices, any PCI bridges, timer/s, etc) has to match the expectations of the BIOS. What this means is that you will either be forced to accept huge limitations (e.g. never being able to use IO APICs, etc. properly) because it will break BIOS functions used by other pre-existing code, or you will be forced to do a huge amount of work to make the BIOS happy (emulating various pieces of hardware so the BIOS thinks the hardware is still in the state it expects even though it's not).
In other words; if you want an OS that's good then you do need to write drivers; but if you only want an OS that doesn't work on modern computers (UEFI), has severe performance problems ("only one thing can happen at a time"), is significantly harder to improve, doesn't support any devices that the BIOS doesn't support (e.g. sound cards), and doesn't support any kind of "hot-plug" (e.g. plugging in a USB device), then you don't need to write drivers.
Is this why an OS is so difficult to develop?
A bad OS is easy to develop. For example, something that is as horrible as MS-DOS (but not compatible with MS-DOS) could probably be slapped together in 1 month.
What makes an OS difficult to develop is making it good. Things like caring about security, trying to get acceptable performance, supporting multi-CPU, providing fault tolerance, trying to make it more future-proof/extensible, providing a nice GUI, creating well thought-out standards (for APIs, etc), and power management - these are what makes an OS difficult.
Device drivers add to the difficulty. Before you can write drivers you'll need support for things that drivers depend on (memory management, IRQ handling, etc. - possibly including a scheduler and some kind of communication); then something to auto-detect devices (e.g. to scan PCI configuration space) and try to start the drivers for whatever was detected (possibly/hopefully from a file system or initial RAM disk, with the ability to add/unload/replace drivers without rebooting); and something to manage the tree of devices - e.g. so that you know which "child devices" will be affected when you put a "parent device" to sleep (or the "parent device" has hardware faults, or its driver crashes, or the device is unplugged). Of course then you'd need to write the device drivers themselves, where the difficulty depends on the device (e.g. a device driver for an NVidia GPU is probably harder to write than one for an RS232 serial port controller).
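As a taste of that auto-detection step, the classic starting point is a brute-force walk of PCI configuration space through the legacy 0xCF8/0xCFC mechanism. A rough C sketch; the io_outl/io_inl port-I/O helpers are hypothetical names for primitives you'd supply with inline assembly:

    #include <stdint.h>

    void     io_outl(uint16_t port, uint32_t val);   /* your port-I/O primitives */
    uint32_t io_inl(uint16_t port);

    static uint32_t pci_config_read32(uint8_t bus, uint8_t dev,
                                      uint8_t func, uint8_t offset)
    {
        uint32_t addr = (1u << 31)                   /* enable bit */
                      | ((uint32_t)bus  << 16)
                      | ((uint32_t)dev  << 11)
                      | ((uint32_t)func << 8)
                      | (offset & 0xFC);
        io_outl(0xCF8, addr);
        return io_inl(0xCFC);
    }

    /* Walk every bus/device/function and report anything that responds. */
    void pci_scan(void (*report)(uint8_t bus, uint8_t dev, uint8_t func,
                                 uint16_t vendor, uint16_t device))
    {
        for (int bus = 0; bus < 256; bus++)
            for (int dev = 0; dev < 32; dev++)
                for (int func = 0; func < 8; func++) {
                    uint32_t id = pci_config_read32(bus, dev, func, 0x00);
                    if ((id & 0xFFFF) == 0xFFFF)     /* no device here */
                        continue;
                    report(bus, dev, func, id & 0xFFFF, id >> 16);
                }
    }

A real OS would refine this (follow bridges, honour the header type's multi-function bit), but even this naive scan is enough to decide which storage driver to load.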
For storage devices themselves (assuming "80x86 PC") there are about 8 standards that matter (ATA/ATAPI, AHCI and NVMe; then OHCI, UHCI, EHCI and xHCI for USB controllers; then the USB mass storage device spec). However, there are also various RAID controllers and/or SCSI controllers where there's no standard (each of these controllers needs its own driver), and some obsolete stuff (the floppy controller, tape drives that plugged into the floppy controller or parallel port, and three proprietary CD-ROM interfaces that were built into sound cards).
Please understand that supporting all of this isn't the goal. The goal should be to provide things device drivers depend on (described above), then provide specifications that describe the device driver interfaces (possibly/hopefully including things like IO priorities and synchronization, and notifications for device/media removal, error handling, etc) so that other people can write device drivers for you. Once that's done you might implement a few specific device drivers yourself (e.g. maybe just AHCI initially - everything else could be left until much later or until someone else writes it).
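And for the specific question of reading a disk once you're in protected mode, the simplest thing that works on the legacy command block registers (0x1F0-0x1F7) you mentioned is ATA PIO in LBA28 mode. A rough C sketch, again assuming hypothetical io_outb/io_inb/io_insw port-I/O helpers you implement yourself with inline assembly:

    #include <stdint.h>

    #define ATA_DATA      0x1F0
    #define ATA_SECCOUNT  0x1F2
    #define ATA_LBA_LO    0x1F3
    #define ATA_LBA_MID   0x1F4
    #define ATA_LBA_HI    0x1F5
    #define ATA_DRIVE     0x1F6
    #define ATA_CMD       0x1F7   /* write: command, read: status */

    void    io_outb(uint16_t port, uint8_t val);   /* your port-I/O primitives */
    uint8_t io_inb(uint16_t port);
    void    io_insw(uint16_t port, void *buf, int count);

    /* Read one 512-byte sector at the given LBA28 address into buf
     * (primary bus, master drive, polling only - no IRQs, no timeout). */
    int ata_pio_read(uint32_t lba, void *buf)
    {
        /* Select master drive, LBA mode, top 4 bits of the LBA. */
        io_outb(ATA_DRIVE, 0xE0 | ((lba >> 24) & 0x0F));
        io_outb(ATA_SECCOUNT, 1);
        io_outb(ATA_LBA_LO,  lba & 0xFF);
        io_outb(ATA_LBA_MID, (lba >> 8) & 0xFF);
        io_outb(ATA_LBA_HI,  (lba >> 16) & 0xFF);
        io_outb(ATA_CMD, 0x20);                    /* READ SECTORS command */

        /* Poll status until BSY clears and DRQ sets. */
        for (;;) {
            uint8_t st = io_inb(ATA_CMD);
            if (st & 0x01)                         /* ERR */
                return -1;
            if (!(st & 0x80) && (st & 0x08))       /* !BSY && DRQ */
                break;
        }

        io_insw(ATA_DATA, buf, 256);               /* 256 words = 512 bytes */
        return 0;
    }

It's slow and CPU-bound compared to a proper AHCI driver with interrupts and DMA, but it's enough to get a kernel loading its own files without the BIOS.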
You don't necessarily HAVE to write drivers. You could drop back into real mode to call the BIOS service, and then hop back into protected mode when you're done. This is essentially how DPMI DOS extenders (DOS4GW, Causeway, etc) work.
The source code for the Causeway DOS extender is public domain; you can look at that for a reference: http://www.devoresoftware.com/freesource/cwsrc.htm