How does the Windows console subsystem work?

So how does the console subsystem work? I understand the high-level stuff, such as Windows automatically creating a console window for console programs and handing out a handle that you can write to and read from with WriteConsole and ReadConsole, but how does the window itself work? Does Windows use GDI to draw characters into the console? Or some hidden internal functions? What happens behind the curtain?

This question is too vague to really answer in detail, but I'll give it a shot.
There are at least three different implementations of the console in 32-bit Windows:
The MS-DOS box in Windows 95/98/ME
CSRSS-owned console windows on NT4/2000/XP/2003/Vista
ConHost-owned console windows on Windows 7 and later
The NT-based consoles use IPC to communicate between the client application and the console owner process. The ReadFile and WriteFile functions contain a special hack: when given a console handle they also communicate with the console owner, instead of calling into the kernel as they do with a "normal" handle.
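As a small demonstration of that special-casing, here is a C# sketch using P/Invoke declarations for the documented kernel32 functions; plain WriteFile happily accepts a console handle:

using System;
using System.Runtime.InteropServices;
using System.Text;

class ConsoleWriteFile
{
    const int STD_OUTPUT_HANDLE = -11;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr GetStdHandle(int nStdHandle);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool WriteFile(IntPtr hFile, byte[] lpBuffer, uint nNumberOfBytesToWrite,
                                 out uint lpNumberOfBytesWritten, IntPtr lpOverlapped);

    static void Main()
    {
        // When the process is attached to a console, this is a console handle,
        // not a kernel file object; WriteFile detects that and routes the data
        // to the console owner process over IPC rather than down the I/O stack.
        IntPtr hOut = GetStdHandle(STD_OUTPUT_HANDLE);
        byte[] data = Encoding.ASCII.GetBytes("written via WriteFile\r\n");
        WriteFile(hOut, data, (uint)data.Length, out uint written, IntPtr.Zero);
    }
}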
The console window is a normal HWND and for the most part uses normal GDI.
The older console also supports a native hardware full-screen mode where it probably uses BIOS/VGA stuff directly. In windowed mode I believe it uses the undocumented GdiConsoleTextOut function. Because CSRSS is a core process, they might be calling some undocumented NT functions to avoid loading higher-level DLLs, but there is nothing really special about the actual drawing code.
In newer versions of Windows the full-screen mode was removed because of the DWM, and an unprivileged process (ConHost.exe) owns the console window to prevent shatter attacks against CSRSS. ConHost.exe imports PolyTextOutW, so I assume that is what it uses to draw the text.
The NT consoles also support an undocumented bitmap graphics mode, and I assume that also uses plain GDI.
All of this is, of course, undocumented implementation detail and could change at any time. The closest you will get to official documentation is probably this blog post, where they also reveal that the IPC method used is the undocumented LPC feature.
In Windows 10 an alternate mode called the pseudoconsole was added for the new Windows Terminal, but in practice it allows anyone to be a console host.
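The pseudoconsole is exposed through the documented CreatePseudoConsole API (Windows 10 1809 and later). A minimal C# sketch of opening and closing one, with the plumbing a real host would need left as comments:

using System;
using System.Runtime.InteropServices;

class PseudoConsoleSketch
{
    [StructLayout(LayoutKind.Sequential)]
    struct COORD { public short X; public short Y; }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CreatePipe(out IntPtr hReadPipe, out IntPtr hWritePipe,
                                  IntPtr lpPipeAttributes, uint nSize);

    [DllImport("kernel32.dll")]
    static extern int CreatePseudoConsole(COORD size, IntPtr hInput, IntPtr hOutput,
                                          uint dwFlags, out IntPtr phPC);

    [DllImport("kernel32.dll")]
    static extern void ClosePseudoConsole(IntPtr hPC);

    static void Main()
    {
        // The pipes carry a VT escape-sequence byte stream between the host
        // (us) and whatever console application is attached to the HPCON.
        CreatePipe(out IntPtr inRead, out IntPtr inWrite, IntPtr.Zero, 0);
        CreatePipe(out IntPtr outRead, out IntPtr outWrite, IntPtr.Zero, 0);

        var size = new COORD { X = 80, Y = 25 };
        int hr = CreatePseudoConsole(size, inRead, outWrite, 0, out IntPtr hPC);
        if (hr != 0) throw new InvalidOperationException($"CreatePseudoConsole failed: 0x{hr:X8}");

        // A real host would now start a child process with the HPCON attached
        // via the PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE attribute and pump
        // outRead/inWrite to render output and deliver input.
        ClosePseudoConsole(hPC);
    }
}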

Related

How to run D-Bus on Windows without a console?

I'm porting a Linux app to Windows and I need dbus-daemon.exe running in my Windows session.
My app and dbus-daemon.exe work fine, but the latter still opens a default console and, not being familiar with programming on Windows, I don't know how to get rid of it.
Maybe by making it invisible?
Windows, by default, opens a console window for executables compiled for the console subsystem (the "subsystem" being essentially a bit of metadata in the Portable Executable format, aka EXE/DLL). So you have at least two options:
Compile the dbus-daemon for the Windows subsystem, if you're the one doing the compilation. It is a linker option.
Launch the dbus-daemon process passing the CREATE_NO_WINDOW flag to the relevant API function (probably CreateProcess). If you're not using the Windows API directly, look at how CreateProcess and CREATE_NO_WINDOW are exposed in the API you are using. In .NET, for example, it's the ProcessStartInfo.CreateNoWindow property.
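For instance, in .NET the second option looks roughly like this (a sketch; the dbus-daemon path and arguments are placeholders for whatever your setup uses):

using System.Diagnostics;

class DBusLauncher
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "dbus-daemon.exe",   // placeholder path
            Arguments = "--session",        // placeholder arguments
            UseShellExecute = false,        // required for CreateNoWindow to apply
            CreateNoWindow = true           // maps to CREATE_NO_WINDOW in CreateProcess
        };
        Process.Start(psi);
    }
}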

Why does Windows handle scrollbars in the kernel?

The new 1-bit exploit of "all" Windows versions uses a bug in the kernel code that handles scrollbars. That got me thinking: why does Windows handle scrollbars in the kernel rather than in user mode? Historical reasons? Does any other OS do this?
TL;DR: Microsoft sacrificed security for performance.
Scrollbars are a bit special on Windows. Most scrollbars are not real windows but are implemented as decorations on the "parent" window. This leads us to a more general question: why are windows implemented in kernel mode on Windows?
Let's look at the alternatives:
Per-process in user mode.
Single "master" process in user mode.
Alternative 1 has a big advantage when dealing with your own windows: no context switch/kernel transition. The problem is of course that windows from different processes live on the same screen, and somebody has to be responsible for deciding which window is active and for coordinating changes when the user switches to a different window. This somebody would have to be a special system process or the kernel, because this information cannot be per-process; it has to be stored somewhere global. This dual-information design is going to be complicated, because the per-process information cannot be trusted by the global window manager. I'm sure there are a ton of other downsides to this theoretical design, but I'm not going to spend more time on it here.
Windows NT 3 implemented a variant of alternative 2. The window manager was moved into kernel mode in NT 4 mainly for performance reasons:
...the Window Manager (USER) and Graphics Device Interface (GDI) have been moved from the Win32 subsystem to the Windows NT Executive. Win32 user-mode device drivers, including graphics display and printer drivers, have also been moved to the Executive. These changes are designed to simplify graphics handling, reduce memory requirements, and improve performance.
...and further down in the same document there are more technical details and justifications:
When Windows NT was first designed, the Win32 environment subsystem was designed as a peer to the environment subsystems supporting applications in MS-DOS, POSIX, and OS/2. However, applications and other subsystems needed to use the graphics, windowing, and messaging functions in the Win32 subsystem. To avoid duplicating these functions, the Win32 subsystem was used as a server for graphics functions to all subsystems.

This design worked respectably for Windows NT 3.5 and 3.51, but it underestimated the volume and frequency of graphics calls. Having functions as basic as messaging and window control in a separate process generated substantial memory overhead from client/server message passing, data gathering, and managing multiple threads. It also required multiple context switches, which consume CPU cycles as well as memory. The volume of graphics support calls per second degraded the performance of the system. It was clear that a redesign of this facet in Windows NT 4.0 could reclaim these wasted system resources and improve performance.
The other subsystems are not that relevant these days but the performance issues remain.
If we look at a simple function like IsWindowVisible, then there is not a lot of overhead when the window manager is in kernel mode: the function will execute a couple of instructions in user mode and then switch the CPU to ring 0, where the entire operation (validate the window handle passed in and, if valid, retrieve the visible property) is performed in kernel mode. It then switches back to user mode and that is about it.
If the window manager lives in another process, then you at least double the number of kernel transitions, you must somehow pass the function's input and output to and from the window manager process, and you must somehow cause the window manager process to execute while you wait for the result. NT 3 did this by using a combination of shared memory, LPC and an obscure feature called paired threads.
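To make the IsWindowVisible round trip concrete, here is a small C# sketch using the documented user32 functions; each call below is exactly one user-to-kernel transition into the window manager and back:

using System;
using System.Runtime.InteropServices;

class WindowVisibility
{
    [DllImport("user32.dll")]
    static extern IntPtr GetForegroundWindow();

    [DllImport("user32.dll")]
    static extern bool IsWindowVisible(IntPtr hWnd);

    static void Main()
    {
        // The user32 stub runs a few instructions in user mode, traps to
        // ring 0 where the handle is validated and the visible bit is read,
        // then returns; no second process is ever scheduled.
        IntPtr hwnd = GetForegroundWindow();
        Console.WriteLine($"Foreground window visible: {IsWindowVisible(hwnd)}");
    }
}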

Operating complex applications as a screensaver in VB6

I'm in the process of writing a specification to convert one of our most complex applications into an application which runs as a screensaver.
Currently this application will read from the file system and registry (User, but will be converted to Local Machine) and spawn multiple child executables, drawing media elements on screen using the WMP SDK and other media-display libraries for images and Flash, some native to the OS, some not.
It is written in VB6 and must continue to be for this conversion.
This application currently operates as an application in the interactive account space, usually with an account logged in as an administrator or other highly elevated account. This application must operate as a screensaver without anyone being logged in.
Resources on doing this for my research are scant.
I'm keen to know the opinions of the SO community. Are there any limitations when running applications as screensavers when not logged in, considering the security limitations of running EXEs in this context? Are EXEs running as screensavers prevented from spawning other child processes, or limited in reading file or registry information?
Are there any graphics-handling restrictions with DirectShow or DirectDraw? Can system ODBC data sources still be used?
This applies to Windows XP and Windows 7.
Thank you for your time.
Thought I'd come back and close this off.
After some testing and discussions with Microsoft it turns out you cannot run complex applications as a screensaver when not logged in.
The session used at the Windows login screen has a limited desktop heap allocation by design. Attempting to use multiple resources or open many windows will not work, as the heap will simply run out of memory.
Proven by testing and by MS's own word.

Windows protected mode

I downloaded a disk and memory editor called HxD (available at http://mh-nexus.de/en/hxd/). I wonder how it is able to access (read and modify) virtual memory assigned to all the applications running on my system (Windows XP Pro SP3). From what I know, Windows runs in protected mode, making such endeavours impossible. Yet it isn't impossible; how can that be?
Windows does indeed protect the memory of applications. Every application has its own address space and simply cannot access anything outside it.
But Windows also has functions that allow you to access memory from other processes. Not by simply dereferencing a pointer, but by calling a function that copies the data from the other process.
This functionality seems strange, but it is essential if you want to write a debugger or other kinds of diagnostic utilities.
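The functions in question are OpenProcess and ReadProcessMemory (plus WriteProcessMemory for modification). A C# sketch of reading another process's memory; the target name is illustrative, and the call is still subject to the usual access checks:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class PeekMemory
{
    const uint PROCESS_VM_READ = 0x0010;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr OpenProcess(uint dwDesiredAccess, bool bInheritHandle, int dwProcessId);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress, byte[] lpBuffer,
                                         int dwSize, out IntPtr lpNumberOfBytesRead);

    static void Main()
    {
        // Read the first two bytes of the target's main module: the "MZ"
        // signature of the executable header. "notepad" is just an example.
        Process target = Process.GetProcessesByName("notepad")[0];
        IntPtr hProcess = OpenProcess(PROCESS_VM_READ, false, target.Id);

        byte[] buffer = new byte[2];
        ReadProcessMemory(hProcess, target.MainModule.BaseAddress, buffer,
                          buffer.Length, out IntPtr bytesRead);
        Console.WriteLine($"{(char)buffer[0]}{(char)buffer[1]}"); // prints "MZ"
    }
}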
If the program is run in administrator mode, then it can also load a driver dynamically and inspect any process via kernel mode. Examples are debuggers and similar tools like Process Explorer from Sysinternals.

Invoke Blue Screen of Death using Managed Code

Just curious here: is it possible to invoke a Windows Blue Screen of Death using .NET managed code under Windows XP/Vista? And if it is possible, what could the example code be?
Just for the record, this is not for any malicious purpose, I am just wondering what kind of code it would take to actually kill the operating system as specified.
The keyboard thing is probably a good option, but if you need to do it by code, continue reading...
You don't really need anything to barf, per se; all you need to do is find the KeBugCheck(Ex) function and invoke it.
http://msdn.microsoft.com/en-us/library/ms801640.aspx
http://msdn.microsoft.com/en-us/library/ms801645.aspx
For manually initiated crashes, you want to use 0xE2 (MANUALLY_INITIATED_CRASH) or 0xDEADDEAD (MANUALLY_INITIATED_CRASH1) as the bug check code. They are reserved explicitly for that use.
However, finding the function may prove to be a bit tricky. The Windows DDK may help (check Ntddk.h) - I don't have it available at the moment, and I can't seem to find decisive info right now - I think it's in ntoskrnl.exe or ntkrnlpa.exe, but I'm not sure, and don't currently have the tools to verify it.
You might find it easier to just write a simple C++ app or something that calls the function, and then just running that.
Mind you, I'm assuming that Windows doesn't block you from accessing the function from user-space (.NET might have some special provisions). I have not tested it myself.
I do not know if it really works and I am sure you need Admin rights, but you could set the CrashOnCtrlScroll Registry Key and then use a SendKeys to send CTRL+Scroll Lock+Scroll Lock.
But I believe that this HAS to come from the keyboard driver, so I guess a simple SendKeys is not good enough, and you would either need to somehow hook into the keyboard driver (sounds really messy) or check whether the crash-dump feature has an API that can be called with P/Invoke.
http://support.microsoft.com/kb/244139
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters
Name: CrashOnCtrlScroll
Data Type: REG_DWORD
Value: 1
Restart required.
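Setting the value from .NET is straightforward (a sketch; this needs administrator rights, a reboot to take effect, and the i8042prt key applies to PS/2 keyboards):

using Microsoft.Win32;

class EnableCrashOnCtrlScroll
{
    static void Main()
    {
        // After a reboot, holding the rightmost CTRL key and pressing
        // SCROLL LOCK twice bugchecks with MANUALLY_INITIATED_CRASH (0xE2).
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters",
            "CrashOnCtrlScroll", 1, RegistryValueKind.DWord);
    }
}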
I would have to say no. You'd have to P/Invoke and interact with a driver or other code that lives in kernel space. .NET code lives far removed from this area, although there has been some talk about managed drivers in future versions of Windows. Just wait a few more years and you can crash away just like our unmanaged friends.
As far as I know, a real BSOD requires a failure in kernel-mode code. Vista still has BSODs, but they're less frequent because the new driver model has fewer drivers in kernel mode. Any user-mode failure will just result in your application being killed.
You can't run managed code in kernel mode, so if you want to BSOD you need to use P/Invoke. But even this is quite difficult: you need to do some really fancy P/Invokes to get something in kernel mode to barf.
But among the thousands of SO users there is probably someone who has done this :-)
You could use OSR Online's tool that triggers a kernel crash. I've never tried it myself but I imagine you could just run it via the standard .net Process class:
http://www.osronline.com/article.cfm?article=153
I once managed to generate a BSOD on Windows XP using System.Net.Sockets in .NET 1.1 irresponsibly. I could repeat it fairly regularly, but unfortunately that was a couple of years ago and I don't remember exactly how I triggered it, or have the source code around anymore.
Try live video input using DirectShow in DirectX 8 or DirectX 9; most of the calls go to kernel-mode video drivers. I managed to produce lots of blue screens when running a callback procedure from a live video-capture source, particularly if the callback takes a long time, which can stall the entire kernel driver.
It's possible for managed code to cause a bugcheck when it has access to faulty kernel drivers. However, it would be the kernel driver that directly causes the BSOD (for example, uffe's DirectShow BSODs, Terence Lewis's socket BSODs, or BSODs seen when using BitTorrent with certain network adapters).
Direct user-mode access to privileged low-level resources may cause a bugcheck (for example, scribbling on \Device\PhysicalMemory, if it doesn't corrupt your hard disk first; Vista doesn't allow user-mode access to physical memory).
If you just want a dump file, Mendelt's suggestion of using WinDbg is a much better idea than exploiting a bug in a kernel driver. Unfortunately, the .dump command is not supported for local kernel debugging, so you would need a second PC connected over serial or 1394, or a VM connected over a virtual serial port. LiveKd may be a single-PC option, if you don't need the state of the memory dump to be completely self-consistent.
This one doesn't need any kernel-mode drivers, just SeDebugPrivilege. You can set your process critical with NtSetInformationProcess or RtlSetProcessIsCritical and then just kill your process. You will see the same bugcheck code as when you kill csrss.exe, because you set the same "critical" flag on your process.
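A sketch of that approach in C#. RtlSetProcessIsCritical is an undocumented ntdll export, so the signature below is an assumption based on common usage; do not run this outside a throwaway VM, as it will blue-screen the machine:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class CriticalExit
{
    // Undocumented ntdll export; native BOOLEAN is one byte, hence the byte parameters.
    [DllImport("ntdll.dll")]
    static extern int RtlSetProcessIsCritical(byte newValue, out byte oldValue, byte checkFlag);

    static void Main()
    {
        Process.EnterDebugMode();                  // acquires SeDebugPrivilege (requires admin)
        RtlSetProcessIsCritical(1, out byte _, 0); // mark this process "critical"
        Environment.Exit(1);                       // terminating it now bugchecks Windows
    }
}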
Unfortunately, I know how to do this, as a .NET service on our server was causing a blue screen. (Note: Windows Server 2008 R2, not XP/Vista.)
I could hardly believe a .NET program was the culprit, but it was. Furthermore, I've just replicated the BSOD in a virtual machine.
The offending code causes a 0x000000F4 (CRITICAL_OBJECT_TERMINATION):
string name = string.Empty; // the cause: should check IsNullOrWhiteSpace; an empty prefix matches every process
bool r = false;
foreach (Process process in Process.GetProcesses().Where(p => p.ProcessName.StartsWith(name, StringComparison.OrdinalIgnoreCase)))
{
    Check.Logging.Write("FindAndKillProcess THIS SHOULD BLUE SCREEN " + process.ProcessName);
    process.Kill(); // eventually kills csrss.exe, a critical process
    r = true;
}
If anyone's wondering why I'd want to replicate the blue screen, it's nothing malicious. I've modified our logging class to take an argument telling it to write direct to disk as the actions prior to the BSOD weren't appearing in the log despite .Flush() being called. I replicated the server crash to test the logging change. The VM duly crashed but the logging worked.
EDIT: Killing csrss.exe appears to be what causes the blue screen. As per comments, this is likely happening in kernel code.
I found that if you run taskkill /F /IM svchost.exe as an Administrator, it tries to kill just about every service host at once.
