Is there a reliable cross-platform way to programmatically get and set the NumLock state in Lazarus / Free Pascal?
I've found GetKeyState() in LclIntf which seems to work for getting the state (and is apparently cross-platform), but I can't find an equivalent SetKeyState().
Setting LEDs is rarely part of a userland API, since the keyboard LEDs are hardware, and changing them will require administrator access on most systems.
IIRC I did it once for FreeBSD using the console unit (which basically sends IOCTLs); Linux is probably similar.
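On Linux the console LED ioctls look roughly like this (a sketch, untested; assumes /dev/console is the right device and that you run with root privileges):

    /* Rough sketch: toggle the NumLock LED on a Linux virtual console. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kd.h>

    int main(void)
    {
        char leds;
        int fd = open("/dev/console", O_RDONLY);   /* usually needs root */

        if (fd < 0) {
            perror("open /dev/console");
            return 1;
        }
        if (ioctl(fd, KDGETLED, &leds) == 0) {
            leds ^= LED_NUM;                        /* flip the NumLock LED bit */
            ioctl(fd, KDSETLED, leds);
        }
        close(fd);
        return 0;
    }

Note this drives the LED itself; the keyboard flag is a separate ioctl, which is part of why this is rarely exposed to userland.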
For Windows I found http://support.microsoft.com/kb/177674/en-us, but the fact that it is not a universal call, and differs between Win9x and WinNT, says enough.
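For completeness, the NT-family technique from that article boils down to faking a NumLock key press; a minimal C sketch (the same calls should be reachable from Free Pascal through the Windows unit):

    /* Minimal sketch of the KB177674 approach on the NT family:
       simulate a NumLock press/release to toggle the state. */
    #include <windows.h>

    void ToggleNumLock(void)
    {
        keybd_event(VK_NUMLOCK, 0x45, KEYEVENTF_EXTENDEDKEY, 0);
        keybd_event(VK_NUMLOCK, 0x45, KEYEVENTF_EXTENDEDKEY | KEYEVENTF_KEYUP, 0);
    }

    int main(void)
    {
        /* Only toggle when NumLock is currently off. */
        if ((GetKeyState(VK_NUMLOCK) & 1) == 0)
            ToggleNumLock();
        return 0;
    }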
Background
I've been bouncing around this for a while and still haven't come up with an adequate solution; I'm hoping someone out there can point me in the right direction.
Essentially I need to identify whether I can run a 64-bit VM on a target machine (I'm working in Go, but happy to consider binding C code or some assembly, though I feel a bit out of my depth there).
In order to run a 64-bit VM the system needs hardware virtualisation support available and enabled in the BIOS (I'm only concerned with Intel/AMD at this time).
Journey so far
From Windows 8 onwards, Windows ships with Hyper-V, and there is a nice function you can call, IsProcessorFeaturePresent from kernel32.dll, with an argument of PF_VIRT_FIRMWARE_ENABLED, which will tell you whether hardware virtualisation is enabled in firmware:
IsProcessorFeaturePresent
Now I don't really like the way this behaves (it reports "not available" if Hyper-V is installed), but I can cope with that by checking whether Hyper-V is enabled through other means, so this pretty much does the job from Windows 8 upwards.
The problem is that this function always returns false on Windows 7 for some reason, even on a system on which I know hardware virtualization is enabled.
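For reference, the check itself is tiny; it boils down to something like this (shown in C for clarity):

    /* Windows 8+ firmware-virtualization check; always returns FALSE for me on Win7. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        BOOL enabled = IsProcessorFeaturePresent(PF_VIRT_FIRMWARE_ENABLED);
        printf("Virtualization enabled in firmware: %s\n", enabled ? "yes" : "no");
        return 0;
    }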
Coming from another angle, I have used the Intel processor feature lib to determine what instruction sets are available; this lets me know what type of virtualization instructions the processor supports (if any).
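As I understand it, detecting the instruction sets comes down to CPUID checks along these lines (a simplified sketch using GCC builtins; note this says nothing about whether the BIOS has the feature enabled):

    /* Does the processor expose VMX (Intel) or SVM (AMD) at all? */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            printf("Intel VMX instructions present\n");

        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            printf("AMD SVM instructions present\n");

        return 0;
    }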
But I'm still missing the final piece: knowing whether it is enabled in the BIOS on Windows 7. I figure that in principle it should be easy from here; I should be able to call something which uses the virtualization extensions and see if it responds as expected. But unfortunately I have no idea how to do this.
Does anyone have any suggestions as to how I might do this?
Note: I'm happy to consider third-party libs, but this would be used in commercial software, so licensing would have to allow for that (e.g. nothing from Microsoft).
I am afraid you won't be able to achieve what you want unless you are ready to provide a kernel driver, because checking whether the BIOS has enabled virtualization requires kernel privileges.
The Intel Software Developer's Manual describes a model-specific register (MSR) with number 3Ah, called IA32_FEATURE_CONTROL. Its bits 1 and 2 control whether VMX instructions are allowed in SMX and non-SMX modes respectively. There is also bit zero which, when written with 1, locks the register's value, making it impossible to enable/disable those features until the next processor reset. This means that if the BIOS code has disabled VMX and locked the register, an OS that boots later will be unable to change that, only to observe it.
To read this or any other MSR one uses the machine instruction RDMSR, and this instruction is only available when CPL is zero, that is, inside the kernel. It raises a general-protection exception if executed from application code.
Unless you find a programming interface that wraps RDMSR and exposes it to applications, you are out of luck. Typically that means loading and running a dedicated kernel driver. I am aware of one for Linux, but cannot say whether there is anything comparable for Windows.
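For the Linux case, assuming the driver in question is the standard msr module (modprobe msr), a user-space read of IA32_FEATURE_CONTROL looks roughly like this:

    /* Rough sketch: read IA32_FEATURE_CONTROL (MSR 0x3A) through the Linux
       msr driver and decode the lock/VMX-enable bits.  Needs root and a
       loaded msr module. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t val;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/cpu/0/msr");
            return 1;
        }
        if (pread(fd, &val, sizeof(val), 0x3A) != sizeof(val)) {
            perror("read MSR 0x3A");
            return 1;
        }
        printf("locked: %d, VMX in SMX: %d, VMX outside SMX: %d\n",
               (int)(val & 1), (int)((val >> 1) & 1), (int)((val >> 2) & 1));
        close(fd);
        return 0;
    }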
As an extra note, if your code is already running inside a virtual machine, as it is on Windows installations that enable a Hyper-V environment for the regular desktop, then you won't even be able to see the actual host MSR value. It will be up to the VMM to provide you with an emulated value, just as it will show you whatever CPUID values it wants you to see, not the ones from the host.
What is the interrupt for user input and output in a console window?
Hello, I'm trying to learn assembly language. I know that in the MS-DOS operating system .COM programs were supported and were loaded into memory at 100h, right? And the first 1k bytes were the IVT, or interrupt vector table, which is a list of interrupt vectors, right? So in MS-DOS, if we want to ask the user for input, we first move 01 into the AH register, like MOV AH, 01, and then call the interrupt INT 21h. I don't know if this will actually work on DOS; I have never tried a DOS virtual machine or any similar application. I don't want a DOS program, I just want a console window that asks the user for input...
I have just stepped into assembly programming and found no complete tutorials for Windows, or at least none that shows the use of interrupts. Everywhere MASM is used with the windows.inc libs and the C standard libs, and I don't want to waste my time learning them, as after learning assembly programming I want to learn to write boot programs, which don't stick to one operating system. The reason I want to use interrupts is purely educational. Currently I know very few instructions, like MOV, ADD, SUB, INC, INT and DEC, and I know nothing (or very little) about registers, so with this limited capability I cannot write boot programs yet. For this reason I'm trying to learn and practice assembly on Windows, but with interrupts, which means I don't want to use those predefined libraries in my applications; I only want to use interrupts from the IVT, and later move to BIOS interrupts for boot programs.
I heard on a forum that Microsoft hides its Windows IVTs from the public as they want their APIs to become popular. Is this true? Does this mean there is no way I can use interrupts to handle I/O in Windows, since there is no documentation on it? If not, kindly post an example for NASM and point me to any online/offline guide. My last request: please tell me how to convert MASM-written source code for NASM, i.e. what the difference between them is. In short, I have two requests and one question, as follows:
1. Q1: What is the interrupt for console I/O in Windows?
2. R1: Please share any good tutorials on assembly for Windows, especially ones that show how to use interrupts (like the TutorialsPoint one, but that is for Linux and I want one for Windows).
And the last one...
3. R2: How is MASM source code different from NASM source code? I mean, how much does it differ, and where can I find the differences when writing source for each? Many tutorials say that many MASM programs won't run under NASM...
Thanks in advance!
And the first 1k bytes were the IVT or Interrupt vector table ...
Modern CPUs have two (in the case of 64-bit CPUs even three) operating modes: "real mode" and "protected mode" (plus "long mode" on 64-bit CPUs).
When the CPU starts up, and under MS-DOS, the CPU runs in "real mode". When Windows is running, the CPU runs in "protected mode". In this mode the interrupt system works completely differently than in "real mode".
It is not possible to access the IVT directly, and it is not possible for one task to access memory belonging to another task.
What is the interrupt for console I/O in Windows?
Windows NT, 32-bit, used interrupt 2Eh to enter the operating system. However, other Windows versions (Windows 9x for example) used another method of entering the operating system.
As far as I know, 64-bit versions of Windows also do not use interrupts (they use the SYSCALL instruction instead).
No Windows program (with the exception of the old 16-bit Windows programs) uses interrupts directly.
hides its Windows IVTs from the public as they want their APIs to become popular, Is this true?
They hide the interrupt numbers because they are only available in some Windows versions.
The only way to write a program that works on all Windows versions currently in use is to use the Windows APIs.
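For example, the console I/O that INT 21h gave you under DOS goes through API calls such as ReadConsole and WriteConsole; a minimal sketch (shown in C for brevity; from assembly you would push the same arguments and call the same functions):

    /* The Windows-API equivalent of DOS INT 21h console input/output. */
    #include <windows.h>

    int main(void)
    {
        HANDLE in  = GetStdHandle(STD_INPUT_HANDLE);
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        char buf[64];
        DWORD n;

        WriteConsoleA(out, "Type something: ", 16, &n, NULL);
        if (ReadConsoleA(in, buf, sizeof(buf), &n, NULL))
            WriteConsoleA(out, buf, n, &n, NULL);   /* echo it back */
        return 0;
    }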
I don't want to waste my time learning them
In this case don't waste your time writing Assembler programs for Windows! Doing so only makes sense if you are interested in compiler development.
Use some virtual machine tool (Bochs, VMware, ...) and start writing "booting programs" NOW or use an MS-DOS emulator and write Assembler programs for MS-DOS.
The only use of Assembler programs for Windows is that the 32-bit assembler is a bit easier to learn than the 16-bit assembler (which is required for MS-DOS and "booting programs") so writing Assembler programs for Windows may be good for learning purposes...
I am trying to run a Linux kernel as the secure OS on a TrustZone-enabled development board (Samsung Exynos 4412). I know somebody will say a secure OS should be small and simple, but I just want to try. And if it is possible, then writing or porting a trustlet application to this secure OS will be easy, especially for applications with a UI (trusted UI).
The development board I bought comes with a runnable secure OS based on xv6, and the normal OS is Android (Android version 4.2.2, kernel version 3.0.15). I have tried to replace the simple secure OS with the Android Linux kernel; that is, with a little assembly code up front (such as clearing the NS bit of the SCR register) I directly call the Linux kernel entry point (with the necessary kernel tagged list passed in).
The kernel decompression code executes correctly and the first C function of the kernel, start_kernel(), is also reached. Almost all the initialization functions run well, except when execution reaches calibrate_delay(). This function waits for jiffies to change:
    /* wait for "start of" clock tick */
    ticks = jiffies;
    while (ticks == jiffies);
I guess the reason is that no clock interrupt is generated (I added log prints in the clock interrupt callback functions and they are never reached). I have checked the CPSR state before and after the local_irq_enable() call; the IRQ and FIQ bits are set correctly. I also added some log prints in the Linux kernel's IRQ handler defined in the interrupt vector table. Nothing is logged.
I know there may be some differences in the interrupt system between the secure world and the non-secure world, but I can't find those differences in any documentation. Can anybody point them out? And the most important question: since Linux is a very complicated OS, can the Linux kernel run as a TrustZone secure OS at all?
I am a newbie to the Linux kernel and ARM TrustZone. Please help me.
Running Linux as a secure-world OS should work by default; i.e. the secure-world supervisor is the most trusted mode and can easily transition to the other modes. The secure world is an operating state of the ARM CPU.
Note: just because Linux runs in the secure world doesn't make your system secure! TrustZone and the secure world are features that you can use to build a secure system.
But I just want to try. And if it is possible, then writing or porting a trustlet application to this secure OS will be easy, especially for applications with a UI (trusted UI).
TrustZone allows partitioning of software. If you run both Linux and the trustlet application in the same world, there is no benefit; the trustlet is then just a normal application.
The normal model for trustlets is to set up a monitor vector page at boot and lock down physical access. The Linux kernel can then use the smc instruction to call routines in the trustlet, for instance to access DRM-type functionality to decrypt media, etc. In this model, Linux runs as the normal-world OS but can call limited functionality within the secure world through the SMC API you define.
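A normal-world call into such an API is conceptually just a thin wrapper around smc; a hypothetical sketch (the function IDs and register convention are whatever your monitor defines, so the names here are placeholders):

    /* Hypothetical normal-world wrapper: pass a function ID and one argument
       to the secure monitor via SMC.  The register usage shown is an
       assumption; your monitor defines the real ABI. */
    static inline unsigned int smc_call(unsigned int func_id, unsigned int arg)
    {
        register unsigned int r0 asm("r0") = func_id;
        register unsigned int r1 asm("r1") = arg;

        asm volatile(
            ".arch_extension sec\n"
            "smc #0\n"
            : "+r" (r0)
            : "r" (r1)
            : "r2", "r3", "memory");

        return r0;   /* result comes back in r0 */
    }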
Almost all the initialization functions run well, except when execution reaches calibrate_delay().
This is symptomatic of non-functioning interrupts. calibrate_delay() runs a tight loop while waiting for a tick count to increase via system timer interrupts. If you are running in the secure world, you may need to route interrupts. The register GICD_ISPENDR can be used to force an interrupt; you can use this to verify that the ARM GIC is functioning properly. Also, the kernel command line option lpj=XXXXX (where XXXXX is some number) can skip this step.
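As an illustration, forcing an interrupt to pending from kernel code looks something like the sketch below; the distributor base address is board-specific, so the names here are placeholders you would fill in from your SoC manual:

    /* Sketch: force interrupt ID 'irq' to pending by writing its bit in the
       GIC distributor's GICD_ISPENDRn bank.  gic_dist_va is assumed to be an
       already ioremap()ed mapping of your board's distributor base. */
    #include <linux/io.h>

    #define GICD_ISPENDR(n)  (0x200 + 4 * (n))

    static void __iomem *gic_dist_va;

    static void gic_force_pending(unsigned int irq)
    {
        writel_relaxed(1U << (irq % 32),
                       gic_dist_va + GICD_ISPENDR(irq / 32));
    }

If the forced interrupt never reaches your handler, the routing and initialization, rather than the handler itself, is the likely problem.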
Most likely some peripheral interrupt router, clock configuration or other interrupt/timer initialization is done by a boot loader in a normal system and you are missing this. Booting a new board is always difficult; even more so with TrustZone.
There is nothing technically preventing Linux from running in the Secure state of an ARM processor. But it defeats the entire purpose of TrustZone. A large, complex kernel and OS such as Linux is infeasible to formally verify to the point that it can be considered "Secure".
For further reading on that aspect, see http://www.ok-labs.com/whitepapers/sample/sel4-formal-verification-of-an-os-kernel
As for the specific problem you are facing - there should be nothing special about interrupt handling in Secure vs. Non-secure state (unless you explicitly configure it to be different). But it could be that the secure OS you have removed was performing some initial timer initializations that are now not taking place.
Also 3.0.15 is an absolutely ancient kernel - it was released 2.5 years ago, based on something released over 3 years ago.
There are several issues with what you are saying that need to be cleared up. First, are you trying to get the secure-world kernel running or the normal-world kernel? You said you wanted to run Linux in the SW and Android in the NW, but your question stated, "I have tried to replace the simple secure OS with the Android Linux kernel". So which kernel are you having problems with?
Second, you mentioned clearing the NS-bit. This doesn't really make sense. If the NS-bit is set, clearing it means that you are running in the NW already (as the bit being set would indicate), have executed an SMC instruction, switched to Monitor mode, set the NS-bit to 0, and then restored the SW registers. Is this the case?
As far as interrupts are concerned, have you properly initialized the VBAR for each execution mode, i.e. the Secure world VBAR, the Normal world VBAR, and the MVBAR? All three will need to be set up, in addition to setting the proper values in the NSACR and other registers, to ensure interrupts are channelled to the correct execution world and not all just handled by the SW. Also, you do need separate exception vector tables and handlers for all three. You might be able to get away with only one set initially, but once you partition your memory system using the TZASC, you will need to separate everything.
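For reference, the vector base registers are CP15 registers; setting them from C with inline assembly looks roughly like this (each write has to be executed in the world/mode the register belongs to):

    /* Sketch: program the banked VBAR of the current world, and the MVBAR
       (monitor vectors, writable from the secure side only). */
    static inline void set_vbar(unsigned long addr)
    {
        asm volatile("mcr p15, 0, %0, c12, c0, 0" : : "r" (addr) : "memory");
    }

    static inline void set_mvbar(unsigned long addr)
    {
        asm volatile("mcr p15, 0, %0, c12, c0, 1" : : "r" (addr) : "memory");
    }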
TZ requires a lot of configuration and is not simply handled by setting/clearing the NS-bit. For the Exynos 4412, there are numerous TZ control registers that must be properly set in order to execute in the NW. Unfortunately, none of the information on them is covered in the public version of the User's Guide. You need the full version in order to get all the values and addresses necessary to actually run an SW and NW kernel on this processor.
I'm currently learning about the different modes the Windows operating system runs in (kernel mode vs. user mode), device drivers, their respective advantages and disadvantages and computer security in general.
I would like to create a practical example of what a faulty device driver that runs in kernel mode can do to the system, by, for example, corrupting memory used by critical OS processes.
How can I execute my code in kernel mode instead of user mode, directly?
Do I have to write a dummy device driver and install it to do this?
Where can I read more about kernel and user mode in Windows?
I know the dangers of this and will do all of the experiments on a virtual machine running Windows XP only.
The "Windows Internals" book is rather shallow on the topic at question.
First I should note that any program also runs in kernel mode (KM) at times. This is because - not unlike in unixoid systems - for system calls the calling thread transitions into KM, where the kernel itself or one of the drivers services the request, and then returns to user mode (UM).
A first step to get started would be to download the latest Windows Driver Kit (WDK) and start reading the documentation. If you want a more digestible book, go for one of these:
Windows NT Device Driver Development - though an old title, many of the basics still apply.
Programming the Windows Driver Model (by Oney) - WDM programming in particular, also covers the basics; has some errors (as most books do).
Undocumented Windows 2000 Secrets (by Schreiber) - contains plenty of information about all kinds of internals at a more technical level than the book mentioned before.
Undocumented Windows NT - contains a more generic part about internals on a technical level followed by a reference of some native API functions.
Windows NT/2000 Native API - the classic, but it's more of a reference. Nevertheless there are several gems (and examples) in it.
Since you want to use Windows XP, many of the techniques described over at rootkit.com (even those from some years ago) should work. They also have plenty of samples.
And as you notice by the name of the referenced website, you are in fact in what I'd call a gray area with that question ;)
It's a simple answer: as you suspect, you do need to write a device driver in order to run in kernel mode. I'm afraid I don't know of a particularly good reference for kernel-mode programming, but a quick web search reveals:
en.wikibooks.org/wiki/Windows_Programming/User_Mode_vs_Kernel_Mode
http://www.netomatix.com/Development/Kernelmode.aspx
http://technet.microsoft.com/en-us/library/cc750820.aspx
http://msdn.microsoft.com/en-us/library/ff553208(VS.85).aspx
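To give an idea of what that involves, a do-nothing kernel-mode driver is only a few lines; this is the classic skeleton (build it with the WDK), and everything in it already runs in kernel mode:

    /* Minimal do-nothing kernel-mode driver. */
    #include <ntddk.h>

    static VOID DriverUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
        DbgPrint("Sample driver unloading\n");
    }

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);
        DbgPrint("Sample driver loaded\n");
        DriverObject->DriverUnload = DriverUnload;
        return STATUS_SUCCESS;
    }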
You will need a good understanding of Windows Internals:
http://technet.microsoft.com/en-us/sysinternals
and yes they have a book: Windows Internals
http://technet.microsoft.com/en-us/sysinternals/bb963901
http://www.amazon.com/Windows%C2%AE-Internals-Including-Windows-PRO-Developer/dp/0735625301
Basically your questions are all answered in this book (and it even comes with samples and hands-on labs).
I am using Turbo C 3.0 and Turbo C 2.0 for programming, and I am using Windows XP. Under Windows 98 the above programs worked fine, but after installing XP those programs really slow down my system. They use a lot of CPU power even when idle (idle meaning "no interaction between program and user").
If anybody has solved this issue before, please post here.
Also, I want to know what is causing the slow-down!
Those are 16-bit DOS programs, and they will not run natively on XP; they are probably running in the NT Virtual DOS Machine. Use Task Manager, or better yet, Process Explorer, to check this. You will probably not see your programs listed; look for instances of ntvdm.exe instead.
I have noticed that several antivirus programs (Checkpoint, Proventia Desktop) seem to have a problem with ntvdm; they eat up quite a bit of CPU when an ntvdm instance is running.
Also, wasn't Turbo C finicky about its extended memory settings? If you still have your Autoexec.bat and Config.sys files from the Win98 system, you could try changing XP's settings to match. The XP equivalents of these files are autoexec.nt and config.nt; they are in the Windows\System32 directory.
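For reference, a stock config.nt already loads HIMEM; the lines worth comparing against your old Config.sys look something like this (quoted from memory, so check your own file):

    dos=high, umb
    device=%SystemRoot%\system32\himem.sys
    files=40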
I suspect Adrian's comment is the correct answer: old DOS programs did not account for multitasking and so tended to put themselves in tight loops when "idle". Back in the day, it didn't matter as nothing else was running at the same time and the operating system would interrupt the running program to handle hardware, well, interrupts.
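The typical pattern looks something like the sketch below (the kind of conio-style loop a Turbo C program sits in while "idle"); harmless under single-tasking DOS, but under NTVDM it spins a CPU core at 100%:

    /* Sketch of a classic DOS-era "idle" loop: poll the keyboard as fast as
       possible.  Harmless on DOS, a full-CPU spin under NTVDM. */
    #include <conio.h>

    void wait_for_key(void)
    {
        while (!kbhit())
            ;            /* spin: no yield, no sleep, just poll */
        getch();         /* consume the key */
    }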
I would highly recommend avoiding such tools on modern hardware, because the programs they generate are likewise not multitasking friendly. They are also going to be optimized for ancient processors and have limited memory addressing. If you have some old hardware and want to goof around with it, then knock yourself out. But there are plenty of modern compilers that are free (whether free like Visual C++ Express, which is there to get you hooked, or open source).
This can be partially avoided by setting the process priority:
Start the App eg. Turbo C++ 3.0
Minimize and go to Task Manager
Find ntvdm.exe
Right Click > Set Priority > Low > Yes
Then it runs without slowing the rest of the system down so annoyingly.
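If you would rather not repeat those steps by hand each time, the same thing can be done programmatically; a rough sketch (assumes you have the rights to open the ntvdm.exe processes):

    /* Sketch: find every ntvdm.exe instance and drop it to Low (idle) priority,
       the programmatic equivalent of the Task Manager steps above. */
    #include <windows.h>
    #include <tlhelp32.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        PROCESSENTRY32 pe;

        if (snap == INVALID_HANDLE_VALUE)
            return 1;

        pe.dwSize = sizeof(pe);
        if (Process32First(snap, &pe)) {
            do {
                if (_stricmp(pe.szExeFile, "ntvdm.exe") == 0) {
                    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION, FALSE,
                                              pe.th32ProcessID);
                    if (proc) {
                        SetPriorityClass(proc, IDLE_PRIORITY_CLASS);
                        printf("Lowered priority of PID %lu\n", pe.th32ProcessID);
                        CloseHandle(proc);
                    }
                }
            } while (Process32Next(snap, &pe));
        }
        CloseHandle(snap);
        return 0;
    }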