So I executed:
int elements[2] = { COLOR_HIGHLIGHT, COLOR_HIGHLIGHTTEXT };
DWORD newColors[2];
newColors[0] = Color::IndianRed.ToArgb();
newColors[1] = Color::Maroon.ToArgb();
SetSysColors(2, elements, newColors);
I suspected it might do what I hoped it wouldn't - and it did, so now everything I select is drawn in the new colors.
What are the default values, so I can set my system back to how it was? I am running Windows 10.
Quoting MSDN:
However, this function affects only the current session. The new colors are not saved when the system terminates.
Just reboot your machine.
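If you want to experiment again without a reboot, save the old values with GetSysColor before changing anything. A minimal sketch in plain Win32 (note, as an aside, that SetSysColors expects COLORREF values in 0x00BBGGRR order, not the ARGB that ToArgb() returns, so the colors you got were probably not even IndianRed/Maroon):

#include <windows.h>

int main(void)
{
    INT elements[2] = { COLOR_HIGHLIGHT, COLOR_HIGHLIGHTTEXT };
    COLORREF oldColors[2];
    COLORREF newColors[2] = { RGB(205, 92, 92),   /* IndianRed */
                              RGB(128, 0, 0) };   /* Maroon    */

    /* Save the current values before touching anything. */
    for (int i = 0; i < 2; i++)
        oldColors[i] = GetSysColor(elements[i]);

    SetSysColors(2, elements, newColors);

    /* ... look at the new selection colors ... */

    /* Put everything back (for this session). */
    SetSysColors(2, elements, oldColors);
    return 0;
}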
I'm doing some poking around in Windows internals for my general edification, and I'm trying to understand the mechanism behind Image File Execution Options. Specifically, I've set a Debugger entry for calc.exe, with "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoLogo -NoProfile -NoExit -Command "& { start-process -filepath $args[0] -argumentlist $args[1..($args.Length - 1)] -nonewwindow -wait}" as the payload. This results in recursion, with many powershell instances being launched, which makes sense given that I'm intercepting their calls to calc.exe.
That raises the question, though: how do normal debuggers launch a program under test without causing this sort of recursive behavior?
It is anyway a good question about Windows internals, but the reason it has my interest right now is that it has become a practical question for me. Somewhere I do paid work, there are three computers, each with a different Windows version and even a different debugger, for which using this IFEO trick results in the debugger debugging itself, apparently trapped in the very same circularity that troubles the OP.
How do debuggers usually avoid this circularity? Well, they themselves don't. Windows avoids it for them.
But let's look first at the circularity. Simple demonstrations are hardly ever helped by PowerShell concoctions, and calc.exe is not what it used to be. Let's instead set the Debugger value for notepad.exe to c:\windows\system32\cmd.exe /k. Windows will interpret this as meaning that attempting to run notepad.exe should ordinarily run c:\windows\system32\cmd.exe /k notepad.exe instead. CMD will interpret this as meaning to run notepad.exe and hang about. But this execution of notepad.exe will too be turned into c:\windows\system32\cmd.exe /k notepad.exe, and so on. The Task Manager will soon show you many hundreds of cmd.exe instances. (The good news is that they're all on the one console and can be killed together.)
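For reference, the Debugger value lives under the Image File Execution Options key; setting up this demonstration from an elevated prompt looks like this:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\notepad.exe" /v Debugger /t REG_SZ /d "c:\windows\system32\cmd.exe /k" /f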
The OP's question is then why CMD with its /k (or /c) switch for running a child goes circular in a Debugger value, while WinDbg, for instance, does not.
In one sense, the answer is down to one bit in an undocumented structure, the PS_CREATE_INFO, that's exchanged between user and kernel modes for the NtCreateUserProcess function. This structure has become fairly well known in some circles, not that they ever seem to say how. I think the structure dates from Windows Vista, but it's not known from Microsoft's public symbol files until Windows 8 and even then not from the kernel but from such things as the Internet Explorer component URLMON.DLL.
Anyway, in the modern form of the PS_CREATE_INFO structure, the 0x04 bit at offset 0x08 (32-bit) or 0x10 (64-bit) controls whether the kernel checks for the Debugger value. Symbol files tell us this bit is known to Microsoft as IFEOSkipDebugger. If this bit is clear and there's a Debugger value, then NtCreateUserProcess fails. Other feedback through the PS_CREATE_INFO structure tells KERNELBASE, for its handling of CreateProcessInternalW, to have its own look at the Debugger value and call NtCreateUserProcess again but for (presumably) some other executable and command line.
When instead the bit is set, the kernel doesn't care about the Debugger value and NtCreateUserProcess can succeed. How the bit ordinarily gets set is by KERNELBASE because the caller is asking not just to create a process but is specifically asking to be the debugger of the new process, i.e., has set either DEBUG_PROCESS or DEBUG_ONLY_THIS_PROCESS in the process creation flags. This is what I mean by saying that debuggers don't do anything themselves to avoid the circularity. Windows does it for them just for their wanting to debug the executable.
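To make that concrete, here is an illustrative fragment. PS_CREATE_INFO is undocumented, so the names and layout below are a reconstruction from the offsets described above, not an authoritative definition:

typedef struct _PS_CREATE_INFO {    /* reconstruction; illustrative only */
    SIZE_T Size;                    /* 0x00 */
    ULONG  State;                   /* 0x08 (64-bit): stage of creation */
    union {
        struct {
            ULONG InitFlags;        /* 0x10 (64-bit), 0x08 (32-bit);
                                       the 0x04 bit is IFEOSkipDebugger */
            /* ... */
        } InitState;
        /* ... other members vary with State ... */
    };
} PS_CREATE_INFO;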
One way to look at the Debugger value as an Image File Execution Option for an executable X is that the value's presence means X cannot execute except under a debugger and the value's content may tell how to do that. As hackers have long noticed, and the kernel's programmers will have noticed well before, the content need not specify a debugger and the value can be adapted so that attempts to run X instead run Y. Less noticed is that Y won't be able to run X unless Y debugs X (or disables the Debugger value). Also less noticed is that not all attempts to run X will instead run Y: a debugger's attempt to run X as a debuggee will not be diverted.
TLDR of Geoff's great answer - use DEBUG_PROCESS or DEBUG_ONLY_THIS_PROCESS to bypass the Debugger / Image File Execution Options (IFEO) global flag and avoid the recursion.
In C#, using the excellent Vanara.PInvoke.Kernel32 NuGet:
// Launch the target with a debug flag: the kernel then skips the IFEO
// Debugger check (IFEOSkipDebugger), so there is no recursion.
var startupInfo = new STARTUPINFO();
var creationFlags = Kernel32.CREATE_PROCESS.DEBUG_ONLY_THIS_PROCESS;
CreateProcess(path, null, null, null, false, creationFlags, null, null, startupInfo, out var pi);
// We don't actually want to debug anything, so detach right away.
DebugActiveProcessStop(pi.dwProcessId);
Note that DebugActiveProcessStop was key for me (otherwise I couldn't see a window when opening notepad.exe) - and it makes sense anyway if your program is not really a debugger and you just want the bypass.
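For the same trick without .NET, here is a plain Win32 sketch (the notepad.exe path is just an example target):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    wchar_t cmd[] = L"C:\\Windows\\System32\\notepad.exe";

    /* DEBUG_ONLY_THIS_PROCESS sets IFEOSkipDebugger, so the kernel
       ignores any Debugger value for this executable. */
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                        DEBUG_ONLY_THIS_PROCESS, NULL, NULL, &si, &pi))
    {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* We never wanted a debug loop, so detach immediately;
       the child keeps running. */
    DebugActiveProcessStop(pi.dwProcessId);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}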
After seeing mathiasbynens' dotfiles, I've decided that I want to start building a script to configure all my system preferences to my liking.
As part of this, I need to decrease the Time Machine backup frequency (to reduce wear on my NAS's hard disk).
After doing some reading online [1], I have concluded that the file I need to edit is /System/Library/LaunchDaemons/com.apple.backupd-helper.plist.
I know that this is possible via the defaults command. Here is the section of the file I want to change:
$ defaults read /System/Library/LaunchDaemons/com.apple.backupd-helper LaunchEvents
{
    "com.apple.xpc.activity" = {
        "com.apple.backupd-auto" = {
            AllowBattery = 1;
            Delay = 3600;
            GracePeriod = 1800;
            Interval = 3600;
            PowerNap = 1;
            Priority = Utility;
            Repeating = 1;
        };
    };
}
The problem is that, due to the dots (.) in the path to the Delay property, I cannot figure out how to specify said path directly.
I have tried LaunchEvents.\"com.apple.xpc.activity\", 'LaunchEvents."com.apple.xpc.activity"', and many variations thereof.
[1] https://staff.eecis.udel.edu/docs/timemachine/frequency/
I took a copy of /System/Library/LaunchDaemons/com.apple.backupd-helper.plist and saved it elsewhere as a.plist:
cp "/System/Library/LaunchDaemons/com.apple.backupd-helper.plist" /tmp/a.plist
Then I played around with PlistBuddy until I got this, which seems to work:
/usr/libexec/PlistBuddy -c "Set :LaunchEvents:com.apple.xpc.activity:com.apple.backupd-auto:Interval 7200" /tmp/a.plist
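To confirm the edit took, PlistBuddy's Print command reads the value back:

/usr/libexec/PlistBuddy -c "Print :LaunchEvents:com.apple.xpc.activity:com.apple.backupd-auto:Interval" /tmp/a.plist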
Vaguely related to the original topic: it is far more flexible to disable Time Machine's auto-scheduling and replace it with either TimeMachineEditor or simply your own launchd(8) rule, e.g. created via LaunchControl.
Pardon for the long introduction, but I haven't seen any other questions for this on SO.
I'm playing with DRM (the Direct Rendering Manager, the Linux kernel's interface to mode setting) and I'm having difficulty understanding a part of its design.
Basically, I can open a graphics card device on my virtual terminal, set up framebuffers, and change a connector and its CRTC just fine. This lets me render to the VT in a lightweight graphics mode without needing an X server (that's what KMS is about, and in fact the X server uses it underneath).
Then I wanted to implement graceful VT switching, so that when I hit Ctrl+Alt+F3 etc., I can see my other consoles. It turns out this is easy to do by calling ioctl() with the requests from linux/vt.h and handling some user signals.
But then I tried to switch from my graphics program to a running X server. Bzzt, that didn't work at all: X didn't draw anything. After some digging I found that in the Linux kernel, only one program at a time may do kernel mode setting. So what happens is this:
I switch from X to a virtual terminal
I run my program
This program enters graphic mode with drmOpen, drmModeSetCRTC etc.
I switch back to X
X no longer has the privileges to restore its own mode.
Then I found drmDropMaster() and drmSetMaster() in the Wayland source code. These functions are supposed to release and regain the privilege to set modes, so that the X server can continue to work, and after switching back to my program, it can take over from there.
Finally the real question.
These functions require root privileges. This is the part I don't understand. I can mess with kernel modes, but I can't say "okay X11, I'm done playing, I'm giving you the access now"? Why? Or should this work in theory, and I'm just doing something wrong in my code? (e.g. work with wrong file descriptors, or whatever.)
If I try to run my program as a normal user, I get "permission denied". If I run it as root, it works fine - I can switch from X to my program and vice versa.
Why?
Yes, drmSetMaster and drmDropMaster require root privileges because they allow you to do mode setting. Otherwise, any random application could display whatever it wanted to your screen. weston handles this through a setuid launcher program. The systemd people also added functionality to systemd-logind (which runs as root) to do the drm{Set,Drop}Master calls for you. This is what enables recent X servers to run without root privileges. You could look into this if you don't mind depending on systemd.
Your post seems to suggest that you can successfully call drmModeSetCRTC without root privileges. This doesn't make sense to me. Are you sure?
It is up to display servers like X, weston, and whatever you're working on to call drmDropMaster before they invoke the VT_RELDISP ioctl, so that the next session can call drmSetMaster successfully.
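To make the handshake concrete, here is a minimal sketch of that protocol. It assumes drm_fd and tty_fd are already open; the signal choices are a convention, not a requirement, and error handling is omitted (link with -ldrm):

#include <signal.h>
#include <sys/ioctl.h>
#include <linux/vt.h>
#include <xf86drm.h>

static int drm_fd;   /* /dev/dri/card0, opened elsewhere */
static int tty_fd;   /* our controlling VT, opened elsewhere */

static void on_release(int sig)        /* someone wants our VT */
{
    drmDropMaster(drm_fd);             /* give up mode-setting rights */
    ioctl(tty_fd, VT_RELDISP, 1);      /* permit the switch */
}

static void on_acquire(int sig)        /* we are switched back in */
{
    ioctl(tty_fd, VT_RELDISP, VT_ACKACQ);
    drmSetMaster(drm_fd);              /* reclaim rights... */
    /* ...then restore our CRTC with drmModeSetCRTC() */
}

void setup_vt_switching(void)
{
    struct vt_mode mode = { 0 };
    mode.mode   = VT_PROCESS;          /* we manage switches ourselves */
    mode.relsig = SIGUSR1;
    mode.acqsig = SIGUSR2;

    signal(SIGUSR1, on_release);
    signal(SIGUSR2, on_acquire);
    ioctl(tty_fd, VT_SETMODE, &mode);
}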
Before digging into why it doesn't work, I had to understand how it works.
So drmModeSetCRTC and drmSetMaster in libdrm are in reality just ioctl calls:
include/xf86drm.c
int drmSetMaster(int fd)
{
    return ioctl(fd, DRM_IOCTL_SET_MASTER, 0);
}
This is handled by the kernel. In my program, the most important functions that control the display are drmModeSetCRTC and drmModeAddFB; the rest is just diagnostics, really. So let's see how they're handled by the kernel. It turns out there is a big table that maps ioctl events to their handlers:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
};
This table is used by drm_ioctl, whose most interesting part is the call to drm_ioctl_permit.
drivers/gpu/drm/drm_ioctl.c
long drm_ioctl(struct file *filp,
               unsigned int cmd, unsigned long arg)
{
    ...
    retcode = drm_ioctl_permit(ioctl->flags, file_priv);
    if (unlikely(retcode))
        goto err_i1;
    ...
}

static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
    /* ROOT_ONLY is only for CAP_SYS_ADMIN */
    if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
        return -EACCES;

    /* AUTH is only for authenticated or render client */
    if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
                 !file_priv->authenticated))
        return -EACCES;

    /* MASTER is only for master or control clients */
    if (unlikely((flags & DRM_MASTER) && !file_priv->is_master &&
                 !drm_is_control_client(file_priv)))
        return -EACCES;

    /* Control clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_CONTROL_ALLOW) &&
                 drm_is_control_client(file_priv)))
        return -EACCES;

    /* Render clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_RENDER_ALLOW) &&
                 drm_is_render_client(file_priv)))
        return -EACCES;

    return 0;
}
Everything makes sense so far. I can indeed call drmModeSetCRTC because I am the current DRM master. (I'm not sure why. It might be that X properly waives its rights once I switch to another VT, and that alone lets me automatically become the new DRM master once I start issuing ioctls?)
Anyway, let's take a look at the drmDropMaster and drmSetMaster definitions:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
    DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
    ...
};
What.
So my confusion was justified: I'm not doing anything wrong; things really are this way.
I'm under the impression that this is a serious kernel bug. Either I shouldn't be able to set the CRTC at all, or I should be able to drop/set master. In any case, revoking every non-root program's right to draw to the screen because
any random application could display whatever it wanted to your screen
is too aggressive. I, as the user, should have the freedom to control that without giving root access to the whole program, and without depending on systemd - for example via chmod 0777 /dev/dri/card0 (or group management). As it is now, it looks to me like a lazy man's answer to proper permission management.
Thanks for writing this up. This is indeed the expected outcome; you don't need to look for a subtle bug in your code.
It's definitely intended that you can become the master implicitly. A developer wrote example code as the initial documentation for DRM, and it does not use SetMaster. There is also a comment in the source code (now in drm_auth.c): "successfully became the device master (either through the SET_MASTER IOCTL, or implicitly through opening the primary device node when no one else is the current master that time)".
DRM_ROOT_ONLY is commented as
/**
* #DRM_ROOT_ONLY:
*
* Anything that could potentially wreak a master file descriptor needs
* to have this flag set. Current that's only for the SETMASTER and
* DROPMASTER ioctl, which e.g. logind can call to force a non-behaving
* master (display compositor) into compliance.
*
* This is equivalent to callers with the SYSADMIN capability.
*/
The above requires some clarification, IMO. The way logind forces a non-behaving master is not simply by calling SETMASTER for a different master - that would actually fail. First, it must call DROPMASTER on the non-behaving master. So logind relies on this permission check to make sure the non-behaving master cannot race logind and call SETMASTER first.
Equally, logind assumes the unprivileged user doesn't have permission to open the device node directly. I suspect the ability to implicitly become master on open() is some form of backwards compatibility.
Notice that even if you could drop master, you couldn't use SETMASTER to get it back. This makes the point of doing so rather limited - you couldn't use it to implement the traditional switching back and forth between multiple graphics servers.
There is a way you can drop the master and get it back: close the fd, and re-open it when needed. It sounds to me like this would match how old-style X (pre-DRM?) worked - wasn't it possible to switch between multiple instances of the X server, and each of them would have to completely take over the hardware? So you always had to start from scratch after a VT switch. This is not as good as being able to switch masters though; logind says
/* On DRM devices we simply drop DRM-Master but keep it open.
* This allows the user to keep resources allocated. The
* CAP_SYS_ADMIN restriction to DRM-Master prevents users from
* circumventing this. */
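A sketch of the close-and-reopen fallback mentioned above (the card path is an assumption; error handling omitted):

#include <fcntl.h>
#include <unistd.h>

/* Fallback without DROP/SET_MASTER: give the device up entirely and
 * reacquire it later by reopening. */
int release_and_reopen(int drm_fd)
{
    close(drm_fd);   /* someone else may now become master */

    /* ... the other server runs; eventually we get the VT back ... */

    /* If nobody else is master at this moment, opening the node makes us
     * master implicitly - but every framebuffer and saved CRTC state has
     * to be rebuilt from scratch. */
    return open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
}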
As of Linux 5.8, drmDropMaster() no longer requires root privileges.
The relevant commit is 45bc3d26c ("drm: rework SET_MASTER and DROP_MASTER perm handling").
The source code comments provide a good summary of the old and new situation:
In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
from becoming master and/or failing to release it.
At the same time, the first client (for a given VT) is always master.
Thus in order for the ioctls to succeed, one had to explicitly run the
application as root or flip the setuid bit.
If the CAP_SYS_ADMIN was missing, no other client could become master...
EVER :-( Leading to a) the graphics session dying badly or b) a completely
locked session.
...
Here we implement the next best thing:
- ensure the logind style of fd passing works unchanged, and
- allow a client to drop/set master, iff it is/was master at a given point in time.
...
Is there a way to make MATLAB come to the foreground of the windows when a command completes? I can imagine doing it by executing a dos() call, but I don't know how window management works. Maybe there is a better way?
Two options. Neither exactly what you are asking for.
Option 1: Open a new figure.
figure();
imagesc(processingDoneSplashImage);
If you want to get fancy, put this in a script, with a timer, and flash the image between bright green and bright red...
Option 2: My solution to your problem. (I find pop-up windows extremely annoying.) I put this function call at the end of my long-running scripts, and the computer tells me when it's done processing:
function [] = matSpeak(textToSpeak)
%matSpeak speaks the given text through the speakers,
% using the .NET SpeechSynthesizer.
% This only works on Windoze.

    if ~exist('textToSpeak', 'var')
        textToSpeak = 'Your processing has completed.';
    end

    NET.addAssembly('System.Speech');
    speak = System.Speech.Synthesis.SpeechSynthesizer;
    speak.Volume = 100;
    speak.Speak(textToSpeak);
end
Why not just use Growl for your notification windows?
cmd = ['/usr/local/bin/growlnotify -m ' messagestr];  % messagestr: your notification text
system(cmd);
Of course, on Windows you need to fix the path to the growlnotify binary.
Source: http://www.mathworks.com/matlabcentral/newsreader/view_thread/259142
A wrapper with lots of features: Send a notification to Growl on MATLAB Exchange
Many more examples: https://www.google.com/search?q=growl+matlab
Can I, using an address found in a map file, use windbg to alter a variable in memory while the app is running?
I'm really interested in turning functionality on and off at run time, perhaps with a variable.
How would you do this? Does it require breaking the app through the debugger?
If you have the address, you can use any of the e* (Enter Values) commands.
You can attach to any running process if you know the process ID, or you can launch the exe directly with CDB. You do have to break into the process to make any modifications. In CDB, you can press Ctrl+C and it will inject a DebugBreak into the process; you can then look at the stack, threads, and memory.
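For example, with a raw address taken from the map file and no symbols (the address below is a placeholder), the e*/d* commands work directly on memory:

ed 00432a10 1
[[ write the DWORD 1 at the address from the map file ]]
dd 00432a10 L1
[[ verify the new contents ]]
g
[[ resume execution ]]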
I just did it. Assuming you have the symbols mapped and are at a breakpoint where you can see a variable, you just do this (assuming "myvar" is an integer):
?? myvar
[[ this shows the current contents ]]
?? myvar=55
[[ this will change the value of myvar to 55]]
?? myvar
[[ this will show the updated contents of myvar - which is 55 ]]
g
[[ your program will now run, and the next read of myvar will produce 55 ]]
You can set a breakpoint that is only hit once and edit a value and continue execution.
Something like:
bp /1 012ABCDEF "?? myVar = 42; g"
replace the above with your address value and your variable name.
I'm not sure what exactly you're trying to achieve, but a debugger is only activated by some event (an exception, a breakpoint, or the like). For example, you could have a thread that raises an exception, and once the debugger gets control, check the variable.
In the debugger you can set a breakpoint with a command attached (see this guide) that will change the parameter.
I hope this answers your question; if not, please clarify it.
With a breakpoint-plus-command, the application will break and then continue execution without human intervention; I don't know of any way a debugger can do something without the application stopping execution.
Just a thought: are you sure you need a debugger for this? Can't you just use the registry for that, and use this to get notified about registry changes?
You can definitely do that. Either break on a function and edit the variable in the Locals window, or use the e* commands to edit values. Check out the WinDbg help for more on it.