Get permission to do PTRACE_ATTACH - ptrace

I'm trying to write a program that does ptrace(PTRACE_ATTACH, pid, nullptr, nullptr), but it returns -1 and errno is 3 (No such process). The tracees are running and were started by me, so I'd assume the tracer has permission. What should I do?
Also, ptrace seems to operate on a per-thread basis. Is there an easy way to get all the thread IDs for a given process ID? Is the only way to check /proc/{pid}/task/{tid}? And how can I catch thread creation?

You can catch thread creation using ptrace(PTRACE_SETOPTIONS, pid, nullptr, PTRACE_O_TRACECLONE). I'd suggest carefully reading man 2 ptrace.
PTRACE_ATTACH might fail for numerous reasons. Try running your process as root with sudo.
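As a rough sketch of the thread-enumeration part (the helper name is mine, and it assumes the tracer is allowed to attach, e.g. it runs as root): list the entries under /proc/<pid>/task, attach to each TID, and set PTRACE_O_TRACECLONE on each so newly created threads are reported to you. Real code would rescan the directory until no new TIDs appear, since threads can be created while you are attaching.
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

// Attach to every thread currently listed under /proc/<pid>/task and ask for
// PTRACE_EVENT_CLONE stops so future thread creation is reported as well.
static void attach_all_threads(pid_t pid)
{
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/task", (int)pid);

    DIR *dir = opendir(path);
    if (!dir) { perror("opendir"); return; }

    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.')
            continue;                        // skip "." and ".."
        pid_t tid = (pid_t)atoi(ent->d_name);

        if (ptrace(PTRACE_ATTACH, tid, NULL, NULL) == -1) {
            perror("PTRACE_ATTACH");         // e.g. the thread exited meanwhile
            continue;
        }
        waitpid(tid, NULL, __WALL);          // wait for the attach-stop
        ptrace(PTRACE_SETOPTIONS, tid, NULL,
               (void *)PTRACE_O_TRACECLONE); // report threads this one creates
    }
    closedir(dir);
}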

Related

How to check if a process started in the background is still running?

It looks like if you create a subprocess via exec.Cmd and Start() it, the Cmd.Process field is populated right away; however, the Cmd.ProcessState field remains nil until the process exits.
// ProcessState contains information about an exited process,
// available after a call to Wait or Run.
ProcessState *os.ProcessState
So it looks like I can't actually check the status of a process I Start()ed while it's still running?
It makes no sense to me that ProcessState is only set when the process exits. There's a ProcessState.Exited() method, which will always return true in this case.
So I tried to go this route instead: the cmd.Process.Pid field exists right after I call cmd.Start(); however, it looks like os.Process doesn't expose any mechanism to check whether the process is running.
os.FindProcess says:
On Unix systems, FindProcess always succeeds and returns a Process for the given pid, regardless of whether the process exists.
which isn't useful, and it seems like there's no way to go from an os.Process to an os.ProcessState unless you call .Wait(), which defeats the whole purpose (I want to know whether the process is running before it has exited).
I think you have two reasonable options here:
Spin off a goroutine that waits for the process to exit. When the wait is done, you know the process exited. (Positive: pretty easy to code correctly; negative: you dedicate an OS thread to waiting.)
Use syscall.Wait4() on the published Pid. A Wait4 with syscall.WNOHANG set returns immediately, filling in the status (see the sketch below).
It might be nice if there were an exported os or cmd function that did the Wait4 for you and filled in the ProcessState. You could supply WNOHANG or not, as you see fit. But there isn't.
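For reference, this is the underlying non-blocking wait that syscall.Wait4 with WNOHANG exposes. A minimal C sketch (the function name is mine), assuming pid is a direct child of the caller:
#include <sys/types.h>
#include <sys/wait.h>

// Returns 1 if the child is still running, 0 if it has changed state
// (normally: exited) and *status was filled in, -1 on error.
int check_child(pid_t pid, int *status)
{
    pid_t r = waitpid(pid, status, WNOHANG);   // never blocks
    if (r == 0)
        return 1;    // no state change yet: still running
    if (r == pid)
        return 0;    // child exited; inspect *status with WIFEXITED/WEXITSTATUS
    return -1;       // error, e.g. not our child or already reaped
}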
The point of ProcessState.Exited() is to distinguish between all the various possibilities, including:
process exited normally (with a status byte)
process died due to receiving an unhandled signal
See the stringer for ProcessState. Note that there are more possibilities than these two ... only there seems to be no way to get the others into a ProcessState. The only calls to syscall.Wait seem to be:
syscall/exec_unix.go: after a failed exec, to collect zombies before returning an error; and
os/exec_unix.go: after a call to p.blockUntilWaitable().
If it were not for the blockUntilWaitable, the exec_unix.go implementation variant for wait() could call syscall.Wait4 with syscall.WNOHANG, but blockUntilWaitable itself ensures that this is pointless (and the goal of this particular wait is to wait for exit anyway).

Trying to implement `signal.CTRL_C_EVENT` in Python 3.6

I'm reading about signals and am attempting to implement signal.CTRL_C_EVENT
From what I'm understanding, if the user presses Ctrl+C while the program is running, a signal will be sent to kill the program. Can I specify the program as a parameter?
My attempt to test out the usage:
import sys
import signal
import time
import os
os.kill('python.exe', signal.CTRL_C_EVENT)
while(1):
    print ("Wait...")
    time.sleep(10)
However, it seems I need a PID number, and 'python.exe' doesn't work. I looked under Processes and can't seem to find a PID number. I did see a PID column under Services, but there were so many services that I couldn't find a Python one.
So how do I find the PID number?
Also, does signal.CTRL_C_EVENT always have to be used with os.kill?
Can it be used for other purposes?
Thank you.
Windows doesn't implement Unix signals, so Python fakes os.kill. Unfortunately its implementation is confusing. It should have been split up into os.kill and os.killpg, but we're stuck with an implementation that mixes the two. To send Ctrl+C or Ctrl+Break, you need to use os.kill as if it were really os.killpg.
When its signal argument is either CTRL_C_EVENT (0) or CTRL_BREAK_EVENT (1), os.kill calls WinAPI GenerateConsoleCtrlEvent. This instructs the console (i.e. the conhost.exe instance that's hosting the console window of the current process) to send the event to a given process group ID (PGID). Group ID 0 is special-cased to broadcast the event to all processes attached to the console. Otherwise, a process group ID is the ID of the lead process in a process group. Every process is either created as the leader of a new group or inherits the group of its parent. A new group can be created via the CreateProcess creation flag CREATE_NEW_PROCESS_GROUP.
If either calling GenerateConsoleCtrlEvent fails (e.g. the current process isn't attached to a console) or the signal argument isn't one of the above-mentioned control events, then os.kill instead attempts to open a handle for the given process ID (PID) with terminate access and call WinAPI TerminateProcess. This function is like sending a SIGKILL signal in Unix, but with a variable exit code. Note the confusion in that it operates on an individual process (i.e. kill), not a process group (i.e. killpg).
Windows doesn't provide a function to get the group ID of a process, so generally the only way to get a valid PGID is to create the process yourself. You can pass the CREATE_NEW_PROCESS_GROUP flag to subprocess.Popen via its creationflags parameter. Then you can send Ctrl+Break to the child process and all of its children that are in the same group, but only if it's a console process that's attached to the same console as your current process, i.e. it won't work if you also use any of these flags: CREATE_NEW_CONSOLE, CREATE_NO_WINDOW, or DETACHED_PROCESS. Also, Ctrl+C is disabled in such a process, unless the child manually enables it via WinAPI SetConsoleCtrlHandler.
Only use os.kill(os.getpid(), signal.CTRL_C_EVENT) when you know for certain that your current process was started as the lead process of a group. Otherwise the behavior is undefined, and in practice it works like sending to process group ID 0.
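For illustration, here is roughly what that looks like at the WinAPI level, which is what subprocess.Popen(..., creationflags=CREATE_NEW_PROCESS_GROUP) arranges for you; this is a sketch only, and the child command line is a made-up example:
#include <windows.h>

int main(void)
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"python child.py";   // example child; must share our console

    // Make the child the leader of a new process group so Ctrl+Break can be
    // targeted at it (and its same-group descendants) specifically.
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                        CREATE_NEW_PROCESS_GROUP, NULL, NULL, &si, &pi))
        return 1;

    Sleep(2000);   // let the child get going

    // The process group ID is the PID of the group leader, i.e. the child.
    // As explained above, Ctrl+C is disabled in the new group by default,
    // so Ctrl+Break is the event to send here.
    GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pi.dwProcessId);

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}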
You can get pid via os.getpid()
os.kill(os.getpid(), signal.CTRL_C_EVENT)

Windows Named Pipe Access control

My process (the server) creates a child process (the client) with CreateProcess, and I am doing IPC between these processes. I began with an anonymous pipe, but soon found that it does not support overlapped operations, as explained here.
So a named pipe is my second choice. My confusion is: if I create a named pipe, is it possible to limit access to this pipe to only the child process created by the previous call to CreateProcess? That way, even if another process obtains the pipe's name, it still cannot read from or write to the pipe.
My IPC usage is limited to the local machine and a single platform (Windows).
BTW, I can change the code for both processes.
You could explicitly assign an ACL to the new pipe by using the lpSecurityAttributes parameter. This would allow you to ensure that, if another user is logged on, they can't connect to the pipe.
However, if you create both ends of the pipe in the parent process there is very little scope for malfeasance, so in general explicitly setting an ACL is not necessary. Once you have opened the client end of the pipe, no other process can connect to the pipe anyway (you would have to create a second instance if you wanted them to do so) so there is only a very brief interval during which another process could interfere; and even if that happened, you wouldn't be able to connect the client end, so you would know something had gone wrong.
In other words, the scope for attack is limited to denial of service, and since the attacking process would need to be running on the same machine, it can achieve a much more effective denial of service simply by tanking the CPU.
Note that:
You should use the FILE_FLAG_FIRST_PIPE_INSTANCE flag when creating the pipe, to ensure that you know if there is a name collision (see the sketch after these notes).
You should also use PIPE_REJECT_REMOTE_CLIENTS for obvious reasons.
The default permissions on a named pipe do not allow other non-administrative users to create a new instance, so a man-in-the-middle style attack is not a risk in this case.
A malicious process running as the same user, or as an administrative user, could potentially man-in-the-middle your connection (regardless of whether you set an ACL or not) but since any such malicious process could also inject malicious code directly into the parent and/or child there is little point in worrying about it. The attacker is already on the wrong side of the air-tight hatchway; locking the windows won't do you any good.
If your process is running with elevated privilege, you probably should set an ACL on the pipe. The default ACL would potentially allow non-elevated processes running as the same user context to man-in-the-middle the connection. You can resolve this by setting an ACL that grants full access only to Administrators. The risk is still minimal, but in this particular case a defense-in-depth measure is probably appropriate.
An anonymous pipe is implemented as a named pipe with a unique name, so you haven't actually lost anything by using a named pipe. An attacker could in principle man-in-the-middle an anonymous pipe just as easily as a named one. (Edit: according to RbMm, this is no longer true.)
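Putting the first two notes together, the server-side creation call might look roughly like this; the pipe name is a placeholder, and the security-attributes parameter is where you would supply an ACL if you decide you need one:
#include <windows.h>

// Server end: first (and only) instance, overlapped I/O, local clients only.
HANDLE CreateServerPipe(void)
{
    return CreateNamedPipeW(
        L"\\\\.\\pipe\\my-app-ipc",             // placeholder name
        PIPE_ACCESS_DUPLEX
            | FILE_FLAG_OVERLAPPED              // we want asynchronous I/O
            | FILE_FLAG_FIRST_PIPE_INSTANCE,    // fail if the name already exists
        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE
            | PIPE_REJECT_REMOTE_CLIENTS,       // local machine only
        1,                                      // a single instance
        4096, 4096,                             // out/in buffer sizes
        0,                                      // default time-out
        NULL);                                  // default security descriptor;
                                                // pass SECURITY_ATTRIBUTES with
                                                // an explicit ACL here if needed
}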
Asynchronous (overlapped) operations are of course fully supported by anonymous pipes. Whether asynchronous operations are supported depends only on whether FILE_SYNCHRONOUS_IO_[NO]NALERT is used in the call to ZwCreateNamedPipeFile or ZwOpenFile, not on which name (or empty name) the pipe has. CreatePipe creates its pipe pair with the FILE_SYNCHRONOUS_IO_NONALERT option, and only because of this the handles returned from that API cannot be used in asynchronous operations. Unfortunately CreatePipe has no parameter to change this behavior, but we can do the job ourselves.
Beginning with Vista we can create an anonymous (unnamed) asynchronous pipe pair, but for this you need to use the ntdll API. The following code is almost identical to the internal CreatePipe code, except that it creates an asynchronous pipe pair.
NTSTATUS CreatePipeAnonymousPair(PHANDLE phServerPipe, PHANDLE phClientPipe)
{
    HANDLE hFile;
    IO_STATUS_BLOCK iosb;
    static UNICODE_STRING NamedPipe = RTL_CONSTANT_STRING(L"\\Device\\NamedPipe\\");
    OBJECT_ATTRIBUTES oa = { sizeof(oa), 0, &NamedPipe, OBJ_CASE_INSENSITIVE };
    NTSTATUS status;

    // Open the named-pipe file system device, \Device\NamedPipe\
    if (0 <= (status = ZwOpenFile(&hFile, SYNCHRONIZE, &oa, &iosb, FILE_SHARE_VALID_FLAGS, 0)))
    {
        oa.RootDirectory = hFile;

        static LARGE_INTEGER timeout = { 0, MINLONG };
        static UNICODE_STRING empty = {};
        oa.ObjectName = &empty;

        // Create the server end with an empty (anonymous) name, relative to the device
        if (0 <= (status = ZwCreateNamedPipeFile(phServerPipe,
            FILE_READ_ATTRIBUTES|FILE_READ_DATA|
            FILE_WRITE_ATTRIBUTES|FILE_WRITE_DATA|
            FILE_CREATE_PIPE_INSTANCE,
            &oa, &iosb, FILE_SHARE_READ|FILE_SHARE_WRITE,
            FILE_CREATE, 0, FILE_PIPE_BYTE_STREAM_TYPE, FILE_PIPE_BYTE_STREAM_MODE,
            FILE_PIPE_QUEUE_OPERATION, 1, 0, 0, &timeout)))
        {
            oa.RootDirectory = *phServerPipe;
            oa.Attributes = OBJ_CASE_INSENSITIVE|OBJ_INHERIT;

            // Open the client end relative to the server pipe; mark it inheritable
            if (0 > (status = ZwOpenFile(phClientPipe, FILE_READ_ATTRIBUTES|FILE_READ_DATA|
                FILE_WRITE_ATTRIBUTES|FILE_WRITE_DATA, &oa, &iosb, FILE_SHARE_VALID_FLAGS, 0)))
            {
                ZwClose(oa.RootDirectory);
                *phServerPipe = 0;
            }
        }

        ZwClose(hFile);
    }

    return status;
}
Note that hClientPipe is created as inheritable, so you can pass it to the child process. Also, when you use hServerPipe in ConnectNamedPipe you will get FALSE with GetLastError() == ERROR_PIPE_CONNECTED (because the client is already connected), or if you use FSCTL_PIPE_LISTEN you will get STATUS_PIPE_CONNECTED. This is not really an error but a success code.
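To make that concrete, a minimal usage sketch (error handling trimmed; how you hand the client handle to the child is up to you):
HANDLE hServerPipe, hClientPipe;
if (0 <= CreatePipeAnonymousPair(&hServerPipe, &hClientPipe))
{
    OVERLAPPED ov = {};
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);

    // The client end is already open, so this is expected to "fail"
    // with ERROR_PIPE_CONNECTED, which just means we're ready to go.
    if (!ConnectNamedPipe(hServerPipe, &ov) &&
        GetLastError() != ERROR_PIPE_CONNECTED)
    {
        // a real error
    }

    // ... pass hClientPipe to the child (it is inheritable), close our copy,
    // and do overlapped ReadFile/WriteFile on hServerPipe.
}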

Communicating between Ruby processes, loops

I have a Ruby application which must run 24/7 to process information for a web API; both run on Google Compute Engine on a Debian instance, and the API is served by Sinatra. When I run this script in a loop, it uses up the single vCPU core. Using a message queuing system like RabbitMQ to pass messages from the API to the backend script seems to me to skip a learning opportunity for communicating between Ruby scripts natively.
How do I keep a script dormant, i.e. awaiting instruction but not consuming 99% of the CPU? I'm assuming it's not going to be an infinite loop, but I'm stumped on this.
How would it be best to communicate this message from one script to another? I read about Kernel#select and forking subprocesses, but I haven't encountered any definitive or comprehensible solution.
Forking may indeed be a good solution for you, and you only need to understand three system calls to make good use of it: fork(), waitpid() and exec(). I'm not a Ruby guy, so hopefully my C-like explanation will make enough sense for you to fill in the blanks.
The way fork() works is that the operating system makes a byte-for-byte copy of the calling process's virtual memory space as it was when fork() was called and carves out new memory to place the copy into. This creates a new process with its parent's exact state, except that the child process's fork() call returns 0, while the parent's returns the PID of the new child process. This allows the child process to know that it is a child, and the parent process to know who its children are.
While fork() copies its caller's process image, the exec() system call replaces its caller's process image with a brand new one, as specified by its arguments.
The waitpid() system call is used by the parent process to wait for a return value from a specific child process (one whose process ID was returned to the parent by the fork() call), and then properly log the process' completion with the OS. Even if you don't need your child process' return value, you should call waitpid() on it anyway so you don't end up accumulating "zombie processes."
Again, I'm not a Ruby guy, so hopefully my C-like pseudocode makes sense. Consider the following server:
while(1) {  # an infinite loop
    # Wait for and accept connections from your web API.

    pid = fork();  # fork() returns a process ID number

    # If fork() returns a negative number, something went wrong.
    if(pid < 0) {
        exit(1);
    }
    # If fork() returns 0, this is the child process.
    else if(pid == 0) {
        # Remember that because fork() copies your program's state,
        # you can use variables you assigned before the fork to
        # send to the new process as arguments.
        exec("./processingscript.rb", "processingscript.rb", arg1, arg2, arg3, ...);
    }
    # If fork() returns a number greater than 0 (the PID of the forked
    # child process), this is the parent process.
    else if(pid > 0) {
        childreturnvalue = waitpid(pid);  # parent process hangs here until
                                          # the process with the ID number
                                          # pid returns.
    }
}
Written this way, your CPU-intensive script only runs when a connection is received from the web API. It does its processing and then terminates, waiting to be called again. You can also specify "no hang" options for waitpid() so that you can fork multiple instances of your processing script concurrently without having your server hang every time it needs to wait for an instance of that script to complete (see the sketch below).
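Still in C-like terms rather than Ruby, the "no hang" variant looks roughly like this; WNOHANG makes waitpid() return immediately instead of blocking, so the server loop can reap any children that have already finished and carry on:
#include <sys/wait.h>

// Reap every child that has already exited, without blocking the server loop.
void reap_finished_children(void)
{
    int status;

    // waitpid(-1, ...) waits for any child. With WNOHANG it returns a child's
    // PID if one has exited, 0 if children exist but none have exited yet,
    // and -1 if there are no children left (or on error).
    while (waitpid(-1, &status, WNOHANG) > 0) {
        // a child was reaped; WIFEXITED(status)/WEXITSTATUS(status)
        // describe how it finished
    }
}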
Hope this helps! Perhaps somebody who knows Ruby can edit this to be a bit more idiomatic to the language.

CreateProcess returns non-zero but GetExitCodeProcess() returns 128

I am creating an application that starts another process using CreateProcess(). In the parent process I use GetExitCodeProcess() to check whether the child process is active or not.
Here CreateProcess() succeeds (it returns a non-zero value), but GetExitCodeProcess() returns 128 (there are no child processes to wait for). I am not seeing any trace of the child process having started (it usually prints some debug output). This happens intermittently.
Any idea what really happened to the child process? Where can I get more information (the system/application event logs, perhaps)?
Please guide me.
Thanks,
Naga
Thanks for your comments.
I have found the following Microsoft Knowledge Base articles, which describe the same symptoms and a resolution for the problem.
Cmd.exe, Perl.exe, or other console-mode applications may fail to initialize properly and terminate prematurely when launched by a service using the CreateProcess() or CreateProcessAsUser() APIs. The calling process has no way of knowing that the launched console-mode application has terminated prematurely.
In some instances, calling GetExitCode() against the failed process indicates the following exit code:
128L ERROR_WAIT_NO_CHILDREN - There are no child processes to wait for.
http://support.microsoft.com/kb/156484
http://support.microsoft.com/kb/142676/EN-US
http://support.microsoft.com/kb/175687/EN-US
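As an aside, for the original goal of checking whether a process launched with CreateProcess() is still active, the usual pattern is roughly the following (a sketch only; the helper name is mine and error handling is trimmed):
#include <windows.h>

// pi is the PROCESS_INFORMATION filled in by a successful CreateProcess() call.
// Returns TRUE while the child is still running, FALSE once it has exited.
BOOL ChildStillRunning(const PROCESS_INFORMATION *pi, DWORD *exitCode)
{
    // A zero timeout makes this a non-blocking status check.
    if (WaitForSingleObject(pi->hProcess, 0) == WAIT_TIMEOUT)
        return TRUE;

    // The process has ended; fetch its exit code. Note that GetExitCodeProcess()
    // reports STILL_ACTIVE (259) for a running process, which is why the wait
    // above is checked first.
    GetExitCodeProcess(pi->hProcess, exitCode);
    return FALSE;
}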
Thanks,
Naga
