If we let a thread take the semaphore with down_read(&current->mm->mmap_sem), it gets read-only access to mmap (the list of VMA areas), and other threads are no longer able to change mmap while it is held. I'm reading the source code, but I'm still confused about how down_read() achieves that.
The basic idea is:
free lock = count 0
down_read() decrements the count: -1 for each reader, but only while the count is <= 0 (no writer holds the lock)
up_read() increments the count: +1 when a reader has finished reading
down_write() sets the count to 1, but only if it is 0, i.e. free
up_write() sets it back to 0
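A minimal user-space sketch of that counting scheme (purely illustrative: the real kernel rwsem uses a biased counter, wait queues and optimistic spinning rather than this busy-wait loop, and the function names here are made up):

#include <stdatomic.h>

/* Simplified model: 0 = free, -N = N active readers, +1 = a writer holds it. */
typedef struct { atomic_long count; } rwsem_model_t;   /* initialize count to 0 */

static void model_down_read(rwsem_model_t *s)
{
    long c;
    do {
        c = atomic_load(&s->count);
        while (c > 0)                           /* a writer holds it: wait */
            c = atomic_load(&s->count);
    } while (!atomic_compare_exchange_weak(&s->count, &c, c - 1));
}

static void model_up_read(rwsem_model_t *s)
{
    atomic_fetch_add(&s->count, 1);             /* one reader done */
}

static void model_down_write(rwsem_model_t *s)
{
    long expected = 0;
    while (!atomic_compare_exchange_weak(&s->count, &expected, 1))
        expected = 0;                           /* only succeeds when count == 0 (free) */
}

static void model_up_write(rwsem_model_t *s)
{
    atomic_store(&s->count, 0);                 /* back to free */
}

Readers only make progress while the count is <= 0, so a writer (count == 1) blocks them; a writer only gets in when the count is exactly 0, so any active reader blocks it.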
Up until Linux 5.8, CAP_SYS_ADMIN was required to load any but the most basic BPF programs. The recently introduced CAP_BPF is a welcome addition, as it allows running software that leverages BPF with fewer privileges.
Certain types of BPF programs can access packet data. The pre-4.7 way of doing it is via the bpf_skb_load_bytes() helper. As the verifier got smarter, it became possible to perform "direct packet access", i.e. to access packet bytes by following pointers in the context structure. E.g.:
static const struct bpf_insn prog[] = {
    // BPF_PROG_TYPE_SK_REUSEPORT: gets a pointer to sk_reuseport_md (r1).
    // Get packet data pointer (r2) and ensure length >= 2, goto Drop otherwise.
    BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1,
                offsetof(struct sk_reuseport_md, data)),
    BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1,
                offsetof(struct sk_reuseport_md, data_end)),
    BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
    BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 2),
    BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, /* Drop: */ +4),
    // Ensure the first 2 bytes are 0, goto Drop otherwise.
    BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_2, 0),
    BPF_JMP_IMM(BPF_JNE, BPF_REG_4, 0, /* Drop: */ +2),
    // return SK_PASS
    BPF_MOV32_IMM(BPF_REG_0, SK_PASS),
    BPF_EXIT_INSN(),
    // Drop: return SK_DROP
    BPF_MOV32_IMM(BPF_REG_0, SK_DROP),
    BPF_EXIT_INSN()
};
The program must explicitly ensure that the accessed bytes are within bounds; the verifier will reject it otherwise.
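For context, such a program can be loaded with a raw bpf(2) syscall roughly as below (a minimal sketch: error handling and the subsequent attach step are omitted, load_prog and log_buf are illustrative names, and the exact bpf_attr fields should be checked against the kernel's uapi headers):

#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static char log_buf[1 << 16];

/* Load the instruction array shown above as a SK_REUSEPORT program. */
static int load_prog(const struct bpf_insn *insns, size_t insn_cnt)
{
    union bpf_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.prog_type = BPF_PROG_TYPE_SK_REUSEPORT;
    attr.insns     = (unsigned long)insns;
    attr.insn_cnt  = insn_cnt;
    attr.license   = (unsigned long)"GPL";
    attr.log_buf   = (unsigned long)log_buf;
    attr.log_size  = sizeof(log_buf);
    attr.log_level = 1;                        /* ask the verifier for a trace */

    int fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    if (fd < 0)
        fprintf(stderr, "%s\n", log_buf);      /* verifier output as quoted below */
    return fd;
}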
The program above loads successfully if the caller bears CAP_SYS_ADMIN. Supposedly, CAP_BPF should suffice as well, but it doesn't (Linux 5.13). Earlier kernels behave similarly. The verifier output follows:
Permission denied
0: (79) r2 = *(u64 *)(r1 +0)
1: (79) r3 = *(u64 *)(r1 +8)
2: (bf) r4 = r2
3: (07) r4 += 2
4: (2d) if r4 > r3 goto pc+4
R3 pointer comparison prohibited
processed 5 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
I understand that arbitrary pointer comparison is restricted because it reveals the kernel memory layout. However, comparing a pointer into packet data, offset by a known amount, with the packet-end pointer is safe.
I'd like to find a way to load the program without granting CAP_SYS_ADMIN.
Is there a way to write the bounds checks so that they don't trigger the pointer comparison error?
The relevant code is in check_cond_jmp_op(). It looks like one can't get away with pointer comparison, even on the latest kernel version.
If there's no way to write the bounds checks in a way that keeps the verifier happy, I wonder if lifting the limitation is on the roadmap.
As a workaround, I can grant CAP_PERFMON on top of CAP_BPF, removing the "embargo" on pointer comparison. The program then loads successfully. I can probably restrict perf_event_open() and the other superfluous bits with seccomp. It doesn't feel nice, though.
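For the seccomp part of that workaround, a minimal sketch could look like this (it only blocks perf_event_open(2) and skips the architecture check a production filter should include; deny_perf_event_open is an illustrative name):

#include <errno.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

/* Make perf_event_open() fail with EPERM; allow every other syscall. */
static int deny_perf_event_open(void)
{
    struct sock_filter filter[] = {
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_perf_event_open, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
        return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}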
Reproducer.
To make direct packet accesses in your program, you will need CAP_PERFMON in addition to CAP_BPF. I'm not aware of any way around it.
Why?
Because of the Spectre vulnerabilities, someone able to perform arithmetic on unbounded pointers (i.e., all pointers except stack and map value pointers) can read arbitrary memory via speculative out-of-bounds loads.
Such operations therefore need to be forbidden for unprivileged users. Allowing CAP_BPF users to perform them would essentially give CAP_BPF read access to arbitrary memory. For those reasons, I doubt this limitation will be lifted in the future.
I want to read certain performance counters. I know there are tools like perf that can do this from user space, but I want the code to live inside the Linux kernel.
I want to write a mechanism to monitor performance counters on an Intel(R) Core(TM) i7-3770 CPU. I am using Ubuntu with kernel 4.19.2. I got the following method from easyperf.
Here's part of my code to read instructions.
struct perf_event_attr pe;

memset(&pe, 0, sizeof(struct perf_event_attr));
pe.type = PERF_TYPE_HARDWARE;
pe.size = sizeof(struct perf_event_attr);
pe.config = PERF_COUNT_HW_INSTRUCTIONS;
pe.disabled = 0;
pe.exclude_kernel = 0;
pe.exclude_user = 0;
pe.exclude_hv = 0;
pe.exclude_idle = 0;

/* pid, cpu, grp (group fd, -1 for none) and flags as described in perf_event_open(2) */
fd = syscall(__NR_perf_event_open, &pe, pid, cpu, grp, flags);

uint64_t perf_read(int fd) {
    uint64_t val;
    int rc;

    rc = read(fd, &val, sizeof(val));
    assert(rc == sizeof(val));
    return val;
}
I want to put the same lines inside the kernel (in the context-switch function) and check the values being read.
My end goal is to figure out a way to read the performance counters for a process, every time it is switched out, from the kernel (4.19.2) itself.
To achieve this I checked out the code behind the system call __NR_perf_event_open. It can be found here.
To make it usable, I copied the body of the system call into a separate function named perf_event_open() in the same file and exported it.
Now the problem is that whenever I call perf_event_open() in the same way as above, the descriptor returned is -2. Checking the error codes, I figured out that the error is ENOENT. The perf_event_open() man page says this error indicates a wrong type field.
Since file descriptors are associated with the process that opened them, how can one use them from the kernel? Is there an alternative way to configure the PMU to start counting without involving file descriptors?
You probably don't want the overhead of reprogramming a counter inside the context-switch function.
The easiest thing would be to make system calls from user-space to program the PMU (to count some event, probably setting it to count in kernel mode but not user-space, just so the counter overflows less often).
Then just use rdpmc twice (to get start/stop counts) in your custom kernel code. The counter will stay running, and I guess the kernel perf code will handle interrupts when it wraps around. (Or when its PEBS buffer is full.)
IDK if it's possible to program a counter so it just wraps without interrupting, for use-cases like this where you don't care about totals or sample-based profiling, and just want to use rdpmc. If so, do that.
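A rough sketch of the in-kernel read, assuming the counter has already been programmed from user space (the counter index passed to rdpmc is an assumption here; which hardware counter perf actually assigned, and whether it is a fixed or a general-purpose one, would have to be determined — rdpmc_raw is an illustrative name):

/* Read a PMC directly; 'ctr' selects the counter (general-purpose index,
 * or 0x40000000 + index for fixed-function counters). */
static inline unsigned long long rdpmc_raw(unsigned int ctr)
{
    unsigned int lo, hi;

    asm volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(ctr));
    return ((unsigned long long)hi << 32) | lo;
}

/* Illustrative placement in the context-switch path:
 *
 *   u64 start = rdpmc_raw(0);
 *   ... existing context-switch work ...
 *   u64 stop  = rdpmc_raw(0);
 *   delta = (stop - start) & ((1ULL << 48) - 1);  // PMCs are narrower than 64 bits
 */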
Old answer, addressing your old question which was based on a buggy printf format string that was printing non-zero garbage even though you weren't counting anything in user-space either.
Your inline asm looks correct, so the question is what exactly that PMU counter is programmed to count in kernel mode in the context where your code runs.
perf virtualizes the PMU counters on context-switch, giving the illusion of perf stat counting a single process even when it migrates across CPUs. Unless you're using perf -a to get system-wide counts, the PMU might not be programmed to count anything, so multiple reads would all give 0 even if at other times it's programmed to count a fast-changing event like cycles or instructions.
Are you sure you have perf set to count user + kernel events, not just user-space events?
perf stat will show something like instructions:u instead of instructions if it's limiting itself to user-space. (This is the default for non-root if you haven't lowered sysctl kernel.perf_event_paranoid to 0 or something from the safe default that doesn't let user-space learn anything about the kernel.)
There's HW support for programming a counter to only count when CPL != 0 (i.e. not in ring 0 / kernel mode). Higher values for kernel.perf_event_paranoid restrict the perf API to not allow programming counters to count in kernel+user mode, but even with paranoid = -1 it's possible to program them this way. If that's how you programmed a counter, then that would explain everything.
We need to see your code that programs the counters. That doesn't happen automatically.
The kernel doesn't just leave the counters running all the time when no process has used a PAPI function to enable a per-process or system-wide counter; that would generate interrupts that slow the system down for no benefit.
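For reference, a minimal user-space sketch of programming a counter the way the first part of this answer suggests (count in kernel mode, not user-space); the field names are the standard perf_event_open(2) ones, and open_kernel_insn_counter is an illustrative name:

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Per-CPU counter of instructions retired in kernel mode only. */
static int open_kernel_insn_counter(int cpu)
{
    struct perf_event_attr pe;

    memset(&pe, 0, sizeof(pe));
    pe.type           = PERF_TYPE_HARDWARE;
    pe.size           = sizeof(pe);
    pe.config         = PERF_COUNT_HW_INSTRUCTIONS;
    pe.exclude_user   = 1;     /* don't count user-space */
    pe.exclude_kernel = 0;     /* do count ring 0 */
    pe.exclude_hv     = 1;
    pe.pinned         = 1;     /* keep it scheduled on the PMU */

    /* pid = -1, cpu = N: count everything that runs on that CPU. */
    return syscall(__NR_perf_event_open, &pe, -1, cpu, -1, 0);
}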
Linux provides flags on many system calls to make a file descriptor close-on-exec upon creation:
int efd = eventfd(0, EFD_CLOEXEC);
int sfd = socket(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0);
...
My question is: is this mechanism thread-safe? What if one thread forks at the same time as another thread calls these functions to create fds? Will I run into a file-descriptor leak problem?
This is the whole point of the CLOEXEC flag: to make this kind of race impossible. The flag is passed all the way down to the kernel, so when the fd is created it already has the close-on-exec flag set. Here is an example. Say we have two threads. Thread 1 opens an fd and then sets the CLOEXEC flag on it with a separate fcntl() system call. Thread 2 forks (and execs) between the calls to open() and fcntl(). We have an fd leak.
If Thread 1 instead passes CLOEXEC into the open() (or socket()) call, the race is resolved. If Thread 2 forks before the open(), there is no fd yet, so there is nothing to leak. If it forks after, the child's copy of the fd is closed at exec because it is already marked close-on-exec.
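A sketch of the two patterns side by side (the thread and fork/exec orchestration is left out):

#include <fcntl.h>
#include <unistd.h>

/* Racy: another thread may fork() + exec() between open() and fcntl(),
 * and its child keeps an fd that never gets the close-on-exec flag. */
int open_racy(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd >= 0)
        fcntl(fd, F_SETFD, FD_CLOEXEC);   /* the leak window is right here */
    return fd;
}

/* Atomic: the kernel creates the fd with close-on-exec already set, so a
 * concurrent fork() + exec() can never observe it without the flag. */
int open_safe(const char *path)
{
    return open(path, O_RDONLY | O_CLOEXEC);
}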
I am debugging a deadlock issue, and the call stack shows that threads are waiting on some events.
The code uses critical sections as the synchronization primitive, and I think there is some issue there.
Also, the debugger points to a critical section that is owned by some other thread, but its lock count is -2.
As per my understanding, a lock count > 0 means that the critical section is locked by one or more threads.
So is it possible that I am looking at the right critical section, i.e. that it could be the culprit in the deadlock?
In what scenarios can a critical section have a negative lock count?
Beware: since Windows Server 2003 SP1 (for client OSes this is Vista and newer) the meaning of LockCount has changed, and -2 is a completely normal value, commonly seen when a thread has entered a critical section without waiting and no other thread is waiting for the CS. See Displaying a Critical Section:
In Microsoft Windows Server 2003 Service Pack 1 and later versions of Windows, the LockCount field is parsed as follows:
The lowest bit shows the lock status. If this bit is 0, the critical section is locked; if it is 1, the critical section is not locked.
The next bit shows whether a thread has been woken for this lock. If this bit is 0, then a thread has been woken for this lock; if it is 1, no thread has been woken.
The remaining bits are the ones-complement of the number of threads waiting for the lock.
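Applying those rules to LockCount = -2 (0xFFFFFFFE): bit 0 is 0 (locked), bit 1 is 1 (no thread has been woken), and the remaining 30 bits are all ones, whose one's complement is 0 waiting threads — i.e. the CS is simply held with nobody waiting. A small decoding sketch of that layout (decode_lockcount is an illustrative helper):

#include <stdio.h>

/* Decode LockCount per the Server 2003 SP1+ layout quoted above. */
static void decode_lockcount(int lc)
{
    unsigned int v = (unsigned int)lc;               /* LockCount is a 32-bit LONG */
    int locked       = !(v & 1);                     /* bit 0 == 0 -> locked */
    int no_woken     = (v & 2) != 0;                 /* bit 1 == 1 -> no thread woken */
    unsigned waiters = (~(v >> 2)) & 0x3FFFFFFFu;    /* one's complement of the waiter count */

    printf("LockCount %d: %s, %s, %u waiter(s)\n", lc,
           locked ? "locked" : "not locked",
           no_woken ? "no thread woken" : "a thread has been woken",
           waiters);
}

/* decode_lockcount(-2) prints: locked, no thread woken, 0 waiter(s). */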
I am assuming that you are talking about the CCriticalSection class in MFC. I think you are looking at the right critical section. I have found that a critical section's lock count can go negative if Unlock() ends up being called more times than Lock(). I found that this generally happens in the following type of code:
void f()
{
    CSingleLock lock(&m_synchronizer, TRUE);   // enters the critical section
    // Some logic here
    m_synchronizer.Unlock();                   // unlocks the CCriticalSection directly
}                                              // lock's destructor calls Unlock() again
At first glance this code looks perfectly safe. However, note that it calls CCriticalSection's Unlock() method directly instead of CSingleLock's Unlock() method. When the function exits, CSingleLock's destructor calls the critical section's Unlock() again, and its lock count goes negative. After this the application is in a bad state and strange things start to happen. If you are using MFC critical sections, do check for this type of problem.
If I use the fork() system call to create a child process and then use the ps aux command to look at the process sizes, I always see that the child process is 4 KB larger than the parent one.
I also want to know how I can find the actual address of a process's memory in RAM, not the relative (virtual) addresses shown in /proc/pid/maps.
Thanks in advance
pid = fork();
if (pid == 0)
{
    getchar();
    execlp("/usr/bin/top", "top", (char *)NULL);  /* argv[0] must be supplied; the list ends with NULL */
}
else
{
    wait(&childstatus);
    printf("Hello From Parent\n");
}
I run #ps aux when the child process is waiting for getchar().
The kernel uses copy-on-write when duplicating the parent's address space for the child, so the pages are not actually copied up front.
Another thing is that the kernel adds a process descriptor of roughly 2 kB (on a 32-bit system) to the process table on fork.
I also want to know how I can find the actual address of a process's memory in RAM, not the relative (virtual) addresses shown in /proc/pid/maps
Well, that will require some kind of kernel debugger; there is kdb or kgdb, as far as I remember.
I have just realized that this 4 KB extra is because of wait(). If I remove it, both the parent and the child process are the same size.
But can anyone tell me why wait() consumes 4 KB extra?