fork() process behaviour

Say I have a piece of code:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main()
{
    pid_t p;
    p = fork(); // p holds fork()'s return value
    // three cases: -1, 0, >0
    // -1 : fork failed, no child was created
    //  0 : we are running inside the newly created child
    // >0 : we are in the parent; p is the child's pid
    switch (p)
    {
    case -1:
        printf("Error\n");
        break;
    case 0:
        printf("I am child and my pid is %d", getpid());
        printf("\nMy parent pid is : %d\n", getppid());
        break;
    default:
        printf("You are inside parent whose pid is %d\n", getpid());
        for (int i = 20; i <= 29; i++)
        {
            printf("%d\n", i);
        }
        break;
    }
    return 0;
}
This code gives different outputs on different operating systems. My system runs Ubuntu 14.04, the OS in our college labs is Red Hat, and when we execute the same program on the two machines the output differs.
Output on my system:
You are inside parent whose pid is 5283
20
21
22
23
24
25
26
27
28
29
I am child and my pid is 5284
My parent pid is : 5283
and the lab system gives:
I am child and my pid is 5284
My parent pid is : 5283
You are inside parent whose pid is 5283
20
21
22
23
24
25
26
27
28
29
If we look at the outputs carefully: on my system the parent completes its task first and then control passes to the child process, while on the other system the child does its task first and then the parent resumes. So what's the difference? Does it depend on the system architecture or on some other parameter such as the OS? Please let us know.

It depends only on the system's scheduler. If you launch your program several times on the same machine, the results may also differ.
Once two processes run concurrently, you cannot assume any execution order between their instructions.
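If a particular order matters, the parent has to impose it explicitly instead of relying on the scheduler. A minimal sketch (my own illustration, not part of the original code) that guarantees the child's output appears before the parent's by blocking in waitpid(2):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t p = fork();
    if (p == -1) {
        perror("fork");
        return 1;
    }
    if (p == 0) {
        // child: runs whenever the scheduler picks it
        printf("I am child and my pid is %d\n", (int)getpid());
        return 0;
    }
    // parent: block until the child has exited, so the child's
    // output is always printed before ours, on any scheduler
    waitpid(p, NULL, 0);
    printf("You are inside parent whose pid is %d\n", (int)getpid());
    return 0;
}

Without the waitpid() call, both interleavings you observed are equally valid.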

Related

In Rust, how can I capture process output with colors?

I would like to capture output from another process (for example git status), process it, and print it with all styles (bold, italics, underscore) and colors. It's very important for me to further process that String; I don't want to just print it.
In the Unix world, I think this would involve escape codes, I'm not sure about Windows world but it's important for me too.
I know how to do it without colors:
use std::process::Command;

fn exec_git() -> String {
    let output = Command::new("git")
        .arg("status")
        .output()
        .expect("failed to execute process");
    String::from_utf8_lossy(&output.stdout).into_owned()
}
Maybe I should use spawn instead?
Your code already works:
use std::process::Command;

fn main() {
    let output = Command::new("ls")
        .args(&["-l", "--color"])
        .env("LS_COLORS", "rs=0:di=38;5;27:mh=44;38;5;15")
        .output()
        .expect("Failed to execute");
    let sout = String::from_utf8(output.stdout).expect("Not UTF-8");
    let serr = String::from_utf8(output.stderr).expect("Not UTF-8");
    println!("{}", sout);
    println!("{}", serr);
}
Prints the output:
total 68
-rw-r--r-- 4 root root 56158 Dec 23 00:00 [0m[44;38;5;15mCargo.lock[0m
-rw-rw-r-- 4 root root 2093 Dec 9 02:54 [44;38;5;15mCargo.toml[0m
drwxr-xr-x 1 root root 4096 Dec 30 15:24 [38;5;27msrc[0m
drwxr-xr-x 1 root root 4096 Dec 23 00:19 [38;5;27mtarget[0m
Note that there's a bunch of junk scattered inside the output ([44;, [0m, etc.). Those are ANSI escape codes, and the terminal emulator interprets those to change the color of the following text.
If you print the string with the Debug formatter ({:?}), you will see:
\u{1b}[0m\u{1b}[44;38;5;15mCargo.lock\u{1b}[0m
Each escape code starts with an ESC (\u{1b}) followed by the actual command. You will have to parse those in order to ignore them for whatever processing you are doing.
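If you just want to ignore them, you can strip the common CSI form (ESC, '[', parameter bytes, one final byte in the 0x40..0x7e range). A rough sketch in C (illustrative only; real output can also contain OSC and other escape types that this does not handle):

#include <stdio.h>

/* Copy `in` to `out`, dropping CSI sequences: ESC '[' params final-byte.
 * Parameter/intermediate bytes are 0x20..0x3f, the final byte 0x40..0x7e. */
static void strip_csi(const char *in, char *out)
{
    while (*in) {
        if (in[0] == '\x1b' && in[1] == '[') {
            in += 2;
            while (*in && ((unsigned char)*in < 0x40 || (unsigned char)*in > 0x7e))
                in++;
            if (*in)
                in++; /* consume the final byte, e.g. 'm' */
        } else {
            *out++ = *in++;
        }
    }
    *out = '\0';
}

int main(void)
{
    char buf[64];
    strip_csi("\x1b[0m\x1b[44;38;5;15mCargo.lock\x1b[0m", buf);
    printf("%s\n", buf); /* prints: Cargo.lock */
    return 0;
}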
Windows does not use escape codes (although maybe it can in Windows 10?), and instead a program directly modifies the console it is connected to. There is nothing in the output to indicate the color.
You can force git to output colors by using git -c color.status=always status
use std::process::Command;
fn main() {
let output = Command::new("git")
.arg("-c")
.arg("color.status=always")
.arg("status")
.output()
.expect("failed to execute process");
let output = String::from_utf8_lossy(&output.stdout).into_owned();
println!("{}", output);
}
This works for git status only. For a more general solution, you either have to check the program's documentation and hope there is a way to force colored output, or check how the program determines whether it should output colors (such as checking the COLORTERM environment variable).
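The most common mechanism, and the reason captured output usually arrives without colors in the first place, is that programs only emit escape codes when stdout is a terminal; a captured pipe fails that test. A small C illustration of the isatty(3) check (git's actual logic is more involved; this is just the principle):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* When stdout is a pipe (e.g. captured by Command::output()),
     * isatty() returns 0 and a well-behaved program omits colors. */
    if (isatty(STDOUT_FILENO))
        printf("\x1b[31mred when run in a terminal\x1b[0m\n");
    else
        printf("plain when piped or captured\n");
    return 0;
}

Options like color.status=always exist precisely to override this check.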

sched_wakeup_granularity_ns, sched_min_granularity_ns and SCHED_RR

The following values are from my box:
sysctl -A | grep "sched" | grep -v "domain"
kernel.sched_autogroup_enabled = 0
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.sched_child_runs_first = 0
kernel.sched_latency_ns = 18000000
kernel.sched_migration_cost_ns = 5000000
kernel.sched_min_granularity_ns = 10000000
kernel.sched_nr_migrate = 32
kernel.sched_rr_timeslice_ms = 100
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.sched_shares_window_ns = 10000000
kernel.sched_time_avg_ms = 1000
kernel.sched_tunable_scaling = 1
kernel.sched_wakeup_granularity_ns = 3000000
This means that in every one-second period, 0.95 seconds are available for SCHED_FIFO or SCHED_RR tasks and only 0.05 seconds are reserved for SCHED_OTHER. What I am curious about is sched_wakeup_granularity_ns. I googled it and got this explanation:
Ability of tasks being woken to preempt the current task.
The smaller the value, the easier it is for the task to force the preemption
I think sched_wakeup_granularity_ns only affects SCHED_OTHER tasks; SCHED_FIFO and SCHED_RR tasks should never be in sleep mode, so there is no need to "wake up".
Am I correct?
And for sched_min_granularity_ns, the explanation is:
Minimum preemption granularity for processor-bound tasks.
Tasks are guaranteed to run for this minimum time before they are preempted
I would like to know: although SCHED_RR tasks can have 95% of the CPU time, since sched_min_granularity_ns = 10000000 (0.01 second), does that mean every SCHED_OTHER task gets a 0.01-second timeslice before it can be preempted, unless it blocks (on a socket, in a sleep, or similar)? That would imply that if I have 3 tasks on core 1, say 2 tasks with SCHED_RR and a third with SCHED_OTHER, and the third task runs an endless loop without blocking in a socket recv and without yielding, then once the third task gets the CPU it will run for 0.01 seconds and then be context-switched out, even if the next task is a SCHED_RR one.
Is that the right understanding of what sched_min_granularity_ns does?
Edit:
http://lists.pdxlinux.org/pipermail/plug/2006-February/045495.html
describes:
No SCHED_OTHER process may be preempted by another SCHED_OTHER process.
However a SCHED_RR or SCHED_FIFO process will preempt SCHED_OTHER
process before their time slice is done. So a SCHED_RR process
should wake up from a sleep with fairly good accuracy.
Does this mean a SCHED_RR task can preempt the endless, never-blocking loop even though its timeslice is not done?
Tasks with a higher scheduling class "priority" will preempt all tasks with a lower priority scheduling class, regardless of any timeouts. Take a look at the below snippet from kernel/sched/core.c:
void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
{
    const struct sched_class *class;

    if (p->sched_class == rq->curr->sched_class) {
        rq->curr->sched_class->check_preempt_curr(rq, p, flags);
    } else {
        for_each_class(class) {
            if (class == rq->curr->sched_class)
                break;
            if (class == p->sched_class) {
                resched_curr(rq);
                break;
            }
        }
    }

    /*
     * A queue event has occurred, and we're going to schedule. In
     * this case, we can save a useless back to back clock update.
     */
    if (task_on_rq_queued(rq->curr) && test_tsk_need_resched(rq->curr))
        rq_clock_skip_update(rq, true);
}
for_each_class will return the classes in this order: stop, deadline, rt, fair, idle. The loop will stop when trying to preempt a task with the same scheduling class as the preempting task.
So for your question, the answer is yes, an "rt" task will preempt a "fair" task.
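If you want to observe this on your own box, you can move a task into the rt class with sched_setscheduler(2). A minimal sketch (my own illustration, not taken from the kernel source above; it needs root or CAP_SYS_NICE):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 10 };

    /* Move this process into the rt scheduling class (SCHED_RR).
     * From now on it preempts any SCHED_OTHER ("fair") task on the
     * same CPU, regardless of sched_min_granularity_ns. */
    if (sched_setscheduler(0, SCHED_RR, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }

    /* ... run the workload you want to observe here ... */
    return 0;
}

Running one such task next to a busy-looping SCHED_OTHER task on the same core demonstrates the preemption described above; the 95%/5% split from sched_rt_runtime_us still applies, so the fair task is not starved completely.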

Garbage collector in Ruby 2.2 provokes unexpected CoW

How do I prevent the GC from provoking copy-on-write when I fork my process? I have recently been analyzing the garbage collector's behavior in Ruby, due to some memory issues that I encountered in my program (I run out of memory on my 60-core, 0.5 TB machine even for fairly small tasks). For me this really limits the usefulness of Ruby for running programs on multicore servers. I would like to present my experiments and results here.
The issue arises when the garbage collector runs during forking. I have investigated three cases that illustrate the issue.
Case 1: We allocate a lot of objects (strings no longer than 20 bytes) in the memory using an array. The strings are created using a random number and string formatting. When the process forks and we force the GC to run in the child, all the shared memory goes private, causing a duplication of the initial memory.
Case 2: We allocate a lot of objects (strings) in the memory using an array, but the string is created using the rand.to_s function, hence we remove the formatting of the data compared to the previous case. We end up with a smaller amount of memory being used, presumably due to less garbage. When the process forks and we force the GC to run in the child, only part of the memory goes private. We have a duplication of the initial memory, but to a smaller extent.
Case 3: We allocate fewer objects compared to before, but the objects are bigger, such that the amount of memory allocated stays the same as in the previous cases. When the process forks and we force the GC to run in the child all the memory stays shared, i.e. no memory duplication.
Here I paste the Ruby code that has been used for these experiments. To switch between cases you only need to change the “option” value in the memory_object function. The code was tested using Ruby 2.2.2, 2.2.1, 2.1.3, 2.1.5 and 1.9.3 on an Ubuntu 14.04 machine.
Sample output for case 1:
ruby version 2.2.2
proces pid log priv_dirty shared_dirty
Parent 3897 post alloc 38 0
Parent 3897 4 fork 0 37
Child 3937 4 initial 0 37
Child 3937 8 empty GC 35 5
The exact same code has been written in Python and in all cases the CoW works perfectly fine.
Sample output for case 1:
python version 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2]
proces pid log priv_dirty shared_dirty
Parent 4308 post alloc 35 0
Parent 4308 4 fork 0 35
Child 4309 4 initial 0 35
Child 4309 10 empty GC 1 34
Ruby code
$start_time = Time.new

# Monitor use of Resident and Virtual memory.
class Memory
  shared_dirty = '.+?Shared_Dirty:\s+(\d+)'
  priv_dirty = '.+?Private_Dirty:\s+(\d+)'
  MEM_REGEXP = /#{shared_dirty}#{priv_dirty}/m

  # get memory usage
  def self.get_memory_map( pids)
    memory_map = {}
    memory_map[ :pids_found] = {}
    memory_map[ :shared_dirty] = 0
    memory_map[ :priv_dirty] = 0
    pids.each do |pid|
      begin
        lines = File.read( "/proc/#{pid}/smaps")
      rescue
        lines = nil
      end
      if lines
        lines.scan(MEM_REGEXP) do |shared_dirty, priv_dirty|
          memory_map[ :pids_found][pid] = true
          memory_map[ :shared_dirty] += shared_dirty.to_i
          memory_map[ :priv_dirty] += priv_dirty.to_i
        end
      end
    end
    memory_map[ :pids_found] = memory_map[ :pids_found].keys
    return memory_map
  end

  # get the processes and get the value of the memory usage
  def self.memory_usage( )
    pids = [ $$]
    result = self.get_memory_map( pids)
    result[ :pids] = pids
    return result
  end

  # print the values of the private and shared memories
  def self.log( process_name='', log_tag="")
    if process_name == "header"
      puts " %-6s %5s %-12s %10s %10s\n" % ["proces", "pid", "log", "priv_dirty", "shared_dirty"]
    else
      time = Time.new - $start_time
      mem = Memory.memory_usage( )
      puts " %-6s %5d %-12s %10d %10d\n" % [process_name, $$, log_tag, mem[:priv_dirty]/1000, mem[:shared_dirty]/1000]
    end
  end
end

# function to delay the processes a bit
def time_step( n)
  while Time.new - $start_time < n
    sleep( 0.01)
  end
end

# create an object of specified size. The option argument can be changed from 0 to 2
# to visualize the behavior of the GC in various cases
#
# case 0 (default) : we make a huge array of small objects by formatting a string
# case 1 : we make a huge array of small objects without formatting a string (we use the to_s function)
# case 2 : we make a smaller array of big objects
def memory_object( size, option=1)
  result = []
  count = size/20
  if option > 3 or option < 1
    count.times do
      result << "%20.18f" % rand
    end
  elsif option == 1
    count.times do
      result << rand.to_s
    end
  elsif option == 2
    count = count/10
    count.times do
      result << ("%20.18f" % rand)*30
    end
  end
  return result
end

##### main #####

puts "ruby version #{RUBY_VERSION}"
GC.disable

# print the column headers and first line
Memory.log( "header")

# Allocation of memory
big_memory = memory_object( 1000 * 1000 * 10)
Memory.log( "Parent", "post alloc")

lab_time = Time.new - $start_time
if lab_time < 3.9
  lab_time = 0
end

# start the forking
pid = fork do
  time = 4
  time_step( time + lab_time)
  Memory.log( "Child", "#{time} initial")

  # force GC when nothing happened
  GC.enable; GC.start; GC.disable

  time = 8
  time_step( time + lab_time)
  Memory.log( "Child", "#{time} empty GC")

  sleep( 1)
  STDOUT.flush
  exit!
end

time = 4
time_step( time + lab_time)
Memory.log( "Parent", "#{time} fork")

# wait for the child to finish
Process.wait( pid)
Python code
import re
import time
import os
import random
import sys
import gc

start_time = time.time()

# Monitor use of Resident and Virtual memory.
class Memory:
    def __init__(self):
        self.shared_dirty = '.+?Shared_Dirty:\s+(\d+)'
        self.priv_dirty = '.+?Private_Dirty:\s+(\d+)'
        self.MEM_REGEXP = re.compile("{shared_dirty}{priv_dirty}".format(shared_dirty=self.shared_dirty, priv_dirty=self.priv_dirty), re.DOTALL)

    # get memory usage
    def get_memory_map(self, pids):
        memory_map = {}
        memory_map[ "pids_found" ] = {}
        memory_map[ "shared_dirty" ] = 0
        memory_map[ "priv_dirty" ] = 0
        for pid in pids:
            try:
                lines = None
                with open( "/proc/{pid}/smaps".format(pid=pid), "r" ) as infile:
                    lines = infile.read()
            except:
                lines = None
            if lines:
                for shared_dirty, priv_dirty in re.findall( self.MEM_REGEXP, lines ):
                    memory_map[ "pids_found" ][pid] = True
                    memory_map[ "shared_dirty" ] += int( shared_dirty )
                    memory_map[ "priv_dirty" ] += int( priv_dirty )
        memory_map[ "pids_found" ] = memory_map[ "pids_found" ].keys()
        return memory_map

    # get the processes and get the value of the memory usage
    def memory_usage( self):
        pids = [ os.getpid() ]
        result = self.get_memory_map( pids)
        result[ "pids" ] = pids
        return result

    # print the values of the private and shared memories
    def log( self, process_name='', log_tag=""):
        if process_name == "header":
            print " %-6s %5s %-12s %10s %10s" % ("proces", "pid", "log", "priv_dirty", "shared_dirty")
        else:
            global start_time
            Time = time.time() - start_time
            mem = self.memory_usage( )
            print " %-6s %5d %-12s %10d %10d" % (process_name, os.getpid(), log_tag, mem["priv_dirty"]/1000, mem["shared_dirty"]/1000)

# function to delay the processes a bit
def time_step( n):
    global start_time
    while (time.time() - start_time) < n:
        time.sleep( 0.01)

# create an object of specified size. The option argument can be changed from 0 to 2
# to visualize the behavior of the GC in various cases
#
# case 0 (default) : we make a huge array of small objects by formatting a string
# case 1 : we make a huge array of small objects without formatting a string (we use the to_s function)
# case 2 : we make a smaller array of big objects
def memory_object( size, option=2):
    count = size/20
    if option > 3 or option < 1:
        result = [ "%20.18f"% random.random() for i in xrange(count) ]
    elif option == 1:
        result = [ str( random.random() ) for i in xrange(count) ]
    elif option == 2:
        count = count/10
        result = [ ("%20.18f"% random.random())*30 for i in xrange(count) ]
    return result

##### main #####

print "python version {version}".format(version=sys.version)
memory = Memory()
gc.disable()

# print the column headers and first line
memory.log( "header")

# Allocation of memory
big_memory = memory_object( 1000 * 1000 * 10)
memory.log( "Parent", "post alloc")

lab_time = time.time() - start_time
if lab_time < 3.9:
    lab_time = 0

# start the forking
pid = os.fork()
if pid == 0:
    Time = 4
    time_step( Time + lab_time)
    memory.log( "Child", "{time} initial".format(time=Time))

    # force GC when nothing happened
    gc.enable(); gc.collect(); gc.disable();

    Time = 10
    time_step( Time + lab_time)
    memory.log( "Child", "{time} empty GC".format(time=Time))

    time.sleep( 1)
    sys.exit(0)

Time = 4
time_step( Time + lab_time)
memory.log( "Parent", "{time} fork".format(time=Time))

# Wait for child process to finish
os.waitpid( pid, 0)
EDIT
Indeed, calling the GC several times before forking the process solves the issue and I am quite surprised. I have also run the code using Ruby 2.0.0 and the issue doesn't even appear, so it must be related to this generational GC just like you mentioned.
However, if I call the memory_object function without assigning the output to any variables (I am only creating garbage), then the memory is duplicated. The amount of memory that is copied depends on the amount of garbage that I create - the more garbage, the more memory becomes private.
Any ideas how I can prevent this ?
Here are some results
Running the GC in 2.0.0
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 3664 post alloc 67 0
Parent 3664 4 fork 1 69
Child 3700 4 initial 1 69
Child 3700 8 empty GC 6 65
Calling memory_object( 1000*1000) in the child
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 3703 post alloc 67 0
Parent 3703 4 fork 1 70
Child 3739 4 initial 1 70
Child 3739 8 empty GC 15 56
Calling memory_object( 1000*1000*10)
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 3743 post alloc 67 0
Parent 3743 4 fork 1 69
Child 3779 4 initial 1 69
Child 3779 8 empty GC 89 5
UPD2
I suddenly figured out why all the memory goes private if you format the string: you generate garbage during formatting while GC is disabled; when you then enable GC, you are left with holes of released objects inside your generated data. Then you fork, and new garbage starts to occupy these holes; the more garbage, the more private pages.
So I added a cleanup function that runs the GC every 2000 cycles (just enabling lazy GC didn't help):
count.times do |i|
  cleanup(i)
  result << "%20.18f" % rand
end
#......snip........#
def cleanup(i)
  if (i % 2000).zero?
    GC.enable; GC.start; GC.disable
  end
end
##### main #####
Which resulted in (when generating memory_object( 1000 * 1000 * 10) after the fork):
RUBY_GC_HEAP_INIT_SLOTS=600000 ruby gc-test.rb 0
ruby version 2.2.0
proces pid log priv_dirty shared_dirty
Parent 2501 post alloc 35 0
Parent 2501 4 fork 0 35
Child 2503 4 initial 0 35
Child 2503 8 empty GC 28 22
Yes, it affects performance, but only before forking, i.e. it increases load time in your case.
UPD1
I just found the criterion by which Ruby 2.2 sets the old-object bits: it's 3 GCs. So if you add the following before forking:
GC.enable; 3.times {GC.start}; GC.disable
# start the forking
you will get (with option 1 on the command line):
$ RUBY_GC_HEAP_INIT_SLOTS=600000 ruby gc-test.rb 1
ruby version 2.2.0
proces pid log priv_dirty shared_dirty
Parent 2368 post alloc 31 0
Parent 2368 4 fork 1 34
Child 2370 4 initial 1 34
Child 2370 8 empty GC 2 32
But this needs further testing concerning the behavior of such objects on future GCs; at least after 100 GCs :old_objects remains constant, so I suppose it should be OK.
Log with GC.stat is here
By the way, there's also the RGENGC_OLD_NEWOBJ_CHECK option to create old objects from the beginning. I doubt it's a good idea in general, but it may be useful for a particular case.
First answer
My proposition in the comment above was wrong; actually the bitmap tables are the savior.
(option = 1)
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 14807 post alloc 27 0
Parent 14807 4 fork 0 27
Child 14809 4 initial 0 27
Child 14809 8 empty GC 6 25 # << almost everything stays shared <<
I also had Ruby Enterprise Edition at hand and tested it; it's only about half as bad as the worst cases.
ruby version 1.8.7
proces pid log priv_dirty shared_dirty
Parent 15064 post alloc 86 0
Parent 15064 4 fork 2 84
Child 15065 4 initial 2 84
Child 15065 8 empty GC 40 46
(I made the script run strictly one GC by increasing RUBY_GC_HEAP_INIT_SLOTS to 600k.)

Analyzing readdir() performance

It's bothering me that Linux takes so long to list all files in huge directories, so I created a little test program that recursively lists all files under a directory:
#include <stdio.h>
#include <dirent.h>
#include <string.h>

int list(char *path) {
    int i = 0;
    DIR *dir = opendir(path);
    struct dirent *entry;
    char new_path[1024];

    if (!dir) // e.g. permission denied
        return 0;
    while ((entry = readdir(dir))) {
        if (entry->d_type == DT_DIR) {
            if (entry->d_name[0] == '.')
                continue;
            strcpy(new_path, path);
            strcat(new_path, "/");
            strcat(new_path, entry->d_name);
            i += list(new_path);
        }
        else
            i++;
    }
    closedir(dir);
    return i;
}

int main() {
    char *path = "/home";
    printf("%i\n", list(path));
    return 0;
}
When compiled with gcc -O3, the program runs for about 15 sec (I ran the program a few times and it's approximately constant, so the fs cache should not play a role here):
$ /usr/bin/time -f "%CC %DD %EE %FF %II %KK %MM %OO %PP %RR %SS %UU %WW %XX %ZZ %cc %ee %kk %pp %rr %ss %tt %ww %xx" ./a.out
./a.outC 0D 0:14.39E 0F 0I 0K 548M 0O 2%P 178R 0.30S 0.01U 0W 0X 4096Z 7c 14.39e 0k 0p 0r 0s 0t 1692w 0x
So it spends about S=0.3sec in kernelspace and U=0.01sec in userspace and has 7+1692 context switches.
At roughly 2000 nsec per context switch, that accounts for about 2000 nsec * (7+1692) = 3.398 msec [1].
However, more than 10 sec remain unaccounted for, and I would like to find out what the program is doing during this time.
Are there any other tools to investigate what the program is doing all that time?
gprof just tells me the time for the (userspace) call graph, and gcov does not list time spent in each line but only how often a line is executed...
[1] http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html
oprofile is a decent sampling profiler which can profile both user and kernel-mode code.
According to your numbers, however, approximately 14.5 seconds of the time is spent asleep, which is not really registered well by oprofile. Perhaps what may be more useful would be ftrace combined with a reading of the kernel code. ftrace provides trace points in the kernel which can log a message and stack trace when they occur. The event that would seem most useful for determining why your process is sleeping would be the sched_switch event. I would recommend that you enable kernel-mode stacks and the sched_switch event, set a buffer large enough to capture the entire lifetime of your process, then run your process and stop tracing immediately after. By reviewing the trace, you will be able to see every time your process went to sleep, whether it was runnable or non-runnable, a high resolution time stamp, and a call stack indicating what put it to sleep.
ftrace is controlled through debugfs. On my system, this is mounted in /sys/kernel/debug, but yours may be different. Here is an example of what I would do to capture this information:
# Enable stack traces
echo "1" > /sys/kernel/debug/tracing/options/stacktrace
# Enable the sched_switch event
echo "1" > /sys/kernel/debug/tracing/events/sched/sched_switch/enable
# Make sure tracing is enabled
echo "1" > /sys/kernel/debug/tracing/tracing_on
# Run the program and disable tracing as quickly as possible
./your_program; echo "0" > /sys/kernel/debug/tracing/tracing_on
# Examine the trace
vi /sys/kernel/debug/tracing/trace
The resulting output will have lines which look like this:
# tracer: nop
#
# entries-in-buffer/entries-written: 22248/3703779 #P:1
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
<idle>-0 [000] d..3 2113.437500: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/0:0 next_pid=878 next_prio=120
<idle>-0 [000] d..3 2113.437531: <stack trace>
=> __schedule
=> schedule
=> schedule_preempt_disabled
=> cpu_startup_entry
=> rest_init
=> start_kernel
kworker/0:0-878 [000] d..3 2113.437836: sched_switch: prev_comm=kworker/0:0 prev_pid=878 prev_prio=120 prev_state=S ==> next_comm=your_program next_pid=898 next_prio=120
kworker/0:0-878 [000] d..3 2113.437866: <stack trace>
=> __schedule
=> schedule
=> worker_thread
=> kthread
=> ret_from_fork
The lines you will care about will be when your program appears as the prev_comm task, meaning the scheduler is switching away from your program to run something else. prev_state will indicate that your program was still runnable (R) or was blocked (S, U or some other letter, see the ftrace source). If blocked, you can examine the stack trace and the kernel source to figure out why.

Xcode 4.6 / 5 with lldb breakpoints not working

I have a problem using the LLDB debugger: if in "main.c" I include another file like "a.c" and set a breakpoint in "a.c", the breakpoint is never hit.
Is anyone else seeing this?
OK, here is the example:
// main.c
#include "a.c"

int main()
{
    test();
}

// a.c
void test()
{
    return; // (using the UI to) set a breakpoint here; gdb will stop, lldb will not
}
======================================================================
To trojanfoe:
I tried these steps with the Xcode 4.6.3 command line utilities and the result is like yours, but my problem is in the GUI: when I use the mouse to set a breakpoint in "a.c", it does not work.
I tried stopping in main() and entering the command "br list"; here is the console output:
(lldb) br list
Current breakpoints:
1: file ='a.c', line = 13, locations = 0 (pending)
2: file ='main.c', line = 15, locations = 1, resolved = 1
2.1: where = test`main + 15 at main.c:15, address = 0x0000000100000f3f, resolved, hit count = 1
(lldb)
If you need the log from the command line utilities, please tell me. Thanks!
See "File and line breakpoints are not getting hit" in http://lldb.llvm.org/troubleshooting.html - that seems to be talking about exactly your build scenario, and I've just had this problem. To solve it I not only had to put this in $HOME/.lldbinit:
settings set target.inline-breakpoint-strategy always
I also had to do a clobber (distclean) build and restart Xcode.
NOTE: This is not an answer; however, I wanted to document the "works for me" response fully.
OP: Please follow these steps to see how it differs for you.
$ clang -g -o bptest main.c
$ ls -l
total 32
-rw-r--r-- 1 andy staff 110 Oct 31 10:55 a.c
-rwxr-xr-x 1 andy staff 4664 Oct 31 10:56 bptest
drwxr-xr-x 3 andy staff 102 Oct 31 10:56 bptest.dSYM (NOTE THIS)
-rw-r--r-- 1 andy staff 42 Oct 31 10:55 main.c
$ lldb
(lldb) target create bptest
Current executable set to 'bptest' (x86_64).
(lldb) break set -b test
Breakpoint 1: where = bptest`test + 4 at a.c:4, address = 0x0000000100000f34
(lldb) run
Process 9743 launched: '/Users/andy/tmp/bptest/bptest' (x86_64)
Process 9743 stopped
* thread #1: tid = 0x65287, 0x0000000100000f34 bptest`test + 4 at a.c:4, queue = 'com.apple.main-thread, stop reason = breakpoint 1.1
frame #0: 0x0000000100000f34 bptest`test + 4 at a.c:4
1 // a.c
2 void test()
3 {
-> 4 return; // (Using UI to)set break point here, the gdb will stop, and lldb will not
5 }
(lldb) bt
* thread #1: tid = 0x65287, 0x0000000100000f34 bptest`test + 4 at a.c:4, queue = 'com.apple.main-thread, stop reason = breakpoint 1.1
frame #0: 0x0000000100000f34 bptest`test + 4 at a.c:4
frame #1: 0x0000000100000f49 bptest`main + 9 at main.c:4
frame #2: 0x00007fff8eb3f7e1 libdyld.dylib`start + 1
(lldb)
Note: I am using the Xcode 5.0.1 command line utilities.
