Different result from omreport command and snmpwalk command - snmp

I want to find the right OID for the CPU temperature to use in Zabbix.
I used the snmpwalk command:
#snmpwalk -c public -v2c 127.0.0.1 .1.3.6.1.4.1.674.10892.1
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.8.1.1 = STRING: "Mainboard MB Temp"
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.8.1.2 = STRING: "Front Panel FP Temp"
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.8.1.3 = STRING: "BP Temp"
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.8.1.4 = STRING: "CPU0 Temp"
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.8.1.5 = STRING: "CPU1 Temp"
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.8.1.6 = STRING: "DIMM Temp"
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.8.1.7 = STRING: "IOH Temp"
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.10.1.1 = INTEGER: 750
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.10.1.2 = INTEGER: 500
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.10.1.3 = INTEGER: 550
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.10.1.4 = INTEGER: 1020
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.10.1.5 = INTEGER: 1020
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.10.1.6 = INTEGER: 1000
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.10.1.7 = INTEGER: 1050
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.11.1.1 = INTEGER: 700
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.11.1.2 = INTEGER: 450
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.11.1.3 = INTEGER: 520
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.11.1.4 = INTEGER: 980
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.11.1.5 = INTEGER: 980
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.11.1.6 = INTEGER: 950
SNMPv2-SMI::enterprises.674.10892.1.700.20.1.11.1.7 = INTEGER: 950
There are many OIDs related to the CPU0 and CPU1 temperatures. Which value should I use in Zabbix?
Output of the omreport command:
#omreport chassis temps
Temperature Probes Information
------------------------------------
Main System Chassis Temperatures: Ok
------------------------------------
Index : 0
Status : Ok
Probe Name : Mainboard MB Temp
Reading : 44.0 C
Minimum Warning Threshold : [N/A]
Maximum Warning Threshold : 70.0 C
Minimum Failure Threshold : [N/A]
Maximum Failure Threshold : 75.0 C
Index : 1
Status : Ok
Probe Name : Front Panel FP Temp
Reading : 17.0 C
Minimum Warning Threshold : [N/A]
Maximum Warning Threshold : 45.0 C
Minimum Failure Threshold : [N/A]
Maximum Failure Threshold : 50.0 C
Index : 2
Status : Ok
Probe Name : BP Temp
Reading : 21.0 C
Minimum Warning Threshold : [N/A]
Maximum Warning Threshold : 52.0 C
Minimum Failure Threshold : [N/A]
Maximum Failure Threshold : 55.0 C
Index : 3
Status : Ok
Probe Name : CPU0 Temp
Reading : 80.0 C
Minimum Warning Threshold : [N/A]
Maximum Warning Threshold : 98.0 C
Minimum Failure Threshold : [N/A]
Maximum Failure Threshold : 102.0 C
Index : 4
Status : Ok
Probe Name : CPU1 Temp
Reading : 78.0 C
Minimum Warning Threshold : [N/A]
Maximum Warning Threshold : 98.0 C
Minimum Failure Threshold : [N/A]
Maximum Failure Threshold : 102.0 C
Index : 5
Status : Ok
Probe Name : DIMM Temp
Reading : 51.0 C
Minimum Warning Threshold : [N/A]
Maximum Warning Threshold : 95.0 C
Minimum Failure Threshold : [N/A]
Maximum Failure Threshold : 100.0 C
Index : 6
Status : Ok
Probe Name : IOH Temp
Reading : 72.0 C
Minimum Warning Threshold : [N/A]
Maximum Warning Threshold : 95.0 C
Minimum Failure Threshold : [N/A]
Maximum Failure Threshold : 105.0 C
Sorry about my english.

According to the MIB file from Dell, you should be using 1.3.6.1.4.1.674.10892.1.700.20.1.6, which is iso.org.dod.internet.private.enterprises.dell.server3.baseboardGroup.thermalGroup.temperatureProbeTable.temperatureProbeTableEntry.temperatureProbeReading. The description of this MIB object states:
This attribute defines the reading for a temperature probe of type other than temperatureProbeTypeIsDiscrete. When the value for temperatureProbeType is other than temperatureProbeTypeIsDiscrete, the value returned for this attribute is the temperature that the probe is reading in tenths of degrees Centigrade. When the value for temperatureProbeType is temperatureProbeTypeIsDiscrete, a value is not returned for this attribute.
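Note that the .10 and .11 columns in the walk above appear to line up with omreport's failure and warning thresholds multiplied by ten (e.g. 1020 vs. CPU0's 102.0 C failure threshold), which is consistent with the "tenths of degrees" encoding. A minimal conversion helper (plain Python as a sketch; in Zabbix the same effect can be had with a multiplier preprocessing step of 0.1):

```python
def probe_reading_to_celsius(raw):
    """Dell's temperatureProbeReading (and its thresholds) are reported
    over SNMP in tenths of a degree Celsius."""
    return raw / 10.0

# 1020 is CPU0's maximum failure threshold from the snmpwalk output above
print(probe_reading_to_celsius(1020))  # -> 102.0
```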


ILP - count number of times variables received a value

In an ILP, is it possible to have a variable whose value will be the number of variables with value N?
N is a bounded integer, with lower bound 1.
Thank you
This achieves the goal. It is written in pyomo but should be fairly easy to translate to other frameworks.
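The trick is a pair of big-M linking constraints. Written out (a sketch: $y_i \in \{0,1\}$ is the indicator that $x_i$ equals the target value $N$, and $M$ is an upper bound on $x_i$):

```latex
\begin{aligned}
x_i &\le N + (1 - y_i)\,M && \forall i \quad \text{(if } y_i = 1 \text{ then } x_i \le N\text{)}\\
x_i &\ge N\,y_i && \forall i \quad \text{(if } y_i = 1 \text{ then } x_i \ge N\text{)}\\
\text{count} &= \textstyle\sum_i y_i
\end{aligned}
```

These constraints force $x_i = N$ whenever $y_i = 1$, but do not force $y_i = 1$ whenever $x_i = N$; the small $0.1\,y_i$ bonus in the objective below is what makes the solver switch $y_i$ on whenever it legally can.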
Code:
# magic number counter
import pyomo.environ as pyo
M = 100
magic_number=7
m = pyo.ConcreteModel()
m.I = pyo.Set(initialize=[1,2,3,4])
m.x = pyo.Var(m.I, domain=pyo.NonNegativeIntegers, bounds=(1, M))
m.magic = pyo.Var(m.I, domain=pyo.Binary)
# obj: maximize the sum of x, plus a small bonus for each magic number
m.obj = pyo.Objective(expr=sum(m.x[i] + 0.1*m.magic[i] for i in m.I), sense=pyo.maximize)
# constraints
m.sum_limit = pyo.Constraint(expr=pyo.sum_product(m.x) <= 19)
@m.Constraint(m.I)
def linking_1(m, i):
    return m.x[i] <= magic_number + (1 - m.magic[i]) * M

@m.Constraint(m.I)
def linking_2(m, i):
    return m.x[i] >= magic_number * m.magic[i]
solver = pyo.SolverFactory('glpk')
soln = solver.solve(m)
print(soln)
m.display()
print(f"\nmagic numbers {magic_number}'s produced: {pyo.value(pyo.sum_product(m.magic))}")
Output:
Problem:
- Name: unknown
Lower bound: 19.2
Upper bound: 19.2
Number of objectives: 1
Number of constraints: 10
Number of variables: 9
Number of nonzeros: 21
Sense: maximize
Solver:
- Status: ok
Termination condition: optimal
Statistics:
Branch and bound:
Number of bounded subproblems: 5
Number of created subproblems: 5
Error rc: 0
Time: 0.005478858947753906
Solution:
- number of solutions: 0
number of solutions displayed: 0
Model unknown
Variables:
x : Size=4, Index=I
Key : Lower : Value : Upper : Fixed : Stale : Domain
1 : 1 : 7.0 : 100 : False : False : NonNegativeIntegers
2 : 1 : 4.0 : 100 : False : False : NonNegativeIntegers
3 : 1 : 7.0 : 100 : False : False : NonNegativeIntegers
4 : 1 : 1.0 : 100 : False : False : NonNegativeIntegers
magic : Size=4, Index=I
Key : Lower : Value : Upper : Fixed : Stale : Domain
1 : 0 : 1.0 : 1 : False : False : Binary
2 : 0 : 0.0 : 1 : False : False : Binary
3 : 0 : 1.0 : 1 : False : False : Binary
4 : 0 : 0.0 : 1 : False : False : Binary
Objectives:
obj : Size=1, Index=None, Active=True
Key : Active : Value
None : True : 19.200000000000003
Constraints:
sum_limit : Size=1
Key : Lower : Body : Upper
None : None : 19.0 : 19.0
linking_1 : Size=4
Key : Lower : Body : Upper
1 : None : 0.0 : 0.0
2 : None : -103.0 : 0.0
3 : None : 0.0 : 0.0
4 : None : -106.0 : 0.0
linking_2 : Size=4
Key : Lower : Body : Upper
1 : None : 0.0 : 0.0
2 : None : -4.0 : 0.0
3 : None : 0.0 : 0.0
4 : None : -1.0 : 0.0
magic numbers 7's produced: 2.0

How to find memory and runtime used by a NuSMV model

Given a NuSMV model, how to find its runtime and how much memory it consumed?
The runtime can be found using this command at the system prompt: /usr/bin/time -f "time %e s" NuSMV filename.smv
The above gives the wall-clock time. Is there a better way to obtain runtime statistics from within NuSMV itself?
Also, how can I find out how much RAM the program used while processing the file?
One possibility is to use the usage command, which displays both the amount of RAM currently in use and the User and System time used by the tool since it was started (so usage should be called both before and after each operation you want to profile).
An example execution:
NuSMV > usage
Runtime Statistics
------------------
Machine name: *****
User time 0.005 seconds
System time 0.005 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 6932K
Virtual text size = 8139K
Virtual data size = 34089K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 30487K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 0
Minor page faults = 2607
Swaps = 0
Input blocks = 0
Output blocks = 0
Context switch (voluntary) = 9
Context switch (involuntary) = 0
NuSMV > reset; read_model -i nusmvLab.2018.06.07.smv ; go ; check_property ; usage
-- specification (L6 != pc U cc = len) IN mm is true
-- specification F (min = 2 & max = 9) IN mm is true
-- specification G !((((max > arr[0] & max > arr[1]) & max > arr[2]) & max > arr[3]) & max > arr[4]) IN mm is true
-- invariant max >= min IN mm is true
Runtime Statistics
------------------
Machine name: *****
User time 47.214 seconds
System time 0.284 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 270714K
Virtual text size = 8139K
Virtual data size = 435321K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 431719K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 1
Minor page faults = 189666
Swaps = 0
Input blocks = 48
Output blocks = 0
Context switch (voluntary) = 12
Context switch (involuntary) = 145

Getting poor performance while saving to Redis cache (using ServiceStack.Redis)

I am getting very poor performance while saving data to the Redis cache.
Scenario:
1) Using the Redis cache service provided by Microsoft Azure.
2) Running the code in a virtual machine created on Azure.
3) Both the VM and the cache service are in the same location.
Code Snippet:
public void MyCustomFunction()
{
    Stopwatch totalTime = Stopwatch.StartNew();
    RedisEndpoint config = new RedisEndpoint();
    config.Ssl = true;
    config.Host = "redis.redis.cache.windows.net";
    config.Password = Form1.Password;
    config.Port = 6380;
    RedisClient client = new RedisClient(config);
    int j = 0;
    for (int i = 0; i < 500; i++)
    {
        var currentStopWatchTime = Stopwatch.StartNew();
        var msgClient = client.As<Message>();
        List<string> dataToUpload = ClientData.GetRandomData();
        string myCachedItem_1 = dataToUpload[1].ToString();
        Random ran = new Random();
        string newKey = string.Empty;
        newKey = Guid.NewGuid().ToString();
        Message newItem = new Message
        {
            Id = msgClient.GetNextSequence(), // Size : Long variable
            //Id = (long)ran.Next(),
            Key = j.ToString(), // Size: Int32 variable
            Value = newKey, // Size : Guid string variable
            Description = myCachedItem_1 // Size : 5 KB
        };
        string listName = ran.Next(1, 6).ToString();
        msgClient.Lists[listName].Add(newItem);
        //msgClient.Store(newItem);
        Console.WriteLine("Loop Count : " + j++ + " , Total no. of items in List : " + listName + " are : " + msgClient.Lists[listName].Count);
        Console.WriteLine("Current Time: " + currentStopWatchTime.ElapsedMilliseconds + " Total time:" + totalTime.ElapsedMilliseconds);
        Console.WriteLine("Cache saved");
    }
}
Performance (While Saving):
Note : (All times are in milliseconds)
Loop Count : 0 , Total no. of items in List : 2 are : 1
Current Time: 310 Total time:342
Cache saved
Loop Count : 1 , Total no. of items in List : 3 are : 1
Current Time: 6 Total time:349
Cache saved
Loop Count : 2 , Total no. of items in List : 5 are : 1
Current Time: 3 Total time:353
Cache saved
Loop Count : 3 , Total no. of items in List : 5 are : 2
Current Time: 3 Total time:356
Cache saved
Loop Count : 4 , Total no. of items in List : 5 are : 3
Current Time: 3 Total time:360
Cache saved
.
.
.
.
.
Loop Count : 330 , Total no. of items in List : 4 are : 69
Current Time: 2 Total time:7057
Cache saved
Loop Count : 331 , Total no. of items in List : 4 are : 70
Current Time: 3 Total time:7061
Cache saved
Loop Count : 332 , Total no. of items in List : 4 are : 71
Current Time: 2 Total time:7064
Cache saved
Performance (While Fetching)
List : 1
No. of items : 110
Time : 57
List : 2
No. of items : 90
Time : 45
List : 3
No. of items : 51
Time : 23
List : 4
No. of items : 75
Time : 32
List : 5
No. of items : 63
Time : 33
If you're dealing in batches, you should look at reducing the number of synchronous network requests you make, since network latency is going to be the major performance issue when communicating with network services.
In this example you're making a read when you call:
msgClient.GetNextSequence();
and a write when you make:
msgClient.Lists[listName].Add(newItem);
That is a total of 1000 synchronous request/reply network operations on a single thread, where each operation depends on the previous one and has to complete before the next can be sent. This is why network latency is going to be a major source of performance issues, and what you should look at optimizing.
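The arithmetic behind that claim can be sketched with a toy latency model (the RTT and per-operation costs below are invented numbers for illustration, not measurements of Azure Redis):

```python
# Toy model of the cost structure above: 500 loop iterations, each doing one
# read (GetNextSequence) and one write (Lists[...].Add) = 1000 dependent
# round trips, versus a couple of batched requests carrying the same work.
RTT_MS = 2.0          # assumed network round-trip time per request
SERVER_COST_MS = 0.5  # assumed server-side processing cost per operation

def synchronous_cost(n_ops):
    # each operation waits for its own round trip before the next is sent
    return n_ops * (RTT_MS + SERVER_COST_MS)

def batched_cost(n_ops, n_requests=2):
    # all operations ride on a small fixed number of batched requests
    return n_requests * RTT_MS + n_ops * SERVER_COST_MS

print(synchronous_cost(1000))  # -> 2500.0 ms, dominated by 1000 RTTs
print(batched_cost(1000))      # -> 504.0 ms, RTT paid only twice
```

The per-operation server cost stays the same; only the number of round trips changes, which is why batching dominates.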
Batching Requests
If you're dealing with batched requests this can be optimized greatly by reducing the number of reads and writes by fetching all ids in a single request and storing them using the AddRange() batch operation, e.g:
var redisMessages = Redis.As<Message>();
const int batchSize = 500;
//fetch next 500 sequence of ids in a single request
var nextIds = redisMessages.GetNextSequence(batchSize);
var msgBatch = batchSize.Times(i =>
    new Message {
        Id = nextIds - (batchSize - i) + 1,
        Key = i.ToString(),
        Value = Guid.NewGuid().ToString(),
        Description = "Description"
    });
//Store all messages in a single multi operation request
redisMessages.Lists[listName].AddRange(msgBatch);
This will condense the 1000 redis operations down to 2 operations.
Then, if you need to, you can fetch all messages with:
var allMsgs = redisMessages.Lists[listName].GetAll();
or specific ranges using the GetRange(startingFrom, endingAt) APIs.

Libavcodec: How to tell end of access unit when decoding H.264 stream

I'm receiving H.264 video over RTP and decoding it with libavcodec. I'm unpackaging the NAL units from the RTP packets before feeding them to avcodec (including reassembling fragmentation units).
I'm trying to show the effective decoding frame rate. I used to log the time after each successful decode-video call where *got_picture_ptr is non-zero. So far this worked, since I only ever got video with one slice per frame. But now I receive video where both I and P frames consist of 2 NAL units each, of types 5 and 1 respectively. Now when I feed either slice of a frame, decode_video returns that it got a picture, and pAVFrame->coded_picture_number is increased by every slice.
How do I go about reliably finding the beginning or end of a video frame/picture/access unit?
I've dumped out a few NAL units from the stream and run them through h264_analyze from h264bitstream.
Output from h264_analyze on 4 NAL Units
!! Found NAL at offset 695262 (0xA9BDE), size 25 (0x0019)
==================== NAL ====================
forbidden_zero_bit : 0
nal_ref_idc : 1
nal_unit_type : 7 ( Sequence parameter set )
======= SPS =======
profile_idc : 66
constraint_set0_flag : 1
constraint_set1_flag : 1
constraint_set2_flag : 1
constraint_set3_flag : 0
reserved_zero_4bits : 0
level_idc : 32
seq_parameter_set_id : 0
chroma_format_idc : 0
residual_colour_transform_flag : 0
bit_depth_luma_minus8 : 0
bit_depth_chroma_minus8 : 0
qpprime_y_zero_transform_bypass_flag : 0
seq_scaling_matrix_present_flag : 0
log2_max_frame_num_minus4 : 12
pic_order_cnt_type : 2
log2_max_pic_order_cnt_lsb_minus4 : 0
delta_pic_order_always_zero_flag : 0
offset_for_non_ref_pic : 0
offset_for_top_to_bottom_field : 0
num_ref_frames_in_pic_order_cnt_cycle : 0
num_ref_frames : 1
gaps_in_frame_num_value_allowed_flag : 0
pic_width_in_mbs_minus1 : 79
pic_height_in_map_units_minus1 : 44
frame_mbs_only_flag : 1
mb_adaptive_frame_field_flag : 0
direct_8x8_inference_flag : 1
frame_cropping_flag : 0
frame_crop_left_offset : 0
frame_crop_right_offset : 0
frame_crop_top_offset : 0
frame_crop_bottom_offset : 0
vui_parameters_present_flag : 1
=== VUI ===
aspect_ratio_info_present_flag : 1
aspect_ratio_idc : 1
sar_width : 0
sar_height : 0
overscan_info_present_flag : 0
overscan_appropriate_flag : 0
video_signal_type_present_flag : 1
video_format : 5
video_full_range_flag : 1
colour_description_present_flag : 0
colour_primaries : 0
transfer_characteristics : 0
matrix_coefficients : 0
chroma_loc_info_present_flag : 0
chroma_sample_loc_type_top_field : 0
chroma_sample_loc_type_bottom_field : 0
timing_info_present_flag : 1
num_units_in_tick : 1
time_scale : 25
fixed_frame_rate_flag : 0
nal_hrd_parameters_present_flag : 0
vcl_hrd_parameters_present_flag : 0
low_delay_hrd_flag : 0
pic_struct_present_flag : 0
bitstream_restriction_flag : 1
motion_vectors_over_pic_boundaries_flag : 1
max_bytes_per_pic_denom : 0
max_bits_per_mb_denom : 0
log2_max_mv_length_horizontal : 6
log2_max_mv_length_vertical : 6
num_reorder_frames : 0
max_dec_frame_buffering : 1
=== HRD ===
cpb_cnt_minus1 : 0
bit_rate_scale : 0
cpb_size_scale : 0
initial_cpb_removal_delay_length_minus1 : 0
cpb_removal_delay_length_minus1 : 0
dpb_output_delay_length_minus1 : 0
time_offset_length : 0
!! Found NAL at offset 695290 (0xA9BFA), size 4 (0x0004)
==================== NAL ====================
forbidden_zero_bit : 0
nal_ref_idc : 1
nal_unit_type : 8 ( Picture parameter set )
======= PPS =======
pic_parameter_set_id : 0
seq_parameter_set_id : 0
entropy_coding_mode_flag : 0
pic_order_present_flag : 0
num_slice_groups_minus1 : 0
slice_group_map_type : 0
num_ref_idx_l0_active_minus1 : 0
num_ref_idx_l1_active_minus1 : 0
weighted_pred_flag : 0
weighted_bipred_idc : 0
pic_init_qp_minus26 : 3
pic_init_qs_minus26 : 0
chroma_qp_index_offset : 0
deblocking_filter_control_present_flag : 1
constrained_intra_pred_flag : 0
redundant_pic_cnt_present_flag : 0
transform_8x8_mode_flag : 1
pic_scaling_matrix_present_flag : 0
second_chroma_qp_index_offset : 1
!! Found NAL at offset 695297 (0xA9C01), size 50725 (0xC625)
==================== NAL ====================
forbidden_zero_bit : 0
nal_ref_idc : 1
nal_unit_type : 5 ( Coded slice of an IDR picture )
======= Slice Header =======
first_mb_in_slice : 0
slice_type : 2 ( I slice )
pic_parameter_set_id : 0
frame_num : 0
field_pic_flag : 0
bottom_field_flag : 0
idr_pic_id : 0
pic_order_cnt_lsb : 0
delta_pic_order_cnt_bottom : 0
redundant_pic_cnt : 0
direct_spatial_mv_pred_flag : 0
num_ref_idx_active_override_flag : 0
num_ref_idx_l0_active_minus1 : 0
num_ref_idx_l1_active_minus1 : 0
cabac_init_idc : 0
slice_qp_delta : 5
sp_for_switch_flag : 0
slice_qs_delta : 0
disable_deblocking_filter_idc : 0
slice_alpha_c0_offset_div2 : 0
slice_beta_offset_div2 : 0
slice_group_change_cycle : 0
=== Prediction Weight Table ===
luma_log2_weight_denom : 0
chroma_log2_weight_denom : 0
luma_weight_l0_flag : 0
chroma_weight_l0_flag : 0
luma_weight_l1_flag : 0
chroma_weight_l1_flag : 0
=== Ref Pic List Reordering ===
ref_pic_list_reordering_flag_l0 : 0
ref_pic_list_reordering_flag_l1 : 0
=== Decoded Ref Pic Marking ===
no_output_of_prior_pics_flag : 0
long_term_reference_flag : 0
adaptive_ref_pic_marking_mode_flag : 0
!! Found NAL at offset 746025 (0xB6229), size 38612 (0x96D4)
==================== NAL ====================
forbidden_zero_bit : 0
nal_ref_idc : 1
nal_unit_type : 5 ( Coded slice of an IDR picture )
======= Slice Header =======
first_mb_in_slice : 1840
slice_type : 2 ( I slice )
pic_parameter_set_id : 0
frame_num : 0
field_pic_flag : 0
bottom_field_flag : 0
idr_pic_id : 0
pic_order_cnt_lsb : 0
delta_pic_order_cnt_bottom : 0
redundant_pic_cnt : 0
direct_spatial_mv_pred_flag : 0
num_ref_idx_active_override_flag : 0
num_ref_idx_l0_active_minus1 : 0
num_ref_idx_l1_active_minus1 : 0
cabac_init_idc : 0
slice_qp_delta : 5
sp_for_switch_flag : 0
slice_qs_delta : 0
disable_deblocking_filter_idc : 0
slice_alpha_c0_offset_div2 : 0
slice_beta_offset_div2 : 0
slice_group_change_cycle : 0
=== Prediction Weight Table ===
luma_log2_weight_denom : 0
chroma_log2_weight_denom : 0
luma_weight_l0_flag : 0
chroma_weight_l0_flag : 0
luma_weight_l1_flag : 0
chroma_weight_l1_flag : 0
=== Ref Pic List Reordering ===
ref_pic_list_reordering_flag_l0 : 0
ref_pic_list_reordering_flag_l1 : 0
=== Decoded Ref Pic Marking ===
no_output_of_prior_pics_flag : 0
long_term_reference_flag : 0
adaptive_ref_pic_marking_mode_flag : 0
Both I slices show frame_num = 0. The next two (not shown) have frame_num = 1.
What kind of packetization do you have with this H.264 stream? For example, with FU-A/FU-B fragmentation (https://www.rfc-editor.org/rfc/rfc3984#page-11) you can always tell the end of a NAL unit, since it is aligned with the end of the fragment marked as the last fragment for the current NALU.
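The dump above also suggests a practical heuristic: the first IDR slice has first_mb_in_slice = 0 while the second has 1840, so a VCL NAL unit (type 1 or 5) whose first_mb_in_slice is 0 starts a new picture. A sketch of that check in Python (assuming no arbitrary slice order, and that 4 header bytes are enough for the Exp-Golomb codeword; a real parser must first strip emulation-prevention bytes from the RBSP):

```python
def read_ue(bits, pos=0):
    """Decode one unsigned Exp-Golomb (ue(v)) codeword from a bit sequence."""
    zeros = 0
    while bits[pos] == 0:  # count leading zeros
        zeros += 1
        pos += 1
    value = 1              # consume the terminating 1 bit
    for _ in range(zeros): # read 'zeros' more bits
        pos += 1
        value = (value << 1) | bits[pos]
    return value - 1

def starts_new_picture(nal):
    """Heuristic: a coded-slice NAL (type 1 or 5) whose first syntax
    element, first_mb_in_slice, decodes to 0 begins a new picture."""
    nal_unit_type = nal[0] & 0x1F
    if nal_unit_type not in (1, 5):
        return False
    bits = []
    for byte in nal[1:5]:
        bits.extend((byte >> (7 - i)) & 1 for i in range(8))
    return read_ue(bits) == 0

print(starts_new_picture(bytes([0x65, 0x80, 0, 0, 0])))  # IDR slice, first_mb 0 -> True
print(starts_new_picture(bytes([0x41, 0x40, 0, 0, 0])))  # non-IDR slice, first_mb 1 -> False
```

The full access-unit boundary rules (covering frame_num, pic_order_cnt, field flags, etc.) are in the H.264 spec, section 7.4.1.2.4; first_mb_in_slice == 0 is the simple common case.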

.gvs (GuideView openmp statistics) file format

Is there a documented format for the *.gvs files used by the GuideView OpenMP performance analyser?
A "guide.gvs" file is generated, for example, by Intel-compiled OpenMP programs with:
$ export LD_PRELOAD=<path_to_icc_or_redist>/lib/libiompprof5.so
$ ./openmp_parallelized_prog
$ ls -l guide.gvs
It is plain text.
Here is an example from a very short OpenMP program:
$ cat guide.gvs
*** KAI statistics library k3301
*** Begin Task 0
Environment variables:
OMP_NUM_THREADS : 2
OMP_SCHEDULE : static
OMP_DYNAMIC : FALSE
OMP_NESTED : FALSE
KMP_STATSFILE : guide.gvs
KMP_STATSCOLS : 80
KMP_INTERVAL : 0
KMP_BLOCKTIME : 200
KMP_PARALLEL : 2
KMP_STACKSIZE : 2097152
KMP_STACKOFFSET : 0
KMP_SCHEDULING : <unknown>
KMP_CHUNK : <unknown>
KMP_LIBRARY : throughput
end
System parameters:
start : Wed Nov 1 12:26:52 2010
stop : Wed Nov 1 12:26:52 2010
host : localhost
ncpu : 2
end
Unix process parameters:
maxrss : 0
minflt : 440
majflt : 2
nswap : 0
inblock : 208
oublock : 0
nvcsw : 6
nivcsw : 7
end
Region counts:
serial regions : 2
barrier regions : 0
parallel regions : 1
end
Program execution time (in seconds):
cpu : 0.00 sec
elapsed : 0.04 sec
serial : 0.00 sec
parallel : 0.04 sec
cpu percent : 0.01 %
end
Summary over all regions (has 2 threads):
# Thread #0 #1
Sum Parallel : 0.036 0.027
Sum Imbalance : 0.035 0.026
Min Parallel : 0.036 0.027
Min Imbalance : 0.035 0.026
Max Parallel : 0.036 0.027
Max Imbalance : 0.035 0.026
end
Region #1 (has 2 threads) at main/9 in "/home/user/icc/omp.c":
# Thread #0 #1
Sum Parallel : 0.036 0.027
Sum Imbalance : 0.035 0.026
Min Parallel : 0.036 0.027
Min Imbalance : 0.035 0.026
Max Parallel : 0.036 0.027
Max Imbalance : 0.035 0.026
end
Region #1 (has 2 threads) profile:
# Thread Incl Excl Routine
0,0 : 0.000 0.000 main/9 "/home/user/icc/omp.c"
1,0 : 0.000 0.000 main/9 "/home/user/icc/omp.c"
end
Serial program regions:
Serial region #1 executes for 0.00 seconds
begins at START OF PROGRAM
ends before region #1 (using 2 threads) at main/9 in "/home/user/icc/omp.c"
Serial region #2 executes for 0.00 seconds
begins after region #1 (using 2 threads) at main/9 in "/home/user/icc/omp.c"
ends at END OF PROGRAM
end
Serial region #1 profile:
# Thread Incl Excl Routine
end
Serial region #2 profile:
# Thread Incl Excl Routine
end
Program events (total):
# Thread #0 #1
mppbeg : 1 0
mppend : 1 0
serial : 2 0
mppfkd : 1 0
mppfrk : 1 0
mppjoi : 1 0
mppadj : 1 0
mpptid : 51 50
end
Region #1 (has 2 threads) events:
# Thread #0 #1
mppfrk : 1 0
mppjoi : 1 0
mpptid : 50 50
end
Serial section events:
# Serial #1 #2
mppbeg : 1 0
mppend : 0 1
serial : 1 1
mppfkd : 1 0
mppadj : 1 0
mpptid : 1 0
end
*** end
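Since the format is just "Header:" ... "end" blocks of "key : value" lines, a minimal parser is straightforward (a sketch based only on the dump above; there is no official specification, so block and key names here are taken from that example):

```python
def parse_gvs_blocks(text):
    """Parse guide.gvs 'Header:' ... 'end' blocks into nested dicts."""
    blocks, current, name = {}, None, None
    for line in text.splitlines():
        line = line.strip()
        if line == "end" and name is not None:
            blocks[name] = current          # close the current block
            current, name = None, None
        elif line.endswith(":") and current is None:
            name, current = line[:-1], {}   # a 'Header:' line opens a block
        elif current is not None and " : " in line:
            key, _, val = line.partition(" : ")
            current[key.strip()] = val.strip()
    return blocks

sample = (
    "Environment variables:\n"
    "OMP_NUM_THREADS : 2\n"
    "OMP_SCHEDULE : static\n"
    "end\n"
    "Region counts:\n"
    "parallel regions : 1\n"
    "end\n"
)
print(parse_gvs_blocks(sample)["Environment variables"]["OMP_NUM_THREADS"])  # -> 2
```

Tabular blocks such as "Region #1 ... profile:" would need column-aware handling; this sketch only covers the key/value sections.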
