Karaf error installing OSGi bundle listed in startup.properties with url

I am getting the following error while Karaf comes up:
Error installing bundle listed in startup.properties with url - mvn:org.apache.karaf.service/org.apache.karaf.service.guard/3.0.6 = 10
Content of startup.properties is as follows:
mvn\:org.ops4j.pax.url/pax-url-aether/2.4.5 = 5
mvn\:org.ops4j.pax.url/pax-url-wrap/2.4.5/jar/uber = 5
mvn\:org.ops4j.pax.logging/pax-logging-api/1.8.4 = 8
#mvn\:org.ops4j.pax.logging/pax-logging-service/1.8.4 = 8
mvn\:org.ops4j.pax.logging/pax-logging-logback/1.8.4 = 8
mvn\:org.apache.karaf.service/org.apache.karaf.service.guard/3.0.6 = 10
mvn\:org.apache.felix/org.apache.felix.configadmin/1.8.4 = 10
mvn\:org.apache.felix/org.apache.felix.fileinstall/3.5.2 = 11
mvn\:org.ow2.asm/asm-all/5.0.3 = 12
mvn\:org.apache.aries/org.apache.aries.util/1.1.1 = 20
mvn\:org.apache.aries.proxy/org.apache.aries.proxy.api/1.0.1 = 20
mvn\:org.apache.aries.blueprint/org.apache.aries.blueprint.cm/1.0.7 = 20
mvn\:org.apache.aries.proxy/org.apache.aries.proxy.impl/1.0.4 = 20
mvn\:org.apache.aries.blueprint/org.apache.aries.blueprint.api/1.0.1 = 20
mvn\:org.apache.aries.blueprint/org.apache.aries.blueprint.core.compatibility/1.0.0 = 20
mvn\:org.apache.aries.blueprint/org.apache.aries.blueprint.core/1.4.4 = 20
mvn\:org.apache.karaf.deployer/org.apache.karaf.deployer.spring/3.0.6 = 24
mvn\:org.apache.karaf.deployer/org.apache.karaf.deployer.blueprint/3.0.6 = 24
mvn\:org.apache.karaf.deployer/org.apache.karaf.deployer.wrap/3.0.6 = 24
mvn\:org.apache.karaf.region/org.apache.karaf.region.core/3.0.6 = 25
mvn\:org.apache.karaf.features/org.apache.karaf.features.core/3.0.6 = 25
mvn\:org.apache.karaf.deployer/org.apache.karaf.deployer.features/3.0.6 = 26
mvn\:jline/jline/2.13 = 30
mvn\:org.jledit/core/0.2.1 = 30
mvn\:org.apache.karaf.features/org.apache.karaf.features.command/3.0.6 = 30
mvn\:org.apache.karaf.bundle/org.apache.karaf.bundle.core/3.0.6 = 30
mvn\:org.apache.karaf.bundle/org.apache.karaf.bundle.command/3.0.6 = 30
mvn\:org.apache.karaf.shell/org.apache.karaf.shell.console/3.0.6 = 30
mvn\:org.apache.karaf.jaas/org.apache.karaf.jaas.modules/3.0.6 = 30
mvn\:org.apache.karaf.jaas/org.apache.karaf.jaas.config/3.0.6 = 30
mvn\:org.apache.sshd/sshd-core/0.14.0 = 30
mvn\:org.apache.karaf.shell/org.apache.karaf.shell.help/3.0.6 = 30
mvn\:org.apache.karaf.shell/org.apache.karaf.shell.table/3.0.6 = 30
mvn\:org.apache.karaf.system/org.apache.karaf.system.core/3.0.6 = 30
mvn\:org.apache.karaf.system/org.apache.karaf.system.command/3.0.6 = 30
mvn\:org.apache.karaf.shell/org.apache.karaf.shell.commands/3.0.6 = 30
mvn\:org.apache.aries.quiesce/org.apache.aries.quiesce.api/1.0.0 = 30
Any idea what the reason could be?

All the URIs (with scheme mvn:) specified in etc/startup.properties are translated directly by Karaf when it starts.
It's not possible to actually resolve a mvn: URI at that point, because the mvn\:org.ops4j.pax.url/pax-url-aether/2.4.5 bundle is the one that performs such resolution, so we'd have a chicken-and-egg problem here.
Instead, Karaf translates these mvn: URIs to file: URIs pointing into ${karaf.home}/system. For example, mvn\:org.ops4j.pax.logging/pax-logging-logback/1.8.4 → file:${karaf.home}/system/org/ops4j/pax/logging/pax-logging-logback/1.8.4/pax-logging-logback-1.8.4.jar.
If the file isn't found there, the bundle isn't installed. Please ensure you have the bundle org.apache.karaf.service.guard-3.0.6.jar in the system/ directory of Karaf, under system/org/apache/karaf/service/org.apache.karaf.service.guard/3.0.6/.
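As a hypothetical illustration (this helper is not part of Karaf), the translation rule can be sketched in Python: dots in the groupId become path separators, and the artifactId/version complete the path:
def mvn_to_file(uri, karaf_home="${karaf.home}"):
    # mvn:<groupId>/<artifactId>/<version> -> path under system/
    # (classifier/type variants are ignored in this sketch)
    group, artifact, version = uri[len("mvn:"):].split("/")[:3]
    return (karaf_home + "/system/" + group.replace(".", "/") + "/" +
            artifact + "/" + version + "/" + artifact + "-" + version + ".jar")

print(mvn_to_file("mvn:org.apache.karaf.service/org.apache.karaf.service.guard/3.0.6"))
# -> ${karaf.home}/system/org/apache/karaf/service/org.apache.karaf.service.guard/3.0.6/org.apache.karaf.service.guard-3.0.6.jar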

Related

SNMP - Decode Hex String Value

This is my first question here, so I hope it's done correctly.
I'm trying to get some information from a ZTE C300 OLT.
The thing is, when I try to get the SN of one of the ONTs, I get the response as a Hex-STRING:
snmpwalk -cpublic -v2c [OLTIP] 1.3.6.1.4.1.3902.1082.500.10.2.2.5.1.2
And this is the response that I get:
SNMPv2-SMI::enterprises.3902.1082.500.10.2.2.5.1.2.285278736.1 = Hex-STRING: 5A 54 45 47 C8 79 9B 27
The SN I see on the OLT is ZTEGC8799B27, but when I try to convert the Hex-STRING into text I don't get that SN.
I also have a Python script for SNMP, and the response that I get for that OID is:
{'1.3.6.1.4.1.3902.1082.500.10.2.2.5.1.2.285278736.1': "ZTEGÈy\x9b'"}
Can someone give me a hand with this? I'm new to SNMP and this is giving me a headache.
Thanks in advance!
This is an 8-octet hex string; the first 4 octets are ASCII.
Just convert the hex to ASCII.
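A minimal Python sketch of that conversion, using the value from the question:
value = "5A 54 45 47 C8 79 9B 27"
octets = value.split()
vendor_id = bytes.fromhex("".join(octets[:4])).decode("ascii")  # first 4 octets -> 'ZTEG'
serial = "".join(octets[4:])                                    # remaining octets stay hex -> 'C8799B27'
print(vendor_id + serial)                                       # ZTEGC8799B27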
Indeed, it was easier than expected. The first 4 bytes are ASCII-encoded, and the other 4 are the actual serial number, split every 2 digits. So I only need to decode the first part and concatenate the rest.
Works with the OLT ZTE C320:
def hex_str(hex_string):
    # e.g. "5A 54 45 47 C8 79 9B 27" -> "ZTEG" + "C8799B27"
    octets = hex_string.strip().split(' ')
    serial = "".join(octets[4:]).upper()   # last 4 octets stay as hex digits
    vendor_id = ''
    for hex_byte in octets[:4]:            # first 4 octets decode to ASCII
        vendor_id += chr(int(hex_byte, 16))
    return vendor_id + serial

def ascii_to_hex(text):
    # e.g. "ITBS2Lz/" -> "0x490x54..." -> " 49 54 ..." -> hex_str()
    hex_bytes = ''
    for ch in text:
        hex_bytes += hex(ord(ch))
    hex_bytes = hex_bytes.replace('0x', ' ')
    return hex_str(hex_bytes)

# value = "5A 54 45 47 C8 79 9B 27 "
# value = "49 54 42 53 8B 69 A2 45 "
# value = "ZTEGÈy\x9b'"
value = "ITBS2Lz/"
# value = "ITBS2HP#"
if len(value) == 24:
    print(hex_str(value))
else:
    print(ascii_to_hex(value))

Removing bad data from a data file using Pig

I have a data file like this
1943 49 1
1975 91 L
1903 56 3
1909 52 3
1953 96 3
1912 82
1976 66 3
1913 35
1990 45 1
1927 92 A
1912 2
1924 22
1971 2
1959 94 E
Now, using a Pig script, I want to remove the bad data, i.e. drop the rows that have non-numeric characters or empty fields.
I tried this:
records = load '/user/a106524609/test.txt' using PigStorage(' ') as
(year:chararray, temperature:int, quality:int);
rec1 = filter records by temperature != 'null' and (quality != 'null ')
Load it as lines:
A = load 'data.txt' using PigStorage('\n') as (line:chararray);
Split on all whitespace:
B = FOREACH A GENERATE FLATTEN(STRSPLIT(line, '\\s+')) as (year:int, temp:int, quality:chararray);
Filter by valid strings:
C = FILTER B BY quality IN ('0','1','2','3','4','5','6','7','8','9');
(Optionally) cast to an int:
D = FOREACH C GENERATE year, temp, (int)quality;
In Spark, I would start with a regex match of the expected format.
val cleanRows = sc.textFile("data.txt")
.filter(line => line.matches("(?:\\d+\\s+){2}\\d+"))
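If you want to sanity-check that regex locally before running it on the cluster, here is a small Python sketch of the same filter (the file name is assumed):
import re

# exactly three whitespace-separated numeric fields, as in the Spark filter
pattern = re.compile(r"(?:\d+\s+){2}\d+")
with open("data.txt") as f:
    clean_rows = [line.rstrip("\n") for line in f
                  if pattern.fullmatch(line.rstrip("\n"))]
print(clean_rows)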

Use Oracle client tracing and tkprof to review submitted SQL queries

I would like to learn how to trace an Oracle client and view the SQL queries submitted.
I started by adding these lines to my client's sqlnet.ora file:
TRACE_LEVEL_CLIENT=16
TRACE_FILE_CLIENT=sqlnet.trc
TRACE_DIRECTORY_CLIENT=c:\temp
LOG_DIRECTORY_CLIENT=c:\temp
TRACE_UNIQUE_CLIENT=TRUE
TRACE_TIMESTAMP_CLIENT=TRUE
DIAG_ADR_ENABLED=OFF
Then I logged into the database on that same client using SQL*Plus. I submitted two queries:
select * from all_tables where table_name = 'ADDRESS';
select * from all_users where username like 'AB%';
Then I exited SQL*Plus. The trace file was created in c:\temp. The file is about 4000 lines long. I can definitely see my two SQL statements in there. The format is a pain to read though, as they are just hex dumps:
(10632) [29-AUG-2016 17:08:40:240] nsbasic_bsd: 00 00 31 73 65 6C 65 63 |..1selec|
(10632) [29-AUG-2016 17:08:40:240] nsbasic_bsd: 74 20 2A 20 66 72 6F 6D |t.*.from|
(10632) [29-AUG-2016 17:08:40:240] nsbasic_bsd: 20 61 6C 6C 5F 75 73 65 |.all_use|
(10632) [29-AUG-2016 17:08:40:240] nsbasic_bsd: 72 73 20 77 68 65 72 65 |rs.where|
(10632) [29-AUG-2016 17:08:40:240] nsbasic_bsd: 20 75 73 65 72 6E 61 6D |.usernam|
(10632) [29-AUG-2016 17:08:40:240] nsbasic_bsd: 65 20 6C 69 6B 65 20 27 |e.like.'|
(10632) [29-AUG-2016 17:08:40:240] nsbasic_bsd: 41 42 25 27 01 00 00 00 |AB%'....|
My research leads me to believe that tkprof is the way to get a readable report of my trace file. I tried the following:
tkprof c:\temp\sqlnet_10632.trc report.txt
But that gives me a pretty pointless file:
0 session in tracefile
0 user SQL statements in trace file.
0 internal SQL statements in trace file.
0 SQL statements in trace file.
0 unique SQL statements in trace file.
4361 lines in trace file.
0 elapsed seconds in trace file.
Ideally, I'd like a report that shows the easy-to-read SQL text submitted by the client (including the two queries I typed in manually), in the order it was submitted. Am I on the right track? What am I missing? If I'm not on the right track, what should I do instead to trace the SQL submitted by the client?
Note: I am using a 12c client. I do not have access to the database server.
Just for reference, Oracle provides the trcasst utility to perform this action:
$ORACLE_HOME/bin/trcasst client_trace_file.trc > client_trace_file.txt
The tkprof utility is used to generate reports from 10046 trace files.
These trace files show database operations.
Here is a good article to get you started with those:
sql trace 10046
tkprof would not be at all useful for sqlnet trace files.
For sqlnet trace files, you would want to use the trcasst utility.
While trcasst is useful, if you really want to find out what is going on, you will need to develop some understanding of the files themselves.
Here are some good references to get started with understanding sqlnet trace files:
Tracing Error Information for Oracle Net Services
If you have access to My Oracle Support, the following notes will be invaluable:
SQLNET PACKET STRUCTURE: NS PACKET HEADER (Doc ID 1007807.6)
Examining Oracle Net, Net8, SQLNet Trace Files (Doc ID 156485.1)
That second article has a PDF attached that explains quite a bit.
Documentation from 11g through 19c states that you should set the following in sqlnet.ora:
diag_adr_enabled=off
If you want accurate timestamps, then do this instead:
diag_adr_enabled=on
That is a bug that I hope to see fixed by the time Oracle 20c is released.
This isn't the answer I was hoping for, but I needed to get-r-done and move on, so I wrote a quick and dirty Windows console application (C#):
using System;
using System.IO;
using System.Text.RegularExpressions;

static void Main(string[] args)
{
    using (var sr = new StreamReader(args[0]))
    {
        var line = string.Empty;
        var parsingSqlHex = false;
        var timestamp = string.Empty;
        var parsedSql = string.Empty;
        // Markers and hex-row layout of the nsbasic_bsd packet dumps
        var patternStart = @"nsbasic_bsd\: packet dump";
        var patternTimeStamp = @"\[\d{2}-[A-Z]{3}-\d{4} (\d\d\:){3}\d{3}\]";
        var patternHex = @"nsbasic_bsd\: ([0-9A-F][0-9A-F] ){8}";
        var patternEnd = @"nsbasic_bsd\: exit \(0\)$";
        while (line != null)
        {
            if (Regex.IsMatch(line, patternStart))
            {
                timestamp = Regex.Match(line, patternTimeStamp).Value;
                parsingSqlHex = true;
            }
            else if (parsingSqlHex)
            {
                if (Regex.IsMatch(line, patternEnd))
                {
                    if (!string.IsNullOrEmpty(parsedSql))
                    {
                        Console.WriteLine(timestamp);
                        Console.WriteLine(parsedSql + "\r\n");
                    }
                    parsedSql = string.Empty;
                    parsingSqlHex = false;
                }
                else if (Regex.IsMatch(line, patternHex))
                {
                    // The 8 hex byte columns sit at a fixed position at the end of the line
                    parsedSql += HexToString(line.Substring(line.Length - 35, 23));
                }
            }
            line = sr.ReadLine();
        }
    }
}

static string HexToString(string hexValues)
{
    var hexCodeArray = hexValues.Split(" ".ToCharArray());
    var s = string.Empty;
    for (var i = 0; i < hexCodeArray.Length; i++)
    {
        // keep printable ASCII only
        var n = Convert.ToInt32(hexCodeArray[i], 16);
        if (n > 31 && n < 127) s += Convert.ToChar(n);
    }
    return s;
}
I'm using it to parse my trace files like so:
OracleTraceParser.exe c:\temp\trace.txt > report.txt
Then my report.txt file has some odd characters here and there, but nonetheless gives me what I'm after:
[30-AUG-2016 13:50:51:534]
i^qx(SELECT DECODE('A','A','1','2') FROM DUAL
[30-AUG-2016 13:50:51:534]
i
[30-AUG-2016 13:51:05:003]
^a5select * from all_tables where table_name = 'ADDRESS'
[30-AUG-2016 13:51:21:081]
i^a1select * from all_users where username like 'AB%'
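For anyone who prefers a script, the same extraction can be sketched in Python; this is an untested outline that assumes the nsbasic_bsd packet-dump format shown above:
import re
import sys

# Patterns mirror the C# tool above; the trace format is assumed.
START = "nsbasic_bsd: packet dump"
END = re.compile(r"nsbasic_bsd: exit \(0\)\s*$")
HEX_ROW = re.compile(r"nsbasic_bsd: ((?:[0-9A-F]{2} ){8})")
STAMP = re.compile(r"\[\d{2}-[A-Z]{3}-\d{4} [\d:]+\]")

def decode(row):
    # keep printable ASCII only, like HexToString in the C# version
    return "".join(chr(int(h, 16)) for h in row.split() if 31 < int(h, 16) < 127)

in_dump, sql, stamp = False, "", ""
with open(sys.argv[1]) as trace:
    for line in trace:
        if START in line:
            m = STAMP.search(line)
            stamp = m.group(0) if m else ""
            in_dump, sql = True, ""
        elif in_dump and END.search(line):
            if sql:
                print(stamp)
                print(sql + "\n")
            in_dump = False
        elif in_dump:
            m = HEX_ROW.search(line)
            if m:
                sql += decode(m.group(1))
Run it as, e.g., python trace_parser.py c:\temp\trace.trc > report.txt.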

How to use StudentT distribution in pymc3?

I'm not sure whether this counts as a question or a bug report. I posted a GitHub gist here: https://gist.github.com/jbwhit/a9012e04b0f48e582c22
I found this question (pymc3: hierarchical model with multiple obsesrved variables) to be an excellent starting point for my own hierarchical model, but ran into difficulties as soon as I tried to modify it in any substantial way.
First, the model and setup that works:
import numpy as np
import pymc3 as pm

n_individuals = 200
points_per_individual = 10
means = np.random.normal(30, 12, n_individuals)
observed = np.random.normal(means, 1, (points_per_individual, n_individuals))

model = pm.Model()
with model:
    hyper_mean = pm.Normal('hyper_mean', mu=0, sd=100)
    hyper_sigma = pm.HalfNormal('hyper_sigma', sd=3)
    means = pm.Normal('means', mu=hyper_mean, sd=hyper_sigma, shape=n_individuals)
    sigmas = pm.HalfNormal('sigmas', sd=100)
    ye = pm.Normal('ye', mu=means, sd=sigmas, observed=observed)
    trace = pm.sample(10000)
All of the above works as expected (and the traces look nice). The next piece of code makes one change (swapping a T distribution for the Normal):
model = pm.Model()
with model:
    hyper_mean = pm.Normal('hyper_mean', mu=0, sd=100)
    hyper_sigma = pm.HalfNormal('hyper_sigma', sd=3)
    ### Changed to a T distribution ###
    means = pm.StudentT('means', nu=hyper_mean, sd=hyper_sigma, shape=n_individuals)
    sigmas = pm.HalfNormal('sigmas', sd=100)
    ye = pm.Normal('ye', mu=means, sd=sigmas, observed=observed)
    trace = pm.sample(10000)
The following is the output:
Assigned NUTS to hyper_mean
Assigned NUTS to hyper_sigma_log
Assigned NUTS to means
Assigned NUTS to sigmas_log
---------------------------------------------------------------------------
PositiveDefiniteError Traceback (most recent call last)
<ipython-input-12-69f59e2f3d47> in <module>()
18 ye = pm.Normal('ye', mu=means, sd=sigmas, observed=observed)
19
---> 20 trace = pm.sample(10000)
/Users/jonathan/miniconda2/envs/pymc3/lib/python3.5/site-packages/pymc3/sampling.py in sample(draws, step, start, trace, chain, njobs, tune, progressbar, model, random_seed)
121 """
122 model = modelcontext(model)
--> 123
124 step = assign_step_methods(model, step)
125
/Users/jonathan/miniconda2/envs/pymc3/lib/python3.5/site-packages/pymc3/sampling.py in assign_step_methods(model, step, methods)
66 selected_steps[selected].append(var)
67
---> 68 # Instantiate all selected step methods
69 steps += [s(vars=selected_steps[s]) for s in selected_steps if selected_steps[s]]
70
/Users/jonathan/miniconda2/envs/pymc3/lib/python3.5/site-packages/pymc3/sampling.py in <listcomp>(.0)
66 selected_steps[selected].append(var)
67
---> 68 # Instantiate all selected step methods
69 steps += [s(vars=selected_steps[s]) for s in selected_steps if selected_steps[s]]
70
/Users/jonathan/miniconda2/envs/pymc3/lib/python3.5/site-packages/pymc3/step_methods/nuts.py in __init__(self, vars, scaling, step_scale, is_cov, state, Emax, target_accept, gamma, k, t0, model, profile, **kwargs)
76
77
---> 78 self.potential = quad_potential(scaling, is_cov, as_cov=False)
79
80 if state is None:
/Users/jonathan/miniconda2/envs/pymc3/lib/python3.5/site-packages/pymc3/step_methods/quadpotential.py in quad_potential(C, is_cov, as_cov)
33 return QuadPotential_SparseInv(C)
34
---> 35 partial_check_positive_definite(C)
36 if C.ndim == 1:
37 if is_cov != as_cov:
/Users/jonathan/miniconda2/envs/pymc3/lib/python3.5/site-packages/pymc3/step_methods/quadpotential.py in partial_check_positive_definite(C)
56 if len(i):
57 raise PositiveDefiniteError(
---> 58 "Simple check failed. Diagonal contains negatives", i)
59
60
PositiveDefiniteError: Scaling is not positive definite. Simple check failed. Diagonal contains negatives. Check indexes [202]
Any suggestions on how to get this to work?
As I mentioned in the comment, try running:
model = pm.Model()
with model:
    hyper_mean = pm.Normal('hyper_mean', mu=0, sd=100)
    hyper_sigma = pm.HalfNormal('hyper_sigma', sd=3)
    nu = pm.Exponential('nu', 1./10, testval=5.)
    ### Changed to a T distribution ###
    means = pm.StudentT('means', nu=nu, mu=hyper_mean, sd=hyper_sigma, shape=n_individuals)
    sigmas = pm.HalfNormal('sigmas', sd=100)
    ye = pm.Normal('ye', mu=means, sd=sigmas, observed=observed)
    trace = pm.sample(10000)
In other words: use the mu argument of pm.StudentT for hyper_mean, and nu for the degrees of freedom.
Once it starts working, you might also try adding pm.find_MAP (as suggested by @Chris Fonnesbeck).
Try finding the MAP estimate and use that as the starting point for the MCMC run:
start = pm.find_MAP()
trace = pm.sample(10000, start=start)
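Putting the two answers together, the sampling step becomes (this only combines the snippets already shown; the model block is the corrected one above):
with model:
    start = pm.find_MAP()                    # MAP estimate
    trace = pm.sample(10000, start=start)    # use it as the starting point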

Calculation of application speedup using gnuplot and awk

Here's the problem:
Speedup formula: S(p) = T(1)/T(p) = (avg time for one process / avg time for p processes)
There are 5 logs from which the information must be extracted.
cg.B.1.log contains the execution times for one process, so we average those times to obtain T(1). The other log files contain the execution times for 2, 4, 8 and 16 processes; their averages must also be calculated, since they are T(p).
Here's the code that calculates the averages:
tavg(n) = "awk 'BEGIN { FS = \"[ \\t]*=[ \\t]*\" } /Time in seconds/ { s += $2; c++ } /Total processes/ { if (! CP) CP = $2 } END { print s/c }' cg.B.".n.".log ".(n == 1 ? ">" : ">>")." tavg.dat;"
And the code that calculates the speedup:
system "awk 'NR==1{n=$0} {print n/$0}' tavg.dat > speedup.dat;"
How do I combine those two commands so that the output 'speedup.dat' is produced directly, without the intermediate file tavg.dat?
Here are the contents of the files; the structure of all log files is identical. Only the first two executions are included, for brevity.
cg.B.1.log
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
Start in 16:45:15--25/12/2014
NAS Parallel Benchmarks 3.3 -- CG Benchmark
Size: 75000
Iterations: 75
Number of active processes: 1
Number of nonzeroes per row: 13
Eigenvalue shift: .600E+02
iteration ||r|| zeta
1 0.30354859861452E-12 59.9994751578754
2 0.11186435488267E-14 21.7627846142536
3 0.11312258511928E-14 22.2876617043224
4 0.11222160585284E-14 22.5230738188346
5 0.11244234177219E-14 22.6275390653892
6 0.11330434819384E-14 22.6740259189533
7 0.11334259623050E-14 22.6949056826251
8 0.11374839313647E-14 22.7044023166872
9 0.11424877443039E-14 22.7087834345620
10 0.11329475190566E-14 22.7108351397177
11 0.11337364093482E-14 22.7118107121341
12 0.11379928308864E-14 22.7122816240971
13 0.11369453681794E-14 22.7125122663243
14 0.11430390337015E-14 22.7126268007594
15 0.11400318886400E-14 22.7126844161819
16 0.11352091331197E-14 22.7127137461755
17 0.11350923439124E-14 22.7127288402000
18 0.11475378864565E-14 22.7127366848296
19 0.11366777929028E-14 22.7127407981217
20 0.11274243312504E-14 22.7127429721364
21 0.11353930792856E-14 22.7127441294025
22 0.11299685800278E-14 22.7127447493900
23 0.11296405041170E-14 22.7127450834533
24 0.11381975597887E-14 22.7127452643881
25 0.11328127301663E-14 22.7127453628451
26 0.11367332658939E-14 22.7127454166517
27 0.11283372178605E-14 22.7127454461696
28 0.11384734158863E-14 22.7127454624211
29 0.11394011989719E-14 22.7127454713974
30 0.11354294067640E-14 22.7127454763703
31 0.11412988029103E-14 22.7127454791343
32 0.11358088407717E-14 22.7127454806740
33 0.11263266152515E-14 22.7127454815316
34 0.11275183080286E-14 22.7127454820131
35 0.11328306951409E-14 22.7127454822840
36 0.11357880314891E-14 22.7127454824349
37 0.11332687790488E-14 22.7127454825202
38 0.11324108818137E-14 22.7127454825684
39 0.11365065523777E-14 22.7127454825967
40 0.11361185361321E-14 22.7127454826116
41 0.11276519820716E-14 22.7127454826202
42 0.11317183424878E-14 22.7127454826253
43 0.11236007481770E-14 22.7127454826276
44 0.11304065564684E-14 22.7127454826296
45 0.11287791356431E-14 22.7127454826310
46 0.11297028000133E-14 22.7127454826310
47 0.11281236869666E-14 22.7127454826314
48 0.11277254075548E-14 22.7127454826317
49 0.11320327289847E-14 22.7127454826309
50 0.11287655285563E-14 22.7127454826321
51 0.11230503422400E-14 22.7127454826324
52 0.11292089094944E-14 22.7127454826313
53 0.11366728396408E-14 22.7127454826315
54 0.11222618466968E-14 22.7127454826310
55 0.11278193276516E-14 22.7127454826315
56 0.11244624896030E-14 22.7127454826316
57 0.11264508872685E-14 22.7127454826318
58 0.11255583774760E-14 22.7127454826314
59 0.11227129146723E-14 22.7127454826314
60 0.11189480800173E-14 22.7127454826318
61 0.11163241472678E-14 22.7127454826315
62 0.11278839424218E-14 22.7127454826318
63 0.11226804133008E-14 22.7127454826313
64 0.11222456601361E-14 22.7127454826317
65 0.11270879524310E-14 22.7127454826308
66 0.11303771390006E-14 22.7127454826319
67 0.11240101357287E-14 22.7127454826319
68 0.11240278884391E-14 22.7127454826321
69 0.11207748067718E-14 22.7127454826317
70 0.11178755187571E-14 22.7127454826327
71 0.11195935245649E-14 22.7127454826313
72 0.11260715126337E-14 22.7127454826322
73 0.11281677964997E-14 22.7127454826316
74 0.11162340034815E-14 22.7127454826318
75 0.11208709203921E-14 22.7127454826310
Benchmark completed
VERIFICATION SUCCESSFUL
Zeta is 0.2271274548263E+02
Error is 0.3128387698896E-15
CG Benchmark Completed.
Class = B
Size = 75000
Iterations = 75
Time in seconds = 88.72
Total processes = 1
Compiled procs = 1
Mop/s total = 616.64
Mop/s/process = 616.64
Operation type = floating point
Verification = SUCCESSFUL
Version = 3.3
Compile date = 25 Dec 2014
Compile options:
MPIF77 = mpif77
FLINK = $(MPIF77)
FMPI_LIB = -L/usr/lib/openmpi/lib -lmpi -lopen-rte -lo...
FMPI_INC = -I/usr/lib/openmpi/include -I/usr/lib/openm...
FFLAGS = -O
FLINKFLAGS = -O
RAND = randi8
Please send the results of this run to:
NPB Development Team
Internet: npb@nas.nasa.gov
If email is not available, send this to:
MS T27A-1
NASA Ames Research Center
Moffett Field, CA 94035-1000
Fax: 650-604-3957
Finish in 16:46:46--25/12/2014
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
Start in 17:03:13--25/12/2014
NAS Parallel Benchmarks 3.3 -- CG Benchmark
Size: 75000
Iterations: 75
Number of active processes: 1
Number of nonzeroes per row: 13
Eigenvalue shift: .600E+02
iteration ||r|| zeta
1 0.30354859861452E-12 59.9994751578754
2 0.11186435488267E-14 21.7627846142536
3 0.11312258511928E-14 22.2876617043224
4 0.11222160585284E-14 22.5230738188346
5 0.11244234177219E-14 22.6275390653892
6 0.11330434819384E-14 22.6740259189533
7 0.11334259623050E-14 22.6949056826251
8 0.11374839313647E-14 22.7044023166872
9 0.11424877443039E-14 22.7087834345620
10 0.11329475190566E-14 22.7108351397177
11 0.11337364093482E-14 22.7118107121341
12 0.11379928308864E-14 22.7122816240971
13 0.11369453681794E-14 22.7125122663243
14 0.11430390337015E-14 22.7126268007594
15 0.11400318886400E-14 22.7126844161819
16 0.11352091331197E-14 22.7127137461755
17 0.11350923439124E-14 22.7127288402000
18 0.11475378864565E-14 22.7127366848296
19 0.11366777929028E-14 22.7127407981217
20 0.11274243312504E-14 22.7127429721364
21 0.11353930792856E-14 22.7127441294025
22 0.11299685800278E-14 22.7127447493900
23 0.11296405041170E-14 22.7127450834533
24 0.11381975597887E-14 22.7127452643881
25 0.11328127301663E-14 22.7127453628451
26 0.11367332658939E-14 22.7127454166517
27 0.11283372178605E-14 22.7127454461696
28 0.11384734158863E-14 22.7127454624211
29 0.11394011989719E-14 22.7127454713974
30 0.11354294067640E-14 22.7127454763703
31 0.11412988029103E-14 22.7127454791343
32 0.11358088407717E-14 22.7127454806740
33 0.11263266152515E-14 22.7127454815316
34 0.11275183080286E-14 22.7127454820131
35 0.11328306951409E-14 22.7127454822840
36 0.11357880314891E-14 22.7127454824349
37 0.11332687790488E-14 22.7127454825202
38 0.11324108818137E-14 22.7127454825684
39 0.11365065523777E-14 22.7127454825967
40 0.11361185361321E-14 22.7127454826116
41 0.11276519820716E-14 22.7127454826202
42 0.11317183424878E-14 22.7127454826253
43 0.11236007481770E-14 22.7127454826276
44 0.11304065564684E-14 22.7127454826296
45 0.11287791356431E-14 22.7127454826310
46 0.11297028000133E-14 22.7127454826310
47 0.11281236869666E-14 22.7127454826314
48 0.11277254075548E-14 22.7127454826317
49 0.11320327289847E-14 22.7127454826309
50 0.11287655285563E-14 22.7127454826321
51 0.11230503422400E-14 22.7127454826324
52 0.11292089094944E-14 22.7127454826313
53 0.11366728396408E-14 22.7127454826315
54 0.11222618466968E-14 22.7127454826310
55 0.11278193276516E-14 22.7127454826315
56 0.11244624896030E-14 22.7127454826316
57 0.11264508872685E-14 22.7127454826318
58 0.11255583774760E-14 22.7127454826314
59 0.11227129146723E-14 22.7127454826314
60 0.11189480800173E-14 22.7127454826318
61 0.11163241472678E-14 22.7127454826315
62 0.11278839424218E-14 22.7127454826318
63 0.11226804133008E-14 22.7127454826313
64 0.11222456601361E-14 22.7127454826317
65 0.11270879524310E-14 22.7127454826308
66 0.11303771390006E-14 22.7127454826319
67 0.11240101357287E-14 22.7127454826319
68 0.11240278884391E-14 22.7127454826321
69 0.11207748067718E-14 22.7127454826317
70 0.11178755187571E-14 22.7127454826327
71 0.11195935245649E-14 22.7127454826313
72 0.11260715126337E-14 22.7127454826322
73 0.11281677964997E-14 22.7127454826316
74 0.11162340034815E-14 22.7127454826318
75 0.11208709203921E-14 22.7127454826310
Benchmark completed
VERIFICATION SUCCESSFUL
Zeta is 0.2271274548263E+02
Error is 0.3128387698896E-15
CG Benchmark Completed.
Class = B
Size = 75000
Iterations = 75
Time in seconds = 87.47
Total processes = 1
Compiled procs = 1
Mop/s total = 625.43
Mop/s/process = 625.43
Operation type = floating point
Verification = SUCCESSFUL
Version = 3.3
Compile date = 25 Dec 2014
Compile options:
MPIF77 = mpif77
FLINK = $(MPIF77)
FMPI_LIB = -L/usr/lib/openmpi/lib -lmpi -lopen-rte -lo...
FMPI_INC = -I/usr/lib/openmpi/include -I/usr/lib/openm...
FFLAGS = -O
FLINKFLAGS = -O
RAND = randi8
Please send the results of this run to:
NPB Development Team
Internet: npb@nas.nasa.gov
If email is not available, send this to:
MS T27A-1
NASA Ames Research Center
Moffett Field, CA 94035-1000
Fax: 650-604-3957
Finish in 17:04:43--25/12/2014
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
tavg.dat
88.3055
45.1482
37.7202
37.4035
53.777
speedup.dat
1
1.9559
2.34107
2.36089
1.64207
You can do it all in one awk script that processes all the log files:
#!/usr/bin/awk -f
BEGIN { FS="=" }
lfname != FILENAME { lfname = FILENAME; split(FILENAME, a, "."); fnum=a[3] }
/Time in seconds/ { tsecs[fnum] += $2; tcnt[fnum]++ }
/Total processes/ { cp[fnum] = int($2) }
END {
tavg1 = tsecs[1]/tcnt[1]
for( k in tsecs ) {
tavgk = tsecs[k]/tcnt[k]
if( tavgk > 0 ) {
print k OFS cp[k] OFS tavgk OFS tavg1/tavgk
}
}
}
If you put that in a file called awk.script and make it executable with chmod +x awk.script you can run it in bash like:
./awk.script cg.B.*.log
If you're using GNU awk, the output will be ordered (extra steps may be needed to ensure ordered output with other awk flavors).
Having generated a 2nd and 3rd file for testing, the output looks like:
1 1 88.095 1
2 2 68.095 1.29371
3 4 49.595 1.77629
where the columns are: file number, # of processes, average time per file, speedup. You could get just the speedups by changing the print in the END block to print tavg1/tavgk.
Here's a breakdown of the script:
Use a simpler field separator in BEGIN
lfname != FILENAME - parse out file number from the filename as fnum but only when the FILENAME changes.
/Time in seconds/ - accumulate the values in the tsecs and tcnt arrays, keyed by fnum.
/Total processes/ - store the process count in the cp array, keyed by fnum; the int() function strips whitespace from the value.
END - Calculate the average for fnum 1 as tavg1, loop through the keys in tsecs and calculate the average by fnum key as tavgk. When tavgk > 0 print the output as described above.
You have figured out all the difficult parts already. You don't need the tavg.dat file at all. Create your tavg(n) function directly as a system call:
tavg(n) = system("awk 'BEGIN { FS = \"[ \\t]*=[ \\t]*\" } \
/Time in seconds/ { s += $2; c++ } /Total processes/ { \
if (! CP) CP = $2 } END { print s/c }' cg.B.".n.".log")
And a speedup(n) function as (note the order, per S(p) = T(1)/T(p)):
speedup(n) = tavg(1)/tavg(n)
Now you can set print to write to a file:
set print "speedup.dat"
do for [i=1:5] {
print speedup(i)
}
unset print
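To sanity-check the results outside gnuplot, here is a small Python sketch of the same computation (it assumes the cg.B.<n>.log naming and the same file range as the gnuplot loop):
import re

def tavg(n):
    # average all "Time in seconds = X" entries in cg.B.<n>.log
    with open("cg.B.%d.log" % n) as log:
        times = [float(x) for x in
                 re.findall(r"Time in seconds\s*=\s*(\S+)", log.read())]
    return sum(times) / len(times)

t1 = tavg(1)                 # T(1)
for n in range(1, 6):        # same file numbers as the gnuplot loop
    print(t1 / tavg(n))      # S(p) = T(1)/T(p)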
