How to put 7 events in one eventset in PAPI

In PAPI I'm trying to put 7 events into one eventset so I can read 7 results in one operation, but I always get a return value of -1. Can anyone help me? My code looks like this:
int events1[] = {
    PAPI_L1_TCM,
    PAPI_L2_TCM,
    PAPI_L3_TCM,
    PAPI_MEM_WCY,
    PAPI_RES_STL,
    PAPI_TLB_DM,
    PAPI_TLB_IM
};

PAPI_library_init(PAPI_VER_CURRENT);
i = PAPI_start_counters(events1, 7);
where i comes back as -1, which means PAPI_EINVAL.
I tried changing the value of PAPI_NUM_TLS, but it didn't work.

I have the same problem now. As far as I can tell, the trouble comes from the 5th and 6th counters. Here: https://icl.cs.utk.edu/projects/papi/wiki/PAPI3:PAPI_add_event.3
under the IBM POWER6 NOTES it is mentioned that these two counters are special-purpose and, as I understand it, can only count specific events. I have not found a full solution yet. For the 5th counter, adding PAPI_TOT_INS seems to work, but for the 6th one PAPI_TOT_CYC gives a PAPI_ECNFLCT error, just as the notes say.
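One way to narrow down which event is the culprit is to skip PAPI_start_counters and build the eventset with the low-level API, adding the events one at a time and printing the error for the first one that is rejected. This is only a rough diagnostic sketch (the event list is the one from the question; error handling is minimal):

#include <stdio.h>
#include <papi.h>

int main(void)
{
    int events1[] = { PAPI_L1_TCM, PAPI_L2_TCM, PAPI_L3_TCM, PAPI_MEM_WCY,
                      PAPI_RES_STL, PAPI_TLB_DM, PAPI_TLB_IM };
    int nev = sizeof(events1) / sizeof(events1[0]);
    int eventset = PAPI_NULL;
    char name[PAPI_MAX_STR_LEN];
    int i, ret;

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI_library_init failed\n");
        return 1;
    }
    printf("hardware counters available: %d\n", PAPI_num_counters());

    if (PAPI_create_eventset(&eventset) != PAPI_OK) {
        fprintf(stderr, "PAPI_create_eventset failed\n");
        return 1;
    }
    /* add events one by one so the first unsupported or conflicting one is named */
    for (i = 0; i < nev; i++) {
        PAPI_event_code_to_name(events1[i], name);
        ret = PAPI_add_event(eventset, events1[i]);
        if (ret != PAPI_OK)
            fprintf(stderr, "could not add %s: %s\n", name, PAPI_strerror(ret));
        else
            printf("added %s\n", name);
    }
    return 0;
}

The papi_event_chooser utility that ships with PAPI can answer a similar question without writing code; something like
papi_event_chooser PRESET PAPI_L1_TCM PAPI_L2_TCM PAPI_L3_TCM PAPI_MEM_WCY PAPI_RES_STL PAPI_TLB_DM PAPI_TLB_IM
should check whether that list of presets can be counted together on the machine at hand.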

Report Builder Expressions

I'm new to Report Builder and having issues with some expressions I'm trying to implement in a report. I got the standard ones to work, but as soon as I try anything with distinct counts I get error messages. Over the last couple of weeks I've tried many combinations, read the expression help, googled, and looked at other questions on the internet. To reduce my frustration, I would even jump to other expressions and walk away, hoping I'd come back with different insight.
It's probably something simple, or something I don't know about writing expressions.
I'm hoping someone can help with these expressions; they are the versions I get the fewest errors with (usually just "expression expected") and they show what I'm trying to accomplish.
=IIF((Fields!RECORDFLAG.Value)='D',COUNTDISTINCT(Fields!TICKETNUM.Value),0)
=IIF((Fields!TRANSTYPE.Value)='1' and (Fields!RECORDFLAG.VALUE)='A' or
'B',SUM(Fields!DOLLARS.Value),0)
=IIF((Fields!TRANSTYPE.Value)='1' and
(Fields!RECORDFLAG.VALUE)='P',SUM(Fields!DOLLARS.Value),0)
=Sum([DOLLARS] case when [RECORDFLAG]='P' then -1*[DOLLARS])
Thank You.
=IIF((Fields!RECORDFLAG.Value)="D",COUNTDISTINCT(Fields!TICKETNUM.Value))
The error message gives you the answer here - no false part of the iif() has been specified. Use =IIF((Fields!RECORDFLAG.Value)="D",COUNTDISTINCT(Fields!TICKETNUM.Value), 0)
=IIF((Fields!TRANSTYPE.Value)="1" and (Fields!RECORDFLAG.VALUE)="A" or "B",SUM(Fields!DOLLARS.Value),0)
This is not how an OR works in SSRS. Use:
=IIF((Fields!TRANSTYPE.Value)="1" and (Fields!RECORDFLAG.VALUE="A" or Fields!RECORDFLAG.Value = "B"),SUM(Fields!DOLLARS.Value),0)
The 0s are returned due to your report design. countdistinct() is an aggregate function - it's meant to be used on a set of data. However, your iif() is only testing on a per-row basis - you're basically saying "if the current row is this thing, count all the distinct values", which doesn't make sense. There are a couple of ways forward:
You can count the number of times a certain value occurs in a given condition using a sum(). This is not the same as the countdistinct(), but if you use =sum(iif(Fields!RECORDFLAG.Value = "D", 1, 0)) then you will get the number of times RECORDFLAG is D in that set. Note: this requires the data to be aggregated (so in SSRS, grouped in a tablix).
You can use custom code to count distinct values in a set. See https://itsalocke.com/aggregate-on-a-lookup-in-ssrs/. You can apply this even if you have only one dataset - just reference the same one twice. A rough sketch of this kind of custom code is shown after this list.
You can change the way your report works. You can group on Fields!RECORDFLAG.Value and filter the group to where Fields!RECORDFLAG.Value = "D". Then in your textbox, use =countdistinct(Fields!TICKETNUM.Value) to get the distinct values for TICKETNUM when RECORDFLAG is D.
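On the custom-code route, here is a sketch of the general idea rather than the code from the linked article; the function names and the Dictionary are invented for illustration, and the usual caveats about report code and evaluation order apply. In Report Properties > Code add:

' shared across the report run
Private seen As New System.Collections.Generic.Dictionary(Of String, Boolean)

' call once per detail row; records the key only when the condition holds
Public Function TrackDistinct(ByVal include As Boolean, ByVal key As String) As Integer
    If include AndAlso Not seen.ContainsKey(key) Then
        seen.Add(key, True)
    End If
    Return seen.Count   ' running count of distinct keys seen so far
End Function

Public Function DistinctSoFar() As Integer
    Return seen.Count
End Function

Then call =Code.TrackDistinct(Fields!RECORDFLAG.Value = "D", Fields!TICKETNUM.Value) on the detail rows (a hidden textbox works) and =Code.DistinctSoFar() in a footer to get a distinct count of TICKETNUM over the RECORDFLAG = "D" rows.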

Use cut and grep to separate data while printing multiple fields

I want to separate data by week. The week is stated in a specific field on every line, and I would like to know how to use grep, cut, or anything else that's relevant on JUST the field the week is specified in, while still keeping the rest of the data on each line. I need to be able to pipe the information in via | because that's how the rest of my program works.
As the output gets processed, it should look something like this:
asset.14548.extension 0
asset.40795.extension 0
asset.98745.extension 1
I want to be able to select those lines by their week number while still keeping the asset name in my output, because the number of times each asset shows up gets counted later. My problem is that I can't make my program smart enough to look only at the "1" in the week field while ignoring the "1" that appears inside the asset name.
UPDATE
The closest answer I found was
grep "^.........................$week" ;
That's good, but it relies on every string being the same length. Is there a way to have it start from the right instead of the left? If so, that would answer my question.
The ^ tells grep to anchor the match at the start of the line, and each . matches any single character in that position.
I found what I was looking for in some documentation. Anchor matches!
grep "$week$" file
would output this if $week was 0
asset.14548.extension 0
asset.40795.extension 0
I couldn't find my exact question or a closely similar question with a simple answer, so hopefully it helps the next person scratching their head on this.
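If the week number is always the last whitespace-separated field, another option that doesn't depend on line length or on anchoring is to compare that field directly with awk. A small sketch, assuming the data is piped in the same way as for the grep version and $week holds the week number:

# keep only the lines whose last field equals $week; the rest of each line is untouched
... | awk -v wk="$week" '$NF == wk'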

Apache Storm Fields Grouping Calculation

I asked this question in the Storm user group and haven't gotten a response yet, so I decided to ask it here. I've found the code, and many references to how the taskIndex is calculated, but when I try using the following I don't get the same result as my Storm topology. I've also seen more than one posting where others report the same.
Here's the question:
Hello,
I’ve tried to use the information below to generate the hash, mod it, and in turn, calculate the correct consuming destination task index, but without success. I’ve scoured the Internet to find an example of a hand calculation of this nature and have turned up empty. I must be missing something in my hand calc, so I’m hoping someone on the list can help me out.
I have a fields grouping set up as follows:
.fieldsGrouping(EXAMPLE_BOLT, EXAMPLE_BOLT_STREAM, new Fields(TopologyConstants.EXAMPLE_FIELD_GROUPING_ID))
My EXAMPLE_BOLT emits as shown here:
collector.emit(TopologyConstants.EXAMPLE_BOLT_STREAM, new Values(EXAMPLE_FIELD_GROUPING_ID_VALUE, EXAMPLE_DATA_INSTANCE));
I perform the calculation as follows:
int numberOfConsumingTasks = x;
Integer EXAMPLE_FIELD_GROUPING_ID_VALUE = y;
ArrayList alist = new ArrayList();
alist.add(EXAMPLE_FIELD_GROUPING_ID_VALUE);
int hashCode = Arrays.deepHashCode(alist.toArray());
int targetTaskIndex = Math.abs(hashCode) % numberOfConsumingTasks;
The resulting targetTaskIndex value from this calculation does not match the value produced by Storm, when I use real values from my topology.
Can someone tell me what I’m doing wrong?
Thanks,
Aubrey
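For reference, below is a stand-alone version of the hash-and-mod step from the question, with nothing taken from Storm itself; the class name and the numbers are made up. Two things that, as far as I understand, can make a hand calculation disagree with a running topology: the grouping hashes the selected field values in the order given by the Fields grouping (all of them, if there is more than one), and the resulting index is a position in the ordered list of the consuming bolt's task ids, not a task id itself. Also, depending on the Storm version, the modulo may be a floor-style mod, which differs from Math.abs(hash) % n whenever the hash is negative:

import java.util.Arrays;
import java.util.List;

public class FieldsGroupingHandCalc {

    // hash the grouped field values the same way the question does
    static int listHashCode(List<Object> groupedValues) {
        return Arrays.deepHashCode(groupedValues.toArray());
    }

    public static void main(String[] args) {
        int numberOfConsumingTasks = 4;              // x in the question
        Integer exampleFieldGroupingIdValue = 12345; // y in the question

        int hash = listHashCode(Arrays.<Object>asList(exampleFieldGroupingIdValue));

        // these two disagree whenever hash is negative
        int absModIndex = Math.abs(hash) % numberOfConsumingTasks;
        int floorModIndex = Math.floorMod(hash, numberOfConsumingTasks);

        System.out.println("hash            = " + hash);
        System.out.println("abs-mod index   = " + absModIndex);
        System.out.println("floor-mod index = " + floorModIndex);
    }
}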

OpenMDAO v1.x: output of sub Group in ParallelGroup does not exist

When running in parallel I am unable to connect unknowns of subgroups in a ParallelGroup() even though I can connect to the subgroups' params. The code causing the problem (with names changed for clarity) is below. This code is within a group of a larger structure, but is the only place where MPI is being used:
for i in range(0, nTasks):
    self.connect('comp_a.output%i' % i, 'parallel_group.sub_group%i.param_a' % i)
    self.connect('input_param%i' % i, 'parallel_group.sub_group%i.param_b' % i)
    self.connect('parallel_group.sub_group%i.output' % i, 'comp_b.input%i' % i)
The first two connections seem to work fine, but the last one throws an error:
NameError: Source 'parallel_group.sub_group0.output' cannot be connected to target 'comb_b.input0': 'parallel_group.sub_group0.output' does not exist.
Also, if I comment out the offending line, then the first line in the loop fails for the second process with the same error message:
NameError: Source 'comp_a.output1' cannot be connected to target 'parallel_group.sub_group1.param_a': 'parallel_group.sub_group1.param_a' does not exist.
All the connections work fine with our serial version of the code. The serial version is the same except that the sub_groups are added directly to the group this code is in rather than being wrapped in parallel_group.
I have tried looking over the tutorials and examples but have not been able to figure out what might be wrong. I would really appreciate any suggestions of what to check or what may be wrong. Sorry for not posting a complete code sample.
It's a little unclear, but it sounds like you've added a new group, named "parallel_group", in the parallel version of the code. When you did this, did you promote anything (or everything) from that group? If so, then you shouldn't include the parallel group in the variable name path for the connection.
That seems like the only thing likely to trip you up. I could try to debug a bit more if you can come up with a code sample you can post here that shows the problem.
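To illustrate the promotion point, a minimal sketch against the OpenMDAO 1.x API; the component names and expressions here are invented and not from the original model:

# two connection styles, depending on whether parallel_group promotes its variables
from openmdao.api import Problem, Group, ParallelGroup, ExecComp

root = Group()

par = ParallelGroup()
par.add('sub_group0', ExecComp('output = 2.0*param_a'))
root.add('parallel_group', par)  # nothing promoted here

root.add('comp_b', ExecComp('result = 3.0*input0'))

# with no promotion, the source needs the full path through parallel_group
root.connect('parallel_group.sub_group0.output', 'comp_b.input0')

# if the sub-groups' variables were promoted out of parallel_group instead, e.g.
#   root.add('parallel_group', par, promotes=['*'])
# the same connection would drop the group from the path:
#   root.connect('sub_group0.output', 'comp_b.input0')

prob = Problem(root)
prob.setup(check=False)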

Google Search Appliance accurate result count parameter not making a difference

We are having a result count issue on pages that show 10 results per page. While paginating we get a result count of 64 on page 1 (i.e. start=0), 25 on page 2, and 21 on page 3.
I understand from the documentation on estimated vs. actual results that the count is not guaranteed, but the counts above are what I get even with filter=0 and rc=1 set. The rc=1 does not appear to make any difference whether it is included or not. We are on version 7.2.0.G.252.
filter=0&rc=1 should work for you, and you should see the same count even after paginating.
What you need to check is that when you click a pagination link, filter=0&rc=1 are carried over, i.e. after paginating, see whether the filter and rc parameters are still intact.
Also check with the default_frontend, as your custom frontend may not be handling them.
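For example (placeholder host and query; the point is only that filter, rc and num stay fixed while start changes between pages):
page 1: http://gsa.example.com/search?q=widgets&site=default_collection&client=default_frontend&output=xml_no_dtd&num=10&filter=0&rc=1&start=0
page 2: http://gsa.example.com/search?q=widgets&site=default_collection&client=default_frontend&output=xml_no_dtd&num=10&filter=0&rc=1&start=10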
The problem was related to the collection, not the query. The content match pattern did not include a "/" at the end; once that was added, the count was accurate. Thanks for the assistance.
