BIRT report for lists of lists

I'm a bit of a BIRT newbie, and I could really do with a hand.
Some background:
I'm developing some software which allows display and simple modelling of Layer 1 connectivity in a data centre.
This is Java based, running on Tomcat, using BIRT reports. BIRT fetches the data from a general web-service we've implemented, serving data up as XML, which BIRT fetches using SOAP.
The report I'm working on currently queries our system to find out the circuit trace on a particular port on a piece of equipment.
The simple report for this works fine. It gives an ancestry path to the assets, and then the specific asset and port.
For example, asset ID 49345, port 1 would result in a report that looks (a bit) like this...
Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Telephone Patch Panel P1 - B1
Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Patch Panel 02 P2 - B2
Organization >> Client/Server development >> U1 ClntSvr FB 02 >> U1 ClntSvr FB 02 I4 P2 - B2
This says that the back of the telephone patch panel goes to the front, via a patch cord to another panel, to the back of that panel, to the back of a floor box.
This report works quite happily.
One customer wants more!
They want the Excel export from the BIRT report to be filterable, i.e. rather than having a delimited ancestry path, they need it in list form, so that when it's exported to Excel, each entry is in a different column.
I've modified my query to return an array of ancestry elements rather than a single string, and in its own way, this also works.
The SOAP response for this new query is below (for information - it may help)
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<FindCircuitByAssetAndPortResponse>
<CircuitDetail>
<FoundByAsset>Comms Room C02 R01 Telephone Patch Panel (id: 49345)</FoundByAsset>
<FoundByPort>P1</FoundByPort>
<CircuitAssetDetail>
<AssetId>49345</AssetId>
<AncestryPath>Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Telephone Patch Panel</AncestryPath>
<AncestryPathList>
<AncestryPathElement>Organization</AncestryPathElement>
<AncestryPathElement>Comms Room</AncestryPathElement>
<AncestryPathElement>Comms Room Cabinet 02 Rack 01</AncestryPathElement>
<AncestryPathElement>Comms Room C02 R01 Telephone Patch Panel</AncestryPathElement>
</AncestryPathList>
<AssetTypeName>Patch Panel</AssetTypeName>
<InPort>B1</InPort>
<OutPort>P1</OutPort>
</CircuitAssetDetail>
<CircuitAssetDetail>
<AssetId>49339</AssetId>
<AncestryPath>Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Patch Panel 02</AncestryPath>
<AncestryPathList>
<AncestryPathElement>Organization</AncestryPathElement>
<AncestryPathElement>Comms Room</AncestryPathElement>
<AncestryPathElement>Comms Room Cabinet 02 Rack 01</AncestryPathElement>
<AncestryPathElement>Comms Room C02 R01 Patch Panel 02</AncestryPathElement>
</AncestryPathList>
<AssetTypeName>Patch Panel</AssetTypeName>
<InPort>P2</InPort>
<OutPort>B2</OutPort>
</CircuitAssetDetail>
<CircuitAssetDetail>
<AssetId>48634</AssetId>
<AncestryPath>Organization >> Client/Server development >> U1 ClntSvr FB 02 >> U1 ClntSvr FB 02 I4</AncestryPath>
<AncestryPathList>
<AncestryPathElement>Organization</AncestryPathElement>
<AncestryPathElement>Client/Server development</AncestryPathElement>
<AncestryPathElement>U1 ClntSvr FB 02</AncestryPathElement>
<AncestryPathElement>U1 ClntSvr FB 02 I4</AncestryPathElement>
</AncestryPathList>
<AssetTypeName>Module</AssetTypeName>
<InPort>P2</InPort>
<OutPort>B2</OutPort>
</CircuitAssetDetail>
</CircuitDetail>
</FindCircuitByAssetAndPortResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
The report data set uses the deepest layer, i.e. the ancestry elements.
When it displays the data though, there is duplicated data. For example, the data above is now shown as...
Organization B1 - P1
Comms Room B1 - P1
Comms Room Cabinet 02 Rack 01 B1 - P1
Comms Room C02 R01 Telephone Patch Panel B1 - P1
Organization P2 - B2
Comms Room P2 - B2
Comms Room Cabinet 02 Rack 01 P2 - B2
Comms Room C02 R01 Patch Panel 02 P2 - B2
Organization P2 - B2
Client/Server development P2 - B2
U1 ClntSvr FB 02 P2 - B2
U1 ClntSvr FB 02 I4 P2 - B2
This is "correct" in that we're getting back 12 "rows" via XML. The column mapping says that the element is the "current" data, the ports (P1 & B1) are "up" one level, and so on.
If I fetch the data with respect to the ancestry path list, we don't get duplicated data, but at this point the ancestry path list isn't seen as a list, so it either displays nothing at all, or just the first element from the list repeatedly, resulting in...
Organization B1 - P1
Organization
Organization
Organization
Organization P2 - B2
Organization
Organization
Organization
Organization P2 - B2
Organization
Organization
Organization
I'm 99% sure BIRT will do what I need, but I'm a newcomer to it, and I'm surprised I've got as far as I have!
This problem is non-specific, as we have other situations where we may need to fetch lists of lists.
My apologies if this has already been covered. I have looked, but it may be listed under terminology I'm not familiar with.
Many thanks.
Pete.

If you want the output like this
Organization B1 - P1
Comms Room
Comms Room Cabinet 02 Rack 01
Comms Room C02 R01 Telephone Patch Panel
Organization P2 - B2
Comms Room
Comms Room Cabinet 02 Rack 01
Comms Room C02 R01 Patch Panel 02
Organization
Client/Server development
U1 ClntSvr FB 02
U1 ClntSvr FB 02 I4
BIRT has a grouping option that will do this. Select the table to bring up its properties dialog; the group options there list the columns of your query, and you can pick the column to group on.
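As a concrete sketch, assuming the column names from the SOAP response above: add a group keyed on the asset, move the ports into the group header row, and leave just the ancestry element in the detail row. Group keys and cell contents are ordinary BIRT (Rhino JavaScript) expressions, e.g.:
// group key expression for the new table group, assuming AssetId is
// mapped as a data set column
row["AssetId"]
// expression for a cell in the group header row, so the ports appear
// once per asset instead of on every ancestry element
row["InPort"] + " - " + row["OutPort"]
With the ports in the header and the ancestry element in the detail row, the Excel export comes out one element per row, matching the layout above.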

Related

Weird result using group by

I help run a small radio station and we log stats into a MySQL database.
I have started seeing weird results when using the following code to grab peaks for each entry logged in the stats table.
I have tried a few things in the code but not seen any change in results so far.
SELECT *
FROM
(
    SELECT
        stat_utc,
        stat_master_count,
        stat_master_track
    FROM
        stats_all_master
    ORDER BY
        stat_master_count DESC,
        stat_utc DESC
) AS my_table_tmp
GROUP BY
    stat_master_track
ORDER BY
    stat_master_count DESC,
    stat_utc DESC
LIMIT 5;
If I remove the GROUP BY stat_master_track I see results with a higher stat_master_count.
With GROUP BY:
41 LIVE SHOW ~ Erebuss - Mixset Showcase Mix 2018 (29/12/18)
41 Mixset ~ Inna Rhythm Recordings - Mix 01 (2015)
39 LIVE SHOW ~ DJ Ransome - Mixset Showcase 2017
38 LIVE SHOW ~ Parts Unknown - PartsUnknown #007
37 LIVE SHOW ~ MrKrotos - Mixset Showcase 2018 Part 2 (2018-12-29)
Without GROUP BY I see records with higher count like:
51 LIVE SHOW ~ DnB_Bo - Mixset Showcase Mix 2018 Part 1 (29/12/18)
47 LIVE SHOW ~ Erebuss - Renegade sessions #0006 (UK)
46 LIVE SHOW ~ DnB_Bo - Mixset Showcase Mix 2018 Part 1 (29/12/18)
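For what it's worth, MySQL is free to return an arbitrary row from each group when you GROUP BY over an ordered subquery, which would explain the lower peaks. A common rewrite (a sketch using the column names above) joins each track to its peak count instead:
SELECT s.stat_utc, s.stat_master_count, s.stat_master_track
FROM stats_all_master AS s
JOIN (
    -- one row per track, carrying that track's peak count
    SELECT stat_master_track, MAX(stat_master_count) AS max_count
    FROM stats_all_master
    GROUP BY stat_master_track
) AS m
    ON m.stat_master_track = s.stat_master_track
   AND m.max_count = s.stat_master_count
ORDER BY s.stat_master_count DESC, s.stat_utc DESC
LIMIT 5;
Note that if a track hits the same peak count more than once, this returns one row per occurrence, so you may still want to dedupe on stat_master_track.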

DocumentDB - Does a newer session token guarantee reading back older writes?

Let's assume I have two documents in the same collection/partition, both at "version 1": A1, B1.
I update A1 -> A2, the write operation returns a session token SA.
Using SA to read document A will guarantee I get version A2.
Now I update B1 -> B2, and get a new session token SB.
Using SB to read document B will guarantee I get version B2.
My question is:
does using token SB guarantee I can see older writes as well?
I.e. will reading A with token SB always get me A2 ?
Yes. In your case SB > SA, and hence reading with SB will ensure you see the latest version of A.

create map of Europe with higher degree of granularity for admin regions

I'm working on a project to render a map of Europe, for the purpose of displaying data related to data science jobs around the EU and the different constituent member nations.
At the moment the map shows the demarcation of countries, but what I would like is for the map to show not only countries but also the component administrative regions, such as the departments of France and the federal states of Germany, equivalent to the 50 states of the US.
In europeMap.js we read in the file europe.json:
/**
 * Contains the class for constructing the map of Europe.
 * europe.json was created using
 * data from Natural Earth: http://naturalearthdata.com
 * and GDAL: http://www.gdal.org/
...
// load the europe data
d3.json("data/map/europe.json", function (error, europe) {
    if (error) { throw error; }
Based on my prior experience with d3, the right way might be to create a file similar to europe.json, also using Natural Earth, but with different administrative regions. Is that right?
If I want to show those regions, like this one: http://milkbread.github.io/subpages/webmapping_examples/d3-choropleth_germany.html
what would I have to do to change that europe.json file?
Could it just be swapped with the current one?
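If the replacement file keeps the same top-level structure and feature properties that europeMap.js keys on, the swap can be as small as pointing d3.json at the new file. A sketch, where the admin-1 filename is hypothetical and the file would be generated the same way as europe.json but from Natural Earth's admin-1 (states and provinces) layer:
// hypothetical file built like europe.json, but from the admin-1 layer
d3.json("data/map/europe_admin1.json", function (error, europe) {
    if (error) { throw error; }
    // the existing drawing code can stay unchanged as long as the new
    // file exposes the same objects and property names europeMap.js uses
});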

spring-integration-kafka : Multi message(s) - partition - consumer relationship

Requirement:
Let's say I am receiving messages from 3 organisations and want to process them separately and in order, with a separate consumer per organisation.
m - many messages from a given organisation (for example, m1: many messages from organisation 1)
p - partition
c - consumer
For a given topic and consumer group:
m1 P1 c1
m2 P2 c2
m3 P3 c3
Now, later at runtime, a new organisation joins. So there should be a new entry, i.e. a new partition and a new consumer:
m1 P1 c1
m2 P2 c2
m3 P3 c3
m4 p4 c4
Also, if for any reason a consumer dies, its partition should not get distributed among the remaining consumers, but should instead be taken over by a new consumer in place of the one just lost. The idea is to maintain a 1-to-1 relationship along m --> p --> c, so as to keep track of what is coming from which organisation.
For example: if c3 dies, it should be replaced with c5 immediately, without losing data, and c5 should start consuming from where c3 left off, and so on.
I am using spring-integration-kafka 1.2.1.RELEASE.
Is this possible? I'm looking for the best possible solution.
It would be great to get a code snippet/sample of the producer and consumer for the version (1.2.1) mentioned above.
Many thanks.

extra logs written when posting messages to ibm mq

I have a job that posts messages onto IBM MQ.
I have configured my logs to write to a file and not the console.
Yet every time I run this job, I see a large amount of logs on the console like the ones below.
I have just changed the IPs and company name in the logs.
What is the source of this, why does it come up, and how do I stop it?
All my messages get posted successfully, so from an end-user perspective the job is working fine; however, I'm not able to make out why this comes up on the console:
RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], READ: SSLv3 Application Data, length = 72
main, WRITE: SSLv3 Application Data, length = 68
[Raw write]: length = 73
0000: 17 03 00 00 44 54 53 48 43 00 00 00 30 01 0C 30 ....DTSHC...0..0
0010: 00 00 00 00 00 00 00 00 00 00 00 01 11 03 33 00 ..............3.
0020: 00 00 00 00 01 00 00 00 00 00 00 00 02 00 00 00 ................
0030: 00 00 00 00 00 41 69 2A 27 7E EB 3A 9B 47 4A 02 .....Ai*'..:.GJ.RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], received EOFException: ignored
RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], called closeInternal(false)
RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], SEND SSLv3 ALERT: warning, description = close_notify
The output looks like it might be from some type of debug or diagnostic patch applied to the MQ JMS/Java client you have installed. The RcvThread is the thread used internally to listen on the TCP socket for data from the queue manager. Do you know of a patch that might have been applied in the past to help with TCP connection issues?
You might want to look at the com.ibm.mq.jmqi.jar that is contained in the MQ client you are using to see if there is a difference in the timestamp or anything noted in the manifest file within the jar file itself.
I agree with the previous answer that this is not normally the format of any logs written by the JMS client code. Under normal circumstances, logs are written to stdout in only two cases.
The first is the JMS client log file, controlled by:
# Name(s) of the log file(s)
# Can be
# * a single pathname
# * a comma-separated list of pathnames (all data is logged to all files)
# Each pathname can be
# * absolute or relative pathname
# * "stderr" or "System.err" to represent the standard error stream
# * "stdout" or "System.out" to represent the standard output stream
com.ibm.msg.client.commonservices.log.outputName=mqjms.log
The second is what is called JMS startup trace, a very early tracing system normally only used at the request of IBM Service (this is also documented in the jms.config file).
