QlikSense script issue: I'm getting multiple records for some Case Numbers with different states like "Progress" & "On Hold"

LOAD
    number as [Case Number],
    number as key_case,
    short_description as Description,
    ApplyMap('map_CustomerDeliveryGroup', dv_company, 'N/A') as CustomerGroupNo,
    mid(dv_priority, 5) as Priority,
    dv_business_service as Service,
    dv_state as State,
    dv_category as ServiceCI,
    DATE(SUBFIELD(sys_created_on, ' ', 1)) as key_reported_date,
    IF(DATE(SUBFIELD(sys_created_on, ' ', 1)) > monthstart(today() - 360), '1', '0') as InYear,
    text(applymap('map_wanted_customers', upper(dv_company), 0)) as WantedCustomer,
    contact_type,
    dv_contact as contact,
    dv_company as Customer,
    dv_assignment_group as Assigned_Group,
    dv_assigned_to as Assignee
FROM [lib];

LOAD
    dv_task as key_case,
    dv_stage as ResponseStage,
    business_percentage as ResponseLeft,
    IF(business_percentage <= 100, 'Met', 'Missed') as SLA_Response_MeasurementStatus_Name,
    DATE(SUBFIELD(end_time, ' ', 1)) as ResponseTime
FROM [lib]
WHERE wildmatch(dv_sla, '*Response*')
    and exists([Case Number], dv_task)
    and dv_stage = 'Completed';

Your table probably contains records from different stages of the case, something like a log. There is also a second table with key_case, which creates joins and can also affect the data. It is really hard to give an exact answer without seeing the data model, the data, or the QVW file.
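If the case table really is a log with one row per state change, the generic fix is to keep only the most recent row per Case Number before loading. Here is a minimal sketch of that deduplication in Python (the rows and field names are made up for illustration); in Qlik script the same effect is typically achieved with an inner join on Max(sys_created_on) per case, or with FirstSortedValue():

```python
from datetime import datetime

# Hypothetical case log: one row per state change per case number.
rows = [
    {"case": "C1", "state": "In Progress", "created": "2018-08-20 09:00:00"},
    {"case": "C1", "state": "On Hold",     "created": "2018-08-22 14:30:00"},
    {"case": "C2", "state": "Completed",   "created": "2018-08-21 10:15:00"},
]

def latest_per_case(rows):
    # Keep only the most recent row for each case number, so every
    # Case Number ends up with exactly one State.
    latest = {}
    for r in rows:
        ts = datetime.strptime(r["created"], "%Y-%m-%d %H:%M:%S")
        if r["case"] not in latest or ts > latest[r["case"]][0]:
            latest[r["case"]] = (ts, r)
    return [r for _, r in latest.values()]
```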

Related

How to pass a SQL call into an AWK script to dictate text replacement on a file?

I am working on a script to do some personal accounting and budgeting. I'm sure there are easier ways to do this, but I love UNIX-like CLI applications, so this is how I've chosen to go about it.
Currently, the pipeline starts with an AWK script that converts my CSV-formatted credit card statement into the plain-text double-entry accounting format that the CLI accounting program Ledger can read. I can then do whatever reporting I want via Ledger.
Here is my AWK script in its current state:
#!/bin/bash
awk -F "," 'NR > 1 {
    gsub("[0-9]*\.[0-9]$", "&0", $7)
    gsub(",,", ",", $0)
    print substr($2,7,4) "-" substr($2,1,2) "-" substr($2,4,2) " * " $5
    print " Expenses:"$6" -"$7
    print " Liabilities "$7"\n"
}' /path/to/my/file.txt
Here is a simulated example of the original file (data is made up, format is correct):
POSTED,08/22/2018,08/23/2018,1234,RALPH'S COFFEE SHOP,Dining,4.33,
POSTED,08/22/2018,08/24/2018,1234,THE STUFF STORE,Merchandise,4.71,
POSTED,08/22/2018,08/22/2018,1234,PAST DUE FEE,Fee,25.0,
POSTED,08/21/2018,08/22/2018,5678,RALPH'S PAGODA,Dining,35.0,
POSTED,08/21/2018,08/23/2018,5678,GASLAND,Gas/Automotive,42.38,
POSTED,08/20/2018,08/21/2018,1234,CLASSY WALLMART,Grocery,34.67,
Here are the same entries after being converted to the Ledger format with the AWK script:
2018-08-22 * RALPH'S COFFEE SHOP
 Expenses:Dining -4.33
 Liabilities 4.33

2018-08-22 * THE STUFF STORE
 Expenses:Merchandise -4.71
 Liabilities 4.71

2018-08-22 * PAST DUE FEE
 Expenses:Fee -25.00
 Liabilities 25.00

2018-08-21 * RALPH'S PAGODA
 Expenses:Dining -35.00
 Liabilities 35.00

2018-08-21 * GASLAND
 Expenses:Gas/Automotive -42.38
 Liabilities 42.38

2018-08-20 * CLASSY WALLMART
 Expenses:Grocery -34.67
 Liabilities 34.67
Ledger can do all sorts of cool reporting on the different categories of spending and earning. My credit card automatically assigns categories to things (e.g. Expenses:Gas/Automotive, Expenses:Dining, etc.), but they are not always categorized in a way that reflects what was spent. I also want to be able to put in subcategories, such as Expenses:Dining:Coffee.
To do this, I created a SQLite database that contains the mappings I want. A query like:
SELECT v.name, tlc.name, sc.name
FROM vender AS v
JOIN top_level_category AS tlc ON v.top_level_category_id = tlc.id
JOIN sub_category AS sc ON v.sub_category_id = sc.id;
will output data like this:
RALPH'S COFFEE SHOP, Dining, Coffee
I want to figure out a way to pass the above SQL query into my AWK script in such a way that when AWK finds a vendor name in a line, it will replace the category assigned by the credit card with the category and subcategory from my database.
Any advice or thoughts would be greatly appreciated.
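One way to sketch the lookup side of this (shown in Python with the standard-library sqlite3 module, since the mapping logic itself is language-agnostic and could equally be fed to AWK): load the vendor-to-category mappings into a dictionary once, then rewrite the category field of any statement line whose vendor appears in the map. The table and column names follow the query above; the function names and the database path are hypothetical.

```python
import sqlite3

def load_category_map(db_path):
    # Build a vendor -> "Category:SubCategory" map once, using the query
    # from the question (table/column names as defined there).
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT v.name, tlc.name, sc.name "
        "FROM vender AS v "
        "JOIN top_level_category AS tlc ON v.top_level_category_id = tlc.id "
        "JOIN sub_category AS sc ON v.sub_category_id = sc.id"
    ).fetchall()
    con.close()
    return {vendor: f"{cat}:{sub}" for vendor, cat, sub in rows}

def recategorize(csv_line, category_map):
    # Statement fields: status, posted, cleared, card, vendor, category, amount, trailer
    fields = csv_line.rstrip("\n").split(",")
    vendor = fields[4]
    if vendor in category_map:
        fields[5] = category_map[vendor]  # overwrite the card's category
    return ",".join(fields)
```

The same dictionary could instead be dumped as tab-separated `vendor<TAB>category` lines and read into an AWK array with `getline`, which would keep the original all-AWK pipeline intact.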

Understanding performance of a complex SQL JOIN

I'm trying to understand a performance issue I solved on an SQLite database by restructuring the query.
The task of the query is to filter a list of the latest revisions of protocols. Additionally, some protocols may not be released; these are to be excluded from the search. The tables are modelled as follows:
-- Describe PROTOCOL
CREATE TABLE protocol (
    id CHAR(36) NOT NULL,
    revision CHAR(36) NOT NULL,
    timestamp DATETIME,
    name TEXT,
    PRIMARY KEY (revision)
);
-- Describe PROTOCOLSTATUS
CREATE TABLE protocolstatus (
    uuid CHAR(36) NOT NULL,
    status TEXT,
    timestamp DATETIME,
    protocol_id CHAR(36) NOT NULL,
    PRIMARY KEY (uuid),
    FOREIGN KEY(protocol_id) REFERENCES protocol (revision)
);
The number of entries in protocolstatus can be expected to be some constant times the number of entries in protocol (#p), even though over time that constant may become largish.
In a database with 300 protocols and 900 statuses the slower query took about 4 seconds to execute. I also noticed that if I don't use DISTINCT, the results are duplicated numerous times.
Restructuring the query using a subquery sped the whole thing up to feasible response times (<0.05s).
So how do I approach analyzing what went wrong here, having little formal training in databases? Can I analyze query complexity the way I would analyze algorithms, using Big-O notation, if I assume that I am searching in linear time (no indexes defined)?
Analyzing the first query that way I notice that it describes a product of four sets with O(#p) entries. So can I assume that the complexity of this query is O(#p^4)?
Analyzing the second query the same way I notice that the outermost-layer describes a product of three sets with O(#p) entries. The second join is against a subquery for which I would arrive at a complexity of O(#p^2), so is the complexity of the whole query = O(#p^3) + O(#p^2) = O(#p^3)?
Is this a legitimate way to evaluate the performance of these queries?
The slower query used to retrieve the list of protocols was:
-- slow query: O((#p)^4) ?, Run Time: real 4.335 user 4.331110 sys 0.005220
SELECT DISTINCT
    protocol.id AS protocol_id,
    protocol.revision AS protocol_revision,
    protocol.timestamp AS protocol_timestamp
FROM
    protocolstatus,        -- O(#p) |
    protocol               -- O(#p) | => O(#p^2) entries
    JOIN (
        SELECT
            protocol.id AS id, max(protocol.timestamp) AS ts_max
        FROM protocol
        GROUP BY protocol.id
    ) AS anon_1            -- O(#p)
        ON protocol.timestamp = anon_1.ts_max
    JOIN (
        SELECT
            max(protocolstatus.timestamp) AS max_1,
            protocolstatus.protocol_id AS protocol_id
        FROM protocolstatus
        GROUP BY protocolstatus.protocol_id
    ) AS anon_2            -- O(#p)
        ON protocol.revision = anon_2.protocol_id
WHERE
    protocolstatus.status = 'approved'
    AND lower(protocol.name) LIKE lower('%c%')
ORDER BY protocol.name;
The faster query used to retrieve the list of protocols was:
-- fast query: O(#p^3)? Run Time: real 0.042 user 0.041540 sys 0.000767
SELECT
    protocol.id AS protocol_id,
    protocol.revision AS protocol_revision,
    protocol.timestamp AS protocol_timestamp
FROM protocol              -- O(#p)
JOIN (
    SELECT
        protocol.id AS id,
        max(protocol.timestamp) AS ts_max
    FROM protocol
    GROUP BY protocol.id
) AS anon_1                -- query: O(#p), results: O(#p)
    ON protocol.timestamp = anon_1.ts_max
JOIN (
    SELECT
        protocolstatus.protocol_id AS protocol_id,
        protocolstatus.status AS status,
        protocolstatus.timestamp AS timestamp
    FROM protocolstatus    -- O(#p)
    JOIN (
        SELECT
            max(protocolstatus.timestamp) AS timestamp,
            protocolstatus.protocol_id AS protocol_id
        FROM protocolstatus
        GROUP BY protocolstatus.protocol_id
    ) AS anon_3            -- O(#p)
        ON protocolstatus.timestamp = anon_3.timestamp
    WHERE protocolstatus.status = "approved"
) AS anon_2                -- query: O(#p^2), results: O(#p)
    ON protocol.revision = anon_2.protocol_id
WHERE lower(protocol.name) LIKE lower("%c%")
ORDER BY protocol.name;
Running EXPLAIN on the queries gives me:
for the slow query:
0|Trace|0|0|0||00|
1|Integer|48|1|0||00|
2|Once|0|48|0||00|
3|OpenEphemeral|2|2|0||00|
4|Noop|0|0|0||00|
5|Integer|0|6|0||00|
6|Integer|0|5|0||00|
7|Null|0|9|9||00|
8|Gosub|8|44|0||00|
9|Goto|0|161|0||00|
10|OpenRead|3|18|0|3|00|
11|OpenRead|7|21|0|k(2,B,B)|00|
12|Rewind|7|29|11|0|00|
13|IdxRowid|7|11|0||00|
14|Seek|3|11|0||00|
15|Column|7|0|10||00|
16|Compare|9|10|1|k(1,B)|00|
17|Jump|18|22|18||00|
18|Move|10|9|0||00|
19|Gosub|7|35|0||00|
20|IfPos|6|48|0||00|
21|Gosub|8|44|0||00|
22|Column|3|2|12||00|
23|CollSeq|13|0|0|(BINARY)|00|
24|AggStep|0|12|3|max(1)|01|
25|If|13|27|0||00|
26|Column|7|0|2||00|
27|Integer|1|5|0||00|
28|Next|7|13|0||01|
29|Close|3|0|0||00|
30|Close|7|0|0||00|
31|Gosub|7|35|0||00|
32|Goto|0|48|0||00|
33|Integer|1|6|0||00|
34|Return|7|0|0||00|
35|IfPos|5|37|0||00|
36|Return|7|0|0||00|
37|AggFinal|3|1|0|max(1)|00|
38|SCopy|2|14|0||00|
39|SCopy|3|15|0||00|
40|MakeRecord|14|2|11||00|
41|NewRowid|2|16|0||00|
42|Insert|2|11|16||08|
43|Return|7|0|0||00|
44|Null|0|2|0||00|
45|Null|0|4|0||00|
46|Null|0|3|0||00|
47|Return|8|0|0||00|
48|Return|1|0|0||00|
49|Integer|100|17|0||00|
50|Once|1|100|0||00|
51|OpenEphemeral|4|2|0||00|
52|SorterOpen|8|3|0|k(1,B)|00|
53|Integer|0|22|0||00|
54|Integer|0|21|0||00|
55|Null|0|25|25||00|
56|Gosub|24|96|0||00|
57|OpenRead|5|24|0|4|00|
58|Rewind|5|65|0||00|
59|Column|5|3|27||00|
60|Sequence|8|28|0||00|
61|Column|5|2|29||00|
62|MakeRecord|27|3|30||00|
63|SorterInsert|8|30|0||00|
64|Next|5|59|0||01|
65|Close|5|0|0||00|
66|OpenPseudo|9|30|3||00|
67|SorterSort|8|100|0||00|
68|SorterData|8|30|0||00|
69|Column|9|0|26||20|
70|Compare|25|26|1|k(1,B)|00|
71|Jump|72|76|72||00|
72|Move|26|25|0||00|
73|Gosub|23|87|0||00|
74|IfPos|22|100|0||00|
75|Gosub|24|96|0||00|
76|Column|9|2|27||00|
77|CollSeq|31|0|0|(BINARY)|00|
78|AggStep|0|27|18|max(1)|01|
79|If|31|81|0||00|
80|Column|9|0|19||00|
81|Integer|1|21|0||00|
82|SorterNext|8|68|0||00|
83|Gosub|23|87|0||00|
84|Goto|0|100|0||00|
85|Integer|1|22|0||00|
86|Return|23|0|0||00|
87|IfPos|21|89|0||00|
88|Return|23|0|0||00|
89|AggFinal|18|1|0|max(1)|00|
90|SCopy|18|32|0||00|
91|SCopy|19|33|0||00|
92|MakeRecord|32|2|34||00|
93|NewRowid|4|35|0||00|
94|Insert|4|34|35||08|
95|Return|23|0|0||00|
96|Null|0|19|0||00|
97|Null|0|20|0||00|
98|Null|0|18|0||00|
99|Return|24|0|0||00|
100|Return|17|0|0||00|
101|SorterOpen|10|3|0|k(1,B)|00|
102|OpenEphemeral|11|0|0|k(3,B,B,B)|08|
103|OpenRead|0|24|0|2|00|
104|OpenRead|1|18|0|5|00|
105|OpenRead|12|19|0|k(1,B)|00|
106|Once|2|114|0||00|
107|OpenAutoindex|13|2|0|k(2,B,B)|00|
108|Rewind|0|114|0||00|
109|Column|0|1|37||00|
110|Rowid|0|38|0||00|
111|MakeRecord|37|2|36|ad|00|
112|IdxInsert|13|36|0||10|
113|Next|0|109|0||03|
114|String8|0|39|0|approved|00|
115|SeekGe|13|147|39|1|00|
116|IdxGE|13|147|39|1|01|
117|Rewind|4|146|0||00|
118|Column|4|1|40||00|
119|IsNull|40|145|0||00|
120|SeekGe|12|145|40|1|00|
121|IdxGE|12|145|40|1|01|
122|IdxRowid|12|36|0||00|
123|Seek|1|36|0||00|
124|Column|1|4|37||00|
125|Function|0|37|43|lower(1)|01|
126|Function|1|42|41|like(2)|02|
127|IfNot|41|145|1||00|
128|Rewind|2|145|0||00|
129|Column|1|2|41||00|
130|Column|2|1|44||00|
131|Ne|44|144|41|(BINARY)|6b|
132|Column|1|0|45||00|
133|Column|12|0|46||00|
134|Column|1|2|47||00|
135|Found|11|144|45|3|00|
136|MakeRecord|45|3|44||00|
137|IdxInsert|11|44|0||00|
138|MakeRecord|45|3|44||00|
139|Column|1|4|48||00|
140|Sequence|10|49|0||00|
141|Move|44|50|0||00|
142|MakeRecord|48|3|41||00|
143|SorterInsert|10|41|0||00|
144|Next|2|129|0||01|
145|Next|4|118|0||01|
146|Next|13|116|0||00|
147|Close|1|0|0||00|
148|Close|12|0|0||00|
149|OpenPseudo|14|44|3||00|
150|OpenPseudo|15|51|3||00|
151|SorterSort|10|159|0||00|
152|SorterData|10|51|0||00|
153|Column|15|2|44||20|
154|Column|14|0|45||20|
155|Column|14|1|46||00|
156|Column|14|2|47||00|
157|ResultRow|45|3|0||00|
158|SorterNext|10|152|0||00|
159|Close|14|0|0||00|
160|Halt|0|0|0||00|
161|Transaction|0|0|0||00|
162|VerifyCookie|0|29|0||00|
163|TableLock|0|18|0|protocol|00|
164|TableLock|0|24|0|protocolstatus|00|
165|String8|0|52|0|%c%|00|
166|Function|1|52|42|lower(1)|01|
167|Goto|0|10|0||00|
for the fast query:
0|Trace|0|0|0||00|
1|Integer|48|1|0||00|
2|Once|0|48|0||00|
3|OpenEphemeral|1|2|0||00|
4|Noop|0|0|0||00|
5|Integer|0|6|0||00|
6|Integer|0|5|0||00|
7|Null|0|9|9||00|
8|Gosub|8|44|0||00|
9|Goto|0|163|0||00|
10|OpenRead|2|18|0|3|00|
11|OpenRead|8|21|0|k(2,B,B)|00|
12|Rewind|8|29|11|0|00|
13|IdxRowid|8|11|0||00|
14|Seek|2|11|0||00|
15|Column|8|0|10||00|
16|Compare|9|10|1|k(1,B)|00|
17|Jump|18|22|18||00|
18|Move|10|9|0||00|
19|Gosub|7|35|0||00|
20|IfPos|6|48|0||00|
21|Gosub|8|44|0||00|
22|Column|2|2|12||00|
23|CollSeq|13|0|0|(BINARY)|00|
24|AggStep|0|12|3|max(1)|01|
25|If|13|27|0||00|
26|Column|8|0|2||00|
27|Integer|1|5|0||00|
28|Next|8|13|0||01|
29|Close|2|0|0||00|
30|Close|8|0|0||00|
31|Gosub|7|35|0||00|
32|Goto|0|48|0||00|
33|Integer|1|6|0||00|
34|Return|7|0|0||00|
35|IfPos|5|37|0||00|
36|Return|7|0|0||00|
37|AggFinal|3|1|0|max(1)|00|
38|SCopy|2|14|0||00|
39|SCopy|3|15|0||00|
40|MakeRecord|14|2|11||00|
41|NewRowid|1|16|0||00|
42|Insert|1|11|16||08|
43|Return|7|0|0||00|
44|Null|0|2|0||00|
45|Null|0|4|0||00|
46|Null|0|3|0||00|
47|Return|8|0|0||00|
48|Return|1|0|0||00|
49|Gosub|1|2|0||00|
50|Integer|101|17|0||00|
51|Once|1|101|0||00|
52|OpenEphemeral|5|2|0||00|
53|SorterOpen|9|3|0|k(1,B)|00|
54|Integer|0|22|0||00|
55|Integer|0|21|0||00|
56|Null|0|25|25||00|
57|Gosub|24|97|0||00|
58|OpenRead|6|24|0|4|00|
59|Rewind|6|66|0||00|
60|Column|6|3|27||00|
61|Sequence|9|28|0||00|
62|Column|6|2|29||00|
63|MakeRecord|27|3|30||00|
64|SorterInsert|9|30|0||00|
65|Next|6|60|0||01|
66|Close|6|0|0||00|
67|OpenPseudo|10|30|3||00|
68|SorterSort|9|101|0||00|
69|SorterData|9|30|0||00|
70|Column|10|0|26||20|
71|Compare|25|26|1|k(1,B)|00|
72|Jump|73|77|73||00|
73|Move|26|25|0||00|
74|Gosub|23|88|0||00|
75|IfPos|22|101|0||00|
76|Gosub|24|97|0||00|
77|Column|10|2|27||00|
78|CollSeq|31|0|0|(BINARY)|00|
79|AggStep|0|27|18|max(1)|01|
80|If|31|82|0||00|
81|Column|10|0|19||00|
82|Integer|1|21|0||00|
83|SorterNext|9|69|0||00|
84|Gosub|23|88|0||00|
85|Goto|0|101|0||00|
86|Integer|1|22|0||00|
87|Return|23|0|0||00|
88|IfPos|21|90|0||00|
89|Return|23|0|0||00|
90|AggFinal|18|1|0|max(1)|00|
91|SCopy|18|32|0||00|
92|SCopy|19|33|0||00|
93|MakeRecord|32|2|34||00|
94|NewRowid|5|35|0||00|
95|Insert|5|34|35||08|
96|Return|23|0|0||00|
97|Null|0|19|0||00|
98|Null|0|20|0||00|
99|Null|0|18|0||00|
100|Return|24|0|0||00|
101|Return|17|0|0||00|
102|SorterOpen|11|3|0|k(1,B)|00|
103|OpenRead|4|24|0|4|00|
104|OpenRead|0|18|0|5|00|
105|OpenRead|12|19|0|k(1,B)|00|
106|Once|2|116|0||00|
107|OpenAutoindex|13|4|0|k(4,B,B,B,B)|00|
108|Rewind|4|116|0||00|
109|Column|4|1|37||00|
110|Column|4|2|38||00|
111|Column|4|3|39||00|
112|Rowid|4|40|0||00|
113|MakeRecord|37|4|36|acad|00|
114|IdxInsert|13|36|0||10|
115|Next|4|109|0||03|
116|String8|0|41|0|approved|00|
117|SeekGe|13|149|41|1|00|
118|IdxGE|13|149|41|1|01|
119|Column|13|2|42||00|
120|IsNull|42|148|0||00|
121|SeekGe|12|148|42|1|00|
122|IdxGE|12|148|42|1|01|
123|IdxRowid|12|36|0||00|
124|Seek|0|36|0||00|
125|Column|0|4|37||00|
126|Function|0|37|45|lower(1)|01|
127|Function|1|44|43|like(2)|02|
128|IfNot|43|148|1||00|
129|Rewind|1|148|0||00|
130|Column|0|2|43||00|
131|Column|1|1|46||00|
132|Ne|46|147|43|(BINARY)|6b|
133|Rewind|5|147|0||00|
134|Column|13|1|47||00|
135|Column|5|0|48||00|
136|Ne|48|146|47|(BINARY)|6b|
137|Column|0|0|49||00|
138|Column|12|0|50||00|
139|Column|0|2|51||00|
140|MakeRecord|49|3|48||00|
141|Column|0|4|38||00|
142|Sequence|11|39|0||00|
143|Move|48|40|0||00|
144|MakeRecord|38|3|47||00|
145|SorterInsert|11|47|0||00|
146|Next|5|134|0||01|
147|Next|1|130|0||01|
148|Next|13|118|0||00|
149|Close|0|0|0||00|
150|Close|12|0|0||00|
151|OpenPseudo|14|48|3||00|
152|OpenPseudo|15|52|3||00|
153|SorterSort|11|161|0||00|
154|SorterData|11|52|0||00|
155|Column|15|2|48||20|
156|Column|14|0|49||20|
157|Column|14|1|50||00|
158|Column|14|2|51||00|
159|ResultRow|49|3|0||00|
160|SorterNext|11|154|0||00|
161|Close|14|0|0||00|
162|Halt|0|0|0||00|
163|Transaction|0|0|0||00|
164|VerifyCookie|0|29|0||00|
165|TableLock|0|18|0|protocol|00|
166|TableLock|0|24|0|protocolstatus|00|
167|String8|0|53|0|%c%|00|
168|Function|1|53|44|lower(1)|01|
169|Goto|0|10|0||00|
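As a tooling note: the raw EXPLAIN listings above are VDBE bytecode, which is hard to interpret. SQLite's EXPLAIN QUERY PLAN gives a much higher-level view, one row per step, with SCAN marking a full table scan and SEARCH an index lookup, and is usually the better tool for comparing two formulations of a query. A small illustration with Python's built-in sqlite3 module, reusing the schema from the question (the query shown is a simplified stand-in, not the full slow query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protocol (
  id CHAR(36) NOT NULL,
  revision CHAR(36) NOT NULL,
  timestamp DATETIME,
  name TEXT,
  PRIMARY KEY (revision)
);
CREATE TABLE protocolstatus (
  uuid CHAR(36) NOT NULL,
  status TEXT,
  timestamp DATETIME,
  protocol_id CHAR(36) NOT NULL,
  PRIMARY KEY (uuid),
  FOREIGN KEY (protocol_id) REFERENCES protocol (revision)
);
""")

# One row per plan step: SCAN = full table scan, SEARCH = index lookup.
# Comparing the SCAN/SEARCH pattern of two queries is far easier than
# comparing their bytecode.
for row in con.execute("""
    EXPLAIN QUERY PLAN
    SELECT protocol.revision
    FROM protocol
    JOIN protocolstatus ON protocol.revision = protocolstatus.protocol_id
    WHERE protocolstatus.status = 'approved'
"""):
    print(row)
```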

Customised SonarQube Analysis Report

I am trying to generate customized analysis reports from Sonar. I am using sonar-ws-client. My code is:
Sonar sonar = new Sonar(new HttpClient4Connector(new Host(url, login, password)));
Resource JunitTestCaseExample = sonar.find(ResourceQuery.createForMetrics(
    "JunitTestCaseExample:JunitTestCaseExample",
    "critical_violations", "major_violations", "minor_violations", "info_violations",
    "tests", "blocker_violations", "statements", "coverage", "uncovered_lines", "lines",
    "skipped_tests", "test_failures", "test_errors", "test_success_density",
    "new_coverage", "overall_coverage"));
Measure statements = JunitTestCaseExample.getMeasure("statements");
System.out.println("statements : " + statements.getMetricKey() + " === " + statements.getFormattedValue());
System.out.println(statements.getVariation1());
I am able to get most of the values, but statements.getVariation1() always returns null.
Is there any way to get the variation Value on Measure for 7 Days, 15 Days and 30 Days?
It's not standard, but you can fetch the required data from the Sonar DB, as it's not available through the web service.
Step 1: Fetch period_param from snapshots with the help of project_id. From this you can get the variation periods you mention in your question.
Step 2: Execute this query. It will give you all the variations; after that you can apply your own logic to pick what you want.
SELECT DISTINCT
    proj.name NAME_OF_PROJ,
    metric.name metric_name,
    metric.description Description,
    projdesc.value value,
    projdesc.variation_value_1,
    projdesc.variation_value_2,
    projdesc.variation_value_3,
    projdesc.variation_value_4,
    projdesc.variation_value_5,
    snap.created_at CREATED_DATE
FROM projects proj
INNER JOIN snapshots snap ON snap.project_id = proj.id
INNER JOIN (
    SELECT max(snap2.id) AS id
    FROM snapshots snap2
    WHERE snap2.project_id IN ("+projectId+")
    GROUP BY snap2.project_id
) AS Lookup ON Lookup.id = snap.id
INNER JOIN project_measures projdesc ON projdesc.snapshot_id = snap.id
INNER JOIN metrics metric ON projdesc.metric_id = metric.id
WHERE metric.id BETWEEN '1' AND "+metric_id_Count+"
Hope it helps you. Again, remember that, as mentioned in the Sonar group, this is not a standard approach.

creating sqlite database with ruby takes too long

I'm transferring txt files into a sqlite3 database using sqlite-ruby. Each txt file is about 3.5 MB, which Ruby reads really fast, but populating the database takes more than 10 minutes per file. I don't know if this is normal; I guess not. Can somebody tell me a faster way to do this, or whether I'm doing something crazy? Code and txt scheme below, thanks.
EDIT: more than 30 min per file...
require 'sqlite3'

db = SQLite3::Database.new( "firstTest.db" )
filename = "file1.txt" # this file is 3.5 MB
streetArray = []
mothertablename = "tmother"
coordstablename = "tcoords"
db.execute("create table '#{mothertablename}' (id INTEGER,stName TEXT,stBC TEXT,stPrice INTEGER,stUrl TEXT);")
db.execute("create table '#{coordstablename}' (id INTEGER,num INTEGER,xcoord DOUBLE,ycoord DOUBLE);")
f = []
File.open("_history/" + "#{filename}").each{ |line| f.push(line) }
counter = 0
for i in 1...f.length
  f[i].gsub!("&apos;","") # get rid of apostrophe entities
  ptsArray = f[i].split(';')
  if ptsArray[0] == ptsArray[1]
    streetArray.push(f[i])
  end
  if ptsArray[0] == "next\n"
    counter += 1
    namestreetArray = f[i-1].split(';')
    stname = nil
    stprice = nil
    stbc = nil
    sturl = nil
    if namestreetArray[0] == "name"
      stname = namestreetArray[1]
      if namestreetArray.length > 2
        stprice = namestreetArray.last
      else
        stprice = nil
      end
    end
    db.execute( "insert into '#{mothertablename}' (id, stName, stBC, stPrice, stUrl)
                 values ('#{counter}','#{stname}','#{stbc}','#{stprice}','#{sturl}');")
    localcounter = 0
    streetArray.each do |coord|
      localcounter += 1
      xcoord = coord.split(";")[3].to_f
      ycoord = coord.split(";")[2].to_f
      db.execute( "insert into '#{coordstablename}' (id, num, xcoord, ycoord)
                   values ('#{counter}','#{localcounter}','#{xcoord}','#{ycoord}');")
    end
    streetArray.clear
  end
end
Here's the txt scheme:
206358589;206358589;37.3907322;-5.9885633
195966401;195966401;37.3909862;-5.988974
195969491;195969491;37.3910908;-5.9891081
195969493;195969493;37.3911863;-5.9893141
195969494;195969494;37.3912954;-5.9895115
813726831;813726831;37.3914352;-5.98973
name;Calle Descalzos;3085
next
440230342;440230342;37.3918677;-5.9905477
813726823;813726823;37.3916168;-5.9905285
192037929;192037929;37.3916184;-5.9905125
195970140;195970140;37.391872;-5.990398
440230342;440230342;37.3918677;-5.9905477
next
192009475;192009475;37.3875271;-5.9937949
710633000;710633000;37.3875013;-5.9941761
name;Calle Felipe Perez;
next
195982576;195982576;37.387349;-5.9937755
308836571;308836571;37.3873649;-5.9936472
next
...
See the SQLite FAQ entry "INSERT is really slow - I can only do few dozen INSERTs per second".
In short, you can speed up SQLite in two ways:
Group multiple INSERT statements with BEGIN...COMMIT so the inserts run in one transaction. In Ruby, you should do something like:
database.transaction do |db|
  db.execute( "insert into table values ( 'a', 'b', 'c' )" )
  ...
end
Or simply run PRAGMA synchronous=OFF.
And remember to prepare the statement before executing it.
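The transaction advice is not Ruby-specific; the same batching works from any SQLite binding. A small Python sketch with the standard-library sqlite3 module, mirroring the question's tcoords table (an in-memory database is used here for brevity; on disk, the per-COMMIT journal/fsync cost makes the difference far more dramatic):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, which is
# what the question's code effectively does: without an explicit BEGIN,
# every INSERT is its own transaction.
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE tcoords (id INTEGER, num INTEGER, xcoord DOUBLE, ycoord DOUBLE)")

rows = [(1, i, 37.39, -5.98) for i in range(1000)]

# Batched version: one BEGIN ... COMMIT around all the INSERTs, so the
# whole batch pays for a single commit instead of one per row.
db.execute("BEGIN")
db.executemany(
    # Parameter binding instead of string interpolation: faster, and
    # it also reuses a single prepared statement.
    "INSERT INTO tcoords (id, num, xcoord, ycoord) VALUES (?, ?, ?, ?)",
    rows,
)
db.execute("COMMIT")
```

Parameter binding (the `?` placeholders) replaces the string interpolation used in the question, and combined with `executemany` it gives you the prepared-statement reuse mentioned above.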

How to return multiple results with XMLTABLE?

I want to do a query in Oracle using xmltable.
Everything works fine, but there are multiple (n) results for the XML node "article_title". For each row the result "<string>Article Name1</string><string>Article Name 2</string>..." is returned. But I want every article name to be returned as a single row.
How can I achieve this?
SELECT
    X.*
FROM
    myTable C,
    xmltable(
        '$cust//member' PASSING C.STAT_XML as "cust"
        COLUMNS
            name VARCHAR(25) PATH '/member/name',
            article_title XMLTYPE PATH '//string/text()'
    ) as X
WHERE X.name = 'articles';
I'm having a problem with this as well. I have an XML that's supposed to send shipment data from our warehouse management system back to our order management system, and it has various different things that have multiples. The entire XML message has a single ShipConfirmHeader section, so that's easy enough to pull out.
Where I run into trouble is that it has a ShipConfirmDetail/Orders section, and there could be any number of orders listed. Within each order, there could be any number of order lines. I can pull the ShipConfirmHeader and the ShipConfirmDetail/Orders together OR I can pull the ShipConfirmHeader and the ShipConfirmDetail/Orders/OrderLineItem together, but when I try pulling the Orders together with the OrderLineItem, there's no way that I can see to join those, so I end up with a cartesian product.
To complicate matters even more, each Order could have many Cartons associated with it, each Carton could contain multiple OrderLineItems, and each Carton could have multiple CartonDetails.
I've included a sample of my XML below. In this example, there's only one Order, one OrderLine, and one Carton (called an LPN in the XML), because I've stripped out all the others (the original XML is over 4000 lines long).
Pulling stuff from the ShipConfirmHeader is relatively easy, like this:
xmltable('/tXML/Message/ShipConfirm/ShipConfirmSummary/ShipConfirmHeaderInfo/'
         passing xmltype(msg_xml.full_xml)
         columns
             invoice_batch varchar2(20) path 'InvcBatchNbr'
) sc_hdr
But when I want to include any of the multiples, it gives me problems. I've tried a variety of things:
-- This gives the error "ORA-22950: cannot ORDER objects without MAP or ORDER method"
xmltable('/tXML/Message/ShipConfirm'
         passing xmltype(msg_xml.full_xml)
         columns
             invoice_batch varchar2(20) path 'ShipConfirmSummary/ShipConfirmHeaderInfo/InvcBatchNbr',
             order_dtl xmltype path 'ShipConfirmDetails/Orders'
) sc_hdr

-- This doesn't like the "../" in the XPATH
xmltable('/tXML/Message/ShipConfirm/ShipConfirmDetails/Orders/OrderLineItem'
         passing xmltype(msg_xml.full_xml)
         columns
             invoice_batch varchar2(20) path '../../../ShipConfirmSummary/ShipConfirmHeaderInfo/InvcBatchNbr',
             order_id varchar2(20) path '../TcOrderId',
             order_line_id varchar2(20) path 'TcOrderLineId',
             item_name varchar2(20) path 'ItemName'
) sc_hdr

-- This gives a cartesian product.
xmltable('/tXML/Message/ShipConfirm/ShipConfirmSummary/ShipConfirmHeaderInfo'
         passing xmltype(msg_xml.full_xml)
         columns
             invoice_batch varchar2(20) path 'InvcBatchNbr'
) sc_hdr,
xmltable('/tXML/Message/ShipConfirm/ShipConfirmDetails/Orders'
         passing xmltype(msg_xml.full_xml)
         columns
             order_id varchar2(20) path 'TcOrderId'
) sc_ord_hdr,
xmltable('/tXML/Message/ShipConfirm/ShipConfirmDetails/Orders/OrderLineItem'
         passing xmltype(msg_xml.full_xml)
         columns
             order_line_id varchar2(20) path 'TcOrderLineId',
             item_name varchar2(20) path 'ItemName'
) sc_ord_dtl
Here's the sample XML:
<?xml version="1.0" encoding="UTF-8"?>
<tXML>
<Header>
<Source>warehouse management system</Source>
<Action_Type></Action_Type>
<Sequence_Number></Sequence_Number>
<Batch_ID></Batch_ID>
<Reference_ID></Reference_ID>
<User_ID>CRONUSER</User_ID>
<Password></Password>
<Message_Type>ShipConfirm</Message_Type>
<Company_ID>1</Company_ID>
<Msg_Locale>English (United States)</Msg_Locale>
<Msg_Time_Zone>America/Denver</Msg_Time_Zone>
<Version>2018</Version>
</Header>
<Message>
<ShipConfirm>
<ShipConfirmSummary>
<CompanyName>Blah</CompanyName>
<FacilityName>Blah</FacilityName>
<ShipConfirmHeaderInfo>
<InvcBatchNbr>123456</InvcBatchNbr>
<LastInvcDttm>5/27/21 05:45</LastInvcDttm>
<ShippedDttm>5/27/21 05:45</ShippedDttm>
<DateCreated>5/27/21 05:45</DateCreated>
<StoreNbr></StoreNbr>
<ShipVia>ST</ShipVia>
<SchedDeliveryDate></SchedDeliveryDate>
<ProNbr></ProNbr>
<AppointmentNbr></AppointmentNbr>
<ManifestNbr></ManifestNbr>
<SealNbr></SealNbr>
<AppointmentDate></AppointmentDate>
<PartialShipConfirmStatus>5</PartialShipConfirmStatus>
<PreBillStatus>0</PreBillStatus>
<ApptMadeByID></ApptMadeByID>
<BillOfLading></BillOfLading>
<CancelQuantity>0.0</CancelQuantity>
<NbrOfLpns>26</NbrOfLpns>
<NbrOfPlts>0</NbrOfPlts>
<NbrOfOrders>26</NbrOfOrders>
<TotalWt>61.72</TotalWt>
<UserID>USER</UserID>
</ShipConfirmHeaderInfo>
</ShipConfirmSummary>
<ShipConfirmDetails>
<Orders>
<BatchCtrlNbr>123456</BatchCtrlNbr>
<DistributionShipVia>ST</DistributionShipVia>
<DoType>Customer Order</DoType>
<DsgShipVia>ST</DsgShipVia>
<OriginalShipVia>ST</OriginalShipVia>
<IncotermLocAvaTimeZoneId>America/New_York</IncotermLocAvaTimeZoneId>
<InvcBatchNbr>123456</InvcBatchNbr>
<IsBackOrdered>1</IsBackOrdered>
<MajorOrderCtrlNbr></MajorOrderCtrlNbr>
<OrderType>ECOMM_ORDER</OrderType>
<ShipDate>5/27/21 05:45</ShipDate>
<OrderStatus>Unplanned</OrderStatus>
<DoStatus>Shipped</DoStatus>
<TcCompanyId>1</TcCompanyId>
<TcOrderId>MYORDERID</TcOrderId>
<TotalNbrOfLpn>1</TotalNbrOfLpn>
<TotalNbrOfPlt>0</TotalNbrOfPlt>
<TotalNbrOfUnits>1</TotalNbrOfUnits>
<LineHaulShipVia>ST</LineHaulShipVia>
<PartialShipConfirmStatus>5</PartialShipConfirmStatus>
<PreBillStatus>0</PreBillStatus>
<OrderBillToInfo>
<BillToAddress1>Snip</BillToAddress1>
<BillToAddress2></BillToAddress2>
<BillToAddress3></BillToAddress3>
<BillToCity>Snip</BillToCity>
<BillToContact></BillToContact>
<BillToContactName></BillToContactName>
<BillToCountryCode>CA</BillToCountryCode>
<BillToCounty></BillToCounty>
<BillToFacilityName></BillToFacilityName>
<BillToName>Snip</BillToName>
<BillToPhoneNumber>Snip</BillToPhoneNumber>
<BillToPostalCode>Snip</BillToPostalCode>
<BillToStateProv>ON</BillToStateProv>
</OrderBillToInfo>
<OrderDestInfo>
<DestAddress1>Snip</DestAddress1>
<DestAddress2></DestAddress2>
<DestAddress3></DestAddress3>
<DestCity>Snip</DestCity>
<DestContact>Snip</DestContact>
<DestCountryCode>CA</DestCountryCode>
<DestCounty></DestCounty>
<DestDockDoorId>0</DestDockDoorId>
<DestFacilityAliasId></DestFacilityAliasId>
<DestFacilityId>0</DestFacilityId>
<DestFacilityName></DestFacilityName>
<DestName>Snip</DestName>
<DestPhoneNumber>Snip</DestPhoneNumber>
<DestPostalCode>Snip</DestPostalCode>
<DestStateProv>ON</DestStateProv>
</OrderDestInfo>
<OrderOriginInfo>
<OriginAddress1>Snip</OriginAddress1>
<OriginAddress2></OriginAddress2>
<OriginAddress3></OriginAddress3>
<OriginCity>Snip</OriginCity>
<OriginContact></OriginContact>
<OriginCountryCode>CA</OriginCountryCode>
<OriginFacilityAliasId>Snip</OriginFacilityAliasId>
<OriginFacilityId>1</OriginFacilityId>
<OriginFacilityName>Snip</OriginFacilityName>
<OriginPhoneNumber></OriginPhoneNumber>
<OriginPostalCode>Snip</OriginPostalCode>
<OriginStateProv>AB</OriginStateProv>
</OrderOriginInfo>
<OrderInfoFields>
<SplInstrCode1>MW</SplInstrCode1>
<SplInstrCode2>MW</SplInstrCode2>
</OrderInfoFields>
<OrderLineItem>
<InvcBatchNbr>123456</InvcBatchNbr>
<ItemId>159331</ItemId>
<ItemName>MYITEMNAME</ItemName>
<LineItemId>12053970</LineItemId>
<OrderQty>1</OrderQty>
<OrderQtyUom>Unit</OrderQtyUom>
<OrigItemId>159331</OrigItemId>
<OrigItemName>MYITEMNAME</OrigItemName>
<OrigOrderLineItemId>1</OrigOrderLineItemId>
<OrigOrderQty>1</OrigOrderQty>
<OrigOrderQtyUom>Unit</OrigOrderQtyUom>
<OutptOrderLineItemId>3782033</OutptOrderLineItemId>
<Price>15.39</Price>
<PriceTktType></PriceTktType>
<RetailPrice>0.0</RetailPrice>
<ShippedQty>1</ShippedQty>
<TcCompanyId>1</TcCompanyId>
<TcOrderLineId>1</TcOrderLineId>
<UnitVol>0.0744</UnitVol>
<UnitWt>0.58</UnitWt>
<Uom>Unit</Uom>
<UserCanceledQty>0</UserCanceledQty>
<OrderLineItemDefn>
<ItemStyle>Snip</ItemStyle>
<ItemStyleSfx>Snip</ItemStyleSfx>
</OrderLineItemDefn>
</OrderLineItem>
<Lpn>
<BillOfLadingNumber></BillOfLadingNumber>
<CFacilityAliasId>Snip</CFacilityAliasId>
<EstimatedWeight>0.58</EstimatedWeight>
<FinalDestFacilityAliasId></FinalDestFacilityAliasId>
<InvcBatchNbr>123456</InvcBatchNbr>
<LoadedDttm></LoadedDttm>
<ManifestNbr></ManifestNbr>
<MasterBolNbr></MasterBolNbr>
<NonInventoryLpnFlag>0</NonInventoryLpnFlag>
<NonMachineable></NonMachineable>
<OutptLpnId>730888</OutptLpnId>
<PackerUserid>USER</PackerUserid>
<ProcDttm>5/27/21 05:45</ProcDttm>
<ProcStatCode>0</ProcStatCode>
<QtyUom>Unit</QtyUom>
<ServiceLevel></ServiceLevel>
<ShipVia>ST</ShipVia>
<ShippedDttm>5/27/21 05:45</ShippedDttm>
<StaticRouteId></StaticRouteId>
<TcCompanyId>1</TcCompanyId>
<TcLpnId>98765</TcLpnId>
<TcOrderId>Snip</TcOrderId>
<TcParentLpnId></TcParentLpnId>
<TcShipmentId></TcShipmentId>
<TotalLpnQty>1</TotalLpnQty>
<TrackingNbr>Snip</TrackingNbr>
<VolumeUom>cu ft</VolumeUom>
<Weight>0.58</Weight>
<WeightUom>Lbs</WeightUom>
<LoadSequence>0</LoadSequence>
<oLPNXRefNbr></oLPNXRefNbr>
<LpnDetail>
<InvcBatchNbr>123456</InvcBatchNbr>
<ItemId>159331</ItemId>
<ItemName>MYITEMNAME</ItemName>
<LpnDetailId>20153787</LpnDetailId>
<OutptLpnDetailId>3689518</OutptLpnDetailId>
<QtyUom>Unit</QtyUom>
<SizeValue>1</SizeValue>
<TcCompanyId>1</TcCompanyId>
<TcLpnId>98765</TcLpnId>
<DistroNumber></DistroNumber>
<TcOrderLineId>1</TcOrderLineId>
<MinorOrderNbr>Snip</MinorOrderNbr>
<MinorPoNbr></MinorPoNbr>
</LpnDetail>
</Lpn>
</Orders>
</ShipConfirmDetails>
</ShipConfirm>
</Message>
</tXML>
Try something like this:
SELECT X.*
FROM my_table C,
     xmltable('for $i in $cust//string , $j in $cust//member[./string=$i]/name return <member>{$j}{$i}</member>'
              passing c.stat_xml AS "cust"
              columns
                  name varchar2(25) path '/member/name',
                  article_title xmltype path '//string'
     ) AS X
WHERE X.name = 'articles';
Here is a fiddle
I assumed that for every member you have one name but may have many strings.
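For the ShipConfirm case specifically, the usual Oracle fix is to chain XMLTABLEs: project Orders out of the outer XMLTABLE as an XMLTYPE column, then PASSING that column into a second XMLTABLE over OrderLineItem, so each line stays attached to its parent order instead of forming a cartesian product. The nesting logic is illustrated here outside Oracle with Python's standard ElementTree (element names follow the sample XML above; the SAMPLE document is heavily abbreviated and its order/item values are made up):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<tXML><Message><ShipConfirm>
<ShipConfirmSummary><ShipConfirmHeaderInfo>
<InvcBatchNbr>123456</InvcBatchNbr>
</ShipConfirmHeaderInfo></ShipConfirmSummary>
<ShipConfirmDetails>
<Orders><TcOrderId>ORDER1</TcOrderId>
<OrderLineItem><TcOrderLineId>1</TcOrderLineId><ItemName>ITEM-A</ItemName></OrderLineItem>
<OrderLineItem><TcOrderLineId>2</TcOrderLineId><ItemName>ITEM-B</ItemName></OrderLineItem>
</Orders>
<Orders><TcOrderId>ORDER2</TcOrderId>
<OrderLineItem><TcOrderLineId>1</TcOrderLineId><ItemName>ITEM-C</ItemName></OrderLineItem>
</Orders>
</ShipConfirmDetails>
</ShipConfirm></Message></tXML>"""

def order_lines(xml_text):
    root = ET.fromstring(xml_text)
    batch = root.findtext(".//ShipConfirmHeaderInfo/InvcBatchNbr")
    out = []
    # Iterate Orders first, then only that order's direct OrderLineItem
    # children: each line stays joined to its parent order, which is
    # exactly what chaining XMLTABLEs achieves in SQL.
    for order in root.iter("Orders"):
        order_id = order.findtext("TcOrderId")
        for line in order.findall("OrderLineItem"):
            out.append((batch, order_id,
                        line.findtext("TcOrderLineId"),
                        line.findtext("ItemName")))
    return out
```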
