Sum Column At Each Value Change of another field in foxpro - visual-foxpro

Sum a column at each value change of another field in FoxPro. How can I get the additive column? I was able to do a running total across all the items, but how can I get it to restart at each item change?
E.g.
Item Number   QTY   ADDITIVE
1045          50    50
1045          25    75
1045          35    110
2045          50    50
2045          50    100
2045          25    125
3056          30    30
3056          30    60
3056          30    90

It looks like simple addition, but how are you planning on storing and presenting the results to the end user... in a grid, or just the final running total per individual item? It looks like this might represent sales order item / qty sold. I would probably query into a read/write cursor ordered by item, then apply a scan loop to update each record... something like:
select ItemNumber, Qty, 000000 as RunningTotal ;
    from YourTable ;
    order by ItemNumber ;
    into cursor C_Sample readwrite

lastItem = ""   && assumes ItemNumber is character; initialise to 0 if it is numeric
runTotal = 0
scan
    * If different item, reset running total back to zero
    if lastItem != ItemNumber
        runTotal = 0
    endif
    * Update running total
    runTotal = runTotal + Qty
    * Update the record column
    replace RunningTotal with runTotal
    * Preserve the item we just processed for comparison to the next record
    lastItem = ItemNumber
endscan
* Done...
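The same reset-on-item-change pattern is easy to check outside FoxPro. Here is a minimal Python sketch, with the question's sample data hard-coded as tuples:

```python
# Running total that resets whenever the item number changes.
# Rows are assumed to be pre-sorted by item number, as in the cursor above.
rows = [
    ("1045", 50), ("1045", 25), ("1045", 35),
    ("2045", 50), ("2045", 50), ("2045", 25),
    ("3056", 30), ("3056", 30), ("3056", 30),
]

def running_totals(rows):
    result = []
    last_item = None
    run_total = 0
    for item, qty in rows:
        if item != last_item:   # new item: reset the accumulator
            run_total = 0
        run_total += qty
        result.append((item, qty, run_total))
        last_item = item
    return result

for item, qty, total in running_totals(rows):
    print(item, qty, total)
```

This reproduces the ADDITIVE column from the example table (50, 75, 110, then 50, 100, 125, and so on).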

Related

Deleting opposite duplicates in Oracle SQL

I have some issues that I hope you can lend a helping hand towards.
So I have some data with opposite values, for example:
Amount   Type   ID
 10000   10     25
-10000   10     25
 20000   11     30
 30000   12     49
-30000   12     49
Sorry for the ugly table.
But how can I delete the lines where the amounts cancel out? I would like the rows with 10000 and -10000 to be deleted, but I won't know the specific type and ID numbers in advance (30000 and -30000 are the same issue).
Any ideas? I've been searching forever, but can only find how to remove a duplicate row, not both rows.
Hope it makes sense :)
Update. Thanks for the solutions so far! :)
There can be more than a 1:1 match in the amount column, but those rows wouldn't have identical Type and ID. For example, a sixth entry could look like this:
Amount   Type   ID
 10000   10     25
-10000   10     25
 20000   11     30
 30000   12     49
-30000   12     49
 10000   31     42
And the last one should not be deleted :) Hope it makes sense now.
On the basis only of the limited information provided...
DELETE FROM my_table x
WHERE EXISTS (
    SELECT 1
    FROM   my_table y
    WHERE  y.id     = x.id
    AND    y.type   = x.type
    AND    y.amount = x.amount * -1);
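To illustrate, here is the same EXISTS-style delete run against SQLite from Python (SQLite stands in for Oracle purely for demonstration; the table and column names are taken from the question):

```python
import sqlite3

# In-memory database with the question's sample rows, including the
# sixth row (10000, 31, 42) that must NOT be deleted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (amount INTEGER, type INTEGER, id INTEGER)")
conn.executemany(
    "INSERT INTO my_table VALUES (?, ?, ?)",
    [(10000, 10, 25), (-10000, 10, 25), (20000, 11, 30),
     (30000, 12, 49), (-30000, 12, 49), (10000, 31, 42)],
)

# Delete every row whose exact negation exists for the same type and id;
# both halves of each cancelling pair match, so both are removed.
conn.execute("""
    DELETE FROM my_table
    WHERE EXISTS (SELECT 1 FROM my_table y
                  WHERE y.id = my_table.id
                    AND y.type = my_table.type
                    AND y.amount = -my_table.amount)
""")

remaining = conn.execute(
    "SELECT amount, type, id FROM my_table ORDER BY id").fetchall()
print(remaining)  # → [(20000, 11, 30), (10000, 31, 42)]
```

One caveat: if a (type, id) pair had, say, two +10000 rows and one -10000 row, all three would match and be deleted; handling that case needs a row-pairing approach rather than a simple EXISTS.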

BIRT interpolate data in crosstab for rows/cols without data

Crosstab interpolate data so graph 'connects the dots'
I'm having trouble with my crosstab or graph not interpolating the data correctly. I think this can be solved, but I'm not sure. Let me explain what I did.
I have a data cube with the rows grouping data by week number and the columns grouping data per record type. I added two flag bits to the dataset so I can see whether a record is new or old. In the data cube I added a measure to sum these bits per row/column group, effectively counting new and old records per week per column type. I also added a sum of records per column type.
In the crosstab I added two running sums, for the new-record and old-record sums, plus a data cell to calculate the actual records per row/column group: actual records = totalcoltype - runningsum(old) + runningsum(new)
So let's say there are 20 records in this set for a column type.
In week 1: 3 old records, 2 new records. The running sums become 3 and 2. Actual = 20 - 3 + 2 = 19 (correct)
In week 2: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 3: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 4: 2 old records, 1 new record. The running sums become 5 and 3. Actual = 20 - 5 + 3 = 18 (correct)
In week 5: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
In week 6: 6 old records, 2 new records. The running sums become 11 and 5. Actual = 20 - 11 + 5 = 14 (correct)
In week 7: no data. The running sums are not visible. Actual = 20 - null + null = 20 (wrong)
So the graph will display:
19
20
20
18
20
14
20
If I add a condition to the actual calculation, it becomes:
19
null
null
18
null
14
null
The graph doesn't ignore the null values but treats them as 0, so the graph is wrong.
Is there a way to make the graph really ignore the null values?
Another solution would be for the running sums to display the last known value, or simply add 0 where there is no data. Any idea how to do this?
Obviously, a correct graph would read:
19
19
19
18
18
14
14
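I can't speak to a pure BIRT-side setting, but the "display the last known value" fallback described above is a forward-fill. A minimal Python sketch of that idea (the series and the initial total of 20 come from the example above):

```python
def forward_fill(values, initial):
    """Replace None with the last non-None value seen — the
    'last known value' behaviour the crosstab needs for weeks
    with no data."""
    filled, last = [], initial
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Weekly "actual" values, with None for the weeks that had no data.
weekly_actuals = [19, None, None, 18, None, 14, None]
print(forward_fill(weekly_actuals, 20))  # → [19, 19, 19, 18, 18, 14, 14]
```

The output matches the "correct graph" sequence above, so if the empty-week cells can be made to carry the previous week's value instead of null, the chart connects the dots as intended.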

Sum multiple columns using PIG

I have multiple files with the same columns, and I am trying to aggregate the values in two of those columns using SUM.
The column structure is below:
ID   first_count   second_count   name   desc
1    10            10             A      A_Desc
1    25            45             A      A_Desc
1    30            25             A      A_Desc
2    20            20             B      B_Desc
2    40            10             B      B_Desc
How can I sum first_count and second_count to get this result?
ID   first_count   second_count   name   desc
1    65            80             A      A_Desc
2    60            30             B      B_Desc
Below is the script I wrote, but when I execute it I get the error "Could not infer matching function for SUM as multiple or none of them fit. Please use an explicit cast."
A = LOAD '/output/*/part*' AS (id:chararray,first_count:chararray,second_count:chararray,name:chararray,desc:chararray);
B = GROUP A BY id;
C = FOREACH B GENERATE group as id,
SUM(A.first_count) as first_count,
SUM(A.second_count) as second_count,
A.name as name,
A.desc as desc;
Your load statement is the problem: first_count and second_count are loaded as chararray, and SUM can't add two strings. If you are sure these columns will only ever contain numbers, load them as int. Try this:
A = LOAD '/output/*/part*' AS (id:chararray,first_count:int,second_count:int,name:chararray,desc:chararray);
It should work.
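The cast is the actual fix in Pig. For reference, the grouping and summing the script is aiming for can be sketched in plain Python (sample rows from the question):

```python
# Rows as (id, first_count, second_count, name, desc), as in the question.
rows = [
    ("1", 10, 10, "A", "A_Desc"),
    ("1", 25, 45, "A", "A_Desc"),
    ("1", 30, 25, "A", "A_Desc"),
    ("2", 20, 20, "B", "B_Desc"),
    ("2", 40, 10, "B", "B_Desc"),
]

def sum_by_id(rows):
    totals = {}
    for id_, first, second, name, desc in rows:
        if id_ not in totals:
            # name/desc are constant per id in the sample,
            # so keep the first value seen
            totals[id_] = [0, 0, name, desc]
        totals[id_][0] += first    # SUM(first_count)
        totals[id_][1] += second   # SUM(second_count)
    return {k: tuple(v) for k, v in totals.items()}

print(sum_by_id(rows))
```

Note that in Pig, projecting A.name inside the FOREACH yields a bag rather than a scalar; since name and desc are constant per id, grouping by (id, name, desc) or taking a single value from the bag gives the single-row-per-id output shown above.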

How to structure a db table to store ranges like 0;20;40;60;80 or 0;50;100;150;200;250; and so on

I want to store the ranges below in a table; the number of items in Template01 is 6, in the second 4. How can we structure the table(s) to store this information? Displaying these values comma-separated, or having one column per range value, would be my last resort. Can you help with how to structure the table or tables to store the information below?
Template01 0 10 20 30 40 50
Template02 0 50 100 150
Template03 0 100 200 300 400 500 600
Basically, I want to store these ranges as templates and use them later to say: if the luggage weight is from 0 - 10 it will cost $5 per kg, if it's 10 - 20 it will cost $4.80 per kg, etc.
This is how the template will be used,
Domain: XYZ
Template01 0 10 20 30 40 50
Increment01% 125% 120% 115% 110% 105% 100%
Domain: ABC. I am picking the same template, 'Template01', but the Increment% will be different for each domain, hence:
Template01 0 10 20 30 40 50
Increment02% 150% 140% 130% 120% 110% 100%
The idea is: I want to store templates of weight breaks and later associate them with different Increment% sets. Then, when the user chooses a weight-break template, all the increment% values configured for that template can be shown for the user to choose one.
Template01 0 10 20 30 40 50
Increment01% 125% 120% 115% 110% 105% 100% [Choice 1]
Increment02% 150% 140% 130% 120% 110% 100% [Choice 2]
Standard approach is:
Template_Name Weight_From Weight_To Price
------------- ----------- --------- -----
Template01 0 10 5
Template01 10 20 4.8
...
Template01 40 50 ...
Template01 50 (null) ...
Template02 0 50 ...
...
Template03 500 600 ...
Template03 600 (null) ...
For a normalised schema you'd need to have tables for the Template, for the Increment, for the Template Weights, and for the Increment Factors.
create table luggage_weight_template (
    luggage_weight_template_id number primary key,
    template_name varchar2(100) unique);

create table luggage_increment (
    luggage_increment_id number primary key,
    increment_name varchar2(100),
    luggage_weight_template_id number
        references luggage_weight_template(luggage_weight_template_id));

create table template_weight (
    template_weight_id number primary key,
    luggage_weight_template_id number
        references luggage_weight_template(luggage_weight_template_id),
    weight_from number not null);

create table increment_factor (
    increment_factor_id number primary key,
    increment_id number references luggage_increment(luggage_increment_id),
    template_weight_id number references template_weight(template_weight_id),
    factor number not null);
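As a sanity check, the same design can be exercised in SQLite from Python (types simplified from Oracle's number/varchar2; the sample data is Template01 and Increment01% from the question):

```python
import sqlite3

# SQLite sketch of the normalised schema above.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE luggage_weight_template (
    luggage_weight_template_id INTEGER PRIMARY KEY,
    template_name TEXT UNIQUE);
CREATE TABLE template_weight (
    template_weight_id INTEGER PRIMARY KEY,
    luggage_weight_template_id INTEGER
        REFERENCES luggage_weight_template(luggage_weight_template_id),
    weight_from INTEGER NOT NULL);
CREATE TABLE luggage_increment (
    luggage_increment_id INTEGER PRIMARY KEY,
    increment_name TEXT,
    luggage_weight_template_id INTEGER
        REFERENCES luggage_weight_template(luggage_weight_template_id));
CREATE TABLE increment_factor (
    increment_factor_id INTEGER PRIMARY KEY,
    increment_id INTEGER REFERENCES luggage_increment(luggage_increment_id),
    template_weight_id INTEGER REFERENCES template_weight(template_weight_id),
    factor REAL NOT NULL);
""")

# Template01 with weight breaks 0..50, one row per break.
db.execute("INSERT INTO luggage_weight_template VALUES (1, 'Template01')")
for i, w in enumerate([0, 10, 20, 30, 40, 50], start=1):
    db.execute("INSERT INTO template_weight VALUES (?, 1, ?)", (i, w))

# Increment01% attached to Template01, one factor per weight break.
db.execute("INSERT INTO luggage_increment VALUES (1, 'Increment01%', 1)")
for i, f in enumerate([1.25, 1.20, 1.15, 1.10, 1.05, 1.00], start=1):
    db.execute("INSERT INTO increment_factor VALUES (?, 1, ?, ?)", (i, i, f))

# Join a template's weight breaks to the chosen increment's factors.
schedule = db.execute("""
    SELECT tw.weight_from, f.factor
    FROM template_weight tw
    JOIN increment_factor f ON f.template_weight_id = tw.template_weight_id
    WHERE tw.luggage_weight_template_id = 1
    ORDER BY tw.weight_from
""").fetchall()
print(schedule)  # → [(0, 1.25), (10, 1.2), (20, 1.15), (30, 1.1), (40, 1.05), (50, 1.0)]
```

Adding Increment02% is then just one more luggage_increment row plus six increment_factor rows, with no change to the template itself.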

How to paginate when every page may start from a different offset

For example: 50 rows per page
Page 1: First fetch 50 rows (row 0 to row 49) and remove 15 of them with a filter, leaving only 35 rows. Then fetch another 50 rows (row 50 to row 99), from which the same filter keeps 15 rows to merge with those 35. Now I have 50 rows, but I spent at least 65 rows.
Page 2: starts with >= row 65.
Page 3: starts with >= row 115.
Page 4: starts with >= row 165.
How do I get the start offset of page 4 without re-running the filter over pages 1-3?
If you are fetching from a database, make the filter part of the query itself and let the database software do the counting for you. E.g.
SELECT * from myrows WHERE [condition] LIMIT 50 OFFSET 200
would give you the filtered records 200..249, i.e. contents of page #5.
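Sketching that point in Python with SQLite: once the filter is part of the query, page N is always LIMIT 50 OFFSET 50*(N-1) over the filtered rows, so no page needs to know how many raw rows earlier pages consumed. The table and filter condition here are made up for illustration:

```python
import sqlite3

# 1000 raw rows; the filter (n % 3 = 0) stands in for whatever
# condition was previously applied client-side after fetching.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE myrows (n INTEGER)")
db.executemany("INSERT INTO myrows VALUES (?)", [(i,) for i in range(1000)])

def page(page_no, page_size=50):
    # Offsets count FILTERED rows, so they are uniform per page.
    return [n for (n,) in db.execute(
        "SELECT n FROM myrows WHERE n % 3 = 0 "
        "ORDER BY n LIMIT ? OFFSET ?",
        (page_size, page_size * (page_no - 1)))]

print(page(1)[:3])  # → [0, 3, 6]  first filtered page
print(page(2)[0])   # → 150        page 2 starts at filtered row 50, not raw row 50
```

Every page is exactly 50 matching rows, and page 4 can be fetched directly with OFFSET 150, without ever computing where pages 1-3 ended in the raw row numbering.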
