PIG - retrieve data from XML using XPATH - hadoop

I have n XML files of this type:
<students roll_no="1">
  <name>abc</name>
  <gender>m</gender>
  <maxmarks>
    <marks>
      <year>2014</year>
      <maths>100</maths>
      <english>100</english>
      <spanish>100</spanish>
    </marks>
    <marks>
      <year>2015</year>
      <maths>110</maths>
      <english>110</english>
      <spanish>110</spanish>
    </marks>
  </maxmarks>
  <marksobt>
    <marks>
      <year>2014</year>
      <maths>90</maths>
      <english>95</english>
      <spanish>82</spanish>
    </marks>
    <marks>
      <year>2015</year>
      <maths>94</maths>
      <english>98</english>
      <spanish>02</spanish>
    </marks>
  </marksobt>
</students>
I need output like
roll_no name gender year eng_max_marks maths_max_marks spanish_max_marks
1 abc m 2014 100 100 100
1 abc m 2015 110 110 110
I am able to retrieve the marks row-wise in a single statement, but I am not able to extract roll_no and name along with them.
A = LOAD 'student.xml' using org.apache.pig.piggybank.storage.XMLLoader('marks') as (x:chararray);
B = FOREACH A GENERATE XPath(x, 'marks/year'), XPath(x, 'marks/english'), XPath(x, 'marks/maths'), XPath(x, 'marks/spanish');
This returns:
year eng_max_marks maths_max_marks spanish_max_marks
2014 100 100 100
2015 110 110 110
I can extract both chunks, but I don't see how to join the other fields to them. I can't use a join across them because I have n other files.
Let's forget the attribute (roll_no) for now. How can I extract the rest of the nodes?
name gender year eng_max_marks maths_max_marks spanish_max_marks
abc m 2014 100 100 100
abc m 2015 110 110 110
I don't want to use the marks(1)/english approach, because these nodes can also vary and I don't want to adopt a dirty approach.
Any pointers?
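One possible direction, as a rough sketch only (assuming the piggybank jar is registered and that its XPath UDF accepts attribute paths such as students/@roll_no, which would need verification): load on the outer students element so the student-level fields are in scope, and keep the raw chunk for the marks rows.
REGISTER piggybank.jar;
DEFINE XPath org.apache.pig.piggybank.evaluation.xml.XPath();

-- load whole <students> elements instead of <marks>
S = LOAD 'student.xml' USING org.apache.pig.piggybank.storage.XMLLoader('students') AS (x:chararray);

-- student-level fields from the same chunk; the repeated <marks> nodes would still
-- have to be flattened separately and combined with these fields
T = FOREACH S GENERATE XPath(x, 'students/@roll_no') AS roll_no,
                       XPath(x, 'students/name')     AS name,
                       XPath(x, 'students/gender')   AS gender,
                       x;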

Related

Count number of connections in each network using DAX

I have a dataset like this in Power BI with connections between "Participant ID" Column and "Knows Participant":
Participant ID   Knows Participant
111              353
111              777
111              112
111              249
112              143
112              144
113              111
113              244
114              NaN
115              113
...              ...
777              111
777              398
777              114
778              NaN
779              112
3499             NaN
I've built a Network chart. However, there are a lot of 1-1 connections that are not very useful for visualization, so I want to exclude them (see image):
Is it possible to count the number of connections in each network using DAX and then use this value to filter out all nodes with only 1 connection (circled in red)? Or maybe filter out 1-connection nodes using another approach?
I've tried to make a calculated column using DAX:
Connection Column =
COUNTROWS (
    FILTER (
        Table,
        EARLIER ( Table[Knows Participant] ) = Table[Knows Participant]
    )
)
However, it only counts duplicate values in the "Knows Participant" column, not the number of connections in each network.
Example of desired output:
Participant ID   Knows Participant   Number of Connections in the Network
111              353                 4
353              444                 4
444              551                 4
551              987                 4
112              143                 1
220              190                 1
333              337                 2
337              410                 2
765                                  0
You need the PATH functions as you're essentially trying to flatten a hierarchy and then exclude certain parts of it. The following help page gives a good rundown of the approach to take.
https://learn.microsoft.com/en-us/dax/understanding-functions-for-parent-child-hierarchies-in-dax
You can add a calculated column to the table like this:
Connection Column =
VAR pIdLinksCount =
    CALCULATE ( COUNTROWS ( 'tbl' ), ALL ( 'tbl'[Knows Participant] ) )
VAR neighbourLinksCount =
    IF (
        pIdLinksCount = 1,
        -- if pIdLinksCount = 1, count the neighbour's links instead
        VAR neighbourId =
            CALCULATETABLE ( VALUES ( 'tbl'[Knows Participant] ) )
        RETURN
            CALCULATE (
                COUNTROWS ( 'tbl' ),
                ALL (),                               -- removes all filters from the data model
                'tbl'[Participant ID] IN neighbourId  -- applies the neighbour as a filter on [Participant ID]
            ),
        2  -- if pIdLinksCount > 1, return 2 so that result is always > 2 below
    )
VAR result = pIdLinksCount + neighbourLinksCount
RETURN
    IF (
        result > 2,
        1,
        0
    )
The idea is to also check the neighbour: whether it has more than one link.
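For reference, the first variable on its own is simply a per-participant link count; as a standalone calculated column it would be (a sketch, using the same assumed table name 'tbl' and assuming the table contains only these two columns):
Links per Participant =
CALCULATE ( COUNTROWS ( 'tbl' ), ALL ( 'tbl'[Knows Participant] ) )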

Removing bad data from a data file using Pig

I have a data file like this
1943 49 1
1975 91 L
1903 56 3
1909 52 3
1953 96 3
1912 82
1976 66 3
1913 35
1990 45 1
1927 92 A
1912 2
1924 22
1971 2
1959 94 E
Now, using a Pig script, I want to remove the bad data, i.e. drop the rows that contain characters or have empty fields.
I tried this:
records = load '/user/a106524609/test.txt' using PigStorage(' ') as
(year:chararray, temperature:int, quality:int);
rec1 = filter records by temperature != 'null' and (quality != 'null ')
Load it as lines:
A = load 'data.txt' using PigStorage('\n') as (line:chararray);
Split on all whitespace (STRSPLIT yields chararray fields, so declare them as such and cast later):
B = FOREACH A GENERATE FLATTEN(STRSPLIT(line, '\\s+')) as (year:chararray, temp:chararray, quality:chararray);
Filter by valid quality values:
C = FILTER B BY quality IN ('0','1','2','3','4','5','6','7','8','9');
(Optionally) cast to ints:
D = FOREACH C GENERATE (int)year AS year, (int)temp AS temperature, (int)quality AS quality;
In Spark, I would start with a regex match of the expected format.
val cleanRows = sc.textFile("data.txt")
.filter(line => line.matches("(?:\\d+\\s+){2}\\d+"))
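The same regex idea can also be applied directly in Pig with the MATCHES operator before splitting; a sketch, assuming the same layout of three whitespace-separated numeric fields per valid line:
-- keep only lines of the form "digits whitespace digits whitespace digits"
RAW   = LOAD 'data.txt' USING PigStorage('\n') AS (line:chararray);
GOOD  = FILTER RAW BY line MATCHES '\\d+\\s+\\d+\\s+\\d+';
PARTS = FOREACH GOOD GENERATE FLATTEN(STRSPLIT(line, '\\s+')) AS (year:chararray, temp:chararray, quality:chararray);
TYPED = FOREACH PARTS GENERATE (int)year AS year, (int)temp AS temperature, (int)quality AS quality;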

Transformations with desired values

I'm trying to build a process in PowerCenter Designer, but I'm not getting the desired result.
I have these initial data:
CODE CODE2 OPTION
001 A 89
001 A 55
001 A 12
002 B 25
002 A 59
025 A 44
For each CODE I need the following: if there are several records for a CODE, the record with the highest OPTION value gets 111111 in the OPTION2 field; if there is only one record for the CODE, it also gets 111111. I can do this with a Sorter transformation in PowerCenter, which is not complicated.
What I cannot do is the next step: the record with the second-highest OPTION value should get, in OPTION2, the OPTION value of the record above it (the highest), and so on.
OUTPUT:
CODE CODE2 OPTION OPTION2
001 A 89 111111
001 A 55 89
001 A 12 55
002 A 59 111111
002 B 25 59
025 A 44 111111
How could I get this?
What transformations should I use?
Thanks! ^^
You can sort the data by CODE and by OPTION in descending order. Then, in an Expression transformation, hold the previous record's values in variable ports:
v_OPTION2 = IIF(ISNULL(v_PREV_CODE) OR CODE != v_PREV_CODE,
111111,
v_PREV_OPTION
)
out_OPTION2 = v_OPTION2
v_PREV_OPTION = OPTION
v_PREV_CODE = CODE
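One thing to keep in mind: port order matters, because variable ports in an Expression transformation are evaluated top to bottom for every row. A sketch of the intended layout (port names as above; the input is assumed to be already sorted by CODE ascending and OPTION descending):
-- Expression transformation ports, evaluated top to bottom per row
-- v_OPTION2     (variable): IIF(ISNULL(v_PREV_CODE) OR CODE != v_PREV_CODE, 111111, v_PREV_OPTION)
-- out_OPTION2   (output)  : v_OPTION2
-- v_PREV_OPTION (variable): OPTION   -- remembers this row's OPTION for the next row
-- v_PREV_CODE   (variable): CODE     -- remembers this row's CODE for the next row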

Spotfire Custom Expression : Calculate (Num/Den) Percentages

I am trying to plot Num/Den-type percentages using OVER, but my idea does not seem to translate into Spotfire custom expression syntax.
Sample Input:
RecordID CustomerID DOS Age Gender Marker
9621854 854693 09/22/15 37 M D
9732721 676557 09/18/15 65 M D
9732700 676557 11/18/15 65 M N
9777003 5514882 11/25/15 53 M D
9853242 1753256 09/30/15 62 F D
9826842 1260021 09/30/15 61 M D
9897642 3375185 09/10/15 74 M N
9949185 9076035 10/02/15 52 M D
10088610 3512390 09/16/15 33 M D
10120650 41598 10/11/15 67 F N
9949185 9076035 10/02/15 52 M D
10088610 3512390 09/16/15 33 M D
10120650 41598 09/11/15 67 F N
Expected Out:
Row Labels D Cumulative_D N Cumulative_N Percentage
Sep 6 6 2 2 33.33%
Oct 2 8 1 3 37.50%
Nov 1 9 1 4 44.44%
My counts are working.
I want to take the same Cumulative_N & Cumulative_D count and plot Percentage over [Axis.X] as a line chart.
Here's what I am using:
UniqueCount(If([Marker]="N",[CustomerID])) / UniqueCount(If([Marker]="D",[CustomerID])) THEN SUM([Value]) OVER (AllPrevious([Axis.X])) as [CumulativePercent]
I understand SUM([Value]) is not the way to go. But I don't know what to use instead.
I also tried the expression below, but it did not work either:
UniqueCount(If([Marker]="N",[CustomerID])) OVER (AllPrevious([Axis.X])) / UniqueCount(If([Marker]="D",[CustomerID])) OVER (AllPrevious([Axis.X])) as [CumulativePercent]
Can you have a look?
I found a way to make it work, but it may not fit your overall solution. I should mention I used Count() rather than UniqueCount() so that the results mirror your desired output.
Add a transformation to your current data table:
Insert a calculated column Month([DOS]) as [TheMonth]
Set Row Identifiers = [TheMonth]
Set Value columns and aggregation methods to Count([CustomerID])
Set Column titles to [Marker]
Leave the column name pattern as %M(%V) for %C
That will give you a new data table. Then, you can do your cumulative functions. I did them in a cross table to replicate your expected results. Insert a new cross table and set the values to:
Sum([D]), Sum([N]), Sum([D]) OVER (AllPrevious([Axis.Rows])) as [Cumulative_D],
Sum([N]) OVER (AllPrevious([Axis.Rows])) as [Cumulative_N],
Sum([N]) OVER (AllPrevious([Axis.Rows])) / Sum([D]) OVER (AllPrevious([Axis.Rows])) as [Percentage]
That should do it.
I don't know whether Spotfire released a fix or whether, with everyone's input, I finally got the syntax right. But here is the solution that worked for me.
For Columns D & N,
COUNT([CustomerID])
For columns Cumulative_D & Cumulative_N,
Count([CustomerID]) OVER (AllPrevious([Axis.X])) where [Axis.X] is DOS(Month), Marker
For column Percentage,
Count(If([Marker]="N",[CustomerID])) OVER (AllPrevious([Axis.X])) / Count(If([Marker]="D",[CustomerID])) OVER (AllPrevious([Axis.X]))
where [Axis.X] is DOS(Month)
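Put together on a single cross table value axis (with Month([DOS]) on [Axis.X]), the pieces above could be combined roughly like this; it is only a sketch, since the setup above splits D and N via the Marker column on the axis rather than with If():
Count(If([Marker]="D",[CustomerID])) as [D],
Count(If([Marker]="N",[CustomerID])) as [N],
Count(If([Marker]="D",[CustomerID])) OVER (AllPrevious([Axis.X])) as [Cumulative_D],
Count(If([Marker]="N",[CustomerID])) OVER (AllPrevious([Axis.X])) as [Cumulative_N],
Count(If([Marker]="N",[CustomerID])) OVER (AllPrevious([Axis.X])) / Count(If([Marker]="D",[CustomerID])) OVER (AllPrevious([Axis.X])) as [Percentage]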

Apache PIG - How to get the Flop 10 data records?

I have data records like this:
Name customerID revenue(Mio) premium
Michael James 078932832 2.7 y
Susan Miller 024383490 3.9 n
John Cooper 021023023 2.1 y
How do I get, separately for each premium flag, the records with the lowest revenue (the "flop 10", i.e. the bottom 10)?
The result should be given as:
Nr Name customerID revenue(Mio) premium
1 John Cooper 021023023 2.1 y
2 Michael James 078932832 2.7 y
3 Andrew Murs 044834399 3.0 y
. ... ..... ... .
10 th entry with flag y
1 Susan Miller 024383490 3.9 n
. ... ..... ... .
10 th entry with flag n
As you see the list is ordered ascending (beginning with the lowest revenue).
I guess you should use SPLIT.
Assuming A is your load statement:
A = load 'data' as (Nr,Name,customerID,revenue,premium);
SPLIT A INTO PRE IF premium == 'y', NONPRE IF premium == 'n';
C = order PRE by revenue asc;
D = order NONPRE by revenue asc;
Disclaimer: be careful when using SPLIT, as records where the condition evaluates to null get dropped. I have not compiled this code.
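To keep only the ten lowest-revenue records per flag, a LIMIT can be added after the ORDER statements above (in Pig, LIMIT applied to an ordered relation keeps the first n rows in that order); a small sketch:
PRE10    = LIMIT C 10;    -- the ten lowest-revenue records with premium == 'y'
NONPRE10 = LIMIT D 10;    -- the ten lowest-revenue records with premium == 'n'
DUMP PRE10;
DUMP NONPRE10;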
