I want to transform two values in a column with Spoon.
I have the value "1" in the column gender and want to change it to the value "Male",
and I have the value "0" in the column gender and want to change it to the value "Female".
The input is a CSV file and the output will be an Excel file.
Which steps do I need for the transformation of the gender column?
Use the Value Mapper step in your transformation; it lets you map each source value in a field to a target value.
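A minimal configuration sketch for the Value Mapper step, using the labels from its dialog (the exact labels may differ slightly between PDI versions; leave "Target field name" empty to overwrite gender in place):
Fieldname to use: gender
Target field name: (empty, so the values in gender are replaced)
Source value 1 -> Target value Male
Source value 0 -> Target value Female
Read the file with a CSV file input step and write the result with a Microsoft Excel Output step.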
I have tried the traditional approach of using an Aggregator (Group By: ID, Store Name) and MAX on each Object column separately.
Then, in the next Expression, I concatenate: Val1 || Val2 || Val3 || Val4.
However, I'm getting the output '0100'.
But the required output is: 1100
Please let me know how this can be done in IICS.
IICS is similar to PowerCenter on-prem.
First use an Aggregator:
In the Group By tab, add ID and Store Name.
In the Aggregate tab, add MAX(object1), ... and take care to set the data type and length correctly.
Then use an Expression transformation:
Link ID and Store Name through first.
Then concatenate the max_* columns using the pipe operator:
out_max = max_col1 || max_col2 || ... and again take care to set the data type and length correctly.
This should generate the correct output. I think you are getting the wrong output because of the data length or data type of the object fields. Make sure you trim spaces from the object data before the Aggregator.
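For illustration, the output port in the Expression transformation could look like this; max_object1 through max_object4 are placeholder names for the ports coming out of the Aggregator, and the trims guard against stray spaces:
out_flags = LTRIM(RTRIM(max_object1)) || LTRIM(RTRIM(max_object2)) || LTRIM(RTRIM(max_object3)) || LTRIM(RTRIM(max_object4))
With four single-character flags this yields a string such as 1100, provided out_flags is defined as a string port that is at least four characters long.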
I need to create a new column containing, repeated in every row, a specific cell value.
In Column1, Row1 the table contains the exam name, let's say "Exam1". I now need a new column with this text value "Exam1" for all of my students. The new column's name would then be "Exam-Name", and every row should contain the exam name "Exam1".
How can I reference specific content in a specific cell and use that text as the value for all rows of a new column?
Concretely: I need the value "MMA20aL..." to be the value in the rows on the right, where you currently find the contents "KE21a...".
Thank you for your help. I'm pretty new to Power Query, so thank you in advance for a detailed explanation :-)
If you right-click the first cell and choose Drill Down, it will show you a formula. Make a note of that formula, then remove the drill-down step.
Add Column ... Custom Column ... with the column name Exam-Name and the formula it showed you, such as:
= Source{0}[Column1]
where Source in the above formula is the name of the prior step and Column1 is the name of the first column.
So, for example, if the prior step is #"Changed Type" and the first column is named Bob, then use instead
= #"Changed Type"{0}[Bob]
(As a note, your data is in a terrible format and needs a lot of massaging before it will be really useful.)
I have a CSV flowfile with a single record. I need to create its file name based on a couple of column values in the CSV file. Can you please let me know how we can do it by using the column name only, not the position of the column, since the column position may change? Example:
CSV File
Name, City, State, Country, Gender
John, Dallas, Texas, USA, M
File name should be John_USA.csv
I am trying the ExtractText processor, pulling the first data row with a dynamic property such as:
row = ^.*\r?\n(.*)
Then in an UpdateAttribute processor I build the file name from the columns using the expression below:
${row:getDelimitedField(1)}_${row:getDelimitedField(4)}.csv
But this uses the position of the column, not the column name. How can I build it using the column name instead of the column position?
The way I would do it (maybe not the most efficient one; a concrete processor sketch follows the list):
Convert the CSV to JSON.
Pass the content to attributes, so you can access the field you want like a dictionary (key-value).
Update the attributes.
Convert it back to CSV (this way you control the schema and the position of the fields).
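As a sketch of that flow in concrete processors (the processor and property names are real NiFi ones, but the reader/writer setup and JsonPath values are assumptions based on the sample file, and they assume the JSON writer emits a single object rather than an array; otherwise use $[0].Name style paths):
ConvertRecord        CSVReader -> JsonRecordSetWriter
EvaluateJsonPath     Destination = flowfile-attribute, Name = $.Name, Country = $.Country
UpdateAttribute      filename = ${Name}_${Country}.csv
ConvertRecord        JsonTreeReader -> CSVRecordSetWriter
Because EvaluateJsonPath addresses the fields by their JSON keys, the attributes keep working even if the column order in the CSV changes.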
I am getting a flowfile where one of the columns in the content holds a hexadecimal value. My requirement is to derive a new column with the corresponding decimal value. If I use the UpdateRecord processor with Replacement Value Strategy set to "Literal Value" and update the existing field's value using ${field.value:fromRadix(16)}, it works fine. But if I try to derive a new column from the value of the existing column, I get a runtime error. I used Replacement Value Strategy "Record Path Value" and the dynamic property value ${/existing_column:fromRadix(16)}. Could you please let me know what I am missing here?
If you set the strategy to "Record Path Value" then the property value has to be a valid record path statement after EL evaluation, and the statement you have, ${/existing_column:fromRadix(16)}, is not valid. It fails because EL is evaluated first, and EL doesn't know what /existing_column is.
There really needs to be a record path function for fromRadix so that you could do fromRadix( /existing_column, 16).
Without that, I think you need a two-step process (spelled out below)...
Step 1 - The first UpdateRecord has /new_column = /existing_column to create a new column holding the hex value of the original column.
Step 2 - The second UpdateRecord has /new_column = ${field.value:fromRadix(16)} to convert the new column to decimal.
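Spelled out as processor configuration (strategy names as they appear on UpdateRecord; /new_column and /existing_column are the placeholder record paths from above, and both processors also need a Record Reader and Record Writer configured):
UpdateRecord #1: Replacement Value Strategy = Record Path Value, dynamic property /new_column = /existing_column
UpdateRecord #2: Replacement Value Strategy = Literal Value, dynamic property /new_column = ${field.value:fromRadix(16)}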
I have a .csv file which consists of 7 columns: ID, Title, Media-Type, Published, Content, Source and Label.
The .csv file/dataset is as given here:
Dataset
Now, what I want to do is transform the values in the last column, "Label". That is, I want to convert the "0"s in the dataset to read as "FALSE" and the "1"s to read as "TRUE". Simply put, I want "TRUE" in place of the 1s and "FALSE" in place of the 0s. Is there any way this can be done? Any kind of help is appreciated. Thanks a lot in advance.
Use Get & Transform to open the CSV file, then select the column, replace 1 with TRUE and 0 with FALSE, and load the result into a worksheet. That way you will have no formulas in the sheet. With Get & Transform you can also refresh the query with one click if the underlying data source changes.
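In the Power Query Advanced Editor the two Replace Values steps would look roughly like this (assuming the column is named Label and was imported as text; if it came in as a whole number, use 1 and 0 without quotes instead):
= Table.ReplaceValue(Source, "1", "TRUE", Replacer.ReplaceValue, {"Label"})
= Table.ReplaceValue(#"Replaced Value", "0", "FALSE", Replacer.ReplaceValue, {"Label"})
Each line is one step, so the second one refers to the first by its step name.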
You can also use a worksheet formula like this, where column G holds the Label values:
=IF(G2=1, TRUE, FALSE)