How to bind values from CSV files to a database query? - birt

I'm trying to build a report using BIRT. I defined several data sources: two CSV files and a MySQL database. The query that retrieves data from the database looks like this:
SELECT applicationType, STATUS, COUNT(*)
FROM cards
GROUP BY applicationType, STATUS;
Then I created a table with three columns that outputs these values from the query:
So far so good. But instead of the raw applicationType and status values, I want to output the corresponding descriptions from the CSV files. The first file, apptype.csv, has the following structure:
applicationType,apptypedescr
1,"Common Type"
2,"Type 1"
...
and the second one, statuscards.csv, has the following structure:
status,statuscards
1,"Blocked"
2,"Normal"
...
And instead of:
Application Type | Card Status | Count
-----------------|-------------|------
1                | 2           | 55
I want to output the following:
Application Type | Card Status | Count
-----------------|-------------|------
Common Type      | Normal      | 55
I also created a new Joint Data Set to bind the MySQL dataset and the first file's dataset:
But I don't know how to change the table now. As far as I understand, [applicationType] in the first column should be replaced with [apptypedescr]:
but I'm not able to drag this field into the table; it can only be added to the report outside the table. How can I bind these values from the CSV files to the data from the MySQL query in the table?

I did this by setting a new dataset for the table in Properties -> Binding -> DataSet. After this the report was built properly.
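For reference, a minimal sketch of what the column binding expressions can look like once the table is bound to the joint data set. The column names apptypedescr and statuscards, and the cnt alias, are assumptions; use whatever names the Data Set Editor actually shows for the joined columns:

// first column: description from apptype.csv instead of the numeric code
dataSetRow["apptypedescr"]

// second column: description from statuscards.csv (assumes a second join for the status file)
dataSetRow["statuscards"]

// third column: the aggregated count; aliasing it in SQL (e.g. COUNT(*) AS cnt) makes it easier to reference
dataSetRow["cnt"]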

Related

Creating dynamic data validation

Given the following dataset in one sheet, let's say 'Product Types' (columns and headers displayed):
| A | B | C |
| :----------: | :------: | :-----: |
| Product Type | Desktops | Laptops |
| Desktops | Dell | Dell |
| Laptops | HP | Apple |
In another sheet, let's say 'Assets', I've set column A to require a match to the data validation of an item listed in column A of 'Product Types' (not including the header). What I'm trying to do is that once column A is selected in 'Assets', I'd like to create a dynamic query data validation that will then present the values of the column with the header in 'Product Types'.
As an example, in the 'Assets' sheet, if column A has "Laptops" selected, column B will use data validation for values under the "Laptops" column in 'Product Types'; then giving the only options as "Dell" or "Apple". Alternatively, if ColA is changed to "Desktops", data validation is defined to only allow "Dell" or "HP" as options.
I'm unsure if this is possible. However, data validation in Google Sheets claims to allow a range or "formula".
I don't remember where I sourced this formula from, but it can present the values I need when running the query within a cell. However, I'm unable to use the same formula within a data validation field.
=ARRAYFORMULA(IFERROR(VLOOKUP(A2, TRANSPOSE({'Product Types'!A1:M1;
REGEXREPLACE(TRIM(QUERY(IF('Product Types'!A2:M<>"", 'Product Types'!A2:M&",", )
,,999^99)), ",$", )}), 2, 0)))
The above query presents the correct comma-separated values of the column I want in 'Product Types', but I'm not sure if this can be translated into something data validation can use or if there's altogether a different method to accomplish this.
P.S. I'm new here. Markdown for the table seems to work when editing, but not when published.
The answer is no: data validation does not support direct input of complex formulae. You will need to run your formula in a helper column and then reference that column's range within the data validation rule.
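A minimal sketch of that workaround, assuming a helper sheet named 'Helper' and that the validation is for 'Assets'!B2 (both names are assumptions, not from the original post). Spill the allowed values for the chosen type into a helper column, e.g. in 'Helper'!A2:

=FILTER('Product Types'!A2:C, 'Product Types'!A1:C1 = Assets!A2)

Then set the data validation on 'Assets'!B2 to "List from a range" pointing at Helper!A2:A. Each Assets row that needs its own options would need its own helper column (or row) built the same way.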

Spotfire: Using one selection as a range for another datatable

I've searched quite a bit for this and can't find a good solution anywhere to what seems to me like a normal problem for this product.
I've got a data table (in memory) that is from a rollup table(call it 'Ranges'). Basically like so:
id | name | f1 | f2 | totals
0 | Channel1 | 450 | 680 | 51
1 | Channel2 | 890 | 990 | 220
...and so on
Which creates a bar chart with Name on the X and Totals on the Y.
I have another table that is an external link to a large (500M+ row) table. That table (call it 'Actuals') has a column ('Fc') whose values fall between the F1 and F2 values of Ranges.
I need a way for Spotfire Analyst (v7.x) to use the selection in the bar chart for Ranges to trigger this select statement:
SELECT * FROM Actuals WHERE Actuals.Fc between [Ranges].[F1] AND [Ranges].[F2]
But there aren't any relationships (foreign keys) between the two data sources; one is in memory (Ranges) and the other is dynamically loaded.
TLDR: How do I use the selected rows from one visualization as a filter expression for another visualization's data?
My choice for the workaround:
Add a button which says 'Load Selected Data'
This will run the following code, which stores the values of F1 and F2 in document properties that you can then use to filter your dynamically loaded table and trigger a refresh (either with the refresh code below or by setting the table to load automatically).
# grab the rows currently marked in the IL_Ranges table
rowIndexSet = Document.ActiveMarkingSelectionReference.GetSelection(Document.Data.Tables["IL_Ranges"]).AsIndexSet()
if not rowIndexSet.IsEmpty:
    # store F1 and F2 of the first marked row in document properties
    Document.Properties["udF1"] = Document.Data.Tables["IL_Ranges"].Columns["F1"].RowValues.GetFormattedValue(rowIndexSet.First)
    Document.Properties["udF2"] = Document.Data.Tables["IL_Ranges"].Columns["F2"].RowValues.GetFormattedValue(rowIndexSet.First)
# refresh the dynamically loaded Actuals table so it picks up the new property values
if Document.Data.Tables.Contains("IL_Actuals"):
    myTable = Document.Data.Tables["IL_Actuals"]
    if myTable.IsRefreshable and myTable.NeedsRefresh:
        myTable.Refresh()
This is currently operating on the assumption that you will not allow your user to view multiple ranges at a time, and simply shows the first one selected.
If you DO want to allow them to view multiple ranges, you can run a cursor through your IL_Ranges table to get the Min and Max of each value and limit the Actuals between those, or you can build a string along the lines of 'Fc between 450 and 680 or Fc between 890 and 990', pass it to a stored procedure as a string, execute the quasi-dynamic statement, and grab the resulting dataset.
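A rough sketch of that multi-range variant, assuming a string document property named udFcFilter that your stored procedure or custom query consumes (the property name is hypothetical; the other API calls mirror the snippet above):

# build an OR'ed BETWEEN clause over every marked row in IL_Ranges
rowIndexSet = Document.ActiveMarkingSelectionReference.GetSelection(Document.Data.Tables["IL_Ranges"]).AsIndexSet()
f1Values = Document.Data.Tables["IL_Ranges"].Columns["F1"].RowValues
f2Values = Document.Data.Tables["IL_Ranges"].Columns["F2"].RowValues
ranges = []
for rowIndex in rowIndexSet:
    ranges.append("Fc between " + f1Values.GetFormattedValue(rowIndex) + " and " + f2Values.GetFormattedValue(rowIndex))
if ranges:
    # e.g. "Fc between 450 and 680 or Fc between 890 and 990"
    Document.Properties["udFcFilter"] = " or ".join(ranges)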

Pig Latin using two data sources in one FILTER statement

In my Pig script, I am reading data from more than 5 data sources (Hive tables), where one is the main source and the rest are dimension tables. I am trying to filter the main data source relation (or alias) with respect to some value in one of the dimension relations.
E.g.
-- main_data is main data source and dept_data is department data
filtered_data1 = FILTER main_data BY deptID == dept_data.departmentID;
filtered_data2 = FOREACH filtered_data1 GENERATE $0, $1, $3, $7;
In my Pig script there are at least 20 places where I need to match some value between multiple data sources and produce a new relation, but I am getting an error like this:
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias filtered_data1.
Backend error : Scalar has more than one row in the output. 1st : ( ..... ) 2nd : ( .... )
Details at logfile: /root/pig_1403263965493.log
I tried the "relation::field" approach as well, with no luck. Alternatively, I am joining these two relations (data sources) to get the filtered data, but I feel this will slow down execution and dump an unnecessarily huge amount of data.
Please guide me on how to use two or more data sources in one FILTER statement, something like the following in SQL, so that I can avoid JOIN statements and get it done from the FILTER statement itself:
Where A.deptID = B.departmentID And A.sectionID = C.sectionID And A.cityID = D.cityID
If you want to match records from different tables by a single ID, you would pretty much have to use a join, as such:
Where A::deptID = B::departmentID And A::sectionID = C::sectionID And A::cityID = D::cityID
If you just want to keep the records that occur in all other tables, you could probably go for an INTERSECT and then a
FILTER BY someID IN someIDList
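A minimal sketch of the join-based approach for the department case, reusing the relation and field names from the question (treat it as illustrative rather than a drop-in script):

-- join the main relation with the department dimension on the key
joined = JOIN main_data BY deptID, dept_data BY departmentID;

-- main_data's fields come first in the joined schema, so the original
-- positional references still work; named fields would be main_data::fieldname
filtered_data2 = FOREACH joined GENERATE $0, $1, $3, $7;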

Populating Birt through columns

I have been trying to come up with a BIRT report to print food tags, to no avail. What I want to show on the report is:
foodtag1 | foodtag2 | foodtag3
foodtag4 | foodtag5 | foodtag6
foodtag7 | foodtag8 | foodtag9
Can this be done?
The data is taken from a MySQL query:
select dishes.name
from dishes
where find_in_set (dishes.id, (select orders.dishes from orders where orders.id = ))
** Note: FoodTags 1-9 are all unique names of dishes.
** Also note that foodtags 1-9 are placeholders for dish names: FoodTag1 could be "Yang Zhou Fried Rice", "Italian Pasta", or "Mee Goreng". The data is taken from a data source on a MySQL server.
The easiest way--
Drag a grid element to your report, set it at 3 columns and 3 rows
In property editor, bind the grid to the data set
Drag a dynamic text element to the first cell in the grid
Then use JavaScript similar to this to filter to the desired text:
// dynamic text expression: only render the value when it is the tag for this cell
if (row["FoodTagColumn"] == 'foodtag1') {
    row["FoodTagColumn"];
} else {
    null;
}
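The other eight cells follow the same pattern, each dynamic text element checking for its own tag. For example, the second cell's expression might look like this (same assumed column name as above):

// second grid cell: show the value only when it is the second tag
if (row["FoodTagColumn"] == 'foodtag2') {
    row["FoodTagColumn"];
} else {
    null;
}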

Play Framework: How to render a table structure from plain SQL table

I would be happy to learn a good way to render a "table" structure from a plain SQL table.
In my specific case, I need to render JSON structure used by Google Visualization API "datatable" object:
http://code.google.com/apis/chart/interactive/docs/reference.html#DataTable
However, having an example in HTML would help as well.
My "source" is a plain SQL table of "DailySales": its columns are "Day" (date), "Product" and "DailySaleTotal" (daily sale for that product). Please recall that my "model" reflects the 3-column table above.
The table columns should be "products" (suppose we have a very small number of them). Each row should represent a specific date, and the row data are the actual sales for that day.
Date Product1 Product2 Product3
01/01/2012 30 50 60
01/02/2012 35 3 15
I was trying to use nested #{list} tags in a template, but unfortunately I failed to find a natural way to provide a template with a "list" to represent the "row data".
Of course, I can build a "helper object" in Java that will build a list of the "sales data" items per date - but this looks very weird to me.
I would be thankful to anyone who can provide an elegant solution.
Max
When you load your model, order it by date and product name. Then, in your controller, build a map with the date as key and the list of model objects sharing that date as value.
In your template you then have a first list iteration over the map keys for the rows and a second list iteration over the list values for the columns.
Something like
[
#{list modelMap.keys, as: 'date'}
[${date},#{list modelMap.get(date), as: 'product'}${product.dailySaleTotal}#{ifnot product_isLast},#{/ifnot}#{/list}]#{ifnot date_isLast},#{/ifnot}
#{/list}
]
You can then adapt your JSON rendering to the exact structure you want to have; here it is an array of arrays.
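For the controller side, a rough Play 1.x sketch of the grouping described above; DailySales, day, product and the action name are assumptions based on the question, not code from the original answer:

// inside a Play 1.x controller; groups the ordered rows by day so the template
// can iterate rows (dates) first and then columns (products)
public static void salesTable() {
    List<DailySales> sales = DailySales.find("order by day, product").fetch();
    Map<Date, List<DailySales>> modelMap = new LinkedHashMap<Date, List<DailySales>>();
    for (DailySales sale : sales) {
        List<DailySales> row = modelMap.get(sale.day);
        if (row == null) {
            row = new ArrayList<DailySales>();
            modelMap.put(sale.day, row);
        }
        row.add(sale);
    }
    render(modelMap);
}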
Instead of building the JSON structure yourself, as Seb suggested, you can have it generated for you:
private static Result queryToJsonResult(String sql) {
    // run the raw SQL through Ebean and serialize the resulting rows as JSON
    SqlQuery sqlQuery = Ebean.createSqlQuery(sql);
    return ok(Json.toJson(sqlQuery.findList()));
}
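A hypothetical usage of that helper from an action method; the SQL string simply mirrors the DailySales table from the question:

public static Result dailySales() {
    // returns a JSON array of row objects keyed by column name
    return queryToJsonResult("select Day, Product, DailySaleTotal from DailySales");
}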
