How to change the column headers in QuickSight programmatically? - amazon-quicksight

I have multiple reports to publish in QuickSight. Each report is generated from a different dataset that I have added to the dashboard. There are 10 datasets in total, so there will be 10 reports. Most of these reports share a lot of repeated column headers. Is there a way to change the names of the column headers programmatically, so that I can apply the same change to all the other reports?
Below is an example of the columns in the datasets. The real datasets are quite large, so I cannot share them here.
Dataset 1 -
user_id, user_name, user_country, user_city, number_of_jobs_finished
Dataset 2 -
user_id, user_name, user_country, user_city, number_of_interactions
.
.
.
Dataset 10 -
user_id, user_name, user_country, user_city, number_of_movies_watched
-Thanks,
Vinit

Use the QuickSight dataset CLI / API (create-data-set / update-data-set). The physical table map accepts a CustomSql structure, so you can rename columns by aliasing them in the SQL query:
CustomSql -> (structure)
    A physical table type built from the results of the custom SQL query.
    DataSourceArn -> (string)
        The Amazon Resource Name (ARN) of the data source.
    Name -> (string)
        A display name for the SQL query result.
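A minimal sketch of what that looks like from the CLI (the account ID, dataset ID, data source ARN, table name, and column names below are all placeholders you would substitute with your own):

```shell
# Wrap the dataset in a CustomSql physical table whose SELECT aliases the
# columns to the display names you want, then repeat the same update for
# each of the ten datasets. All identifiers here are made-up examples.
cat > physical-table-map.json <<'EOF'
{
  "tbl0": {
    "CustomSql": {
      "DataSourceArn": "arn:aws:quicksight:us-east-1:111122223333:datasource/my-source",
      "Name": "dataset-1-renamed",
      "SqlQuery": "SELECT user_id AS \"User ID\", user_name AS \"User Name\", user_country AS \"Country\", user_city AS \"City\", number_of_jobs_finished AS \"Jobs Finished\" FROM dataset_1",
      "Columns": [
        {"Name": "User ID",       "Type": "STRING"},
        {"Name": "User Name",     "Type": "STRING"},
        {"Name": "Country",       "Type": "STRING"},
        {"Name": "City",          "Type": "STRING"},
        {"Name": "Jobs Finished", "Type": "INTEGER"}
      ]
    }
  }
}
EOF

# Apply it to the dataset (requires AWS credentials and an existing dataset,
# so it is commented out in this sketch):
# aws quicksight update-data-set \
#   --aws-account-id 111122223333 \
#   --data-set-id dataset-1 \
#   --name "Dataset 1" \
#   --import-mode SPICE \
#   --physical-table-map file://physical-table-map.json
```

Because the rename lives in a JSON file, you can script the same alias list across all ten datasets instead of editing each report by hand.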

Related

Delete last record from flat file in blob to Azure data warehouse

I have some pipe-delimited flat files in Blob storage, and in each file I have a header and a footer record containing the filename, the date of extract, and the number of records. I am using an ADF pipeline with Polybase to load into Azure DWH. I can skip the header record but am unable to skip the footer. The only way I could think of is creating a staging table with all varchar columns, loading into staging, and then converting the data types back in the main tables. But that is not working because the footer has a different number of columns than the data. Is there any easier way to do this? Please advise.
Polybase does not have an explicit option for removing footer rows but it does have a set of rejection options which you could potentially take advantage of. If you set your REJECT_TYPE as VALUE (rather than PERCENTAGE) and your REJECT_VALUE AS 1 you are telling Polybase to reject one row only. If your footer is in a different format to the main data rows, it will be rejected but your query should not fail.
CREATE EXTERNAL TABLE yourTable
...
WITH (
    ...
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 1
);
Please post a simple, anonymised example of your file with header, columns and footer if you need further assistance.
Update: Check this blog post for information on tracking rejected rows:
https://azure.microsoft.com/en-us/blog/load-confidently-with-sql-data-warehouse-polybase-rejected-row-location/

Building an index with multiple tables in ElasticSearch/Logstash 7.0

I have 20 tables in Oracle, all of them contain (among others) the following columns: id, name, description and notes. I would like the user to enter a text, the text to be searched in either name, description and/or notes of all the tables, and the result to return which table(s) and id(s) has the text.
In the Logstash 7.0 configuration file, do I need to define one jdbc input for each table? Or should the input be a single select with an union of all the tables?
My suggestion for the question above: combine the information from all the tables into a single JSON document per record, then index that. It solves the problem in a much simpler way.
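To illustrate the single-input variant of that idea, here is a sketch of a Logstash 7.x pipeline that uses one jdbc input with a UNION ALL across the tables and tags each row with its source table; the connection settings, table names, and index name are placeholders, not your real schema:

```shell
# Write a minimal Logstash pipeline config. One jdbc input with a UNION ALL
# replaces twenty separate jdbc inputs; the source_table column tells the
# user which table (and id) matched their search.
cat > combined-tables.conf <<'EOF'
input {
  jdbc {
    jdbc_driver_library => "/path/to/ojdbc8.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@//dbhost:1521/ORCL"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "
      SELECT id, name, description, notes, 'TABLE_1' AS source_table FROM table_1
      UNION ALL
      SELECT id, name, description, notes, 'TABLE_2' AS source_table FROM table_2
      -- ... one SELECT per table, 20 in total
    "
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "combined-tables"
  }
}
EOF
```

Since every indexed document carries source_table and id, a full-text query against name, description, and notes returns exactly which table(s) and id(s) contain the text.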

OBIEE - cannot create aggregated values

I am trying to create aggregated fields for OBIEE reporting, but I cannot get it to work even in the Oracle BI Admin Tool.
When I double-click AMOUNT in the Business Model & Mapping layer and set its aggregation to SUM (column type: DOUBLE), save the repository, load it in Enterprise Manager (EM), and check it in OBIEE, it shows only null values.
If I check the server logs in EM to see which query was generated, there is no amount column in the SELECT at all.
I have also tried to create it directly inside OBIEE -> Table edit form -> Variable options -> Aggregation rule -> SUM.
But nothing happened; I can see the values in detail, not grouped and summed by dimension :(
Do you have any idea/suggestions what am I doing wrong?
OBI will select CAST(NULL AS DOUBLE) or something along those lines (as shown in the query log) when your logical model is invalid. Check that you have created your model correctly.

BIRT Report Designer - split actual and budget data stored in one table into columns and add a variance

I have financial data in the following format in a SQL database and I have to live with this format unfortunately (example dummy data below).
I have however been struggling to get it into the following layout in a BIRT report.
I have tried creating a data cube with Package, Flow and Account as Dimensions and Balance as a Measure, but that groups actual PER and actual YTD next to each other, and budget PER and YTD next to each other, etc., so it is not quite what I need.
The other idea I had was to create four new calculated columns, the first would only have a value if it were a line for actual and per, the next only if it was actual and ytd etc, but could not get the IF function working in the calculated column.
What are the options? Can someone point me in the direction of how to best create the above layout from this data structure so I can take it from there?
Thanks in advance.
I am not sure what DB you are using in the back end, but this is how I did it with SQL Server.
The important bit happens in the Data Set. Here is the SQL for my Data Set:
SELECT
    Account,
    Package,
    Flow,
    Balance
FROM data
UNION
SELECT DISTINCT
    Account,
    'VARIANCE',
    Flow,
    (SELECT COALESCE(SUM(Balance), 0) FROM data
     WHERE Account = d.Account AND Flow = d.Flow AND Package = 'ACTUAL')
    -
    (SELECT COALESCE(SUM(Balance), 0) FROM data
     WHERE Account = d.Account AND Flow = d.Flow AND Package = 'BUD') AS Balance
FROM data d
This gives me a table containing the original rows plus the calculated VARIANCE rows.
Then I created a DataCube that contained
Groups/Dimensions
Account
Flow
Package
Summary Fields/Measures
Balance
Then I created a CrossTab Report that was based on that DataCube
And this produces the desired crosstab layout.
Hopefully this helps.

SSRS Tablix Cell Calculation based on RowGroup Value

I have looked through several of the posts on SSRS tablix expressions and I can't find the answer to my particular issue.
I have a dashboard I am creating that contains summary data for various managers. They are entering monthly summary data into a single table structured like this:
CREATE TABLE OperationMetrics (
    Date date,
    Plant char(10),
    Sales float,
    ReturnedProduct float
)
The data could use some grouping, so I created a reference table that records which report group each metric belongs to. It looks like this:
CREATE TABLE OperationsReport (
    ReportType varchar(50),
    MetricType varchar(50)
)
In this table, 'Sales' and 'ReturnedProduct' are entries in the MetricType column, while 'ExecSummary' or 'Quality' are ReportType entries. To do the join, I decided to UNPIVOT the OperationMetrics table...
SELECT Date, Plant, Metric, MetricType
FROM (SELECT Date, Plant, Sales, ReturnedProduct FROM OperationMetrics) p
UNPIVOT (Metric FOR MetricType IN (Sales, ReturnedProduct)) AS UnPvt
and join it to the OperationsReport table so I have grouped metrics.
SELECT Date, Plant, Metric, Rpt.ReportType, MetricType
FROM OpMetrics_Unpivoted OpEx
INNER JOIN OperationsReport Rpt ON OpEx.MetricType = Rpt.MetricType
(I understand that elements of this is not ideal but sometimes we are not in control of our destiny.)
This does not include the whole of the tables but you get the gist. So, they have a form they fill in the OperationMetrics table. I chose SSRS to display the output.
I created a tablix with the following configuration (I can't post images due to my rep...)
Date is the only column group, grouped on 'MMM-yy'
Parent Row Group is the ReportType
Child Row Group is the MetricType
Now, my problem is that some of the metrics are calculations of other metrics. For instance, 'Returned Product (% of Sales)' is not entered by the manager because it is assumed we can simply calculate that. It would be ReturnedProduct divided by Sales.
I attempted to calculate this by using a lookup function, as below:
Switch(Fields!FriendlyName.Value="Sales",SUM(Fields!Metric.Value),
Fields!FriendlyName.Value="ReturnedProduct",SUM(Fields!Metric.Value),
Fields!FriendlyName.Value="ReturnedProductPercent",Lookup("ReturnedProduct",
Fields!FriendlyName.Value,Fields!Metric.Value,"MetricDataSet")/
Lookup("Sales",Fields!FriendlyName.Value,Fields!Metric.Value,
"MetricDataSet"))
This works great! For the first month... but since Lookup looks for the first match, it just posts the same value for the rest of the months after.
I attempted to use this but it got me back to where I was at the beginning since the dataset does not have the value.
Any help with this would be well received. I would like to keep the rowgroup hierarchy.
It sounds like the LookUp is working for you but you just need to include the date to find the right month. LookUp will return the first match which is why it's only working on the first month.
What you can try is concatenating the Metric Name and Date fields in the LookUp.
Lookup("Sales" & CSTR(Fields!DATE.Value), Fields!FriendlyName.Value & CSTR(Fields!DATE.Value), Fields!Metric.Value, "MetricDataSet")
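Folding that date-aware key back into your original expression might look like this (a sketch only; it assumes the same FriendlyName, DATE, and Metric fields and the MetricDataSet name used above):

```
Switch(Fields!FriendlyName.Value="Sales", SUM(Fields!Metric.Value),
    Fields!FriendlyName.Value="ReturnedProduct", SUM(Fields!Metric.Value),
    Fields!FriendlyName.Value="ReturnedProductPercent",
        Lookup("ReturnedProduct" & CSTR(Fields!DATE.Value),
               Fields!FriendlyName.Value & CSTR(Fields!DATE.Value),
               Fields!Metric.Value, "MetricDataSet")
        / Lookup("Sales" & CSTR(Fields!DATE.Value),
               Fields!FriendlyName.Value & CSTR(Fields!DATE.Value),
               Fields!Metric.Value, "MetricDataSet"))
```

The concatenated metric-name-plus-date key makes each month's lookup distinct, so Lookup's first-match behavior returns the right row per month.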
Let me know if I misunderstood the issue.
