QuickSight analysis breaks after migration from Dev to Prod even though it points to the correct dataset and the fields still exist - amazon-quicksight

Need help.
The QuickSight analysis breaks even though it points to the dataset it was originally connected to, and the fields still exist after the (SPICE) dataset was updated.
The analysis (including filters) should not break if it points to a dataset that is working properly and the fields it used initially still exist.

Related

Filter a Data Source from a Different Data Source

I have two chart tables both with different data sources. I want one table to act as the filter to the other table.
Here is the problem...
I tried a custom query for my data source which used the email parameter to filter the data source.
The problem is every time a user changes a filter on any page a query is executed in BigQuery, slowing the results and exponentially increasing my BigQuery monthly charges.
I tried blending the two tables.
The problem is the blended data feature only allows for 10 dimensions to be added to the resulting blended data source and is very slow.
I tried creating a control filter using a custom field on the "location" column on each table sharing the same "Field Id".
The problem is that the results table returns all the stores until you click on a location in the control list. And I cannot let a user see other locations.
Here is a link to a Data Studio sample report where you can clearly see what I am trying to do.
https://datastudio.google.com/reporting/dd33be45-ab13-4881-8a3b-cabafa8c0dbb
Thanks
One solution I can recommend to overcome your first challenge, i.e. high cost: you can control cost by caching in GCP Memorystore, depending on how frequently the data is updated.
Moreover, BigQuery also caches results for repeated identical queries (with some exceptions, e.g. around wildcard tables and time-partitioned tables), so try to optimise your solution for analysis cost where that is feasible. BigQuery partitioning and clustering may also help you reduce BQ analysis cost.
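A rough sketch of that caching idea, assuming a Memorystore (Redis) instance reachable from wherever the report's backing queries run; the host, TTL, table and filter values below are placeholders, not part of the original question:

    import hashlib
    import json

    import redis                       # pip install redis
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    # Placeholder Memorystore (Redis) endpoint and cache lifetime.
    cache = redis.Redis(host="10.0.0.3", port=6379)
    CACHE_TTL_SECONDS = 3600  # tune to how often the underlying data actually changes

    bq = bigquery.Client()

    def cached_query(sql):
        """Serve a BigQuery result from Redis when possible, otherwise run it and cache it."""
        key = "bq:" + hashlib.sha256(sql.encode()).hexdigest()
        hit = cache.get(key)
        if hit is not None:
            return json.loads(hit)
        rows = [dict(row) for row in bq.query(sql).result()]
        cache.setex(key, CACHE_TTL_SECONDS, json.dumps(rows, default=str))
        return rows

    # Example: one location's rows, filtered server-side so a user never sees other stores.
    rows = cached_query(
        "SELECT * FROM `my_project.my_dataset.sales` WHERE location = 'store_001'"
    )

Every repeated filter change then hits Redis instead of re-running the query in BigQuery, which is where the cost and latency come from.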

Determining dates of last use for tables/views in an Oracle Database

At work, my team accesses and works in a number of different databases using our team login. We have a ton of tables and views in each respective schema and I would guess that only ~10% are used regularly. As such, I would like to clean up these schemas to keep only those tables and views which are actually used and delete all the other ones (or at least archive them).
Is there any way for me to see the last time that a view was run, or the last time that a table was queried? My thinking is that if I can see that a view/table hasn't been used in x amount of time, then I'd feel more comfortable dropping it. My fear is that without such a process, I might drop tables/views that are used in Tableau dashboards and for other purposes.
Please check this link.
The DBA_HIST tables can show you data only for as long as it is retained, not beyond that, and it won't be conclusive.
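As a hedged illustration of that approach (the DBA_HIST/AWR views require the Diagnostics Pack licence and only cover the retention window): the query below looks for the most recent AWR snapshot in which each object of a schema appeared in a captured execution plan. Credentials, DSN and the schema name are placeholders.

    import oracledb  # pip install oracledb

    # Placeholder connection details -- adjust for your environment.
    conn = oracledb.connect(user="team_login", password="***", dsn="dbhost/service_name")

    # Last AWR snapshot in which each object of the schema appeared in a captured plan.
    # Objects that never appear may still have been used: AWR only captures top SQL.
    SQL = """
        SELECT p.object_owner,
               p.object_name,
               MAX(s.end_interval_time) AS last_seen
          FROM dba_hist_sql_plan p
          JOIN dba_hist_sqlstat st
            ON st.sql_id = p.sql_id AND st.dbid = p.dbid
          JOIN dba_hist_snapshot s
            ON s.snap_id = st.snap_id
           AND s.dbid = st.dbid
           AND s.instance_number = st.instance_number
         WHERE p.object_owner = :owner
         GROUP BY p.object_owner, p.object_name
         ORDER BY last_seen DESC
    """

    with conn.cursor() as cur:
        cur.execute(SQL, owner="MY_SCHEMA")
        for owner, name, last_seen in cur:
            print(f"{owner}.{name}: last seen in AWR on {last_seen}")

Anything that never shows up here should still be cross-checked against the Tableau dashboards and other consumers before dropping it.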

Seeking Opinion: Denormalising Fact and Dim tables to improve performance of SSRS Reports

We seem to have a bit of a debate on a discussion point in our team.
We are working on a Data Warehouse in the Microsoft SQL Server 2012 platform. We have followed the Kimball Architecture to build this Data Warehouse.
Issue:
A reporting solution (built on SSRS), which sources data from this Warehouse, has significant performance issues when sourcing data from fact and dim tables. Some of our team members suggest that we extract data from facts and dims into a new set of tables using SSIS packages. This would mean we denormalise these tables into ‘Snapshot’ tables. In this way we would not need to join these tables to create data sets within the reports; data could be read out of these tables directly.
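For illustration only, a minimal sketch of what such a denormalised snapshot rebuild might look like if scripted (in practice this would be a step in an SSIS package); the schema, table and column names are invented:

    import pyodbc  # pip install pyodbc

    # Placeholder connection string -- point it at the SQL Server 2012 warehouse.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwserver;DATABASE=DW;Trusted_Connection=yes"
    )

    # Join the fact to its dimensions once, persisting the flattened result so the
    # SSRS datasets can read it without any joins.  All names are hypothetical.
    REBUILD = """
        IF OBJECT_ID('rpt.SalesSnapshot') IS NOT NULL
            DROP TABLE rpt.SalesSnapshot;

        SELECT f.OrderDateKey,
               d.CalendarYear,
               d.MonthName,
               p.ProductName,
               c.CustomerRegion,
               f.SalesAmount,
               f.Quantity
        INTO   rpt.SalesSnapshot
        FROM   dbo.FactSales   f
        JOIN   dbo.DimDate     d ON d.DateKey     = f.OrderDateKey
        JOIN   dbo.DimProduct  p ON p.ProductKey  = f.ProductKey
        JOIN   dbo.DimCustomer c ON c.CustomerKey = f.CustomerKey;
    """

    cur = conn.cursor()
    cur.execute(REBUILD)
    conn.commit()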
I do have my own worries about this; inconsistencies, maintenance of different data structures, duplication of data etc to name a few.
Question:
Would you consider creating snapshot tables (by denormalising fact and dim tables) for reporting the right approach?
Would like to hear your thoughts on this.
Cheers
Nithin
I don't think there is anything wrong with snapshot tables. The two most important aspects of a data warehouse are:
The data is correct.
The data is useful.
If your users are unable to extract the totals they require, in a reasonable timescale, they won't use the warehouse.
My own solution includes 3 snapshot tables. Like you, I was worried about inconsistencies. To address this we built an automated checking process. This sub-system executes a series of queries, stored on a network drive, once an hour. Any records returned by the queries are considered a fail. Fails are reported and immediately investigated by my ETL team. This sub-system ensures the snapshots and underlying facts are always aligned and consistent with each other. Drift is prevented.
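Not the original sub-system, but a hedged sketch of what one such reconciliation query might look like (any rows returned count as a fail); the snapshot, fact and dim names are hypothetical:

    import pyodbc  # pip install pyodbc

    # Placeholder connection string -- adapt to your warehouse.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwserver;DATABASE=DW;Trusted_Connection=yes"
    )

    # Compare monthly totals in the snapshot against the same totals recomputed
    # from the fact/dim tables; any mismatch is a failure to investigate.
    CHECK = """
        SELECT COALESCE(s.CalendarYear, f.CalendarYear) AS CalendarYear,
               COALESCE(s.MonthName,    f.MonthName)    AS MonthName,
               s.SnapshotTotal,
               f.FactTotal
        FROM (SELECT CalendarYear, MonthName, SUM(SalesAmount) AS SnapshotTotal
                FROM rpt.SalesSnapshot
               GROUP BY CalendarYear, MonthName) s
        FULL JOIN (SELECT d.CalendarYear, d.MonthName, SUM(f.SalesAmount) AS FactTotal
                     FROM dbo.FactSales f
                     JOIN dbo.DimDate   d ON d.DateKey = f.OrderDateKey
                    GROUP BY d.CalendarYear, d.MonthName) f
          ON f.CalendarYear = s.CalendarYear AND f.MonthName = s.MonthName
       WHERE COALESCE(s.SnapshotTotal, -1) <> COALESCE(f.FactTotal, -1)
    """

    cur = conn.cursor()
    failures = cur.execute(CHECK).fetchall()
    for row in failures:
        # In a real sub-system this would raise an alert for the ETL team instead.
        print("Snapshot drift:", row)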
That said, additional tables equal additional complexity, and that requires more time/effort to manage. Before introducing another layer to your warehouse, you should investigate why these queries are underperforming. If joins are to blame:
Are you using an inappropriate data type for your P/F keys?
Are the FKeys indexed (some RDBMSs do this by default, others do not)? A quick check for unindexed foreign keys is sketched below.
Have you looked at the execution plans for the offending queries?
Is the join really to blame, or is it a filter applied to the dim table?
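For the foreign-key point, a rough first-pass check; it only flags foreign keys none of whose columns appear in any index and ignores column order, so treat the output as a hint rather than a verdict. Connection details are placeholders.

    import pyodbc  # pip install pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwserver;DATABASE=DW;Trusted_Connection=yes"
    )

    # Foreign keys with no indexed column at all -- candidates for an index.
    UNINDEXED_FKS = """
        SELECT fk.name                          AS foreign_key,
               OBJECT_NAME(fk.parent_object_id) AS table_name
        FROM sys.foreign_keys fk
        WHERE NOT EXISTS (
            SELECT 1
            FROM sys.foreign_key_columns fkc
            JOIN sys.index_columns ic
              ON ic.object_id = fkc.parent_object_id
             AND ic.column_id = fkc.parent_column_id
            WHERE fkc.constraint_object_id = fk.object_id
        )
        ORDER BY table_name, foreign_key
    """

    for fk_name, table_name in conn.cursor().execute(UNINDEXED_FKS):
        print(f"{table_name}: foreign key {fk_name} has no supporting index")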
For raw cube performance my advice would be to always try to denormalize your tables and have one fact table and one table for each dimension (star schema).
If you are unsure whether it will actually help, you could start by creating materialized views. These are kind of the best of both worlds; in the long run you should alter your ETL.
In my previous job we only had flattened tables, which worked quite well. Currently we have a normalized schema but flatten it in the last step.
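On SQL Server, the closest equivalent of a materialized view is an indexed view, so a first experiment could look like the hedged sketch below. All names are invented, and indexed views come with restrictions (SCHEMABINDING, two-part names, COUNT_BIG(*) with GROUP BY, specific SET options, no outer joins), so check the documentation against your actual view.

    import pyodbc  # pip install pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwserver;DATABASE=DW;Trusted_Connection=yes"
    )

    # An indexed view: the SELECT stays normalised, but the unique clustered
    # index materialises the aggregated result on disk.
    DDL = [
        """
        CREATE VIEW dbo.vSalesByMonth
        WITH SCHEMABINDING
        AS
        SELECT d.CalendarYear,
               d.MonthName,
               SUM(f.SalesAmount) AS SalesAmount,
               COUNT_BIG(*)       AS RowCnt
        FROM dbo.FactSales f
        JOIN dbo.DimDate   d ON d.DateKey = f.OrderDateKey
        GROUP BY d.CalendarYear, d.MonthName
        """,
        """
        CREATE UNIQUE CLUSTERED INDEX IX_vSalesByMonth
            ON dbo.vSalesByMonth (CalendarYear, MonthName)
        """,
    ]

    cur = conn.cursor()
    for statement in DDL:
        cur.execute(statement)
    conn.commit()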

SSRS query data partially missing from report

I know a variation of this question has been asked 100 times, but I've tried all the options that I can find and wonder if there's something I'm missing. I've tried many of the proposed solutions but can't get it to work.
I have a matrix report in SSRS 2013...parameters, datasets, all the normal stuff. The datasets are SQL stored procedures. My matrix uses a dataset called dsDetails that pulls from one of the SQL stored procedures. When I run the SP in SQL Server I get all the data I expect (i.e. data through year 2018); and if I run the SP through the SSRS Query Designer I also see all of the data. However, when my report renders, some of the data isn't there (specifically I'm missing 2018 data; 2016 and 2017 data is present). I've deleted the .rdl.data file and cleared out the cache and still can't get my data in. I also checked that the missing data doesn't have any weird formatting, so that also isn't the issue.
I've gone through and checked for matrix filters, row/column group filters, row/column group visibility and so on. I can't find what's causing it...it seems like it has to be a filter or visibility.
So I decided to just drop a new matrix into the report and try to rebuild it by adding a single row and a single column, and same deal...missing data for year 2018. Is there a form of filter that applies to the report body (and therefore any matrix/table I drop in there)? This new matrix definitely has no filters or visibility added, so it's a pure representation of the data...except it's missing the 2018 data I clearly see in the query/dataset results.

Tracing a single Issue

Problem: I have selected a few issues. Now, I want to trace an issue within the source code files from the moment it was first detected as an issue until it is repaid/resolved/removed/deleted, or is still remaining in the latest repository.
So, for each unique issue (unique to a specific source file), I want a list that has N rows (N = number of analyses, e.g. SNAPSHOTS) where each row shows the existence of the issue in a source file (preferably also with its location in the source file).
Questions: Apparently, I couldn't find an API for this. When I explored the database, I was unable to establish a connection between SNAPSHOTS and ISSUES tables that I could use to separate issues from one SNAPSHOT/analysis to another.
Do you see any way to solve the problem?
How can I separate issues from one snapshot to the others?
What is the format/encoding of the LOCATION field of the ISSUES table? Can this be used to identify an issue's location in the source file?
The relation between issues and analyses is not persisted over time. Still, each issue has a creation date, the date of its last change (status, assignee, ...) and optionally a close date. That allows you to match issues with the dates of your analyses.
As a side note, the database must never be accessed by plugins or external applications. The only supported way to extract this data is through the web services: api/issues/search and api/issues/changelog in your case.
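A minimal sketch of that web-service route, assuming a reachable SonarQube server, a user token and a project key (all placeholders); the parameters shown are documented for api/issues/search and api/issues/changelog, but check the built-in web API documentation of your own server version:

    import requests  # pip install requests

    # Placeholder server URL, token and project key.
    BASE = "https://sonarqube.example.com"
    AUTH = ("my_user_token", "")   # the token goes in the username field
    PROJECT = "my_project_key"

    def search_issues(component_key, page_size=500):
        """Page through api/issues/search and yield every issue of the component."""
        page = 1
        while True:
            resp = requests.get(
                f"{BASE}/api/issues/search",
                params={"componentKeys": component_key, "ps": page_size, "p": page},
                auth=AUTH,
            )
            resp.raise_for_status()
            data = resp.json()
            yield from data["issues"]
            if page * page_size >= data["total"]:
                break
            page += 1

    for issue in search_issues(PROJECT):
        # creationDate / closeDate let you line each issue up against your list of
        # analysis dates; textRange (when present) gives its location in the file.
        print(issue["key"], issue.get("creationDate"),
              issue.get("closeDate"), issue.get("textRange"))

        # Full status history of a single issue, if you need the intermediate changes.
        changelog = requests.get(
            f"{BASE}/api/issues/changelog",
            params={"issue": issue["key"]},
            auth=AUTH,
        ).json()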
