I have around 30,000 Excel sheets containing a standardized report.
The goal is to provide an online dashboard to view the data.
My first thought as a programmer is to create a database, find a way to import the data, and then build a front end on top to create a customized dashboard.
Is there an easier way to do this?
If your reports are always going to be in Excel, I don't think converting them to a database is a good approach. It would only add the overhead of converting the data and building a frontend for it; that would be over-engineering.
Instead, you can use data visualization tools like Retool, which already offer visualization components on top of almost any source data. The source can be anything, be it an Excel sheet or a database.
There are other alternatives as well, such as Redash, which also offers connections to CSV files.
However, if you still feel that creating a database like MySQL (or any other database of your choice) is the better approach, then you should consider using Metabase or Superset. These are excellent data visualization tools that only require database credentials to connect. They are feature-rich, widely accepted in the industry, and can produce virtually any visualization you need.
Also, the best part: all the tools mentioned here are either fully open source or have a generous trial period.
FYI, for a better and more effective solution, I would recommend running a SQL Server instance on your machine, uploading the Excel reports to it, and then connecting that SQL Server instance to Metabase. That would do everything you need, and you don't have to spend time writing any code to manage all of this.
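If you go that route, the load step itself is small enough to script. Here is a minimal sketch, assuming pandas with SQLAlchemy and a pyodbc driver; the connection string, folder, and table name are made up:

import glob

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string -- adjust driver, host, and credentials
# to your own SQL Server instance (pyodbc and the ODBC driver assumed).
engine = create_engine(
    "mssql+pyodbc://user:password@localhost/reports"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

# Because every sheet follows the same standardized layout, each file can
# be appended into a single table that Metabase then queries.
for path in glob.glob("reports/*.xlsx"):
    df = pd.read_excel(path)      # requires openpyxl for .xlsx files
    df["source_file"] = path      # record where each row came from
    df.to_sql("standard_report", engine, if_exists="append", index=False)

Metabase then only needs the credentials for that database, and the standard_report table becomes the source for your dashboard questions.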
Related
I have a Power BI dashboard with direct queries to an Oracle database, where I import data using SQL queries. In my local .pbix file everything is fine. When I publish it to our enterprise powerbi.com site and try to refresh the data, I get the following error:
{"error":{"code":"DM_GWPipeline_Gateway_ProviderDataAccessArgumentError","pbi.error":
{"code":"DM_GWPipeline_Gateway_ProviderDataAccessArgumentError","parameters":{},"details":
[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage","detail":{"type":1,"value":"Unable to
find the requested .Net Framework Data Provider. It may not be installed."}},
{"code":"DM_ErrorDetailNameCode_UnderlyingHResult","detail":
{"type":1,"value":"-2147024809"}}],"exceptionCulprit":1}}}
Does anyone have any idea what could be causing the issue?
I have trawled through the Power BI forums and there does not seem to be a definitive remedy.
I don't have such an issue using TIBCO Spotfire; however, we are being pushed to use Power BI.
I found a workaround. It seems that after you create the views in Oracle, in Power BI you should choose Import rather than DirectQuery when connecting. Just click the Import option and find the table with the view you created in the subsequent dialogs.
I still see the same error if I manually refresh the data in the Power BI app, but I can at least now see my dashboard working, with the tables and graphics fully visible.
Is it possible to use the saiku-ui component with a different JOLAP provider than Mondrian, or with a different server backend than the saiku-server component?
I have been looking, but I have not found an architecture description of how these pieces fit together and what interfaces they use to communicate. Can anyone point me towards an understanding of what the saiku-ui expects to talk to and what the saiku-server provides?
The reason for my interest is that I have a set of data spread across hundreds of CSV files that I would like to query with a pivot and charting tool. It looks like the standard way to use these with Saiku would be an ETL process to load them into an RDBMS. However, this would not be a simple process, because the files, their content, and the way the files relate to each other vary, so the ETL would have to do a lot of inspection of the data sources to figure it out.
Given this, it seems to me that I have three options for how to use Saiku:
1) Write a complex ETL to load into an RDBMS, and then use a standard JDBC driver to provide the data to Mondrian. A side function of the ETL would be to analyze the inputs and write the Mondrian schema file describing the cubes.
2) Write a JDBC driver to access the data natively. This driver would parse SQL and provide access to the underlying tables - essentially a custom read-only DBMS written on top of the CSV files. The JDBC connection would be used by Mondrian to access the data. A side function of this custom DBMS would be to produce the Mondrian schema file.
3) Write a tool that provides a JOLAP interface to the native data (accepting discovery and MDX queries). This would bypass Mondrian entirely and interface directly with the UI.
I may be a bit naive here, but I consider each of the three options feasible. Option #1 is my least preferred because of the likelihood of the data in the RDBMS becoming out of sync with the CSV files. Option #3 is most preferred because the data are simple, so not much aggregation is required, and I suspect that MDX will be easier to parse than SQL.
So, if I could produce my own JOLAP data source, would it be possible to hook the saiku-ui tools up to it? Where would I look to find the interface configuration details?
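For illustration, the schema-writing side function mentioned in option #1 could be sketched roughly as below. The file layout, the crude numeric-or-not type inference, and the one-cube-per-file mapping are all assumptions, not part of Saiku or Mondrian:

import csv
import glob
import os
from xml.sax.saxutils import quoteattr

# Sketch: treat every CSV as one cube; classify each column as a dimension
# or a measure via a crude numeric test on the first data row.
cubes = []
for path in glob.glob("data/*.csv"):
    name = os.path.splitext(os.path.basename(path))[0]
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        first = next(reader, [""] * len(header))
    dims, measures = [], []
    for col, val in zip(header, first):
        try:
            float(val)
            measures.append(col)
        except ValueError:
            dims.append(col)
    dim_xml = "".join(
        f'<Dimension name={quoteattr(c)}><Hierarchy hasAll="true">'
        f"<Level name={quoteattr(c)} column={quoteattr(c)}/>"
        f"</Hierarchy></Dimension>"
        for c in dims
    )
    mea_xml = "".join(
        f'<Measure name={quoteattr(c)} column={quoteattr(c)} aggregator="sum"/>'
        for c in measures
    )
    cubes.append(
        f"<Cube name={quoteattr(name)}><Table name={quoteattr(name)}/>"
        f"{dim_xml}{mea_xml}</Cube>"
    )

with open("schema.xml", "w") as out:
    out.write(f'<Schema name="csv_autogen">{"".join(cubes)}</Schema>')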
Many years ago, ronaldbouman created xmondrian - a set of tools with an OLAP server and web UI tools for XMLA browsing and visualization. But that project is no longer updated, and the source code is not available.
I just updated the OLAP server and libraries to the latest versions.
You may get it here and build it:
https://github.com/Muritiku/xmondrian-build
You may use the web package as an example. The Mondrian server works with the saiku-ui.
IMHO,
I would not be as confident as you are, because it took Julian Hyde more than a decade to build Mondrian (MDX->SQL) and Calcite (SQL), which together cover your last two proposals.
You might simply consider using Calcite, or even better, Dremio. Dremio has a JDBC interface and can query directories of CSV files in SQL. I tested Saiku over Dremio successfully (with a schema based on two separate RDBMSs). Just be careful to set up the tables' schemas accordingly in the Mondrian v4 schema.
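For reference, this is roughly what querying Dremio from a script looks like over JDBC; the host, credentials, jar path, and dataset names below are assumptions - only the driver class and URL scheme are Dremio's:

import jaydebeapi

# Connect through the Dremio JDBC driver (jar path is hypothetical).
conn = jaydebeapi.connect(
    "com.dremio.jdbc.Driver",
    "jdbc:dremio:direct=localhost:31010",
    ["dremio_user", "dremio_password"],
    "/opt/dremio/dremio-jdbc-driver.jar",
)

# Dremio exposes a mounted directory of CSV files as queryable datasets,
# so plain SQL works over the raw files ("mysource" is a made-up source).
cur = conn.cursor()
cur.execute('SELECT COUNT(*) FROM mysource.reports."sales_2020.csv"')
print(cur.fetchall())
conn.close()

Mondrian then talks to the same JDBC endpoint, and Saiku sits on top of Mondrian as usual.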
Best regards,
Fabrice Etanchaud
Dremio
I have been tasked with creating some backups for some Oracle Apex apps (Application Express v4.1.1.00.23). The request is to back up both the applications & referenced db objects (not sure if this means just structure or structure & data).
On the one hand, I would have expected standard db backups to handle most or all of this but I'm very new to Apex so it's all a learning curve.
I'm currently exporting the application from Apex and then exporting (using SQL Developer) all the database object dependencies that Apex lists for me - although I see that the list doesn't include the functions used for authentication.
This seems a really clunky process that's very prone to mistakes (missing an object, saving something to the wrong place, no guarantee of consistency, etc.).
Does Apex (my version!) offer something to do the job or is there something else I could be doing? I've had a good google but nothing has stood out.
UPDATE: I realise now that I should have included some extra info. I'm at a large organisation and I believe our DB backups (which I guess/hope are done using RMAN) are handled by a different department. I think the motivation for the request is to have a local, easily accessible backup, so that if one of the developers messes something up we don't have to go through multiple layers of the organisation (and undoubtedly a lot of time) to sort ourselves out. I suspect that some kind of source control would be a great starting point, but I'm not sure how far I'll get with that idea - especially as we seem to have little autonomy over things like servers.
RMAN is the way to go to back up an Oracle database:
https://docs.oracle.com/database/121/BRADV/toc.htm
There is a ton of material on the hows and whys online; just google "oracle rman" and you'll find what you need (the documentation above should cover you as well, of course).
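As a rough illustration of scripting such a backup (the connection and the chosen strategy here are assumptions, not a recommendation for your environment), RMAN can be driven from standard input:

import subprocess

# A minimal full backup; "CONNECT TARGET /" assumes OS authentication on
# the database host. Adjust incremental levels and retention to taste.
rman_commands = """
CONNECT TARGET /
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
"""

subprocess.run(["rman"], input=rman_commands, text=True, check=True)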
cheers
Standard DB backups will include everything you need.
The Apex applications I develop are static, meaning the end users make no changes to the Apex application, and there is no need for any specific backup other than storing the original Apex application .sql installation files in a safe place.
If you must, you can export the database schemas the application uses, for example with the expdp utility.
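A minimal sketch of such an export, wrapped in Python; the credentials, TNS alias, and schema name are hypothetical, while the flags themselves are standard Data Pump parameters:

import subprocess

# Export a single schema with Data Pump; DATA_PUMP_DIR must be a directory
# object that already exists in the database.
subprocess.run(
    [
        "expdp",
        "system/password@ORCL",       # hypothetical credentials/TNS alias
        "schemas=APEX_APP_OWNER",     # hypothetical application schema
        "directory=DATA_PUMP_DIR",
        "dumpfile=apex_app_owner.dmp",
        "logfile=apex_app_owner.log",
    ],
    check=True,
)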
In Apex you need to take two backups: one of the workspace, and a second of your application.
Third, when using export/import from the database, it tends to lose the '&' character in procedures, so it's better to use RMAN and take a complete backup.
I have found this Oracle white paper, Life Cycle Management with Oracle Application Express (Revision 2), which does what it says on the tin - including various strategies for exporting, backing up, and managing 'lost application development'. It's a really good read, and I'll be using it as a template for suggesting how we can manage our process in future.
We are using Liquibase as an evolutionary DB change management tool in our applications, and it works great when we use it on "common" database schemas.
But we also work with GIS applications using the Esri ArcSDE 9.3 platform over Oracle, and in this case all (or almost all) tables in the schema (both GIS and 'alphanumeric' tables) are managed (create table, grants, etc.) through ArcSDE. So when we want to create a new feature class we currently use ArcCatalog, and this way it's not possible to manage feature class changes directly through SQL using Liquibase or another automated refactoring tool.
So if we cannot use Liquibase to manage the changes, we at least want to execute the management operations on our feature classes from the command line.
We've started looking for tools that avoid the use of ArcCatalog and would let us automate the changes using scripts. We are investigating these possibilities:
Capture the SQL that ArcCatalog/ArcSDE executes each time we make a change to a feature class, by monitoring the Oracle connection. This yields far too complex a set of SQL instructions (indexes, versioning tables, etc.), so we gave up on this approach.
Use the sdelayer and sdetable admin commands installed on the ArcSDE server (see the sketch at the end of this question).
Use the data management tool: a Python-based library for managing feature classes, though it has to be executed from a machine with the desktop version installed.
These last two options would give us a way to manage feature classes from the command line, but our goal is to find or develop a tool that manages changes the way Liquibase does. With these tools we would still need something that maps each SQL DDL operation onto an ArcSDE command, and currently no DB refactoring tool provides this (so far we have checked Liquibase, dbdeploy, and Flyway).
Has anybody solved this problem of evolutionary change management with ArcSDE? Any insight into another way to approach this problem?
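For reference, the command-line route via the admin tools might look roughly like this; the server, port, credentials, table, and column definitions are all made up, and the exact sdetable/sdelayer flags should be checked against the ArcSDE documentation:

import subprocess

# Hypothetical ArcSDE/Oracle connection parameters.
conn = ["-s", "sdeserver", "-i", "5151", "-u", "gis_owner", "-p", "secret"]

# Create a business table, then register its spatial column as a layer.
subprocess.run(["sdetable", "-o", "create",
                "-t", "ROADS",
                "-d", "road_id integer, name string(64)"] + conn, check=True)

subprocess.run(["sdelayer", "-o", "register",
                "-l", "ROADS,shape",
                "-e", "l",              # "l" = simple lines
                "-C", "road_id,USER"] + conn, check=True)

Scripts like these could in principle be invoked from a change management tool, but as noted above, nothing maps SQL DDL onto these commands for you.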
I'll take a stab at this, although I'm unfamiliar with one of the products you mention (Liquibase specifically) - I have used Oracle, and I'm very familiar with ArcGIS (ArcMap & ArcCatalog).
Here's just some additional information that may help and my interpretation of your question.
My interpretation - "What's a simple way to manage or enable us to automate the management of our tables of GIS data in our Oracle database without having to use ArcCatalog all the time?"
So - I'll throw this concept back into the ring: I know SQL Server has spatial data types ("geometry", etc.) and that you can bypass SDE and let ArcGIS connect directly and interpret this data without SDE even being installed. I also know Oracle has compatible spatial types. So I would consider migrating the data out of the managed feature classes that ArcCatalog creates and into Oracle-native geometry-based tables. This way you can treat them like regular tables, cut Esri out of the solution, and manage them with Liquibase, etc. Hopefully that helps.
I would also consider upgrading to 10.1, or at least 10.0 (I promise I'm not an undercover salesman), although that will require your users to come with you on the client side (http://resources.arcgis.com/en/help/main/10.1/index.html#//002q000000n8000000), because the newer Python API (arcpy, versus the old geoprocessor model) is much easier and faster to use if you do choose to manage your stuff with Python - a rough arcpy sketch follows. (Regardless, neither API is very well developed, intuitive to code in, or fast.)
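For illustration, here is what creating and altering a feature class from an arcpy script (ArcGIS 10.x) instead of ArcCatalog might look like; the connection-file path, feature class name, and field are hypothetical:

import arcpy

# Hypothetical .sde connection file pointing at the Oracle geodatabase.
sde = r"C:\connections\gis_oracle.sde"

# Create a new line feature class, then add an attribute column -- the
# kind of change you would otherwise click through in ArcCatalog.
arcpy.CreateFeatureclass_management(
    out_path=sde,
    out_name="ROADS",
    geometry_type="POLYLINE",
    spatial_reference=arcpy.SpatialReference(4326),  # WGS 84
)
arcpy.AddField_management(sde + "\\ROADS", "NAME", "TEXT", field_length=100)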
Good luck.
I am new to the reporting world. I wanted to know the right solution for generating a single report by querying data from multiple databases. We are planning to use a reporting solution like JasperReports or BIRT. Generally the databases are going to be PostgreSQL.
Please do feel free to let me know about any other better solutions as well.
Thanks.
With BIRT you can use as many data sources as you like, independently or together as joint data sets. A joint data set is basically a join you create at the report level. The cool thing here is that you can, in effect, create the join across databases, even across instances.
All the expected sources are supported, and even some not-so-expected ones: any JDBC database, web services, flat files, POJOs (via a scripted data source), XML, and native DB drivers (e.g. Oracle, SQL Server, etc.). You can even use one BIRT report as a data source for a second BIRT report. That goes a bit beyond the scope of the question, but it opens up a huge number of options in terms of deployment and flexibility.
With JasperReports, if you are generating the report on the server and streaming it back to the client as PDF or HTML, then you can use any data sources you want, be it:
Multiple databases
Objects, i.e. Java Beans
Text files/XML
Web services
... etc
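The pattern both answers rely on - joining data from several databases at the report layer rather than inside any one database - can be sketched outside either tool. Here is a minimal Python illustration with two PostgreSQL connections; the DSNs, tables, and key names are hypothetical:

import pandas as pd
import psycopg2

# Two independent PostgreSQL databases (hypothetical DSNs).
sales_db = psycopg2.connect("host=db1 dbname=sales user=report password=x")
crm_db = psycopg2.connect("host=db2 dbname=crm user=report password=x")

# Pull one data set from each database...
orders = pd.read_sql("SELECT customer_id, total FROM orders", sales_db)
customers = pd.read_sql("SELECT customer_id, name FROM customers", crm_db)

# ...and perform the "joint data set" join at the report layer.
report = orders.merge(customers, on="customer_id")
print(report.groupby("name")["total"].sum())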