Importing a Dashboard With the Same Chart/Dataset Names Overrides Existing Dashboard Charts/Datasets (Apache Superset) - business-intelligence

I am facing an issue when importing dashboards whose chart/dataset names are the same across multiple dashboards.
Scenario
I have developed a dashboard named Company1_Dashboard in one instance (Instance1). It contains two charts, Department_Category and Department_Revenue, each backed by a dataset of the same name that pulls data from a PostgreSQL schema named company1_schema.
I created another dashboard, Company2_Dashboard, in a second Apache Superset instance (Instance2). It also contains two charts named Department_Category and Department_Revenue, each backed by a dataset of the same name that pulls data from a PostgreSQL schema named company2_schema.
Now, when I import Company2_Dashboard into Instance1 (after creating company2_schema in Instance1), the charts/datasets of Company1_Dashboard get overridden by those of Company2_Dashboard.
When I explore Instance1's charts, I can see that their datasets have been overridden by Company2_Dashboard's datasets.
This happens for every chart/dataset with the same name, even across different dashboards.
I have tried exporting and importing multiple times, but the same issue occurs.
Is there any possible solution to resolve this?
Apache Superset version used: 1.5.1
Thanks in Advance
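One way to narrow this down is to inspect the two export bundles before importing and see exactly which chart/dataset names (and UUIDs) collide, since from what you describe the colliding entries are the ones being overwritten. Below is a minimal diagnostic sketch in Java, assuming the 1.5.x export bundle layout (charts/*.yaml and datasets/<database>/*.yaml inside the ZIP) and SnakeYAML on the classpath; the class name and paths are illustrative, not part of Superset.

import org.yaml.snakeyaml.Yaml;

import java.util.Enumeration;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Lists every chart and dataset (name + uuid) found in a Superset export
// bundle so that collisions between two bundles can be spotted before import.
public class BundleInspector {
    public static void main(String[] args) throws Exception {
        try (ZipFile bundle = new ZipFile(args[0])) { // e.g. dashboard_export.zip
            Yaml yaml = new Yaml();
            Enumeration<? extends ZipEntry> entries = bundle.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                String path = entry.getName();
                boolean isChart = path.contains("/charts/") && path.endsWith(".yaml");
                boolean isDataset = path.contains("/datasets/") && path.endsWith(".yaml");
                if (!isChart && !isDataset) {
                    continue;
                }
                // Chart YAMLs carry slice_name, dataset YAMLs carry table_name;
                // both carry a uuid.
                Map<String, Object> doc = yaml.load(bundle.getInputStream(entry));
                Object name = isChart ? doc.get("slice_name") : doc.get("table_name");
                System.out.printf("%-7s %-30s uuid=%s (%s)%n",
                        isChart ? "chart" : "dataset", name, doc.get("uuid"), path);
            }
        }
    }
}

Running this once per bundle and diffing the output shows which charts/datasets share a name (or UUID) across the two instances; renaming those objects in Instance2 to unique names before exporting is a common workaround to keep them from clobbering each other.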

Related

Where can we create queries in a Kibana dashboard?

If we need to create queries in a Kibana dashboard without making any codebase changes, where can we do it? Is there any place in Kibana to create queries? Please let me know if anyone knows where. I have a big problem in our project: creating a dashboard for the logs.
See either https://www.elastic.co/guide/en/kibana/current/kuery-query.html or https://www.elastic.co/guide/en/kibana/current/lucene-query.html.
https://www.elastic.co/guide/en/kibana/current/save-load-delete-query.html may also be relevant.

How to get notification for updated IP in grafana datasource?

I am using Prometheus as the datasource for a Grafana dashboard, and I add the Mesh IP as the URL of the default datasource. Whenever Grafana runs, it creates grafana.db, which contains all of the information related to the datasource. Users need to be able to change the default URL of the datasource. Up to this point, everything works very well.
My problem: when I change the IP of the default datasource and then run the container again, Grafana picks up the default URL instead of the URL last saved in grafana.db. I want it to read the default datasource IP from grafana.db if the file is available, and otherwise fall back to the default Mesh IP.
I can think of two different approaches for this:
Calling some queries using Postgres.
Getting notified from the GUI whenever the user changes the URL, and updating that URL in a variable.
I am completely lost on how to solve this problem. Please help me solve it using the approaches mentioned above, or any other.
Thanks in advance.
grafana.db reverts to the old default URL because the data is not being persisted across container restarts.
For data persistence, you need to point Grafana at an external database: install another DB outside Docker and configure it in the [database] section of grafana.ini, as described here: database_configuration
Also look at provisioning

Open Kibana Console with queries from JSON file?

Is it possible to save a bunch of queries into a single JSON file to import in Kibana Console?
I know there's an option to save a single query[2], and that the Kibana Console is based on local storage, but I would like to load queries based on parameters, such that changing the params (e.g. load_from=filename.json) loads a different set of queries.
For example, when I open http://localhost:5601/app/kibana#/dev_tools/console?load_from=filename.json, it should open the Kibana console with ES queries from the file.
EDIT: As a workaround, it's possible to do this with Postman API Client or similar API clients.
Solution:
EDIT 2 on 22/02/2022: Kibana Spaces is the answer. It lets you organize dashboards and other saved objects into meaningful categories[3]. Whenever you load http://localhost:5601/ it lets you choose the space you want to work in. Having multiple browser tabs open with different spaces should work for most cases.
[2] https://www.elastic.co/guide/en/kibana/master/save-load-delete-query.html
[3] https://www.elastic.co/guide/en/kibana/master/xpack-spaces.html
Unfortunately, that's not possible yet.
Elastic is (supposedly) working on a new Kibana feature (tabbed console panes, #10095) that will provide better support for organizing code in the Dev Tools application. The issue has been open for a while and not much seems to be happening, so we'll see.
The release date of that feature is not known yet.

Store infrequently changing info in Spring App

I am working on a microservice (Spring Boot) that needs to store some static information that changes infrequently (once per quarter). The data (below) is about company reports and looks like:
reportId#1: "frequency":"daily", "to":"some email ids"
reportId#2: "frequency":"weekly", "to":"some email ids"
As you can see, an entry in the data is basically a report ID, and the associated attributes are the report frequency and the receiver's email IDs.
My question is: what is the best place to store this information? I have some thoughts, and here are my views.
a) A NoSQL DB like MongoDB seems a good option. I could create a collection, store the data there, and retrieve it once during app startup. But is creating a collection just to store this static info a good choice?
b) Redis seems another good option. I could create a template for the above dataset and store it there, then query Redis by reportId to retrieve the frequency and the senders list.
c) Store it in a file on the classpath and load it at app startup. The downside is that I would have to redeploy the app with the new file whenever the report listing changes. I believe externalizing this information to either Mongo or Redis is a better option.
d) The app runs in AWS, so I could even store this in a file in an S3 bucket.
I would like to know your views.
Since the config will only change once a quarter, the overhead of a database is not required. You should consider Apache Commons Configuration. It will allow you to load config changes from files without the need for an application restart.
http://commons.apache.org/proper/commons-configuration///userguide/howto_reloading.html
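For illustration, here is a minimal sketch of that reloading setup with Commons Configuration 2, following the pattern from the user guide linked above. The file name reports.properties and keys like report.1.frequency are assumptions chosen to match the report data in the question, not anything the library mandates.

import org.apache.commons.configuration2.FileBasedConfiguration;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;

import java.io.File;
import java.util.concurrent.TimeUnit;

public class ReportConfig {
    public static void main(String[] args) throws Exception {
        // reports.properties might contain, e.g.:
        //   report.1.frequency=daily
        //   report.1.to=alice@example.com,bob@example.com
        ReloadingFileBasedConfigurationBuilder<FileBasedConfiguration> builder =
                new ReloadingFileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class)
                        .configure(new Parameters().fileBased()
                                .setFile(new File("reports.properties")));

        // Poll the file for changes once a minute; no redeploy or restart needed.
        PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(
                builder.getReloadingController(), null, 1, TimeUnit.MINUTES);
        trigger.start();

        // Always fetch through the builder: after a reload, the next call to
        // getConfiguration() returns the updated values.
        FileBasedConfiguration config = builder.getConfiguration();
        System.out.println(config.getString("report.1.frequency"));
    }
}

The one thing to watch is that callers re-fetch the configuration from the builder rather than caching a single Configuration instance; otherwise they will keep reading the pre-reload values.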

How do I properly import a project into ChainBuilder?

When I import a project into ChainBuilder and re-harvest, there are still modules marked as "not available". The project does not run properly and some visualizations are not shown. Even the visualizations that do load don't show the proper content.
When ChainBuilder exports a project, it exports only the modules, their connections, and their settings, but not the data. Each module is identified by a unique key. While harvesting after an import, ChainBuilder searches the current database of services and matches services with the same key to the imported modules. If a module doesn't match any service, it remains marked as not available. In that case you need to find the required services (with the matching keys), add them to the system, and re-harvest.
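As an illustration only, the matching step described above boils down to something like the following sketch; the Module type, the key field, and the harvest method are hypothetical stand-ins for the described behavior, not ChainBuilder's actual API.

import java.util.List;
import java.util.Map;

// Hypothetical stand-in for an imported module: the unique key travels with
// the export, while the availability flag is recomputed on every harvest.
class Module {
    final String key;
    boolean available = false;
    Module(String key) { this.key = key; }
}

public class Harvester {
    // Match each imported module against the services currently registered in
    // the database; any module whose key has no match stays "not available".
    static void harvest(List<Module> imported, Map<String, Object> servicesByKey) {
        for (Module m : imported) {
            m.available = servicesByKey.containsKey(m.key);
            if (!m.available) {
                System.out.println("Module " + m.key
                        + " not available: add the matching service and re-harvest.");
            }
        }
    }
}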
Since the data is not transferred automatically during the import, the workflows that provide the appropriate data have to be rerun. There should be some starting points in the workflow that reload all the data and restore the necessary datasets into the workflows, and therefore also into the visualizations. See if you can find a button, text field, or dropdown box that starts that process. Sometimes more than one button/input field has to be triggered.
