Tableau on Splunk... Possible? - business-intelligence

My BI team would like to access our Splunk data from Tableau; we don't want to use Splunk's visualizations and would rather use our own BI tools...
We tried the Splunk connectors, but that failed, and the volume of data is too big to display in Tableau... Is there any solution to this problem? How can this be achieved, and what are your recommendations on this subject?
Thank you.

Yes, this is possible. You can use the Splunk SDK to pull your data out of Splunk and feed it into Tableau.
We use this approach with Qlik Sense.
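Not the poster's exact setup, but for reference, exporting search results with the Splunk SDK for Python usually looks something like this minimal sketch (the host, credentials, SPL query, and output file are placeholders; Tableau can then open the CSV):

```python
import csv
import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (credentials are placeholders).
service = client.connect(
    host="splunk.example.com", port=8089,
    username="admin", password="changeme")

# Run a blocking search job; adjust the SPL query to your data.
job = service.jobs.create(
    "search index=web sourcetype=access_combined | stats count by status",
    exec_mode="blocking")

# Stream the results and write them to a CSV that Tableau can open.
with open("splunk_export.csv", "w", newline="") as f:
    writer = None
    for row in results.ResultsReader(job.results(count=0)):
        if isinstance(row, dict):  # skip diagnostic Message objects
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=row.keys())
                writer.writeheader()
            writer.writerow(row)
```

Aggregating in SPL (as in the stats command above) before exporting also works around the "too much data for Tableau" problem, since only the summarized rows leave Splunk.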

Related

Can I use native Parquet file tools in an Azure Logic App?

Hi there, I need to convert a Parquet file to CSV using only native Logic Apps tools. Is that even possible?
I researched similar issues and found how to use Azure Functions to convert the format, but that isn't a native Logic Apps tool.
There's a custom connector that will transform Parquet to JSON for you.
It will also let you filter and sort the data before it is returned.
Documentation can be found here: https://www.statesolutions.com.au/parquet-to-json/
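For completeness, the (non-native) Azure Functions route mentioned in the question typically boils down to a few lines; a minimal sketch using pandas, with file paths as placeholders:

```python
import pandas as pd

def parquet_to_csv(in_path: str, out_path: str) -> None:
    # pandas uses pyarrow (or fastparquet) under the hood to read Parquet.
    df = pd.read_parquet(in_path)
    df.to_csv(out_path, index=False)

parquet_to_csv("input.parquet", "output.csv")
```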

Open Kibana Console with queries from JSON file?

Is it possible to save a bunch of queries in a single JSON file and import them into the Kibana Console?
I know there's an option to save a single query[2], and the Kibana Console is backed by local storage, but I would like to load queries based on parameters, such that changing the params (e.g. load_from=filename.json) loads a different set of queries.
For example, when I open http://localhost:5601/app/kibana#/dev_tools/console?load_from=filename.json, it should open the Kibana Console with the ES queries from that file.
EDIT: As a workaround, it's possible to do this with the Postman API client or similar API clients.
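A minimal sketch of that workaround in script form, sending saved queries straight to Elasticsearch instead of the Console; the JSON file layout (name/index/body per entry) is made up for illustration:

```python
import json
import requests

ES = "http://localhost:5601".replace("5601", "9200")  # Elasticsearch, not Kibana

# Load a set of saved queries from a file and run them one by one.
with open("filename.json") as f:
    queries = json.load(f)

for q in queries:
    r = requests.post(f"{ES}/{q['index']}/_search", json=q["body"])
    print(q.get("name"), r.status_code, r.json().get("hits", {}).get("total"))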
Solution:
EDIT 2 (22/02/2022): Kibana Spaces is the answer. It lets you organize dashboards and other saved objects into meaningful categories[3]. Whenever you load http://localhost:5601/, it lets you choose the space you want to work in. Having multiple browser tabs, each with a different saved space, should work for most cases.
[2] https://www.elastic.co/guide/en/kibana/master/save-load-delete-query.html
[3] https://www.elastic.co/guide/en/kibana/master/xpack-spaces.html
Unfortunately, that's not possible yet.
Elastic is (supposedly) working on a new Kibana feature (tabbed console panes #10095) that will provide better support for organizing code in the Dev Tools application. The issue has been open for a while and not much seems to be happening, so we'll see.
The release date of that feature is not yet known.

Datadog data validation between on-prem and cloud database

Very new to Datadog and in need of some help. I have written two SQL queries (one for an on-prem database and one for a cloud database), and I would like to run those queries through Datadog, display the results, and validate that the daily results fall within an expected variance between the two systems.
I have already set up Datadog in the cloud environment and believe I should use DogStatsD to create a custom metric, but I am pretty lost as to how to incorporate my SQL queries in the code that creates the metric for eventual display on a dashboard. Any help would be greatly appreciated!
You probably want to use the MySQL integration and configure its 'custom queries' option: https://docs.datadoghq.com/integrations/faq/how-to-collect-metrics-from-custom-mysql-queries
You can follow those instructions after you configure the base integration: https://docs.datadoghq.com/integrations/mysql/#pagetitle (this will also give you a lot of useful metrics in addition to the custom queries you want to run).
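As a rough sketch, the relevant part of the Agent's conf.d/mysql.d/conf.yaml looks something like this; the query, metric name, and tag are placeholders, so check the docs above for the exact options your Agent version supports:

```yaml
instances:
  - host: localhost
    port: 3306
    username: datadog
    password: "<PASSWORD>"
    custom_queries:
      - query: SELECT COUNT(*) FROM orders WHERE created_at >= CURDATE()
        columns:
          - name: orders.daily_count   # metric name to emit
            type: gauge
        tags:
          - system:onprem              # lets you compare both systems side by side
```

Running the same query against the cloud database with a system:cloud tag gives you two series you can graph together and alert on when the variance drifts.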
As you mentioned, DogStatsD is a library you can import into whatever script or application you want to submit metrics from. But modifying the underlying code of your database isn't remotely common practice. Instead, it makes more sense to run a query against the database externally, take the results, and send them to Datadog. You could certainly write a Python script to do this, but the Datadog Agent already has this capability built in, so it's probably easier to just use that.
I am also assuming SQL refers to MySQL; there are other integrations for SQL Server, PostgreSQL, and pretty much every other implementation of SQL. The same pattern applies: configure the integration, then add an extra section to the config file that has the check run your queries.
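For completeness, if you did go the script route, a minimal DogStatsD sketch might look like this (connection details, query, and metric name are placeholders, and mysql-connector-python is assumed as the driver):

```python
import mysql.connector
from datadog import initialize, statsd

# Point DogStatsD at the locally running Datadog Agent.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

# Run the validation query against one system (details are placeholders).
conn = mysql.connector.connect(
    host="onprem-db", user="datadog", password="changeme", database="sales")
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM orders WHERE created_at >= CURDATE()")
(count,) = cur.fetchone()

# Submit the result as a gauge, tagged by system so the on-prem and
# cloud series can be compared on one dashboard.
statsd.gauge("orders.daily_count", count, tags=["system:onprem"])
```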

Document Management/Content Management with Search

I have a requirement for a document management system that handles PDF, Word, XLS, and PPT files with semantic search.
I started looking into Elasticsearch for this and stumbled on Apache Jackrabbit, and subsequently on OpenKM and Hippo. Even though core features like versioning exist in Jackrabbit, I need some pointers on how to go about this.
I need help navigating through the following concerns:
Should I just use Elasticsearch with the attachment plugin, or use Jackrabbit with a MySQL backend and have Elasticsearch index the documents?
Or should I use OpenKM?
Any pointers would be greatly appreciated. This will eventually require app integration.
Update: Logically, using Elasticsearch for search makes sense, but I figure I cannot use it as the primary data source. What are the best options for primary storage: Apache Jackrabbit with MySQL? Since all of these features come prebuilt in OpenKM, would that be the better option?
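For reference, the attachment route mentioned above is nowadays handled by Elasticsearch's ingest attachment processor (Apache Tika under the hood); a minimal sketch assuming the elasticsearch Python client 8.x, with index, pipeline, and field names as placeholders:

```python
import base64
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One-time setup: an ingest pipeline that runs the attachment processor
# to extract text and metadata from binary office documents.
es.ingest.put_pipeline(
    id="attachments",
    processors=[{"attachment": {"field": "data"}}],
)

# Index a PDF through the pipeline; the extracted text lands in the
# "attachment.content" field, which is then full-text searchable.
with open("report.pdf", "rb") as f:
    es.index(
        index="documents",
        pipeline="attachments",
        document={"data": base64.b64encode(f.read()).decode("ascii")},
    )
```

Note this makes the documents searchable; the binaries themselves would still live in whatever primary store you choose (Jackrabbit, a filesystem, object storage, etc.).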
What is it you want to achieve? Are you looking to manage making the documents available, or is it about managing the content in the documents? ES, or any search engine, is generally not a primary data source.
I can't give you any advice regarding OpenKM (neither for nor against). Whether Hippo is a match depends on your case, about which I'd need to know more.

Native application to query ELK?

I'm using Logstash, Elasticsearch and Kibana to process, store and visualize my logs.
My setup works fine, but now I'm looking for a new tool: before ELK, I used to read my logs in Notepad++ or glogg (I'm on Windows), and now I use only Kibana's Discover tab.
Do you think I can find a native application, something like a read-only Notepad++, that queries Elasticsearch and displays my logs the way those tools did?
The three features I actually need are :
querying logs from multiple sources,
for a specified date range,
and displaying them quickly in a concise, fast viewer.
I don't think it would be very complicated to implement, which is why I'm wondering if it already exists :)
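Such a viewer would mostly be a thin UI over a query like the following; a minimal sketch assuming the elasticsearch Python client, where the index pattern and field names ("host", "@timestamp", "message") are assumptions based on a default Logstash setup:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Fetch logs from several sources over a date range, newest first.
resp = es.search(
    index="logstash-*",
    query={
        "bool": {
            "filter": [
                {"terms": {"host.keyword": ["web-01", "app-01"]}},
                {"range": {"@timestamp": {"gte": "now-24h", "lte": "now"}}},
            ]
        }
    },
    sort=[{"@timestamp": {"order": "desc"}}],
    size=500,
)

# Print each hit as a plain log line, Notepad++-style.
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc.get("@timestamp"), doc.get("host"), doc.get("message"))
```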
