Is there any way to use/add/install my own custom query modules in Memgraph Cloud?

On my local Memgraph instance (Memgrpah platform running in Docker) I've created a few Custom query modules. I'd like to use them at Memgraph Cloud. Can I add them to Cloud somehow?

At the moment it is not possible to do that. You can go to the Memgraph repo and open an issue to request this feature.

Related

Any best way to create an automated Kibana snapshot to GCP storage as I am using an older version of Kibana

Is there any best way to create an automated Kibana snapshot to GCP storage? I am using an older version of Kibana (7.7.1) and do not currently have any automated backup.
Elasticsearch has snapshot lifecycle management (SLM), with a UI in Kibana, that helps you do this; you need to run with at least a Basic license.
Here is a tutorial; you could also directly use the SLM API to create and automate this process, along with index lifecycle management (ILM).
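As a rough sketch of the API route (assuming the repository-gcs plugin is installed and a GCS service-account key is in the Elasticsearch keystore; the repository, bucket, and policy names below are placeholders), registering a GCS snapshot repository and a daily SLM policy could look like this:

curl -X PUT "http://localhost:9200/_snapshot/gcs_repo" -H 'Content-Type: application/json' -d '
{
  "type": "gcs",
  "settings": { "bucket": "my-kibana-snapshots", "client": "default" }
}'

curl -X PUT "http://localhost:9200/_slm/policy/daily-snapshots" -H 'Content-Type: application/json' -d '
{
  "schedule": "0 30 1 * * ?",
  "name": "<daily-snap-{now/d}>",
  "repository": "gcs_repo",
  "config": { "indices": ["*"], "include_global_state": true },
  "retention": { "expire_after": "30d", "min_count": 5, "max_count": 50 }
}'

The schedule is a cron expression (daily at 01:30 here), and the retention block is what replaces a hand-rolled cleanup job.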

How to monitor an Elasticsearch cluster on Elastic Cloud with Datadog?

We have an Elasticsearch cluster deployed to Elastic Cloud and would like to send monitoring/health metrics to Datadog. What is the best way to do that?
It seems like our options are:
Installing the Datadog agent binary via the plugins upload
Using Metricbeat -> Logstash -> datadog_metrics output
You can deploy the Datadog agent in a container/instance that you manage and then configure it according to these instructions to gather metrics from the remote Elasticsearch cluster hosted on Elastic Cloud. You need to create a conf.yaml file in the elastic.d/ directory and provide the required information (Elasticsearch endpoint/URL, username, password, port, etc.) for the agent to be able to connect to the cluster. You may find a sample configuration file here.
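As a rough sketch (the endpoint and credentials are placeholders), the elastic.d/conf.yaml could look something like this:

init_config:

instances:
  - url: https://my-cluster.es.us-east-1.aws.found.io:9243
    username: datadog-monitoring
    password: <PASSWORD>
    cluster_stats: true
    pending_task_stats: true

The url should be the Elasticsearch endpoint shown in the Elastic Cloud console, and the user only needs monitoring permissions on the cluster.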
As George Tseres mentioned above, the way I had to get this working was to set up collection on a separate instance (through Docker) and then configure it to read the specific Elastic Cloud instances.
I ended up making this: https://github.com/crwang/datadog-elasticsearch, building that docker image, and then pushing it up to AWS ECR.
Then, I spun up a Fargate service / task to run the container.
I also set it to run locally with docker-compose as a test.
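As a rough local-test sketch (the image tag, API key, and file paths are assumptions, not values from the repo above), a docker-compose service for the agent with the elastic check config mounted in could look like:

version: "3"
services:
  datadog-agent:
    image: datadog/agent:7
    environment:
      - DD_API_KEY=<YOUR_DATADOG_API_KEY>
      - DD_SITE=datadoghq.com
    volumes:
      # mount the elastic check config (conf.yaml) into the agent's conf.d directory
      - ./conf.d/elastic.d/conf.yaml:/etc/datadog-agent/conf.d/elastic.d/conf.yaml:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro

Then docker-compose up should start the agent reporting the Elasticsearch metrics to Datadog.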

Unable to get the dashboard of IBM WebSphere MQ in Azure Kubernetes

We are trying to spin up a stateful MQ queue manager in an Azure Kubernetes cluster, with Azure Files mounted as persistent storage for data. Here is the link we followed. We exposed the service type as LoadBalancer, as shown in the command below.
helm install stable/ibm-mqadvanced-server-dev --version 3.0.1 --set service.type=LoadBalancer,security.initVolumeAsRoot=true,license=accept
By default, it uses the default storage class, which is Azure Disk. Here I want to use Azure Files as the persistent storage, so how should I pass my Azure Files name? Also, we are able to run the pod successfully without any restarts, but we are unable to access its web interface, so we don't know where exactly the issue arises when accessing the service.
The GitHub repo you've linked specifically mentions dataPVC.storageClassName under configuration. This is used to define the storage class. If you don't have a storage class for Azure Files (I think it doesn't exist by default), you'd need to create one and then reference it, so the chart would use that class.
How to set it up: here
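As a rough sketch (the class name, SKU, and mount options are assumptions to adapt, not values from the chart), an Azure Files storage class saved as, say, azurefile-storageclass.yaml might look like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
mountOptions:
  - dir_mode=0777
  - file_mode=0777

Then create the class and point the chart at it via the value mentioned above:

kubectl apply -f azurefile-storageclass.yaml
helm install stable/ibm-mqadvanced-server-dev --version 3.0.1 --set license=accept,service.type=LoadBalancer,security.initVolumeAsRoot=true,dataPVC.storageClassName=azurefile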

Databricks notebook integrated mlflow artifact location and retention

Currently, by default, a notebook run will create an experiment ID, but the artifact location points to something under dbfs:/databricks/mlflow/{experiment id}. Is there a way to change this in the default experiment creation? We would like to manage the storage outside Databricks.
How long is the default TTL for experiment runs and metrics? Is it configurable, and if so, how?
You can use mlflow_set_experiment('<PATH>') to specify where you want your runs and all of their contents to be logged. See the docs here.
If you are working on Databricks and want to log to a particular blob storage, you can mount the blob storage to Databricks File System (DBFS) and point MLflow to it when you set the experiment.
If you are talking about running it in Databricks and directly logging the results locally, I don't think you can do that. However, you can use GitHub and MLflow Projects to develop on Databricks and then run locally, or vice versa.
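As a rough Python sketch of the mount-and-point approach (mlflow.set_experiment is the Python counterpart of mlflow_set_experiment; the experiment path, mount point, and storage details are placeholders):

import mlflow

# Placeholder values -- adjust to your workspace and storage account.
experiment_name = "/Users/me@example.com/managed-storage-experiment"
artifact_location = "dbfs:/mnt/my-blob-mount/mlflow-artifacts"

# One-off mount of the blob container (run inside a Databricks notebook, where dbutils exists):
# dbutils.fs.mount(
#     source="wasbs://<container>@<storage-account>.blob.core.windows.net",
#     mount_point="/mnt/my-blob-mount",
#     extra_configs={"fs.azure.account.key.<storage-account>.blob.core.windows.net": "<ACCESS-KEY>"})

# create_experiment lets you pin the artifact location explicitly; set_experiment alone
# falls back to the default dbfs:/databricks/mlflow/{experiment id} location.
if mlflow.get_experiment_by_name(experiment_name) is None:
    mlflow.create_experiment(experiment_name, artifact_location=artifact_location)
mlflow.set_experiment(experiment_name)

with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.9)      # metrics stay in the tracking backend
    mlflow.log_text("notes", "notes.txt")   # artifacts land under artifact_location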

Running Netflix Conductor with standalone Elasticsearch?

How do I configure Netflix Conductor to use a standalone Elasticsearch rather than the embedded Elasticsearch?
If you have a conductor-config.properties, just make sure you have these properties pointing to the Elasticsearch instance you have up and running:
workflow.elasticsearch.instanceType=EXTERNAL
workflow.elasticsearch.url=http://elasticsearch:9200
Then you should be able to run Conductor with that config:
java -jar conductor-server-2.15.0-SNAPSHOT-all.jar conductor-config.properties
Check out https://github.com/s50600822/conductor-cheat; inside the repo just do
docker-compose up
You can inspect this as an example, swapping the Elasticsearch container for your own and modifying the conductor-config.properties, which is copied in when you run docker-compose up.
Check out https://github.com/Netflix/conductor/blob/master/es5-persistence/src/main/java/com/netflix/conductor/dao/es5/index/ElasticSearchRestDAOV5.java for other options.
To add an external Elasticsearch, we need to follow the code changes mentioned in the link below:
https://github.com/Netflix/conductor/tree/master/es5-persistence
Then rebuild the jar and run the Conductor server again with the properties.
If you still get errors, I suggest following this link:
https://github.com/Netflix/conductor/issues/489
You can use a standalone installation of Elasticsearch 2 or Elasticsearch 5, because the associated support classes are already provided with the Netflix Conductor binary.
To configure it externally you have to do the following:
Install and configure a standalone Elasticsearch. By default the installation exposes two ports: 9200 (HTTP) and 9300 (TCP).
Update the server.properties file with the host and port so that communication happens with the standalone instance of Elasticsearch.
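As a quick way to stand up such a standalone instance for testing (the image tag and security flag are assumptions; any Elasticsearch 5.x install will do), something like the following could work, after which the properties from the first answer point at http://<host>:9200:

docker run -d --name standalone-es5 -p 9200:9200 -p 9300:9300 -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.6.16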
Hope this helps.
