Elasticsearch as a service for GCP

As far as I'm aware, Google Cloud Platform offers no managed Elasticsearch solution the way AWS offers Amazon Elasticsearch Service.
I've opened a feature request for this on the issue tracker here, but I was wondering: is there a service somewhere on GCP that I'm missing? If not, are there plans to build an ES service on top of GCP? And if so, is there a general timeline for when it will be GA?

When configuring a cluster on ES Cloud (the cloud service operated by Elastic), you can choose between hosting it on AWS or on GCP. If you pick GCP, the cluster is fully managed by Elastic on GCP.
This is a paid offering (but so is Amazon Elasticsearch Service), and there is a 14-day free trial so you can see what it looks like.
Also worth reading:
https://www.elastic.co/blog/hosted-elasticsearch-services-roundup-elastic-cloud-and-amazon-elasticsearch-service
https://www.elastic.co/aws-elasticsearch-service

Thank you for creating a feature request!
Regarding Elasticsearch on GCP, I am not 100% sure it applies to your case, but there is a solution on the Google Marketplace: Elasticsearch Service on Elastic Cloud, offered through GCP. Check it out and see if you can use it.

Related

Cloud Foundry logs to Elastic SaaS

Cloud Foundry's documentation on third-party log management services does not mention the Elastic SaaS service:
https://docs.cloudfoundry.org/devguide/services/log-management-thirdparty-svc.html
So I was wondering whether anyone has done this, and how?
I know one way is to run a Logstash instance in CF, feed the syslog drain to it, and then ship the logs to Elastic. But is there a direct option that skips deploying Logstash on CF? (A sketch of what I mean is below.)
P.S. We also log using the ECS format.
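For what it's worth, since our logs are already ECS-formatted JSON, the most direct route I can think of is shipping events straight from the app to the Elastic SaaS deployment with the official client, skipping Logstash entirely. A minimal sketch, assuming an Elastic Cloud deployment; the cloud ID, API key, and index name are placeholders:

```python
# Sketch: send an ECS-formatted log event straight to Elastic Cloud,
# with no Logstash deployment on CF. Cloud ID, API key, and index name
# are hypothetical placeholders.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch(
    cloud_id="my-deployment:dXMtZWFzdC0x...",  # from the Elastic Cloud console
    api_key="base64-encoded-api-key",
)

ecs_event = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "log.level": "info",
    "message": "order created",
    "service.name": "checkout",
    "ecs.version": "1.12.0",
}

# 8.x client shown; on the 7.x client use body=ecs_event instead.
es.index(index="logs-cf-app", document=ecs_event)
```

The obvious trade-off is that every app then needs Elastic credentials and network egress, which is exactly what a central Logstash (or a syslog drain) would avoid.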

Use Kafka Connect with Azure Event Hubs and/or AWS Kinesis/MSK to send data to Elasticsearch

Has anyone used Kafka connect with one or more of the following cloud streaming services?
AWS Kinesis
AWS MSK
Azure Event Hubs
FWIW, we're looking to send data from Kafka to Elasticsearch without an additional component such as Logstash or Filebeat.
At first I thought we could only do this using the Confluent platform, but then I read that Kafka Connect is an open-source Apache Kafka component; Confluent would only be needed if we wanted one of the proprietary connectors. Given that the Elasticsearch Sink connector is the only one we need (at least for now) and it is a community connector (see here, and here for licensing info), we might be able to do this with one of the AWS/Azure streaming services, assuming they support it. (Note: AWS or Azure represents the path of least resistance, as the company I work for already has vendor relationships with both AWS and Microsoft. I'm not saying we won't use Confluent or migrate to it at some stage, but for now Azure/AWS is going to be easier to get across the line.)
I found a Microsoft document implying that Azure Event Hubs can be used with Kafka Connect, even though Event Hubs differs a bit from open-source Kafka. I'm not sure about AWS Kinesis or MSK; I assume MSK would be fine since it runs actual Kafka, but I'm not certain. Any guidance/blogs/articles would be much appreciated. For concreteness, a sketch of the connector setup I have in mind is below.
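What I'm picturing is a self-managed Kafka Connect worker (pointed at whichever Kafka-compatible service we pick) with the community Elasticsearch Sink connector registered through the Connect REST API. The worker URL, topic, and Elasticsearch endpoint below are placeholders:

```python
# Sketch: register the community Elasticsearch Sink connector with a
# self-managed Kafka Connect worker via its REST API. Worker URL, topic
# name, and ES endpoint are hypothetical placeholders.
import requests  # pip install requests

connector = {
    "name": "es-sink",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "orders",
        "connection.url": "https://my-es-cluster:9200",
        "key.ignore": "true",     # derive document IDs from topic+partition+offset
        "schema.ignore": "true",  # let Elasticsearch infer mappings from the JSON
        "tasks.max": "2",
    },
}

resp = requests.post("http://my-connect-worker:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json())
```

My understanding is that with Event Hubs the worker's bootstrap.servers would point at the Event Hubs Kafka endpoint, with MSK it is plain Kafka so nothing changes, and Kinesis would be out since it does not speak the Kafka protocol, but corrections are welcome.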
Cheers,

Deploying jaeger on AWS ECS with Elasticsearch

How should I go about deploying Jaeger on AWS ECS with Elasticsearch as the backend? Is it a good idea to use the Jaeger all-in-one image, or should I use separate images for each component?
While I didn't find an official Jaeger reference for this, I believe the Jaeger all-in-one image is not intended for production use. It makes a single container a single point of failure, so it is better to run a separate container for each Jaeger component (if one goes down for some reason, the others can continue to operate).
I recently wrote a blog post about hosting Jaeger on AWS with the AWS Elasticsearch (OpenSearch) service. While it uses the all-in-one image, it is still useful for getting a general idea of how to go about this.
To outline the process in general (it is described in detail in the post):
Create an AWS Elasticsearch cluster
Create an ECS cluster (running on EC2)
Create an ECS task definition configured with a Jaeger all-in-one image and the Elasticsearch URL from step 1 (see the sketch after this list)
Create an ECS service that runs the task definition
Make sure the security groups on your EC2 instances allow access to the Jaeger ports as described here
Send spans to your Jaeger endpoint via the OpenTelemetry SDK
View your spans in the hosted Jaeger UI (your-ec2-url:16686)
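To make step 3 concrete, here is a rough boto3 sketch of such a task definition; the family name, region, image tag, Elasticsearch URL, and sizes are placeholders, and the ports are Jaeger's defaults:

```python
# Sketch: register an ECS task definition running the Jaeger all-in-one
# image backed by Elasticsearch. Family name, region, ES URL, and sizes
# are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="jaeger-all-in-one",
    containerDefinitions=[
        {
            "name": "jaeger",
            "image": "jaegertracing/all-in-one:1.35",
            "memory": 1024,
            "essential": True,
            "environment": [
                # Point Jaeger at the AWS Elasticsearch domain from step 1.
                {"name": "SPAN_STORAGE_TYPE", "value": "elasticsearch"},
                {"name": "ES_SERVER_URLS",
                 "value": "https://my-es-domain.us-east-1.es.amazonaws.com"},
                # Needed on recent Jaeger versions to accept OTLP directly.
                {"name": "COLLECTOR_OTLP_ENABLED", "value": "true"},
            ],
            "portMappings": [
                {"containerPort": 16686, "hostPort": 16686},  # Jaeger UI
                {"containerPort": 4317, "hostPort": 4317},    # OTLP gRPC from the SDK
            ],
        }
    ],
)
```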
The all-in-one image is a useful tool in development for testing your work locally. For production deployment it is very limiting: to handle a potentially large volume of traffic you will want to scale parts of your infrastructure independently.
I would recommend deploying multiple jaeger-collectors configured to write to the ES cluster, then running a jaeger-agent as a sidecar next to each app or service that emits telemetry. Each agent can be configured to forward to any of a list of collectors, which adds some extra resilience (a sketch of such a sidecar is below).
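A rough sketch of what that agent sidecar could look like as an ECS container definition (it would sit in the same task as the app container; the collector host names are placeholders):

```python
# Sketch: a jaeger-agent sidecar container definition that forwards to a
# list of collectors. Collector host names are hypothetical placeholders;
# this dict goes into the task's containerDefinitions next to the app.
agent_sidecar = {
    "name": "jaeger-agent",
    "image": "jaegertracing/jaeger-agent:1.35",
    "memory": 256,
    "essential": False,
    "command": [
        # Comma-separated collector list; the agent fails over between
        # them, which is what adds the extra resilience.
        "--reporter.grpc.host-port=jaeger-collector-1:14250,jaeger-collector-2:14250",
    ],
    "portMappings": [
        {"containerPort": 6831, "protocol": "udp"},  # compact Thrift from tracer SDKs
    ],
}
```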

Best way to create automated Kibana snapshots to GCP storage on an older version of Kibana

Is there a good way to create automated Kibana snapshots to GCP storage, given that I am on an older version of Kibana (7.7.1)? I also do not have any automated backups currently.
Elasticsearch has snapshot lifecycle management (SLM), surfaced in Kibana, that helps you do this; it is available with the basic license and is present in 7.7.1.
Here is a tutorial; you could also use the SLM API directly to create and automate this process, together with index lifecycle management (ILM). A sketch of the API route is below.
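Those two API calls with the 7.x Python client, assuming the repository-gcs plugin is installed on every node and the GCS bucket already exists; the repository, bucket, policy names, and schedule are placeholders:

```python
# Sketch: register a GCS snapshot repository, then an SLM policy that
# snapshots nightly. Names, credentials, and schedule are hypothetical
# placeholders; requires the repository-gcs plugin on every node.
from elasticsearch import Elasticsearch  # pip install "elasticsearch>=7,<8"

es = Elasticsearch("https://my-cluster:9200", http_auth=("elastic", "changeme"))

# 1. Snapshot repository backed by a GCS bucket.
es.snapshot.create_repository(
    repository="my_gcs_repo",
    body={"type": "gcs", "settings": {"bucket": "my-snapshot-bucket"}},
)

# 2. SLM policy: snapshot daily at 01:30 UTC, keep 30 days of snapshots.
es.slm.put_lifecycle(
    policy_id="nightly-snapshots",
    body={
        "schedule": "0 30 1 * * ?",
        "name": "<nightly-snap-{now/d}>",
        "repository": "my_gcs_repo",
        "config": {"include_global_state": True},
        "retention": {"expire_after": "30d", "min_count": 5, "max_count": 50},
    },
)
```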

How to monitor search queries on Elasticsearch? I want to know what my users are searching for

I am using the Elastic Cloud hosted service for my Elasticsearch and Kibana instances. I have already asked the Elastic Cloud team for help with the approach in https://www.elastic.co/blog/monitoring-the-search-queries, but that article is only relevant to on-premise clusters.
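In case it helps anyone landing here: one self-service way to capture what users search for is the search slow log, which can be made to log every query by dropping its threshold to zero on the indices of interest. A sketch with the Python client, assuming a 7.x cluster; the index name and credentials are placeholders, and on Elastic Cloud (if I understand its logging correctly) the slow-log entries arrive via the deployment's log delivery rather than as files you can read directly:

```python
# Sketch: capture every search on an index by setting the search
# slow-log threshold to zero. Index name and credentials are
# hypothetical placeholders; the settings shown are the 7.x names.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://my-cluster:9200", http_auth=("elastic", "changeme"))

es.indices.put_settings(
    index="products",
    body={
        "index.search.slowlog.threshold.query.info": "0s",  # log all queries
        "index.search.slowlog.threshold.fetch.info": "0s",  # log all fetches
        "index.search.slowlog.level": "info",
    },
)
```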
