How can I import data from MySQL (AWS RDS) using Logstash on Elastic Cloud via AWS VPC?

I'm trying to import some data from AWS RDS into Elasticsearch on hosted Elastic Cloud
- it's not the AWS Elasticsearch Service.
What I want to do is this:
What: Data import
From: AWS RDS MySQL
To: Elasticsearch in Elastic Cloud
How: Using Logstash on Elastic Cloud
However, my AWS RDS MySQL instance is inside an AWS VPC, and Elastic Cloud doesn't provide a static IP address (please see the Elastic Cloud FAQ).
So Logstash can't access AWS RDS MySQL while preserving the VPC's security rules.
In previous data transfers I would add the transferring host's IP address to the VPC whitelist; for this case, that can't be done.
I don't even know whether this approach is feasible.
How can I handle this case?

After some research, I concluded that there is no way to do this directly for now. However, there is a compromise: by running Logstash on an EC2 instance inside the Amazon VPC, Logstash can access AWS RDS MySQL, and with the Elastic Cloud credentials it can also send data to Elasticsearch in Elastic Cloud.
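A minimal pipeline sketch of that setup (assuming the logstash-input-jdbc plugin; the endpoints, credentials, and driver path are placeholders):

input {
  jdbc {
    # The MySQL JDBC driver jar must be installed separately (path is a placeholder)
    jdbc_driver_library => "/opt/logstash/mysql-connector-java-8.0.30.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    # Reachable because Logstash runs on an EC2 instance inside the same VPC
    jdbc_connection_string => "jdbc:mysql://my-rds-endpoint.us-east-1.rds.amazonaws.com:3306/mydb"
    jdbc_user => "myuser"
    jdbc_password => "mypassword"
    statement => "SELECT * FROM mytable"
    schedule => "* * * * *"
  }
}
output {
  elasticsearch {
    # Cloud ID and credentials come from the Elastic Cloud console
    cloud_id => "my-deployment:BASE64_CLOUD_ID_GOES_HERE"
    cloud_auth => "elastic:password_goes_here"
    index => "mytable-import"
  }
}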

Related

elasticsearch: move data from local device to Elastic Cloud

Is there any way to copy all the data of an index from Elasticsearch on my computer to Elastic Cloud?
I'm working on localhost and now I want to migrate it to cloud.elastic.co.
Cheers!
You can do this with a snapshot of your local cluster into S3, then a restore of that snapshot on Elastic Cloud.
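In outline, that looks like the following sketch using the snapshot/restore REST API (the repository and bucket names are placeholders; the local cluster needs the repository-s3 plugin and S3 credentials configured):

# Register an S3 repository on the local cluster
PUT _snapshot/my_s3_repo
{
  "type": "s3",
  "settings": { "bucket": "my-snapshot-bucket" }
}

# Take a snapshot
PUT _snapshot/my_s3_repo/snapshot_1?wait_for_completion=true

# On the Elastic Cloud deployment, register the same bucket as a repository,
# then restore
POST _snapshot/my_s3_repo/snapshot_1/_restore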
Which cloud provider are you using? If it is AWS OpenSearch, it does not allow remote reindexing from a local Elasticsearch; it allows it only if the source is an Elasticsearch instance in the AWS cloud served over HTTPS.
If the data is critical, you can pull the data and send bulk requests to Elasticsearch in the cloud. I had to do so.
You can write your own application, or you can send requests multi-threaded with a tool like JMeter.
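The bulk route boils down to batching documents into _bulk requests (a minimal sketch; the index name and fields are placeholders):

POST _bulk
{ "index": { "_index": "my-index" } }
{ "field1": "value1" }
{ "index": { "_index": "my-index" } }
{ "field1": "value2" }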

Cannot connect LogStash to AWS ElasticSearch "Attempted to resurrect connection to dead ES instance, but got an error"

I am building a setup which consists of AWS ElasticSearch (which includes both ElasticSearch and Kibana), LogStash and FileBeat. I have been following this tutorial, which explains how to set up a Logstash server for Amazon Elasticsearch Service and auth with IAM.
I am using an Ubuntu 18.04 EC2 m4.large instance to host both LogStash and FileBeat. I have provisioned all of my assets inside a VPC. So far, I have provisioned an AWS ES domain, an Ubuntu 18.04 EC2 and then installed LogStash inside that. Right now, I am ignoring FileBeat and I just want to connect my LogStash service to the AWS ES domain.
As per the tutorial, I have
Created an IAM Access Policy
Created Role logstash-system-es with "ec2.amazonaws.com" as trusted entity
Authorized the Role in my AWS ES domain dashboard
Installed LogStash and configured it as specified
(Here I entered the Access Key I am using and its ID into the output section, which looks roughly like the sketch after this list. However, I am not sure how the Role and an Access Key relate to each other)
Started LogStash and tailed the logstash-plain.log file to see the output
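For reference, the output section looks roughly like this (a sketch based on the logstash-output-amazon_es plugin; the endpoint, region, keys, and index name are placeholders):

output {
  amazon_es {
    hosts => ["vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    aws_access_key_id => "MY_ACCESS_KEY_ID"
    aws_secret_access_key => "MY_SECRET_ACCESS_KEY"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}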
When I check the output, it appears LogStash cannot connect to the ES domain. The following line repeats endlessly. (I have replaced the AWS ES domain name with AWSESDOMAIN.)
Attempted to resurrect connection to dead ES instance, but got an error.
{:url=>"https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/",
:error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError,
:error=>"Got response code '403' contacting Elasticsearch at URL 'https://vpc-AWSESDOMAIN.us-east-1.es.amazonaws.com:443/'"}
FYI, I configured my AWS ES domain with Fine-Grained Access Control when setting it up.
What seems to be the issue here? Is it related to Fine-Grained Access Control? Security groups? An IAM issue?

Can we connect ECS Instance with RDS instance in Alibaba Cloud?

I would like to know whether it is possible to connect an ECS instance with an RDS instance. If yes, please explain the process or share some resources about it.
Thank you!
Yes, you can connect the ECS instance to the RDS instance over the internet by using the public address of the RDS instance.
Did you mean your application on your ECS instance being able to connect to your RDS? Yes, of course.
(Source)
After you create your RDS instance, you'll have to configure a whitelist of IPs that can access it, and create your accounts and databases on RDS. Then you can connect to your RDS instance from your ECS instance, through your application or a client, using the internal endpoint (available only within your VPC) shown in the Basic Information of your RDS instance. If you need a public endpoint, you can apply for one.
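For example, from the ECS instance a standard MySQL client can reach the internal endpoint (the endpoint, user, and database below are placeholders):

mysql -h rm-xxxxxxxxxxxx.mysql.rds.aliyuncs.com -P 3306 -u myuser -p mydb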
Sure, you can connect your application on an ECS instance to your RDS instance.
You can check this documentation:
https://www.alibabacloud.com/help/product/26090.htm
or the documentation for a specific database:
ApsaraDB RDS for MySQL
ApsaraDB RDS for SQL Server
ApsaraDB RDS for PostgreSQL
ApsaraDB RDS for PPAS
ApsaraDB RDS for MariaDB TX

Not able to connect Amazon Aurora Serverless from SQL client

Today I created an Amazon Aurora Serverless cluster for PostgreSQL in us-west-2 and configured the VPC and security groups so that it should be publicly accessible. But I'm not able to connect to that cluster using the Aurora endpoint from my Navicat/PgAdmin4 desktop client. Then I tried from an EC2 instance in the same security group/VPC as the Aurora Serverless cluster, and it worked.
From the AWS forum:
You can't give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within a virtual private cloud (VPC) based on the Amazon VPC service.
Source: https://forums.aws.amazon.com/thread.jspa?messageID=862860&tstart=0
Seems it uses an internal AWS networking setup that currently only supports connections from inside a VPC, and it must be the same VPC where the serverless cluster is deployed.
So now my question is basically this:
Is there any workaround to connect to Aurora Serverless with a client like Navicat or PgAdmin4?
I found a cool hack that, with some tweaks, works perfectly for my development purposes; I know I won't need this in my production environment.
As we know, Aurora Serverless works only inside a VPC, so make sure you are attempting to connect to Aurora within the VPC and that the security group assigned to the Aurora cluster has the appropriate rules to allow access. As I mentioned earlier, I already have an EC2 instance, Aurora Serverless, and a VPC around both, so I can access the cluster from my EC2 instance but not from my local PC/local SQL client. To fix that, I did the two steps below.
1. To access it from any client (Navicat in my case):
a. First, add the GENERAL DB configuration: the Aurora endpoint host, username, password, etc.
b. Then add the SSH configuration: the EC2 machine's username, host IP, and .pem file path.
2. To access it from a project:
First I create an SSH tunnel from my terminal like this (it forwards local port 5555 to the Aurora endpoint's port 5432 via the EC2 host):
ssh ubuntu@my_ec2_ip_goes_here -i rnd-vrs.pem -L 5555:database-1.my_aurora_cluster_url_goes_here.us-west-2.rds.amazonaws.com:5432
Then I run my project with a DB configuration like this (test.php):
<?php
// Connect through the local end of the SSH tunnel (port 5555)
$conn = pg_connect("host=127.0.0.1 port=5555 dbname=postgres user=postgres password=password_goes_here");
if (!$conn) {
    echo "An error occurred.\n";
    exit;
}

// Other code goes here to get data from your database
$result = pg_query($conn, "SELECT * FROM brands");
if (!$result) {
    echo "An error occurred.\n";
    exit;
}

while ($row = pg_fetch_row($result)) {
    echo "Brand Id: $row[0] Brand Name: $row[1]";
    echo "<br />\n";
}
This question comes up over and over for multiple AWS services (most new ones are VPC-only by default). The short answer is: you can hack up something and expose the DB outside of the VPC, but it is not recommended for a production setup. Assuming this is for a dev setup, by all means try the recommendations from [1]. It is for Neptune, but you can do the exact same thing for Aurora.
[1] Connect to Neptune on AWS from local machine

Getting information about deployment from within an instance of AWS Elastic Beanstalk

My specific need is to get the list of EC2 instances in the deployment from within one of the instances.
I've tried using the AWS command line, for example aws elb describe-load-balancers, however it just gives details of all my load balancers. I know you can narrow it down with --load-balancer-names, but I just don't have access to that name from within the instance automatically.
Perhaps a file could be created on instance creation by placing something in .ebextensions?
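That idea could look something like the sketch below (a hypothetical .ebextensions config; it only captures the instance's own ID from the EC2 metadata service, so listing all instances in the deployment still needs an API call like the answer that follows):

# .ebextensions/instance-info.config
commands:
  01_save_instance_id:
    command: curl -s http://169.254.169.254/latest/meta-data/instance-id > /opt/instance-id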
You can do it in a two-step process using the AWS CLI.
First you get the endpoint for your Elastic Beanstalk application:
aws elasticbeanstalk describe-environments --query='Environments[?ApplicationName==`Your-application-name`].EndpointURL'
Then you use the endpoint to get the instances:
aws elb describe-load-balancers --query='LoadBalancerDescriptions[?DNSName==`load-balancer-end-point-from-previous-step`].Instances[0]'
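The two steps can also be combined into a small shell sketch (the application name is a placeholder; note the quoting so the shell doesn't interpret the JMESPath backticks):

ENDPOINT=$(aws elasticbeanstalk describe-environments \
  --query 'Environments[?ApplicationName==`Your-application-name`].EndpointURL' \
  --output text)
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[?DNSName=='$ENDPOINT'].Instances[0]" \
  --output text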
