I have an application deployed on Red Hat Linux with Oracle 12c.
As part of the application, there is an implementation of Oracle Database Change Notification: whenever there is a change in the database, a notification is sent back to the application server, and the application makes decisions based on it. In that environment everything works well.
Now we are migrating the application to an Amazon EC2 instance, with our Red Hat Linux box converted into an EC2 instance and the Oracle database migrated to the cloud. I don't know exactly what that migration involved, but I do have connection parameters and I'm able to connect through the JDBC driver from my application deployed on the EC2 instance.
However, the database change notification functionality is not working on EC2, and nothing can be traced from the application log.
An Oracle database in the AWS cloud is, I assume, Amazon RDS.
You need to understand the basics of how the application is deployed in the AWS cloud.
The EC2 instance (with the app server) is typically in a public subnet, with a security group allowing HTTP/HTTPS traffic.
The RDS instance is kept in a private subnet, with a security group attached that only allows incoming traffic from the EC2 instance (or the public subnet). This is the general scenario in most cases.
So the RDS security group only allows incoming traffic from EC2, but any traffic originating from your DB (outgoing traffic, such as a change notification pushed back to the application) has to be explicitly allowed by the security group attached to your RDS instance.
The same traffic needs to be allowed inbound by your EC2 instance's security group.
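To sketch what that could look like with the AWS CLI: with Oracle Database Change Notification, the JDBC driver opens a local listener port on the client and the database connects back to it (the port can be pinned via the driver's OracleConnection.NTF_LOCAL_TCP_PORT registration option). The security group IDs and the port number below are hypothetical placeholders:

APP_SG=sg-0123456789abcdef0   # security group on the EC2 app server (placeholder)
DB_SG=sg-0fedcba9876543210    # security group on the RDS instance (placeholder)
NOTIFY_PORT=47632             # local port the JDBC driver listens on (placeholder)

# Let the database open the callback connection to the app server:
aws ec2 authorize-security-group-ingress \
  --group-id "$APP_SG" \
  --protocol tcp \
  --port "$NOTIFY_PORT" \
  --source-group "$DB_SG"

# Security group egress is open by default; if you have restricted the
# outbound rules on the RDS security group, allow the same port out of
# DB_SG as well.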
I am not sure whether your specific feature is supported in AWS RDS at all; you should take a look at the AWS RDS Oracle docs.
Related
I'm trying to connect my local standalone MySQL instance with Cloud Data Fusion to create and test a data pipeline. I have deployed the driver successfully.
I have also configured the pipeline properties with the correct values for the JDBC string, username, and password, but connectivity isn't being established.
Connection String: jdbc:mysql://localhost:3306/test_database
I have also tried to test the connectivity via the data wrangling option, but that does not succeed either.
Do I need to bring both environments onto the same network by setting up a VPC and tunneling?
In your example, I see that you specified localhost in your connection string. localhost resolves only for services running on your own machine, so Cloud Data Fusion (running in GCP) will not be able to reach the MySQL instance (running on your machine). Hence the connectivity issue.
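As a quick sanity check, the connection string has to point at an address that is reachable from GCP. Assuming your machine were exposed to the internet on port 3306 (the 203.0.113.25 address below is a hypothetical placeholder), it would look like:

Connection String: jdbc:mysql://203.0.113.25:3306/test_database

In practice you would rarely expose MySQL directly like this; a tunnel or VPN, as discussed below, is the safer route.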
I highly recommend looking at this answer on SO that will help you set up a quick proof of concept.
I think your question is really about how to connect an on-premises environment to GCP's networking system, which groups Google Cloud instances and resources through the VPC model.
Given that GCP offers several connection methods within its hybrid-cloud concept, I would encourage you to learn the fundamentals of Cloud VPN, which establishes a secure connection between your VPN peer gateway and a Cloud VPN gateway by creating a VPN tunnel between the two.
There is even a dedicated chapter in the GCP documentation about the Data Fusion VPC peering implementation that might be helpful in your use case.
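To make that concrete, a minimal Classic VPN sketch with gcloud might look like the following; the names, region, peer address, and shared secret are all hypothetical placeholders, and a complete setup also needs forwarding rules for ESP and UDP 500/4500 plus routes for the tunnel traffic:

# Create the Cloud VPN gateway (name/network/region are placeholders).
gcloud compute target-vpn-gateways create my-vpn-gw \
  --network default \
  --region us-central1

# Create the tunnel to the on-premises VPN peer gateway.
gcloud compute vpn-tunnels create my-tunnel \
  --peer-address 203.0.113.10 \
  --shared-secret my-shared-secret \
  --target-vpn-gateway my-vpn-gw \
  --region us-central1 \
  --ike-version 2 \
  --local-traffic-selector 0.0.0.0/0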
Today I created an Amazon Aurora Serverless cluster for PostgreSQL in us-west-2 and set up the VPC and security groups so that it should be publicly accessible. But I'm not able to connect to that cluster using the Aurora endpoint from my Navicat/pgAdmin 4 desktop client. I then tried from an EC2 instance that is in the same security group/VPC as the Aurora Serverless cluster, and that worked.
From the AWS forum:
You can't give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within a virtual private cloud (VPC) based on the Amazon VPC service.
Source: https://forums.aws.amazon.com/thread.jspa?messageID=862860&tstart=0
It seems to use an internal AWS networking setup that currently only supports connections from inside a VPC, and it must be the same VPC where the serverless cluster is deployed.
So now my question is:
Is there any workaround to connect to Aurora Serverless with a client like Navicat or pgAdmin 4?
I found a cool hack that works perfectly for development purposes with some tweaks; I know I won't need this in my production environment.
As we know, Aurora Serverless works only inside a VPC, so first make sure you are attempting to connect to Aurora within the VPC and that the security group assigned to the Aurora cluster has the appropriate rules to allow access. As I mentioned earlier, I already have an EC2 instance, an Aurora Serverless cluster, and a VPC around both, so I can access the cluster from my EC2 instance but not from my local PC / local SQL client. To fix that, I did the two steps below.
1. To access from any client (Navicat in my case):
a. First, add the GENERAL DB configuration: the Aurora endpoint host, username, password, etc.
b. Then, add the SSH configuration: the EC2 machine username, host IP, and .pem key file path.
2. To access from my project:
First I create an SSH tunnel from my terminal, like this:
ssh ubuntu@my_ec2_ip_goes_here -i rnd-vrs.pem -L 5555:database-1.my_aurora_cluster_url_goes_here.us-west-2.rds.amazonaws.com:5432
Then I run my project with a DB configuration like this (test.php):
// Connect through the local end of the SSH tunnel (port 5555),
// which forwards to the Aurora cluster's port 5432.
$conn = pg_connect("host=127.0.0.1 port=5555 dbname=postgres user=postgres password=password_goes_here");
if (!$conn) {
    echo "An error occurred.\n";
    exit;
}

// Other code goes here to get data from your database.
$result = pg_query($conn, "SELECT * FROM brands");
if (!$result) {
    echo "An error occurred.\n";
    exit;
}

while ($row = pg_fetch_row($result)) {
    echo "Brand Id: $row[0] Brand Name: $row[1]";
    echo "<br />\n";
}
This question comes up over and over for multiple AWS services (most new ones are VPC-only by default). The short answer is: you can hack something up to expose the DB outside of the VPC, but it is not recommended for a production setup. Assuming this is for a dev setup, by all means try the recommendations from [1]. It is for Neptune, but you can do the exact same thing for Aurora.
[1] Connect to Neptune on AWS from local machine
I've now spent two days trying to create a serverless Aurora (MySQL-compatible) database and connect to it, and I just can't seem to get it to work. Supposedly I should have been able to get it up and running in five minutes.
In any case, I created an Aurora Serverless database in the US East (N. Virginia) region (us-east-1) and have been able to connect to it with the AWS Query Editor. I also have an EC2 server in the same region, and I have given the Aurora database the same security group (under RDS > Security Group); in that security group I have opened MYSQL/Aurora (TCP, 3306) to all sources. When I click the modify button on the database, there is also another (VPC) security group listed (rds-launch-wizard-4), which was created automatically. I located this one under my EC2 dashboard as well and gave it access on all ports from all sources (inbound), and to all ports (outbound). And there is a networking VPC & subnet group, which I don't know what to do with, if anything.
I try to connect to the database using this command-line command:
mysql -h hest2.cluster-xxxxx.us-east-1.rds.amazonaws.com -P 3306 -u root -p
It generates the error "ERROR 2003 (HY000): Can't connect to MySQL server on" from my EC2 instance, from my local computer, and from other online servers.
From the EC2 instance, try doing a telnet on the DB port to test if all your security group settings are applied correctly.
telnet hest2.cluster-xxxxx.us-east-1.rds.amazonaws.com 3306
If the connection does go through, then the issue is with your client code. Cross check that you have wired the right endpoint in your code.
If the telnet connection does not go through (I'm guessing that it will not), then it is guaranteed that your security group settings are not set correctly. In order to debug this further, we would need more details on the following (the CLI sketch after the list shows one way to pull them):
The list of VPC security groups associated with your cluster.
The details of each of these VPC security groups (you've mentioned that you've opened up everything, but I'd like to see the exact rules in place).
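If it helps, the following AWS CLI calls will pull exactly those details; hest2 is the cluster identifier from the question, and the security group ID is a placeholder for the IDs the first call returns:

# List the VPC security groups attached to the cluster:
aws rds describe-db-clusters \
  --db-cluster-identifier hest2 \
  --query 'DBClusters[0].VpcSecurityGroups'

# Dump the exact rules of each of those groups:
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0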
As for the laptop and other servers: if they are outside the VPC, it will not work. Aurora Serverless is accessible only from within the VPC as of now.
I am looking for advice/ideas on how to continuously deploy new features to a Spring Boot web application that is hosted on an AWS EC2 instance. My current workflow:
bootRepackage my application to create a war file.
Upload that file to AWS.
Add a new feature to my application.
bootRepackage again.
Remove the current war from AWS, and upload the new one.
This is obviously not a good workflow, as the application needs to be restarted, which could result in 1) downtime and 2) entries in the database being lost (if I were using Spring's default H2 database; I am not, I'm using a standalone SQL server, but I'm just making the point for this question), so I want to streamline it.
Is there any way to add a new feature to the current instance of the service on AWS? Is it possible to recompile the code "on the fly" to prevent the need to restart the application?
Is there any way to create a better setup that would allow me to just merge a new branch into master locally and push it, with the same instance staying in production but now serving the new feature?
Thank you in advance!
Update: is this really the correct answer?
If you are using a single AWS instance and deploying the application to an EC2 instance, assign an Elastic IP to that EC2 instance.
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
Deploy the new version of the application on another EC2 instance.
When the application is ready, reassign the Elastic IP from the existing EC2 instance to the new one.
Elastic IPs are the simplest way to implement the blue-green switch.
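For reference, the switch itself can be a single AWS CLI call; the allocation and instance IDs below are hypothetical placeholders:

# Repoint the Elastic IP at the instance running the new version.
# --allow-reassociation lets the address move off the old instance.
aws ec2 associate-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --instance-id i-0fedcba9876543210 \
  --allow-reassociation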
Can an Amazon EC2 instance process requests from, and return results to, an external client, which may be a browser or a non-browser application? (I know that the EC2 instance will require an IP address and must be able to create a socket and bind to a port in order to do this.)
I'm considering an Amazon EC2 instance because the server application is not written in PHP, Ruby or any other language that conventional web hosting services support by default.
Sure it will. Just set up the security group the right way to allow your clients to connect.
Take a look at this guide: Amazon Elastic Compute Cloud - Security Groups
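For example, assuming your server binds port 8080 and should accept connections from anywhere (the security group ID below is a hypothetical placeholder), the rule can be added with the AWS CLI:

# Open the application's port to all IPv4 clients:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0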
Also keep in mind: it's not possible to change the security group after you have created the EC2 instance; that is available for VPC instances only. See http://aws.amazon.com/vpc/faqs/#S2 for more information.