Containerized Laravel application that connects to a remote database

Good day everyone. I have a Laravel application that is supposed to connect to a remote MySQL database in production, and to ease deployment I am using Docker. I have set up a GitHub Actions workflow that is triggered when I push to the master branch; the workflow essentially runs a couple of tests, builds my app into an image, and pushes it to Docker Hub.
To avoid database connection issues when composer dump-autoload is run during the build process, I allowed connections from any host (changed bind-address to 0.0.0.0 in the MySQL config) and also set up the MySQL user to connect from any host. This seems to do the trick, but my concern is obviously exposing my database service to the entire world. Fortunately, it's possible to set up my own dedicated server for GitHub Actions, which means I can easily restrict my DB service to that host. Would that be the ideal solution, or is there a way to run the workflow without needing to connect to a database?

Try connecting to the remote database through an SSH tunnel:
ssh -N -L 3336:127.0.0.1:3306 [USER]@[REMOTE_SERVER_IP]
With this you do not need to publish MySQL to the world and can bind it to 127.0.0.1 on the remote host.
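For example, in a GitHub Actions job (or locally) the tunnel can be opened in the background before the build and Laravel pointed at the local end of it. This is only a rough sketch; the SSH key handling, the placeholder user/host, and the DB_* values in your .env are assumptions:
ssh -f -N -L 3336:127.0.0.1:3306 deploy@your-remote-server   # background tunnel (placeholder user and host)
export DB_HOST=127.0.0.1   # Laravel talks to the local end of the tunnel
export DB_PORT=3336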

Related

Does Heroku provide any VPN client like AWS Client VPN?

I want to host a database on Heroku and also a Django application. The problem is: to transfer data to my Heroku database I would need to be connected to a VPN. Does Heroku provide a way to connect to a VPN in order to access another database, like AWS Client VPN?
My infra would be like this:
Airflow running DAGs to pull data from an AWS database that requires a VPN connection to read from it. I would transfer the data from this AWS database to my Heroku database.
Is it possible?
Thank you.
Another thing I'm wondering is whether it is possible to connect Heroku to AWS Client VPN, in case Heroku does not have something similar or another way to do this step.
Yes, Heroku does provide a VPN option as part of Heroku Private Spaces and Shield Spaces.
Here is the link
https://devcenter.heroku.com/articles/private-space-vpn-connection

Communication error between host and Docker containers in a user-defined bridge network

I'm asking for help with a network setup in docker-compose.
My infrastructure is described in docker-compose; it runs a website based on the Laravel framework on a custom bridge network.
Problem:
I can't open any of the site's web pages except for index.php.
The host machine has an IP address from the subnet of the Docker containers (earlier, when everything worked, it had an IP address issued by a Wi-Fi router).
When the host machine communicates with a Docker container, a "TCP Reset" occurs, and "TCP retransmission" errors appear when communicating with the containers.
I'm attaching the following files:
docker-compose
Dockerfile
*.env
TCP/IP packet capture file between host system and containers.
Files on Google Drive:
https://drive.google.com/drive/folders/1qwohtogShwQ2hwauKOLtXDSmx-Qwg-mz?usp=sharing
The problem was not in the Docker container network.
The problem was solved by:
deleting the database tables (done manually via the PhpStorm IDE)
generating a new key for the Laravel application
running the table migrations
importing data into the tables (via database dumps) so the application could work
Command to generate the application key:
php artisan key:generate
Table migration command:
php artisan migrate
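Put together, the fix was roughly the sequence below; the database name, user, and dump file are placeholders, and in this case the tables were dropped and the dumps imported through the IDE rather than the mysql client:
# drop the broken tables first (done via PhpStorm here)
php artisan key:generate                          # regenerate the application key
php artisan migrate                               # recreate the tables
mysql -u db_user -p laravel_db < tables_dump.sql  # re-import the table data (placeholder names)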

Azure DevOps Pipeline connect to VPN using command line

We have CD pipelines set up in Azure to deploy to App Services and it all works well, but we want to add a stage to automate our Cypress test process. The problem we have is that our test environment is only accessible via VPN, which is fine from local machines as we run the VPN client.
Does anyone know how to include a command within the YAML pipeline to establish a VPN connection from the pipeline host, which would allow our Cypress tests to run? I'm assuming this would require a command-line connection script.
We are using a Pritunl VPN server which accepts OpenVPN connections.
Thanks.
This opens up a conversation about storing the secret and about the infrastructure you need in order to allow that VPN client in. Azure Pipelines can run arbitrary commands, but you'll need to inject the VPN secrets/key, and without strong security oversight that can cause major issues down the line.
I'd take a step back and revisit your options here; maybe build the test environment in Azure, so you don't have to worry about this?
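If you do go down that route, a very rough sketch of a script step on an Ubuntu-hosted agent might look like the lines below; the client.ovpn profile (e.g. pulled in beforehand as a secure file) and the timing are assumptions, not a tested pipeline:
sudo apt-get update && sudo apt-get install -y openvpn
sudo openvpn --config "$(Agent.TempDirectory)/client.ovpn" --daemon   # profile assumed to be downloaded as a secure file
sleep 10   # give the tunnel a moment to come up
npx cypress run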

Local database docker to AWS database in VPN

I'm a beginner with Docker; I hope everyone can help, much appreciated.
I downloaded a Docker image from my company repository and managed to create a container on my local machine from the image; let's name it mydb. It was created with the command below:
docker run --name mydb -p 1521:1521 -d mycompany.com:5000/docker-db:20.0.04
I am able to access the database through my SQL Developer with the following connection string: system/abc123@127.0.0.1:1521/ORCL
Our company has a database server in AWS; let's name it awsdb. I can access it after VPN login.
I am able to access that database with the following connection string in SQL Developer:
system/abc123@awsdb.amazonaws.com:1521/awsdb
Question:
How can I create a database link in mydb to awsdb named "my_dblink"? e.g. select sysdate from dual@my_dblink.
I tried the following command:
CREATE PUBLIC DATABASE LINK my_dblink
CONNECT TO system
IDENTIFIED BY abc123
USING 'awsdb.amazonaws.com:1521/awsdb';
but it returns error ORA-12543: TNS:destination host unreachable.
I tried removing the container and recreating it with --net=host:
docker run --name mydb -p 1521:1521 -d --net=host mycompany.com:5000/docker-db:20.0.04
but now I can't even connect with system/abc123@127.0.0.1:1521/ORCL;
it returns error ORA-12541: TNS:no listener.
How can I open a connection from the database inside Docker to the AWS database server? Thank you.
First of all, I believe you need to understand what you are trying to accomplish.
When you create a database link between two databases, the main requirement you must fulfil is network connectivity between them on the ports you are using. As one of them is hosted in a public cloud, at a minimum you would need:
A network connection between the network where Docker is installed and the public cloud in AWS.
But as your Docker container runs on your local laptop, the AWS database would have to be opened to the Internet, which is a security issue and is probably not enabled.
Moreover, you would need firewall rules for all the ports you might need to use for this connectivity.
You are using a VPN login that lets you access the AWS cloud resources because you connect through it (probably using Active Directory and/or a certificate, perhaps even SSO federation between your company's AD and the resources in AWS), but the database itself can't connect using that.
Summarizing, this is not possible, and if I were in Security I would never allow it. The only option for you would be to create a Docker container with the database in AWS and then create the database link there.
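As a quick way to see whether the basic connectivity requirement described above is met at all, one hypothetical check (assuming a tool like nc exists in the image) is to probe the listener port from inside the container:
docker exec -it mydb bash -c "nc -zv awsdb.amazonaws.com 1521"   # can the container reach the AWS listener at all?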

cloudfoundry NoHostAvailableException while deploying app

I have deployed my local Cloud Foundry instance. When I try to deploy my application, my app requires Cassandra to be up and running. I have the Cassandra host set up on an independent server. Cloud Foundry throws com.datastax.driver.core.exceptions.NoHostAvailableException.
However, when I try to ping this host from the machine on which CF is installed, the ping is successful. The Cassandra host is also accessible from my local computer and works fine with my Eclipse deployment.
How can I make Cloud Foundry recognize this host?
You will need to make sure that (a) your application has access to the address and credentials needed to reach the Cassandra server, and that (b) networking (and maybe DNS) is set up so that your application instances can actually reach the Cassandra server.
For (a), you will want to bind your application to a "user-provided service instance". For (b), you need to make sure your application's running security groups allow it to reach your Cassandra server.
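A rough sketch of both steps with the cf CLI could look like the following; the service name, host, port, and credentials are placeholders, and the exact JSON keys your app reads from VCAP_SERVICES depend on your code:
cf create-user-provided-service cassandra-ups -p '{"contact_points":"10.0.0.5","port":"9042","username":"app_user","password":"secret"}'
cf bind-service my-app cassandra-ups
echo '[{"protocol":"tcp","destination":"10.0.0.5/32","ports":"9042"}]' > cassandra-asg.json   # allow outbound traffic to the Cassandra host
cf create-security-group cassandra-asg cassandra-asg.json
cf bind-running-security-group cassandra-asg   # or bind to a specific org/space with cf bind-security-group
cf restart my-app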
