We have developed an app using Angular 8, flask-restplus (0.13.0) on Python 3.7.4, and cx_Oracle (7.2.3).
The Angular app is deployed on NGINX on an Ubuntu server. We have created three microservices and deployed them on Gunicorn using Docker and Kubernetes pods.
The production environment runs 7 Kubernetes pods per service.
In the Kubernetes YAML file we have configured Gunicorn to run with 4 threads using the command below:
command: ["gunicorn"]
args: ["run_app:app", "-b", "0.0.0.0:8080", "--threads=4", "--access-logfile", "-", "--error-logfile", "-"]
The session pool code is as follows:
dbsession_pool = cx_Oracle.SessionPool('xxxxx', 'xxxxx', 'xxxxx.xxxxx.com/xxxxxdb', min=5, max=50, increment=5, threaded = True)
All the services run fine for a while and then start returning 504 Gateway Timeout errors.
However, if we use cx_Oracle.connect instead, everything works fine. Our reason for using the session pool was to avoid the cost of repeatedly connecting to and disconnecting from the database, thereby improving performance.
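For context, each endpoint acquires a connection from this pool and releases it when the request finishes, roughly as in the simplified sketch below (the resource, route, and table names are illustrative, not our exact code):

import cx_Oracle
from flask import Flask
from flask_restplus import Api, Resource

app = Flask(__name__)
api = Api(app)

dbsession_pool = cx_Oracle.SessionPool('xxxxx', 'xxxxx', 'xxxxx.xxxxx.com/xxxxxdb',
                                       min=5, max=50, increment=5, threaded=True)

@api.route('/items')
class Items(Resource):
    def get(self):
        conn = dbsession_pool.acquire()        # borrow a session from the pool
        try:
            cur = conn.cursor()
            cur.execute("SELECT col1 FROM some_table")
            rows = [r[0] for r in cur.fetchall()]
            return {'items': rows}
        finally:
            dbsession_pool.release(conn)       # return the session, even on error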
I have a two-player board game server and a client. I dockerized the server and am now running it on minikube locally. I am not sure how to connect the clients to it.
Before this, I was just running npm start for the server and the client (two instances of the client) and playing the game.
I am not sure whether I have to make changes to the client as well, or whether I can just point it at a simple URL (like localhost:8080) as I was doing earlier, when the client was not running against minikube.
I have an application with a few microservices, as shown below:
- python microservice - runs as a Docker container on ports 5001, 5002, 5003, 5004, 5005
- nodejs microservice - runs as a Docker container on port 4000
- mongodb - runs as a Docker container on port 27017
- graphql microservice - runs as a Docker container on port 4000
I need clarification on the options below.
OPTION 1:
Is it correct to configure nginx as a reverse proxy for each application, so that each microservice runs behind its own nginx on port 80?
i.e.
* python microservice Docker container + nginx
* nodejs microservice Docker container + nginx
* mongodb Docker container + nginx
* graphql microservice Docker container + nginx
OPTION 2:
Or should I configure a single nginx instance and set up upstreams for the python, nodejs, mongodb, and graphql applications?
i.e. python + nodejs + mongodb + graphql + nginx
Note: in OPTION 2 only a single nginx instance is running, whereas in OPTION 1 each microservice has its own nginx instance. Which pattern is correct, OPTION 1 or OPTION 2?
Is it also correct to containerize mongodb and expose it on port 80?
Question 1:
If you use only one nginx instance you have a single point of failure: if nginx fails for some reason, all the services will be down.
If you use several nginx instances with different configurations, you take on more maintenance, technical debt, and resource usage.
A good approach here is to run replicas (e.g., two) of the same nginx server, whose configuration contains the routing rules for all the microservices.
Question 2:
There is no problem with deploying MongoDB in a container as long as you have some persistent storage. The port is not a problem at all.
I am connecting to an external MongoDB that only accepts connections from certain IPs. I have a Meteor instance running on Heroku, and I have a Quotaguard static URL that I am trying to route Meteor through so I can connect to the Mongo server from that IP. Currently I have two environment variables set on Heroku:
HTTP_PROXY=http://user:password@1.2.3.4:5678
HTTPS_PROXY=http://user:password@1.2.3.4:5678
However, when I check the logs, the application is not connecting to the database from my proxy IP; it is connecting as if there were no proxy. Is there an extra step I must take on Heroku?
I have deployed my local Cloud Foundry instance. My app requires Cassandra to be up and running, and the Cassandra host is set up on an independent server. When I try to deploy my application, Cloud Foundry throws com.datastax.driver.core.exceptions.NoHostAvailableException.
However, when I ping this host from the machine on which CF is installed, the ping is successful. The Cassandra host is also accessible from my local computer and works fine with my Eclipse deployment.
How can I make Cloud Foundry recognize this host?
You will need to make sure that (a) your application has access to the address and credentials needed to reach the Cassandra server, and that (b) networking (and possibly DNS) is set up such that your application instances can actually reach the Cassandra server.
For (a), you will want to bind your application to a "user-provided service instance". For (b), you need to make sure your application's running security groups allow it to reach your Cassandra server.
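As a rough illustration of (a): once the application is bound to the user-provided service instance, it can read the host and credentials from the VCAP_SERVICES environment variable. A minimal sketch, shown in Python for brevity; the service name and credential keys are placeholders for whatever you supplied when creating the instance:

import json
import os

# VCAP_SERVICES is a JSON document; user-provided services appear under "user-provided".
vcap = json.loads(os.environ.get('VCAP_SERVICES', '{}'))
for service in vcap.get('user-provided', []):
    if service['name'] == 'cassandra-config':   # placeholder service name
        creds = service['credentials']
        cassandra_host = creds['host']           # keys match what you supplied
        cassandra_port = creds['port']           # when creating the instance

In a Java app the same variable is available via System.getenv("VCAP_SERVICES").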
I have deployed a Rails 3.2 application to a Micro Cloud Foundry running locally in a VM. The vmc push finishes successfully, and running vmc logs shows
=> Booting Thin
=> Rails 3.2.11 application starting in production on http://0.0.0.0:54263
=> Call with -d to detach
=> Ctrl-C to shutdown server
>> Thin web server (v1.5.0 codename Knife)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:54263, CTRL+C to stop
There are no other errors or even warnings mentioned in the logs. When I connect to the application using the blah.myname.cloudfoundry.me URL, I notice that the app redirects to HTTPS and then displays Connection Refused.
Just to be sure the problem is not with my Micro Cloud Foundry setup, I deployed a simple Sinatra Hello World app and it worked great.
What steps can I take to debug this, since vmc logs is not giving any help? Are there other logs I can access from the Micro Cloud Foundry VM via SSH that may have clues to the problem?
Thanks in advance.
You can see that Thin is bound to port 54263 on the VM; it may be worth SSHing to the VM and using curl to request 127.0.0.1:54263.
It's also worth checking the Rails application logs; you can do this using the "vmc files" command and passing the path app/logs/production.log.