I have a Meteor app running on Heroku, and until last week the database was hosted on mLab.
Then I switched to MongoDB Atlas, and after a few days the application became very slow.
I upgraded from M2 to M5, which helped for a while, but now it is very slow again.
It seems there is a network-out (egress) limitation that mLab didn't have.
Could it be a problem with my queries? What am I doing wrong, and what do I have to consider?
Does anybody know about this issue, or does anyone have experience with the Meteor/Heroku/MongoDB Atlas combination?
Thanks in advance
In Heroku, when you picked the mLab add-on, the DB was most probably provisioned in the same VPC and region as your Meteor instance. I'd first make sure the Atlas cluster runs in the same region and on the same cloud provider as your Heroku app (e.g. both Meteor and Mongo on AWS eu-central). Did you do this? https://www.mongodb.com/blog/post/integrating-mongodb-atlas-with-heroku-private-spaces
Are you exceeding the limits of your Atlas cluster tier? https://docs.atlas.mongodb.com/reference/atlas-limits/ Knowing these limits also helps you avoid paying for a service scale that you don't need.
Monti APM (https://montiapm.com/) offers a free monitoring tier for Meteor with 8 hours of retention. It can help you understand your oplog transactions and volume.
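Getting the agent wired up is quick; roughly the following, assuming the standard Heroku CLI (the env var names here are from memory, so check the Monti APM docs for the exact ones):
meteor add montiapm:agent
heroku config:set MONTI_APP_ID=<your app id> MONTI_APP_SECRET=<your app secret>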
I don't know how you set up your oplog, but you may also try this (older-style) Mongo URI. I still use this with the latest Meteor version and I am fine with it (note that the separator between the credentials and the host list must be @):
"env": {
"MONGO_URL": "mongodb://yourapp:XXXXXXXXXXXXXXXX#yourapp-shard-00-00-zc1lg.mongodb.net:27017,yourapp-shard-00-01-zc1lg.mongodb.net:27017,yourapp-shard-00-02-zc1lg.mongodb.net:27017/meteor?ssl=true&replicaSet=yourapp-shard-0&authSource=admin",
"MONGO_OPLOG_URL": "mongodb://yourapp-oplog:XXXXXXXXXXXXXXXX#yourapp-shard-00-00-zc1lg.mongodb.net:27017,yourapp-shard-00-01-zc1lg.mongodb.net:27017,yourapp-shard-00-02-zc1lg.mongodb.net:27017/local?authSource=admin&ssl=true&replicaSet=Yourapp-shard-0"
}
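Newer Atlas clusters also expose a shorter SRV connection string; something along these lines should be equivalent (the exact host comes from the "Connect" dialog in the Atlas UI - the one below is a placeholder):
"MONGO_URL": "mongodb+srv://yourapp:XXXXXXXXXXXXXXXX@yourapp.zc1lg.mongodb.net/meteor?retryWrites=true&w=majority"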
Related
My problem is that sometimes, when I do a new code deployment to AWS Elastic Beanstalk while I already have an active session (I'm logged in as myself), refreshing the page logs me in as someone else. I use database sessions. This doesn't happen too often - or at least I'm not aware of it - but I can't figure it out. I'm using the standard Laravel login functionality. I'm trying to find at least a starting point for how/where to investigate. It must have something to do with the deployments to Elastic Beanstalk, because that's when this sometimes happens. I would have imagined that database sessions shouldn't be affected by code changes on EB. Any help would be appreciated.
Hi, I am trying to deploy my application with zero downtime. My app ships database DDL changes quite frequently. What are all the possible ways to achieve this with zero transaction failures in the app? Kubernetes can give me zero downtime for the application itself, but I don't want any service requests to fail at deployment time because of database changes such as dropping columns, dropping tables, or changing data types.
Tech stack:
Kubernetes - deployment
Spring Boot (Java) - app
Oracle - database
This has nothing to do with Kubernetes. You will have the same problems and challenges when you install your application on bare-metal servers, on VMs, or on plain Docker. Have a look at https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database - it describes the problem pretty well; a rough sketch of the approach it recommends follows below.
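The gist of that article is the expand/contract pattern: make every schema change backward compatible, deploy, then clean up in a later release. A rough sketch for renaming a column on Oracle (table and column names are invented for illustration):
-- Release 1 (expand): add the new column; the app writes both columns, still reads the old one.
ALTER TABLE customers ADD (surname VARCHAR2(100));
UPDATE customers SET surname = last_name WHERE surname IS NULL;
-- Release 2: the app reads and writes only surname.
-- Release 3 (contract): drop the old column once nothing references it.
ALTER TABLE customers DROP COLUMN last_name;
Because every running app version is compatible with the schema at every point in time, in-flight requests never hit a missing column during a rolling deploy.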
I'm having difficulty pre-warming my deploys. The first few minutes after a deploy produce a lot of request timeouts, followed by too-many-DB-connections errors, followed by stability. If I pre-warm a bunch of instances, that triggers the too-many-connections errors as well... so I guess I need to pre-scale both. Does anybody know how to do this in vapor.yml or the Vapor web GUI - or in AWS as a last resort? (I'd like to keep the infra config with the project repo.)
This is not the vapor.yml solution I was hoping for, but I ended up going into AWS and configuring more baseline DB capacity:
AWS console > RDS > Databases > MyDB
click "Modify"
change the "Capacity settings"
I am going through the Heroku getting-started guide and I hit a snag: I cannot seem to access the remote database. It connects, but there is no database name. I have installed PostgreSQL 9.5 locally, but attempting to push the local database I created fails as well, and when I run heroku pg:info it never responds.
I am going through the documentation, but there is a lot of it, so hopefully some psql wizard will see this, go "oh, this is what he is doing wrong", and let me know.
It's likely that you have not created the database. rake db:create - this needs to be done both on Heroku's servers and on your local machine.
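For a Rails app that would be roughly:
rake db:create              # create the database locally
heroku run rake db:create   # run the same task on the Heroku dyno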
Not quite correct - I am not using Ruby. One major caveat I did not notice initially is that I was not in the Bash shell, which might have been part of my problem. What I wound up doing was connecting to the remote PostgreSQL instance from my local install using pgAdmin and creating the table manually from there. I got the connection info by running heroku pg:credentials DATABASE, which gave me what I needed to add the server in pgAdmin. You do need to enable SSL for that within the tool, and it helps to restrict the connection to your database name so you see only your database, not the whole ten gazillion they have in production :) I hope this helps anyone else who has the same problem.
Thanks
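For reference, the relevant CLI calls look roughly like this (newer Heroku CLI versions replaced pg:credentials with pg:credentials:url):
heroku pg:credentials DATABASE        # older CLI: prints host, port, user, password, dbname
heroku pg:credentials:url DATABASE    # newer CLI equivalent
heroku pg:psql                        # or skip pgAdmin and open a psql shell directly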
I'm currently working on a RESTful API in Go, using Windows and GoClipse.
The testing environment consists of a few VMs managed by Vagrant. These machines contain nginx, PostgreSQL, etc. The app should be deployed into Docker on a separate VM.
There is no problem deploying the app the first time using a guide like this one: https://blog.golang.org/docker. I've read a lot of information and guides but am still totally confused about how to automate the deployment process and update the Go app in Docker after code changes. At the current stage the code changes very often, so deployment should be fast.
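Concretely, the manual cycle from that guide that I'm trying to automate boils down to a rebuild and a container swap (image/container names and the port are placeholders):
docker build -t myapp:latest .                          # rebuild the image after a code change
docker stop myapp && docker rm myapp                    # remove the old container
docker run -d --name myapp -p 8080:8080 myapp:latest    # start the new one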
Could you please advise me on the correct way to set up some kind of local CI for such a case? Which approach would be better?
Thanks a lot.