Redis: share data on Heroku with more than one dyno

I'm new to Heroku and plan to run 4 instances of 2X dynos. What worries me is that, as far as I know, there is no shared data between dynos.
So my question is: is there a way to keep data saved in all the Redis (https://addons.heroku.com/rediscloud#250) instances located on each dyno?

Your confusion stems from assuming that Redis runs locally on each dyno. When you use an add-on such as Redis Cloud, Redis is external to all dynos and runs on separate servers operated by the service provider (Redis Labs in this case).

As long as all your dynos belong to the same app (which appears to be the case), all of them share the same add-ons, Redis Cloud included. While your app runs distributed across these dynos, all of its connections are opened to the same Redis database (defined by the `REDISCLOUD_URL` env var), so they are all able to share the data in it.
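For example, here is a minimal Java sketch of that, assuming the Jedis client (the key name is made up), showing every dyno reading and writing the same external Redis through that config var:

    import java.net.URI;
    import redis.clients.jedis.Jedis;

    public class SharedRedisDemo {
        public static void main(String[] args) {
            // Heroku injects REDISCLOUD_URL into every dyno of the app,
            // so each dyno connects to the same external Redis database.
            try (Jedis jedis = new Jedis(URI.create(System.getenv("REDISCLOUD_URL")))) {
                jedis.set("greeting", "written by one dyno");
                // Any other dyno running this code sees the same value.
                System.out.println(jedis.get("greeting"));
            }
        }
    }

Run the same code on any dyno of the app and it prints the value written by the first one, because all of them resolve REDISCLOUD_URL to the same server.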

Related

Persistent volume with Heroku Dynos

My Java application is built around an embedded database which stores its data directly to disk. I understand that Heroku by default is built on an ephemeral filesystem, and anything stored on it is removed when dynos restart, or simply doesn't stick.
What is the workaround to make such an application available and deployed on Heroku?
I understand that Heroku by default is built on an ephemeral filesystem
This isn't "by default". It's fundamental to Heroku's architecture and cannot be changed. Heroku is designed to be trivially horizontally scalable, and part of that design is that state should exist apart from any one dyno. Dynos are disposable.
What is the workaround to make such an application available and deployed on Heroku?
As far as I know, none exists. Either change how you save your data or choose another host.
(You might be able to mount a shared persistent filesystem on your dynos, but that's awkward and undermines Heroku's architecture. I don't advise it. None of Heroku's official add-ons provides a persistent filesystem, and a quick search turns up a few blog posts outlining attempts, but I don't see any successes.)
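As an illustration of "change how you save your data": move the state out of the dyno into a backing service. Below is a rough sketch using plain JDBC against Heroku Postgres (Heroku's Java support exposes a JDBC_DATABASE_URL config var; the table and values here are made up, and the Postgres JDBC driver is assumed to be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DurableState {
        public static void main(String[] args) throws Exception {
            // The database lives outside the dyno, so the data
            // survives dyno restarts and is visible to all dynos.
            try (Connection conn = DriverManager.getConnection(System.getenv("JDBC_DATABASE_URL"));
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS kv (k text PRIMARY KEY, v text)");
                st.execute("INSERT INTO kv VALUES ('status', 'ok') "
                         + "ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v");
            }
        }
    }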

How can I monitor a multi-dyno Heroku app with Prometheus?

I need to monitor apps deployed to Heroku with the Prometheus monitoring system.
The problem is that if your app has multiple dynos, you need to know the IP addresses of all of them to be able to pull metrics from each dyno.
For K8s or AWS you can get the full list of pods/instances, so there you are able to do this.
So the question is: how can I get the IPs of all dynos from Heroku?
I'm considering exposing the $DYNO environment variable as a label on all metrics so Prometheus has a consistent view of which dyno it's scraping. Given a short enough scraping interval, all dynos should be scraped within a reasonable time.
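A minimal sketch of that label idea, assuming the Prometheus Java client (simpleclient and simpleclient_httpserver; the metric name is made up):

    import io.prometheus.client.Counter;
    import io.prometheus.client.exporter.HTTPServer;

    public class DynoMetrics {
        // One time series per dyno, distinguished by the "dyno" label.
        static final Counter REQUESTS = Counter.build()
                .name("app_requests_total")
                .help("Requests handled, labeled by dyno.")
                .labelNames("dyno")
                .register();

        public static void main(String[] args) throws Exception {
            String dyno = System.getenv().getOrDefault("DYNO", "unknown");
            REQUESTS.labels(dyno).inc();
            // Expose /metrics for Prometheus to scrape.
            new HTTPServer(Integer.parseInt(System.getenv().getOrDefault("PORT", "9090")));
        }
    }

Even though successive scrapes of the load-balanced endpoint land on different dynos, the label records which dyno produced each sample.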
Pushgateway is not a recommended way of monitoring long-running services.
So the question is: how can I get the IPs of all dynos from Heroku?
This is not possible on the Heroku platform. The application dynos sit behind a load balancer; you do not have direct access to them.
I need to monitor apps deployed to Heroku with the Prometheus monitoring system.
Perhaps this could be split into two Prometheus jobs:
1) Monitor the application directly (at its load-balanced *.herokuapp.com/metrics endpoint).
2) Use an exporter that gathers the dyno metrics via push somehow.
For the push side, consider making use of a Heroku log drain feeding an exporter that converts the logs into a metrics endpoint.
There is also a private Heroku API for application stats; not the best idea, but it might work well for gathering basic application stats. Have a look at the network requests in the Heroku dashboard to see how it works.
This would have some similarities with using a pushgateway, as described at https://prometheus.io/docs/practices/pushing/.
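If you do go the push route despite that caveat, a rough sketch with the Java client's PushGateway class might look like this (PUSHGATEWAY_URL is a hypothetical config var pointing at a Pushgateway you run elsewhere):

    import java.util.Collections;
    import io.prometheus.client.CollectorRegistry;
    import io.prometheus.client.Gauge;
    import io.prometheus.client.exporter.PushGateway;

    public class DynoPusher {
        public static void main(String[] args) throws Exception {
            CollectorRegistry registry = new CollectorRegistry();
            Gauge heartbeat = Gauge.build()
                    .name("dyno_heartbeat_seconds")
                    .help("Last heartbeat from this dyno, as a Unix timestamp.")
                    .register(registry);
            heartbeat.setToCurrentTime();

            // Group by dyno so pushes from different dynos don't overwrite each other.
            PushGateway pg = new PushGateway(System.getenv("PUSHGATEWAY_URL")); // hypothetical var
            pg.pushAdd(registry, "heroku_dynos",
                    Collections.singletonMap("dyno", System.getenv().getOrDefault("DYNO", "unknown")));
        }
    }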

How many dynos do I need for my application on Heroku?

We want to migrate our application to Heroku. Actually, we have 3 related applications and we want to move them all. But we don't know how many dynos we need to deploy those applications: all 3 applications on the same dyno with copies on other dynos, or one dyno for each application? Thanks for your help.
If they are 3 separate applications you will need a minimum of 3 dynos, since each Heroku application needs at least one dyno to run.
As to how many dynos each of your applications needs, that all depends on how busy each site is and how long requests take to process.
I can only speak for Java and an embedded server/container. In this case you could actually deploy more than one app into your container (which will be started from your dyno), using only one dyno.

Sharing data between Heroku apps

Hi, what is the recommended way of sharing data between Heroku apps?
The reason I ask is that I have a scheduler app which runs a process every 5 minutes and places data in a memcache (MemCachier).
I would then like another servlet-based app to be able to read that same memcache data and return it to the user.
I tried to do this but the data comes back null.
Would it be better to use a database, or is there another way of doing this?
Can the memcache be shared across dynos?
Yup, all these things are connected as attachable resources via your config variables. It's perfectly OK to have many apps using the same attached resources.
http://www.12factor.net/backing-services
http://www.12factor.net/config
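Concretely: attach the same MemCachier add-on to both apps so they get identical MEMCACHIER_* config vars. A rough Java sketch, assuming the spymemcached client (MemCachier requires SASL auth; the key name is made up):

    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.ConnectionFactoryBuilder;
    import net.spy.memcached.MemcachedClient;
    import net.spy.memcached.auth.AuthDescriptor;
    import net.spy.memcached.auth.PlainCallbackHandler;

    public class SharedCache {
        public static void main(String[] args) throws Exception {
            AuthDescriptor auth = new AuthDescriptor(
                    new String[] { "PLAIN" },
                    new PlainCallbackHandler(System.getenv("MEMCACHIER_USERNAME"),
                                             System.getenv("MEMCACHIER_PASSWORD")));
            MemcachedClient mc = new MemcachedClient(
                    new ConnectionFactoryBuilder()
                            .setProtocol(ConnectionFactoryBuilder.Protocol.BINARY)
                            .setAuthDescriptor(auth)
                            .build(),
                    AddrUtil.getAddresses(System.getenv("MEMCACHIER_SERVERS").replace(",", " ")));

            mc.set("latest-result", 300, "data from the scheduler app"); // writer side
            Object value = mc.get("latest-result");                      // reader side
            System.out.println(value);
            mc.shutdown();
        }
    }

If the reader app still gets null, check that both apps really share the same MEMCACHIER_* values; a null usually means the two apps are attached to two different cache instances.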

Cheapest, future-scalable way to host an HTTPS PHP website on AWS?

I've already got an RDS instance configured and running, but currently we're still running on our old web host. We'd like to decrease latency by hosting the code within AWS.
Two concerns:
1) Future scalability
2) Redundancy ... not a huge concern but AWS does occasionally go down.
Has anyone had this problem where they just need to cheaply run what is essentially a database interface, via a language such as PHP/Ruby, in 2 regions (with one as a failover)?
Does Amazon offer something that automatically manages resources and is also cost-effective?
Amazon's Elastic Beanstalk service supports both PHP and Ruby apps natively, and allows you to scale your app servers automatically.
In a second region, run a slave RDS instance off of your master (easy to set up in RDS) and have another Beanstalk environment set up there, ready as a failover.
