Parse Server with independent workers - Heroku

Imagine we want to check, two weeks after a user's registration, whether she has been active, and notify her otherwise.
To achieve this we currently use the following setup (this runs on Heroku):
The Parse Server puts a task into a Redis queue. The worker fetches tasks from that queue and then performs checks on the user's activity. For this it needs to query the Parse Server to fetch that information, which puts additional load on our API.
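The current setup can be sketched roughly like this (a minimal illustration in Java; the in-memory `BlockingQueue` stands in for the Redis queue, and the `fetchLastActiveAt` function stands in for the extra Parse API call that loads the API — all names here are assumptions for illustration):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Function;

public class InactivityWorker {
    // A task enqueued at registration time: check this user two weeks later.
    public record Task(String userId, Instant registeredAt) {}

    static final Duration CHECK_AFTER = Duration.ofDays(14);

    // Returns true if a notification was sent.
    public static boolean process(Task task, Function<String, Instant> fetchLastActiveAt) {
        // This lookup is the extra call back to the Parse Server API.
        Instant lastActive = fetchLastActiveAt.apply(task.userId());
        boolean inactive = lastActive == null || !lastActive.isAfter(task.registeredAt());
        if (inactive) {
            System.out.println("notify " + task.userId());
        }
        return inactive;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Task> queue = new LinkedBlockingQueue<>(); // stands in for Redis
        Instant reg = Instant.now().minus(CHECK_AFTER);
        queue.put(new Task("alice", reg));
        Task t = queue.take();
        // Stub: alice has not been active since registration, so she is notified.
        process(t, userId -> reg);
    }
}
```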
I imagine the following scenario to be better:
I wonder: is it possible to achieve this scenario using Parse Server? (The worker dynos don't have an HTTP interface to run a Parse Server...)

Related

How to broadcast/share a variable/property from a slave to the others?

I have a JMeter script that is executed in distributed mode with 4 nodes. One of them is the controller and does not make any requests; the other 3 are workers that make the requests.
I can currently set one of the workers as a master worker by setting a property in the user.properties file for that specific worker. This "master" worker performs some requests that have to be done only once, so these requests can't be made by the other workers.
Now I need to extract some values from the responses of these unique requests and send this information to the other workers.
Is it possible to do this?
How can data be sent from one worker to the other workers at run time?
You can use the HTTP Simple Table Server plugin and populate it with data from the "master" worker using the ADD command. Once you have set up the pre-requisites, all other workers, including the master, can access the generated data via the READ command.
The HTTP Simple Table Server can be installed using the JMeter Plugins Manager.
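For illustration, assuming the Simple Table Server runs on its default port 9191, the master worker could push a value and any worker could read it back via plain HTTP (the file name dataset.csv and the stored value are just examples):

```
# Master worker stores a value (ADD must be sent as an HTTP POST):
POST http://localhost:9191/sts/ADD
  FILENAME=dataset.csv&LINE=extractedToken123&ADD_MODE=LAST

# Any worker reads it back (READ is a GET; KEEP=TRUE leaves the row in place):
GET http://localhost:9191/sts/READ?READ_MODE=FIRST&KEEP=TRUE&FILENAME=dataset.csv
```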
No, it's not possible.
The communication between the controller and the servers is very limited:
The controller sends start/stop/shutdown commands to the servers
The servers send sample results to the controller
That's it.
To communicate you'll need to use a third-party tier like a Redis DB or similar means.

Schedule a task in EC2 Auto Scaling Group

I have multiple EC2 instances in an Auto Scaling group. They all run the same Java application. In the application, I want to trigger a functionality every month, so I have a function that uses Spring Schedule and runs every month. But that function runs on every single EC2 instance in the Auto Scaling group, while it must run only once. How should I approach this issue? I am thinking of using services like Amazon SQS, but they would have the same problem.
To be more specific on what I have tried: in one attempt, the function puts a record with a key unique to this month into a database which is shared among all the EC2 instances. If the record for this month is already there, the put request is ignored. Now the problem transfers to the reading part: I have a function that reads the database and does the job, but that function is run by every single EC2 instance.
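The read side can be fixed the same way as the write side: make claiming the month and doing the work a single atomic step, so only the instance whose conditional put succeeds runs the job. A minimal sketch, where an in-memory `ConcurrentHashMap` stands in for the shared database's conditional put (e.g. a unique-key constraint or DynamoDB-style conditional write — the class and method names are illustrative):

```java
import java.time.YearMonth;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MonthlyJob {
    // Stands in for a shared table with a conditional-put / unique-key constraint.
    static final ConcurrentMap<String, String> claims = new ConcurrentHashMap<>();

    // Every instance calls this on schedule; only the one whose conditional
    // put succeeds actually runs the job, so the work happens exactly once.
    public static boolean runIfFirst(String instanceId, YearMonth month, Runnable job) {
        String winner = claims.putIfAbsent(month.toString(), instanceId);
        if (winner == null) { // this instance claimed the month
            job.run();
            return true;
        }
        return false;         // another instance already claimed and ran it
    }
}
```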
Interesting! You could put a configuration on one of the servers to trigger the monthly activity, but individual instances in an Auto Scaling group should be treated as identical, disposable systems that could be replaced at any time. So, there would be no guarantee that this specific server would still be around in one month.
I would suggest you take a step back and look at the monthly event as something that is triggered external to the servers.
I'm going to assume that the cluster of servers is running a web application and there is a Load Balancer in front of the instances that distributes traffic amongst the instances. If so, "something" should send a request to the Load Balancer, and this would be forwarded to one of the instances for processing, just like any normal request.
This particular request would be to a URL used specifically to trigger the monthly processing.
This leaves the question of what is the "something" that sends this particular request. For that, there are many options. A simple one would be:
Configure Amazon CloudWatch Events to trigger a Lambda function based on a schedule
The AWS Lambda function would send the HTTP request to the Load Balancer
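As an illustration, the schedule side is just a CloudWatch Events (EventBridge) rule; the rule name below is a placeholder, and the cron expression fires at 00:00 UTC on the 1st of every month:

```
# EventBridge/CloudWatch Events uses a six-field cron:
#   cron(minute hour day-of-month month day-of-week year)
aws events put-rule \
  --name monthly-processing-trigger \
  --schedule-expression "cron(0 0 1 * ? *)"
```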

Opentracing - Should I trace internal service work or just API calls?

Suppose I have a service which does the following:
Receives input notification
Processes the input notification, which means:
some computing
storing in DB
some computing
generating its own notification
Sends its own notification to multiple clients
What is the best practice in this case: should I granularly trace each operation (computing, storing in DB, etc.) with a separate span, or leave that for metrics (e.g. Prometheus) and create a single span for the whole notification processing?
It's somewhat up to you as to the granularity that's appropriate for your application, and also the volume of tracing data you're expecting to generate. An application handling a few requests per minute is going to have different needs than one handling 1000s of requests per second.
That said, I recommend creating spans when control flow enters or leaves your application (such as when your application starts processing a request or message from an external system, and when your application calls out to an external dependency, such as HTTP requests, sending notifications, or writing/reading from the database), and using logs/tags for everything that's internal to your application.
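The recommended granularity can be sketched like this. Note this is illustrative only: the tiny `Span` class below is a stand-in for a real OpenTracing span, and the operation/log names are made up, to show one span per application boundary with logs for internal steps:

```java
import java.util.ArrayList;
import java.util.List;

public class NotificationTracing {
    // Minimal stand-in for a tracer's span: an operation name plus timestamped logs.
    public static class Span {
        public final String operation;
        public final List<String> logs = new ArrayList<>();
        Span(String operation) { this.operation = operation; }
        void log(String event) { logs.add(event); }
    }

    // One span covers the whole "handle this message" boundary; the internal
    // steps become logs (or tags) on that span rather than separate spans.
    public static Span handleNotification() {
        Span span = new Span("process-notification");
        span.log("compute.start");                  // some computing
        span.log("db.store");                       // in a real tracer, the DB write
                                                    // would be its own child span,
                                                    // since it leaves the application
        span.log("compute.generate-notification");  // more internal work
        // Sending the outbound notification to each client is again a call to
        // an external dependency, so each send would get its own child span.
        return span;
    }
}
```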

Spring task scheduler Multiple Instances on multiple machines detection

I have made a Spring Task Scheduler service for sending e-mails under a particular condition. This service runs on multiple machines.
If one machine's service sends the e-mail, then I have to stop the other services from sending it.
How can I detect, without using a persistent storage flag, that one machine's service has already executed its e-mail code?
You have basically 3 options:
Use a shared data store of some form (e.g. a database) that all nodes connect to. That's what we usually do.
Make the nodes "talk to each other" so a particular node can check the state of its peers before sending the e-mail. For this you could use JGroups.
Have your email service run on only one of the nodes.
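Option 3 is the simplest to sketch: gate the e-mail code behind a role flag that only one node's configuration sets. The property name `email.sender.enabled` is just an example; in practice it could come from a system property, an environment variable, or a Spring profile:

```java
import java.util.Properties;

public class EmailGate {
    // True only on the node whose configuration enables the e-mail role.
    public static boolean shouldSend(Properties config) {
        return Boolean.parseBoolean(config.getProperty("email.sender.enabled", "false"));
    }

    // Every node runs the scheduled task, but only the designated node sends.
    public static void maybeSendEmail(Properties config, Runnable send) {
        if (shouldSend(config)) {
            send.run();
        }
    }
}
```

The trade-off versus options 1 and 2 is that this node is a single point of failure: if it goes down, no e-mail is sent until it is replaced.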

Queuing in tandem with a Ruby Web socket server

I am writing an application using JRuby on Rails. Part of the application initiates a long-running process from a web page. The long-running process could last for 20 minutes in some cases, and will in most cases outlive the web page response. I also want the job to continue if the user closes the browser. The long-running process will add records to a database as it runs.
I want to give visual indications of the inserts into the database on the web page and I would prefer to use web sockets rather than polling the database for the inserts.
I am thinking of sending a message to a Resque queue with a queue handler that will ensure the job is completed if the user closes the browser. The queue handler will perform the inserts into the database.
I was thinking of using EM-WebSocket as my websocket server.
The problem I have is:
How can I communicate between the Resque process and the EM-WebSocket process? I want to somehow pass the details of the new database inserts from the Resque process to an EM-WebSocket instance that will communicate with the browser.
Anybody solved a problem like this or any ideas how I can do this?
I'm actually working on a gem that makes this pretty simple. Right now it's pretty bare, but it does work. https://github.com/KellyMahan/RealTimeRails
It runs an EventMachine server that listens for updates and uses em-websockets to send those updates to the browser.
It's meant to watch for Active Record updates through an after_save call that tells the EventMachine server it has an update for its model with an id. It then matches the model and id to specific channels to send a message to connections on the web socket server. When the browser receives a notice to update, it makes an Ajax call to retrieve the latest results.
Still a work in progress but it could help you.
