Alternative to Task queue on Google Cloud Kubernetes - go

I found out that Task Queues are primarily intended for the App Engine standard environment. I am migrating our existing services from App Engine to Kubernetes. What would be a good alternative to the task queue? Push queues are what we currently use.
I have read the documentation online and gone through this link: When to use Pub/Sub vs Task Queues
But there is no clear answer as to whether Pub/Sub is a good alternative on Kubernetes.
Edit:
My current use case is that a service performs similar tasks for a set of IDs, and some of those tasks take a while to complete, so the queue would take such a task and process it while the service carries on with other work in parallel. Pub/Sub is mainly aimed at cases where there is a separate publisher and subscriber, whereas here the service itself has tasks it needs to keep processing in parallel.

I would think Cloud Pub/Sub is a great tool for message queues. It's orthogonal to how you deploy/run your services, whether with Kubernetes or something else.
There's a lot of relevant documentation for using Pub/Sub with Kubernetes on GCP, like this page.
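To make that concrete, here is a minimal Go sketch of using Cloud Pub/Sub as the task queue; the project ID, topic name, and subscription name are placeholders, and in practice the worker half would run as its own Kubernetes deployment rather than in the same process:

```go
// Minimal sketch: Cloud Pub/Sub as a task queue from Go.
// Project ID, topic, and subscription names below are placeholders.
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		log.Fatalf("pubsub.NewClient: %v", err)
	}
	defer client.Close()

	// Producer side: the service publishes one message per ID it wants processed.
	topic := client.Topic("worker-tasks") // placeholder topic
	for _, id := range []string{"id-1", "id-2", "id-3"} {
		res := topic.Publish(ctx, &pubsub.Message{Data: []byte(id)})
		if _, err := res.Get(ctx); err != nil {
			log.Printf("publish %s: %v", id, err)
		}
	}

	// Worker side (normally a separate deployment): pull messages and process
	// them concurrently; Ack on success, Nack to have them redelivered.
	sub := client.Subscription("worker-tasks-sub") // placeholder subscription
	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		if err := processID(string(m.Data)); err != nil {
			m.Nack()
			return
		}
		m.Ack()
	})
	if err != nil {
		log.Fatalf("Receive: %v", err)
	}
}

// processID stands in for the long-running work done per ID.
func processID(id string) error {
	time.Sleep(2 * time.Second)
	log.Printf("processed %s", id)
	return nil
}
```

Keep in mind that Pub/Sub delivers at least once, so the worker should be idempotent; anything nacked or not acked before the ack deadline will be redelivered.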

Related

How can we migrate from a distributed services architecture to MassTransit with minimal changes?

We have a microservices architecture in which the services publish (and also subscribe to) events, plus a gateway API. We want to start using MassTransit, since the system consists of services that feed each other with events to carry the flow through to the end.
Each service in the flow generally makes some API calls to complete its work for the events it receives.
It looks like we should use MassTransit Courier to keep all the services as they are (only adding consumers) and create a saga state machine as an orchestrator in the gateway API.
Is this a good approach? Or should we try something different?

How does Functions as a Service (FaaS) hosting work under the hood?

Hypothesis
Suppose I want to roll out my own FaaS hosting, a service like Lambda, not on Lambda.
Analogy
I have an abstract understanding of other cloud services as follows
1. Infrastructure as a Service (IaaS): create virtual machines for tenants on your hardware.
2. Platform as a Service (PaaS): create a VM and run a script that loads the required environment.
The above could also be achieved with Docker images.
What about FaaS?
AWS uses Firecracker VMs for Lambda functions, but it's not clear how the VMs are spun up and torn down, or how they're orchestrated across multiple pieces of hardware in a multi-tenant environment. Could someone explain how the complete life cycle works?
The main features of AWS Lambda and Cloud Functions can be found at
https://cloud.google.com/docs/compare/aws/compute#faas_comparison
I can share the information I know, which is about Google Cloud Functions.
Triggers
Cloud Functions can be triggered in two ways: by an HTTP request or by an event (see Events and Triggers). Events are things that happen in your project: a file is updated in Cloud Storage or in Cloud Firestore. Other events include a Compute Engine instance (VM) being started or the source code being updated in your repository.
Any of these events can trigger a Cloud Function. When triggered, the function is executed in a VM that receives an HTTP request and context information to perform its duty.
Auto-scaling and machine-type
If the volume of requests arriving at a Cloud Function increases, it auto-scales. That is, instead of having one VM executing one request at a time, you will have multiple VMs, each serving one request at a time; on any given instance, only one request is processed at a time.
If you want more information, you can check the official documentation.
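As an illustration of the HTTP trigger, here is a minimal sketch of an HTTP-triggered Cloud Function in Go; the function name and the deploy command in the comment are assumptions, and the runtime flag depends on which Go version you target. Each instance handles one request at a time, and the platform adds instances as traffic grows, matching the auto-scaling behaviour described above.

```go
// Package function contains a minimal sketch of an HTTP-triggered Cloud
// Function in Go. The function name is arbitrary; deployment would look
// roughly like:
//   gcloud functions deploy HandleTask --runtime go121 --trigger-http
// (flags may vary by runtime version).
package function

import (
	"fmt"
	"net/http"
)

// HandleTask is invoked once per HTTP request; each function instance
// handles one request at a time, and more instances are added under load.
func HandleTask(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	if id == "" {
		http.Error(w, "missing id", http.StatusBadRequest)
		return
	}
	// ... do the actual work for this ID here ...
	fmt.Fprintf(w, "processed %s\n", id)
}
```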

Schedule CronJobs with the PHP Buildpack

For my PHP web app I am using the PHP Buildpack. Now I would like to schedule a task that should be triggered every month. Normally I would use cron jobs for that.
How can I achieve that within the Swisscom Application Cloud?
Swisscom App Cloud is based on open-source Cloud Foundry.
Upstream Cloud Foundry doesn't have a feature equivalent to cron jobs (a task scheduler). Stay tuned; I guess this feature will be implemented soon, because lots of people are migrating from Heroku to CF, and Heroku offers a cron job feature. Subscribe to the Swisscom App Cloud newsletter to read announcements.
There are workarounds for scheduling tasks; see Scheduling tasks on Cloud Foundry on blog.pivotal.io for a Ruby/Rake-based example. Sorry, I didn't find example code for PHP. There is no elegant solution; you need to implement some kind of workaround yourself. It would be great if you published your code to GitHub.
If you only need cron jobs inside the data store, MariaDB, for example, offers Events:
Events are named database objects containing SQL statements that are to be executed at a later stage, either once off, or at regular intervals.
They function very similarly to the Windows Task Scheduler or Unix cron jobs.
We had a similar issue. As written by Fyodor, there is no native solution in Cloud Foundry. We did some research and found vendors like https://www.iron.io/.
Finally, we ended up with a very simple solution.
We expose all our background jobs via an https interface.
Since we use Jenkins for CI/CD anyway and it has plenty of scheduling capabilities, we use our existing Jenkins to trigger these jobs via a simple cURL call to the HTTP endpoints.
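For illustration, here is a minimal sketch of that pattern in Go (the original question is about PHP, but the idea is language-agnostic): the app exposes the job behind an HTTP endpoint guarded by a shared token, and an external scheduler such as a Jenkins cron-triggered job calls it. The path and token are placeholders.

```go
// Minimal sketch: expose a background job over HTTP so an external scheduler
// (e.g. a Jenkins job running curl on a cron trigger) can start it.
// The path and shared token below are placeholders.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/jobs/monthly-report", func(w http.ResponseWriter, r *http.Request) {
		// Very basic protection so only the scheduler can trigger the job.
		if r.Header.Get("X-Job-Token") != "change-me" {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		go runMonthlyReport() // run asynchronously so the caller gets a fast reply
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

// runMonthlyReport stands in for the actual scheduled work.
func runMonthlyReport() {
	log.Println("monthly report started")
	// ... long-running work ...
}
```

The scheduler side then only needs something like `curl -X POST -H "X-Job-Token: change-me" https://your-app.example.com/jobs/monthly-report` in a job with a cron trigger.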

ElasticBeanstalk system events and my application logs

AWS has a really nice log management tool. I can send my application's log messages there very easily.
Amazon Elastic Beanstalk has an "event management" tool.
The questions are:
Can I log my app messages together with the Elastic Beanstalk events? Is that the syslog of the EC2 instance?
If yes, is this good practice? Are there any problems with it? I ask because, if there is no problem, I would not need a separate third-party log management service.
The events shown in Elastic Beanstalk are internal to it. You are not supposed to fudge around with them (although nobody is really preventing you from playing around with them).
Also, there's a log snapshot feature that picks up logs related to the application. These logs are mainly related to deployment and to logging messages from the application itself, so you can use this feature if your application code is logging messages. For example, if you are running Ruby/Rails with Passenger you would get log messages under /var/app/support/logs/passenger.log. These are not syslog messages per se, and the problem with this approach is that it's not straightforward to get your custom monitoring in place. For example, how do you parse your errors and send them to, say, PagerDuty?
As you've probably figured out, if you want custom monitoring (sending logs to a syslog facility) you are better off using a third-party tool like Splunk Storm, Papertrail, or Loggly. Of course, you can set up your own syslog server(s), but that will require you to build all the infrastructure yourself.
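If you do want to ship logs straight to a syslog endpoint yourself, here is a minimal Go sketch using the standard log/syslog package; the host and port are placeholders, and note that this package speaks plain UDP/TCP syslog rather than TLS, so check what your provider accepts.

```go
// Minimal sketch: ship application logs to a remote syslog endpoint
// (e.g. a hosted provider) from Go. The host:port is a placeholder.
package main

import (
	"log"
	"log/syslog"
)

func main() {
	w, err := syslog.Dial("udp", "logs.example.com:514", // placeholder endpoint
		syslog.LOG_INFO|syslog.LOG_LOCAL0, "myapp")
	if err != nil {
		log.Fatalf("syslog.Dial: %v", err)
	}
	defer w.Close()

	// Route the standard logger through syslog so existing log calls are shipped.
	logger := log.New(w, "", log.LstdFlags)
	logger.Println("application started")

	w.Err("something went wrong") // send with an explicit severity level
}
```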
Hope this helps.

Event Aggregator for Distributed Applications

I am implementing an application using Prism.
The application has a few distributed components that reside on various machines or servers. To let them communicate, I am planning to implement a messaging service using the Event Aggregator. But before I start working on that, I would like a few clarifications:
1. Can the Event Aggregator be used in a distributed environment? If yes, how do I define the server or hub where messages are published and subscribed?
2. What is the performance impact on applications using the Event Aggregator? I feel it is negligible, but I would still like to know.
3. Is the Event Aggregator approach a good fit for future expansion in an enterprise environment?
Prism is a client-side technology, so the EventAggregator as it stands won't do what you need. It is a mechanism for communicating between modules in a loosely coupled way; it is not about communicating between different clients.
For what you need, I would look into HTTP Polling Duplex:
http://www.devproconnections.com/article/silverlight-40/using-http-polling-duplex-in-silverlight-applications
If you use Prism on the front end, you can write your own service and subscribe to/publish EventAggregator events from that service while making server calls and receiving responses back.
