ASP.NET Boilerplate - Background Worker - aspnetboilerplate

I need to know the best place to implement the background worker: the Web.Core or the Core module.
I am considering hosting my Host project and MVC project on different VMs and do not want the background worker to run twice, i.e. on both VMs.
Thanks

Related

Managing multiple apps/ecosystem.config.js files with pm2

I am building a project which will live on a single server containing multiple services running side by side. I am using Ansible to provision the server and automate setting everything up.
Services running:
Headless CMS
Database
Other nodejs API etc...
If in the future I need to scale this project up, I will want to separate the above services out onto their own servers, which has led me to create separate Ansible roles for each of the above services.
My Question:
I am having real difficulty working with pm2 to get my two Node.js apps running alongside each other.
I know that I can have a single ecosystem.config.js file containing multiple apps, which would fit my current architecture (everything hosted on a single server). However, this would be a pain later down the road if I were to move one of my Ansible roles to its own server.
Is there a way to deploy my Node.js apps to production under pm2 management, but in a way where each has its own configuration file and systemd service that I can define in Ansible?
If I have a separate ecosystem.config.js file for each Node.js app, can pm2 manage these with the default systemd service it offers when running:
pm2 startup
Or should I just write my own separate systemd services which I could then manually install in each ansible role through templates?
I'm really lost here and have spent so much time trying to work out the best approach to take so any help would be great!!
After doing some more research on this matter I came across this super helpful thread on pm2's GitHub. Basically, it seems that for automation, using systemd services directly is the way to go rather than pm2's startup (which does create a systemd service, but one that is more complex to manage with automation software such as Ansible).
I strongly recommend you read it if you stumble across this question!
https://github.com/Unitech/pm2/issues/2914
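To sketch what that thread's approach could look like: one minimal systemd unit per app, templated by that app's Ansible role, using pm2-runtime so pm2 stays in the foreground and systemd supervises it directly. All names and paths below are hypothetical placeholders.

```ini
# /etc/systemd/system/my-api.service -- hypothetical names/paths,
# installed from a template in the my-api Ansible role
[Unit]
Description=my-api (pm2-runtime)
After=network.target

[Service]
Type=simple
User=nodeapp
WorkingDirectory=/srv/my-api
# pm2-runtime keeps pm2 in the foreground, so systemd supervises the app
# directly instead of the daemonized setup that `pm2 startup` generates.
ExecStart=/usr/bin/pm2-runtime start /srv/my-api/ecosystem.config.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Since each role owns its own unit file and ecosystem.config.js, moving a service to its own server later is just a matter of applying that one role elsewhere.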

How to test Go App Engine apps locally on Windows 10 and use app.yaml

In Google's latest docs, they say that to test Go 1.12+ apps locally, one should just run go build.
However, this doesn't take into account all the routing etc. that would happen on App Engine using the app.yaml config file.
I see that dev_appserver.py is still included in the SDK, but it doesn't seem to work on Windows 10.
How does one test their Go App Engine app locally with the app.yaml, i.e. as an actual emulated App Engine app?
Thank you!
On one hand, if your application consists of just the default service, I would recommend following #cerise-limón's comment suggestion. In general, it is recommended that the routing logic of the application be handled within the code. Although I'm not a Go programmer, for single-service applications that use static_files and static_dir there shouldn't be any problems when testing the application locally. You might also deploy the new version without promoting traffic to it in order to test it, as explained here.
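For reference, a single-service setup like that might look something like the following app.yaml sketch (the paths are hypothetical; adjust to your project layout):

```yaml
# app.yaml sketch for a single default service (Go 1.12+ runtime)
runtime: go112

handlers:
# Assets under /static are served by App Engine's static file servers.
- url: /static
  static_dir: static
# Everything else goes to the Go binary, which does its own routing in code,
# so `go build` plus running the binary locally exercises the same paths.
- url: /.*
  script: auto
```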
On the other hand, if your application is distributed across multiple services and the routing is managed through the dispatch.yaml configuration file you might follow two approaches:
Test each service locally one by one. This could be the way to go if each service has a single responsibility/functionality that could be tested in isolation from the other services. In fact, with this kind of architecture the testing procedure would be more or less the same as for single service applications.
Run all services locally at once and build your own routing layer. This option would allow you to test applications where services need to reach one another in order to fulfill the requests made to them.
Another approach that is widely used is to have a separate project for development purposes where you can just deploy the application and observe its behavior in the App Engine environment. For applications with highly coupled services this would be the easiest option, but it largely depends on your budget.
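For the second approach (running all services locally and building your own routing layer), here is a minimal Go sketch of a local dispatcher that mimics dispatch.yaml-style first-match prefix routing. The prefixes and ports are hypothetical; point them at wherever each service runs locally.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"strings"
)

// routes maps dispatch.yaml-style path prefixes to the local addresses where
// each service is assumed to be running (prefixes and ports are hypothetical).
// Order matters: the first matching prefix wins, so list the most specific first.
var routes = []struct {
	prefix string
	target string
}{
	{"/api/", "http://localhost:8081"},
	{"/admin/", "http://localhost:8082"},
	{"/", "http://localhost:8080"}, // default service catches everything else
}

// targetFor returns the backend base URL for a request path.
func targetFor(path string) string {
	for _, r := range routes {
		if strings.HasPrefix(path, r.prefix) {
			return r.target
		}
	}
	return routes[len(routes)-1].target
}

func main() {
	// Pass "serve" to actually proxy; the default demo mode just resolves a
	// path, so the sketch runs even without the backends up.
	if len(os.Args) > 1 && os.Args[1] == "serve" {
		http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
			target, err := url.Parse(targetFor(req.URL.Path))
			if err != nil {
				http.Error(w, err.Error(), http.StatusBadGateway)
				return
			}
			httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, req)
		})
		log.Println("local dispatch layer listening on :3000")
		log.Fatal(http.ListenAndServe(":3000", nil))
	}
	fmt.Println(targetFor("/api/users")) // prints http://localhost:8081
}
```

Pointing your browser at the dispatcher's port then behaves roughly like App Engine's front end: each request is forwarded to whichever locally running service owns its path prefix.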

Can scheduled web request be authorized by MVC controllers?

My web app has to do some calculations in the background. I've investigated multiple solutions and I would like to go for business logic instead of a SQL job that triggers all the calculations. After a few days of research I'm still not convinced what is the best solution for my case.
A lot of articles mention Quartz.NET, a separate Windows service (but I think that's not an option on most shared web hosting services), a Windows scheduled task, etc.
To keep the calculations in the business logic I would extend my web application with a dedicated 'task' controller that fires the calculations automatically and then returns a result of its actions.
Q1: Calling the controller with a Quartz.NET timed web request will not be that hard, but how can I secure it? If I add the [Authorize] attribute to my 'task' controller it will block the request (note that I use forms authentication on my internet-facing web application). I don't want users on the internet to be able to launch my 'task' controller.
Q2: Also, if I'm correct that shared web hosting services don't support installing separate Windows services or Remote Desktop connections, I'll have two options:
hope the shared web host supports Windows scheduled tasks (but can these be called with authorization credentials?)
start Quartz.NET from my Application_Start (which is certainly not an ideal solution...)
Thanks in advance
Kr
First off, I wouldn't call an ASP.NET MVC controller from a scheduled job. I'd delegate directly to business components/services (whatever the name is) and make sure there that the correct things run with the right permissions. This could mean that I fire the job with information about whom the calculation is done for, and pass that information to the service component (calculate the daily average for user X). I don't see a real benefit in masquerading the call behind the ASP.NET MVC stack.
So Q1: secure at the code level, not via the ASP.NET MVC stack.
You can always run without a Windows service; then you just take the risk of the app pool shutting down when not in use. One way to get around this is to have an external ping program that makes sure calls keep coming in, which is not ideal, as you pointed out. Having jobs and triggers in a database protects you from losing information, but not from misfires.
Q2: most likely running Quartz.NET is far easier than trying to access Windows Scheduled Tasks.
Some shared providers have very strict settings for code to run. It might be that Quartz.NET won't run at all if too tightly sandboxed.
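To illustrate the suggested shape (Quartz.NET started from Application_Start, delegating straight to the business layer so there is no public endpoint to secure), here is a rough C# sketch using the Quartz.NET 2.x-era API; CalculationJob and AverageCalculationService are hypothetical names:

```csharp
// Global.asax.cs sketch (Quartz.NET 2.x-era API; names are hypothetical)
using Quartz;
using Quartz.Impl;

protected void Application_Start()
{
    var scheduler = StdSchedulerFactory.GetDefaultScheduler();
    scheduler.Start();

    var job = JobBuilder.Create<CalculationJob>().Build();
    var trigger = TriggerBuilder.Create()
        .StartNow()
        .WithSimpleSchedule(s => s.WithIntervalInHours(1).RepeatForever())
        .Build();
    scheduler.ScheduleJob(job, trigger);
}

// The job calls the business layer directly -- no web request, no [Authorize],
// so internet users have no 'task' URL to hit at all.
public class CalculationJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        new AverageCalculationService().RunDailyCalculations(); // hypothetical service
    }
}
```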

Windows Azure Visual Studio Solution

My application contains 25 C# projects, divided into 5 solutions.
Now I want to migrate these projects to run under Windows Azure. I realize that I should create one solution that contains all my web roles and worker roles.
Is this the correct way to do it, or can I still divide my projects into several solutions?
The Projects are as shown below:
One Web application.
5 Windows Services.
The others are all class libraries.
Great answers by others. Let me add a bit more about one vs. many hosted services: If the Web and Worker roles need to interact directly (e.g. via TCP connection from Web role instance to a specific worker role instance), then those roles really should be placed in the same hosted service. External to the deployment, your hosted service listeners (web, wcf, etc.) are accessed by IP+Port; you cannot access a specific instance unless you also enable Azure Connect (VPN).
If your Web and Worker roles interact via Azure Queues or Service Bus, then you have the option of deploying them to separate hosted services and still have the ability to communicate between them.
The most important question is: How many of these 25 projects are actual WebSites/Web Applications or Windows Services, and how many of them are just Class Libraries.
For the Class Libraries, you do not have to convert anything.
Now for the Cloud project(s). You have to decide how many hosted services you will create. You can read my blog post to get familiar with terms like "Hosted Service", "Role", "Role Instance", if you need to.
Once you decided your cloud structure - the number of hosted services and roles per each service, you can create a new solution for each hosted service.
You can also decide to host multiple web sites in a single WebRole, which is totally supported and possible, since WebRoles run in a full IIS environment as of SDK 1.3. You can read more about hosting multiple web sites in a single web role here and here, and even use the Windows Azure Accelerator for Web Roles.
If you have multiple Windows services or background worker processes, you can combine them into a single Worker Role, or define a worker role for each separate worker process, should you desire separate elasticity for each process or should a worker require a lot of computing power and memory.
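Combining several former Windows services into one Worker Role commonly looks like running each one on its own background thread from the role's entry point. A rough C# sketch, where IWorker, EmailWorker and ReportWorker are hypothetical names standing in for your services:

```csharp
// Sketch: one Worker Role hosting several former Windows services as threads.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public interface IWorker { void RunLoop(); }

public class EmailWorker : IWorker  { public void RunLoop() { /* poll queue, send mail */ } }
public class ReportWorker : IWorker { public void RunLoop() { /* generate reports */ } }

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var workers = new IWorker[] { new EmailWorker(), new ReportWorker() };
        foreach (var worker in workers)
        {
            new Thread(worker.RunLoop) { IsBackground = true }.Start();
        }

        // Keep Run() from returning; Azure recycles the instance otherwise.
        Thread.Sleep(Timeout.Infinite);
    }
}
```

Note the trade-off discussed above: everything in this role scales together, so move a service out to its own Worker Role once it needs independent elasticity or heavy resources.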
UPDATE with regards to question update:
So, the Web Application is clear - it goes to one Web Role. Now for the Windows Services. There are two main considerations that you have to answer in order to find whether to put them into a single or more worker roles:
Does any of your Windows Services require excessive resources (i.e. a lot of computing power or a lot of RAM)?
Does any of your Windows Services require independent scale?
If the answer to either question is "yes" for a given Windows Service, put that service in its own Worker Role. Put all the Windows Services for which the answer to both questions is "no" into a single shared Worker Role. That means you will scale all of them or none of them (by manipulating the number of instances).
As for the Cloud Service (or Hosted Service), it is up to you whether to use a single cloud service to deploy all the roles (Web and Worker) or to use one hosted service for the Web Role and another for the Worker Roles. There is absolutely no difference from a billing perspective. You will still have your Web Role and Worker Role(s), and you will be charged based on instance count and data traffic. And you can independently scale any role (change the number of instances for a particular role) regardless of how it is deployed (within the same hosted service or in another one).
In the end, I suggest that you have a single solution per Hosted Service (Cloud Project). So if you decide to put the Web Role and Worker Roles into a single Hosted Service, you will have a single solution; if you have two Hosted Services (Cloud Projects), you will have two solutions.
Hope this helps.
You are correct! All projects go under one hosted service if you create only one cloud project for all your web role and worker role projects.
You can still divide your projects into several solutions, but then you have to create that many cloud projects and hosted services on the Azure platform.
You can do both.
You can keep your 5 separate solutions as they are. Then, create a new solution that contains all 25 projects.
Which solution you choose to contain your Cloud (ccproj) project(s) will depend on how you want to distribute your application.
Each CCPROJ corresponds to one hosted service. So you could put all of your webs and workers into a single hosted service. Or you could have each web role as a different hosted service, and all of your worker roles together on another hosted service. Or you could do a combination of these. A definitive answer would require more information about your application, but note that in VS a project can belong to more than one solution.

Best practices upgrading web app on Amazon EC2

Let's say I've already deployed a web app on EC2, maybe through FTP or Remote Desktop. From now on, what would be the best way to update to a new version of my web app?
My main concern would be when running several instances of that web app behind the load balancer: is there a way to update all instances at once so that there are never two instances running with different versions of the web app?
Thanks.
Yes. Remove each instance from the load balancer (using the API or the AWS Management Console) and update its software, until only one instance is left. Upgrade that one in place without removing it, then re-add all the other instances.
This way there will be no time when the load balancer sends your traffic to two different versions of the software.
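With a classic Elastic Load Balancer, the per-instance steps above can be sketched with the AWS CLI (the load balancer name and instance ID below are hypothetical placeholders):

```shell
# Take one instance out of rotation before touching it.
aws elb deregister-instances-from-load-balancer \
    --load-balancer-name my-lb --instances i-0123456789abcdef0

# ... deploy the new app version to i-0123456789abcdef0 ...

# Put the updated instance back into rotation.
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-lb --instances i-0123456789abcdef0
```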
