How to run a Django REST project as a Windows service - django-rest-framework

I want to run a Django REST application as a Windows service. To keep it simple: I want to know a way to deploy my Django REST application on a Windows server and run it in the background. Can anyone please let me know if they have come across this situation?

A Django REST application is just an instance of Django, so you need to get Django running on your Windows server first (installing all dependencies and so on). Then you can try Apache.

Related

Managing multiple apps/ecosystem.config.js files with pm2

I am building a project which will live on a single server containing multiple services running side by side. I am using Ansible to provision the server and automate setting everything up.
Services running:
Headless CMS
Database
Other Node.js APIs, etc.
If I need to scale this project up in the future, I would want to separate the above services out onto their own servers, which has led me to create separate Ansible roles for each of them.
My Question:
I am having real difficulty getting my two Node.js apps running side by side with pm2.
I know that I can have a single ecosystem.config.js file containing multiple apps, which would fit my current architecture (everything hosted on a single server). However, this would be a pain later down the road if I were to move one of my Ansible roles to its own server.
Is there a way to deploy my Node.js apps to production under pm2 management, but in a way where each has its own configuration file and systemd service that I can define in Ansible?
If I have a separate ecosystem.config.js file for each Node.js app, can pm2 manage these with the default systemd service it offers when running:
pm2 startup
Or should I just write my own separate systemd services, which I could then manually install in each Ansible role through templates?
I'm really lost here and have spent so much time trying to work out the best approach to take so any help would be great!!
After doing some more research on this matter, I came across a super helpful thread on pm2's GitHub. Basically, it seems that for automation, plain systemd services are the way to go rather than bothering with pm2's startup command (which does create a systemd service, but one that is more complex to manage when using automation software such as Ansible).
I strongly recommend you read it if you stumble across this question!
https://github.com/Unitech/pm2/issues/2914
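For reference, a per-app ecosystem.config.js can stay very small. The sketch below is only an illustration of the layout (it is not taken from the thread); the app name, script path, and env values are placeholders you would swap per role:

// ecosystem.config.js for one app (e.g. the headless CMS role) - illustrative values only
module.exports = {
  apps: [
    {
      name: "cms",               // process name shown by `pm2 ls`
      script: "./server.js",     // entry point of this app
      instances: 1,              // or "max" for cluster mode
      autorestart: true,         // restart the process if it crashes
      env: {
        NODE_ENV: "production",
        PORT: 3000
      }
    }
  ]
};

With one such file per app, each Ansible role can ship its own config plus its own systemd unit, and that unit can run pm2 in the foreground against the file (for example with pm2-runtime) so that systemd, rather than pm2 startup, does the supervising.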

Elasticsearch on Cloud Foundry

I am trying to deploy my ASP.NET Core 2.2 application using buildpacks on SCP Cloud Foundry. My application has a dependency on Elasticsearch. How can I assemble my buildpack file so that I can install my .NET Core 2.2 app along with Elasticsearch running as a service?
Thanks,
I don't think you would want to run Elasticsearch as an app on Cloud Foundry, because it would be difficult to persist any data. The file system your app gets on CF is ephemeral and is scoped to the lifetime of your app instance, so if your app restarts, anything you've written to disk is gone. If that doesn't matter to you, I suppose you could proceed.
That said, you wouldn't want to run it as part of the same app. You'd want to push them separately as two different apps. That way you can scale them separately, if one crashes you don't lose both, and you have dedicated memory limits for each app (i.e. one of the two cannot consume more than its share of memory and crash the app).
Having said all that, my suggestion would be to run cf marketplace and see if there is an Elasticsearch service in your marketplace. That would be the easiest way forward. You could simply run cf create-service to create your Elasticsearch service instance.
If there isn't one in your marketplace, then you can look at getting one through AWS/GCP/Azure or some other provider. Then you can create a service instance in CF with cf cups (it's called a user-provided service). You can bind this user-provided service to your app just like a service created through the marketplace.
If all else fails and you can't find a service from a provider you trust, you could always run your own. Then use a user-provided service to pass in creds.

Update client code in Electron

So I have multiple clients using an app built in Electron. The entire application is actually a number of Electron windows that talk to each other. When I have an update for the client-side code (HTML/JS/CSS), I have to have them shut down and run a utility that downloads the update from our internal server. I would like to know if there is a way I can either push new code to the clients (maybe through a socket) to overwrite the old code, or even poll our 'code server' for updates and then have the app automatically update/overwrite the existing code.
Is this possible? Is there functionality built in to electron that allows this?
And, if possible, how can it be accomplished? i.e. is there a library I can look at that will help me? (I found a FileSaver.js library, but it's not exactly what I need.) Thanks in advance.
You can have your Electron app load all of its code from a server on every startup and cache it locally. You can do this by simply hosting your Electron code on a web server and pointing Electron at that URL. You could make the app work offline by using a Service Worker.
This isn't a great idea, though, as code loaded from the internet will have access to all the Node APIs. You will have essentially made a DIY botnet, and securing it from abuse can be tricky.
You should read Security, Native Capabilities, and Your Responsibility in the Electron docs and be sure you understand the implications.
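As a rough sketch of that approach (not a full recipe; the URL and window options here are only placeholders), the main process would do little more than point a window at your server, keeping Node integration away from the remote content as the security guide advises:

const { app, BrowserWindow } = require('electron');

function createWindow() {
  const win = new BrowserWindow({
    width: 1024,
    height: 768,
    webPreferences: {
      nodeIntegration: false,  // remote code must not get direct Node API access
      contextIsolation: true   // keep the renderer isolated from any preload internals
    }
  });

  // Load the HTML/JS/CSS from your own server; a Service Worker on that page
  // can cache the assets so the app still opens offline.
  win.loadURL('https://apps.example.internal/my-app/');
}

app.whenReady().then(createWindow);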
You can use Electron's built-in autoUpdater: https://electronjs.org/docs/api/auto-updater
You need a server the autoUpdater can talk to and download the updates from; the updates are installed after they are downloaded.
You can host a server by yourself or use a service like https://www.update.rocks/
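A minimal sketch of wiring this up in the main process might look like the following. The feed URL and dialog text are placeholders, the exact feed format depends on the update server you choose, and note that the built-in autoUpdater only works in a packaged app on macOS and Windows:

const { app, autoUpdater, dialog } = require('electron');

app.whenReady().then(() => {
  // Point the updater at your update server (or a hosted feed such as update.rocks).
  autoUpdater.setFeedURL({ url: 'https://updates.example.internal/my-app/latest' });

  // Ask the server whether a newer version exists; if so it is downloaded in the background.
  autoUpdater.checkForUpdates();

  // Once an update has been downloaded, offer to restart into the new version.
  autoUpdater.on('update-downloaded', () => {
    const choice = dialog.showMessageBoxSync({
      type: 'question',
      buttons: ['Restart now', 'Later'],
      message: 'An update has been downloaded. Restart to apply it?'
    });
    if (choice === 0) {
      autoUpdater.quitAndInstall();
    }
  });

  autoUpdater.on('error', (err) => {
    console.error('Auto update failed:', err);
  });
});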
What you need is electron-updater, and you can use electron-builder for that.

OWIN Authentication Failing - Web API

I have some Web API applications that use OWIN for authentication. Currently they are hooked up to Google and Facebook. I have them installed in multiple environments (local, dev, test, etc.). Recently ALL of my applications in my development environment started failing. When trying to authenticate I would get back an "access_denied" response. The URL would look like this:
https://{mydevserver}/{mywebapiapp}/#error=access_denied
The same code base works locally as well as in my test environment.
I tried using the same project (just adding redirect URIs and origins) as well as creating a new project.
I also updated my test environment to use the dev project (id and secret).
Nothing seems to have changed on the server recently, but the problem appears to be environment-specific (because multiple applications are affected, as well as multiple providers).
Are there any logging techniques I can use to drill down to a more detailed error message? Any tips or hints for what to try next?
The fix was a bit of an odd one. I had to log into my server, open up a browser and connect to a web page (any page). After doing so it started working again.

How do you run utility services on Heroku?

Heroku is fantastic for prototyping ideas and running simple web services; I often use it to run Python web services like Flask and Django and try out ideas. However, I've always struggled to understand how you can use the infrastructure to run those amazingly powerful support or utility services every startup needs in its stack. Four examples of services I can't live without and would recommend to any startup:
Jenkins
Statsd
Graphite
Graylog
How would you run these on Heroku? Would it be best just to get dedicated boxes (Rackspace, etc.) with these support services installed?
Has anyone run utility daemons (services) on Heroku?
There are two basic options. The first is to find or create a Heroku addon to accomplish the task. For example, there are many hosted logging solutions you can use instead of Graylog; Rails on Fire or Travis can be used instead of Jenkins. If an appropriate addon doesn't exist, you can effectively make your own by just running the service on an AWS EC2 instance.
The other alternative is to push the service into being a 12factor application so that it can run on Heroku as well. For example, you could stub out whisper's filesystem calls so that they store in a backing service instead. This is often pretty painful and brittle, though, unless you can get your changes accepted by the upstream maintainers.
You could also use another free service in conjunction with it. OpenShift has a lot of Java-related build services and tools that can be added.
I am using a mix of Heroku, OpenShift, MongoLab, and my own web hosting. Throw in Dropbox and Box for some space...
