Microservices system design - Node.js + React + MongoDB on Azure

We are looking to build a system of microservices (around 8 microservices built on Node.js + React for now). Production is already running on Azure Kubernetes Service, but we are looking to build the QA environment without Kubernetes.
Please guide us on the infrastructure requirements on Azure without K8s/AKS, using just VMs and open-source software.
Thanks,
Arul
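One common Kubernetes-free setup for a QA environment like this is a single Azure VM running Docker and Docker Compose. A minimal docker-compose.yml sketch, assuming the services are already containerized (the registry, service names, ports, and database URLs below are hypothetical):

```yaml
version: "3.8"
services:
  api-gateway:
    image: myregistry.azurecr.io/api-gateway:qa   # hypothetical image
    ports:
      - "80:3000"                                 # expose the gateway on the VM
    environment:
      - MONGO_URL=mongodb://mongo:27017/app
    depends_on:
      - mongo
  users-service:
    image: myregistry.azurecr.io/users-service:qa # one entry per microservice
    environment:
      - MONGO_URL=mongodb://mongo:27017/users
    depends_on:
      - mongo
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db                       # persist data across restarts
volumes:
  mongo-data:
```

The remaining services would follow the same pattern; Compose gives each one a DNS name on a shared network, which loosely mirrors what Kubernetes Services do in production.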

Related

How to easily publish a multi-container ASP.NET Core web app and web API to a remote Kubernetes cluster

So I recently got into Docker and Kubernetes, and I have a Kubernetes cluster set up on a remote VM (Linux, kubeadm). I'm wondering if there is a production-suitable solution that I can easily use to deploy my multi-container ASP.NET Core web application. I have been trying to solve this for the past week and found nothing that suits my needs. I have been trying to use Bridge to Kubernetes, but I can only get that to work locally on my Windows machine and not remotely against my Linux VM. This is the layout of my application.
Ask me if you need any additional information as I'm still new to this stuff.
Thanks for your help.
I found that Jenkins is just what I needed!
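For anyone landing here: a minimal declarative Jenkinsfile sketch of that kind of pipeline, assuming a Docker registry and a kubeconfig credential are already configured in Jenkins (all names below are hypothetical):

```groovy
pipeline {
  agent any
  stages {
    stage('Build & Push') {
      steps {
        // build the multi-container app's image and push it to the registry
        sh 'docker build -t myregistry/webapp:${BUILD_NUMBER} .'
        sh 'docker push myregistry/webapp:${BUILD_NUMBER}'
      }
    }
    stage('Deploy') {
      steps {
        // point kubectl at the remote kubeadm cluster via a stored kubeconfig
        withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
          sh 'kubectl set image deployment/webapp webapp=myregistry/webapp:${BUILD_NUMBER}'
        }
      }
    }
  }
}
```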

Run several microservices docker image together on local dev with Minikube

I have around 20 microservices that I need to run and check in my local development environment. The microservices are Spring Boot services with Maven builds. When I have to run them on my AWS server, can I run all these containers individually? They may have a shared database, so will that be an issue I might face? Or is it possible to run all these services together in one single Docker image?
Also, I have to configure them with Kubernetes, so I have set up Minikube in my local dev. It would be helpful to know if there are any considerations to take into account when running around 20 services on Minikube or even a full Kubernetes environment.
PS: I know this is a basic question, but I don't have much experience with DevOps.
Ideally you should have a different Docker image for each of the microservices and create a Kubernetes Deployment for each of them; this decouples scaling of the individual microservices from each other. Communication between microservices should go via a Kubernetes Service, which keeps communication stable because Service IPs and FQDNs don't change even as pods are created, deleted, and scaled up and down. See the sketch below.
Just be cautious about how much memory and CPU the microservices will need, and whether the machine running Minikube has that many resources. If the available memory and CPU of a Kubernetes node are not enough to schedule a pod, the pod will be stuck in the Pending state.
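A minimal Deployment-plus-Service sketch for one such microservice (the image and names are placeholders; the resource requests are illustrative and matter when fitting ~20 pods onto a single Minikube node):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: myrepo/orders-service:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:                        # keep requests modest so all pods schedule
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: orders-service   # other services reach this one at http://orders-service
spec:
  selector:
    app: orders-service
  ports:
    - port: 80
      targetPort: 8080
```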
Since you have that many microservices, I suggest you create a Kubernetes cluster on AWS of 3-4 VMs (more info here). Then try to deploy all your microservices on it. For that you need to build a container image for each service individually and create a Kubernetes Deployment for each service.
"can I run all these containers individually ... they may have a shared database, so will that be an issue I might face"
Since you have a shared database, I suggest you run your database server on a separate host and connect to it remotely from your services. That way the database can be shared between your microservices.
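If the database lives on its own host outside the cluster, an ExternalName Service gives every microservice a stable in-cluster DNS name for it; a sketch (the hostname is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shared-db          # services connect to "shared-db" instead of the raw host
spec:
  type: ExternalName
  externalName: db.example.internal   # hypothetical external database host
```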

How do developers typically use Docker with a Java Maven project and AWS EC2?

I have a single Java application. We developed the application in Eclipse; it is a Maven project. We already have a system for launching our application to AWS EC2. It works but is rudimentary, and we would like to learn about the more common and modern approaches other teams use to launch their Java Maven apps to EC2. We have heard of Docker, and I researched the tool yesterday. I understand the basics of building an image, tagging it, and pushing it to either Docker Hub or Amazon's ECR registry. I have also read through a few tutorials describing how to pull a Docker image into an EC2 instance. However, I don't know if this is what we are trying to do, given that I am a bit confused about the role Docker can play in our situation to help make our DevOps more robust and efficient.
Currently, we are building our Maven app in Eclipse. When the build completes, we run a second Java file that uses the AWS SDK for Java to:
launch an EC2 instance
copy the .jar artifact from the build into this instance
add the instance to a load balancer and
test the app
My understanding of how we can use Docker is as follows. We would Dockerize our application and push it to an online repository according to the steps in this video.
Then we would create an EC2 instance and pull the Docker image into this new instance according to the steps in this tutorial.
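For reference, that pipeline boils down to a handful of commands (image, repository, and port names below are hypothetical):

```sh
# On the build machine:
docker build -t myapp:1.0 .                 # build the image from a Dockerfile
docker tag myapp:1.0 myrepo/myapp:1.0       # tag it for the remote registry
docker push myrepo/myapp:1.0                # push to Docker Hub or Amazon ECR

# On the EC2 instance:
docker pull myrepo/myapp:1.0
docker run -d -p 80:8080 myrepo/myapp:1.0   # run it, mapping port 80 to the app
```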
If this is the typical flow, then what is the purpose of using Docker here? What is the added benefit, when we are currently ...
creating the instance,
deploying the app directly to the instance and also
testing the running app
all using a simple single Java file and functions from the AWS SDK for Java?
@GNG what are your objectives for containerization?
Amazon ECS is the best method if you want to operate only within the AWS environment.
Docker is effective in hybrid environments, i.e., across physical servers and VMs.
The Docker image is a portable, complete executable of your application: it delivers your jar, but it can also include property files, static resources, etc. You package everything you need and deploy it to AWS, but you could also decide to deploy the same image on other platforms (or locally).
Another benefit is that the image contains the whole runtime (OS, JDK), so you don't rely on what AWS provides, which also ensures isolation from the underlying infrastructure.
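To make that concrete, a minimal multi-stage Dockerfile for a Maven-built jar might look like this (the base images and paths are assumptions, not taken from the question):

```dockerfile
# Build stage: compile the jar inside the image so builds are reproducible
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline    # cache dependencies in their own layer
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: ship only the JRE and the jar, nothing from the build tooling
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

The resulting image runs identically on EC2, on a laptop, or anywhere else Docker runs, which is exactly the portability and runtime-isolation benefit described above.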

Can files be created in a Pivotal Cloud Foundry environment?

I have deployed an application to Pivotal Cloud Foundry using Spring Integration. It should read a file and create more files in another folder based on custom logic, and after that it has to FTP those output files to a remote directory. The scenario works perfectly fine on my local machine, but in the cloud it doesn't behave as expected. Any insights are welcome! Thanks!!
My doubts are: since it has to create files in the cloud, is that possible? Are any configurations needed?
You have to use Volume Services:
This topic describes how Pivotal Cloud Foundry (PCF) app developers can read and write to a mounted file system from their apps. In PCF, a volume service provides a volume so your app can read or write to a reliable, non-ephemeral file system.
Before you can use a volume service with your app, your Cloud Foundry administrator must add a volume service to your deployment. See the Enabling NFS Volume Services topic for more information.
Here: https://docs.pivotal.io/pivotalcf/1-10/devguide/services/using-vol-services.html
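A minimal sketch of what that looks like with the cf CLI (the service, share, and app names are hypothetical, and NFS volume services must already be enabled by your administrator):

```sh
cf create-service nfs Existing my-nfs -c '{"share": "nfs.example.com/export/files"}'
cf bind-service my-app my-nfs -c '{"mount": "/var/volume"}'
cf restage my-app    # the app can now read and write under /var/volume
```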
You can stand up an S3-compatible object store like MinIO.
Then create an s3-service CUPS (user-provided service) and use it in your app. Here's an article that can help with it: https://github.com/cloudfoundry-samples/cf-s3-demo.
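A sketch of registering MinIO as a user-provided service (the endpoint, keys, and names are placeholders; after binding, the app reads these credentials from VCAP_SERVICES):

```sh
cf create-user-provided-service s3-service \
  -p '{"endpoint": "https://minio.example.com", "access_key_id": "KEY", "secret_access_key": "SECRET", "bucket": "output-files"}'
cf bind-service my-app s3-service
cf restage my-app
```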

Deploying Orchard CMS to AWS Elastic Beanstalk

I asked this on the Orchard community forums and got crickets...
How does one deploy Orchard CMS into a proper cloud style deployment (ephemeral/stateless servers) on AWS using Elastic Beanstalk?
There is a standard and an Azure-specific solution file for Visual Studio. I'm not a developer, so I'm somewhat lost in Visual Studio 2015. I would have thought I could deploy using the normal (not Azure-specific) solution and utilize S3 for storage and an RDS instance for the database. However, I just keep running into walls. I would like to be able to utilize all the VPC infrastructure and investment we have in AWS for this if possible.
