How to deploy an AWS Kinesis Data Analytics app without downtime

We currently have an AWS Kinesis Data Analytics app that requires a .jar file to run.
We have automated the deployment for our .jar file that resides in an S3 bucket.
Our issue is that whenever the .jar file is updated, we are forced to restart the Kinesis app to pick up the new build, which causes downtime.
Does anyone have a workaround or another way of deploying the app without causing downtime?

Flink itself does not support zero-downtime deployments. A few users have built their own solutions, but doing so requires application-specific deployment automation and tooling. See
Drivetribe's Modern Take On CQRS With Apache Flink
Zero-downtime upgrades of Flink applications
for examples.
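One way to shrink (though not eliminate) the restart window on the managed service itself: instead of stopping the Kinesis app and starting it again by hand, call the UpdateApplication API to point the application at the new jar. Assuming snapshots are enabled for the application, the service snapshots the running Flink job and restores the new code from that state. A minimal sketch with the AWS SDK for Java v2; the application name, bucket ARN, and object key are hypothetical:

```java
import software.amazon.awssdk.services.kinesisanalyticsv2.KinesisAnalyticsV2Client;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.*;

public class KdaJarUpdate {
    public static void main(String[] args) {
        KinesisAnalyticsV2Client kda = KinesisAnalyticsV2Client.create();

        // UpdateApplication requires the current version id, so look it up first.
        DescribeApplicationResponse desc = kda.describeApplication(
                DescribeApplicationRequest.builder()
                        .applicationName("my-flink-app") // hypothetical name
                        .build());
        Long currentVersion = desc.applicationDetail().applicationVersionId();

        // Point the running application at the new jar in S3.
        kda.updateApplication(UpdateApplicationRequest.builder()
                .applicationName("my-flink-app")
                .currentApplicationVersionId(currentVersion)
                .applicationConfigurationUpdate(ApplicationConfigurationUpdate.builder()
                        .applicationCodeConfigurationUpdate(ApplicationCodeConfigurationUpdate.builder()
                                .codeContentUpdate(CodeContentUpdate.builder()
                                        .s3ContentLocationUpdate(S3ContentLocationUpdate.builder()
                                                .bucketARNUpdate("arn:aws:s3:::my-artifacts") // hypothetical bucket
                                                .fileKeyUpdate("builds/app-2.0.jar")          // hypothetical key
                                                .build())
                                        .build())
                                .build())
                        .build())
                .build());
    }
}
```

Processing still pauses briefly while the job is snapshotted and restored, so this is a mitigation rather than a true zero-downtime deploy.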

Related

How do developers typically use Docker with a Java Maven project and AWS EC2?

I have a single Java application. We developed the application in Eclipse. It is a Maven project. We already have a system for launching our application to AWS EC2. It works but is rudimentary, and we would like to learn about the more common and modern approaches other teams use to launch their Java Maven apps to EC2. We have heard of Docker and I researched the tool yesterday. I understand the basics of building an image, tagging it and pushing it to either Docker Hub or Amazon's ECR registry. I have also read through a few tutorials describing how to pull a Docker image into an EC2 instance. However, I don't know if this is what we are trying to do, given that I am a bit confused about the role Docker can play in our situation to help make our DevOps more robust and efficient.
Currently, we are building our Maven app in Eclipse. When the build completes, we run a second Java file that uses the AWS SDK for Java to
launch an EC2 instance
copy the .jar artifact from the build into this instance
add the instance to a load balancer and
test the app
My understanding of how we can use Docker is as follows. We would Dockerize our application and push it to an online repository according to the steps in this video.
Then we would create an EC2 instance and pull the Docker image into this new instance according to the steps in this tutorial.
If this is the typical flow, then what is the purpose of using Docker here? What is the added benefit, when we are currently ...
creating the instance,
deploying the app directly to the instance and also
testing the running app
all using a simple single Java file and functions from the AWS SDK for Java?
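For context, a rough sketch of the pre-Docker flow the question describes, using the AWS SDK for Java v2 (the AMI id and target group ARN are placeholders). Docker would essentially replace the "copy the .jar" step with pulling a versioned image that already contains the runtime:

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.InstanceType;
import software.amazon.awssdk.services.ec2.model.RunInstancesRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.RegisterTargetsRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.TargetDescription;

public class LaunchAndRegister {
    public static void main(String[] args) {
        Ec2Client ec2 = Ec2Client.create();

        // 1. Launch an instance from a base image (hypothetical AMI id).
        String instanceId = ec2.runInstances(RunInstancesRequest.builder()
                        .imageId("ami-0123456789abcdef0")
                        .instanceType(InstanceType.T3_MICRO)
                        .minCount(1)
                        .maxCount(1)
                        .build())
                .instances().get(0).instanceId();

        // 2. (Copying the .jar onto the instance would happen here, e.g. over
        //    SSH/SCP or by having the instance pull it from S3 on boot.)

        // 3. Register the instance with the load balancer's target group
        //    (hypothetical target group ARN).
        ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create();
        elb.registerTargets(RegisterTargetsRequest.builder()
                .targetGroupArn("arn:aws:elasticloadbalancing:...:targetgroup/app/abc123")
                .targets(TargetDescription.builder().id(instanceId).build())
                .build());
    }
}
```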
@GNG what are your objectives for containerization?
Amazon ECS is the best method if you want to operate only within the AWS environment.
Docker is effective in hybrid environments, i.e., on physical servers and VMs.
The Docker image is a portable, complete executable of your application: it delivers your jar, but it can also include property files, static resources, etc. You package everything you need and deploy to AWS, but you could also decide to deploy the same image on other platforms (or locally).
Another benefit is that the image contains the whole runtime (OS, JDK), so you don't rely on what AWS provides, which also ensures isolation from the underlying infrastructure.

How to configure Application Logging Service for SCP application

I have created the hello world application from the SAP Cloud SDK archetypes and pushed this to the cloud foundry environment, binding it to an application logging service instance. My understanding is that this should already provide me with the ability to analyze all logs in the Kibana dashboard of the cloud platform and previously it also worked this way.
However, this time the Kibana dashboard remains empty, so I am wondering if I missed a step or configuration. Looking at the documentation of the service and the respective tutorial blog, I was not able to identify any additional required steps. In the Logs view on the SCP cockpit I can definitely see the entries, but they are not replicated to the ELK stack in the background.
The problem was not SDK-related; it seems to have been an incident on SCP. It now works correctly without any changes.
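For anyone hitting an empty Kibana dashboard where it is not a platform incident, the app-side half is worth double-checking: Cloud Foundry only forwards what the app writes to stdout/stderr. A minimal sketch, assuming the SLF4J/Logback setup that the SAP Cloud SDK archetype ships with (class and method names here are hypothetical):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloWorldService {
    private static final Logger logger = LoggerFactory.getLogger(HelloWorldService.class);

    public String hello(String name) {
        // Logs written through SLF4J end up on stdout, which Cloud Foundry's
        // Loggregator forwards to the bound application-logging service
        // (and from there to the Kibana dashboard).
        logger.info("Saying hello to {}", name);
        return "Hello, " + name + "!";
    }
}
```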

How to migrate GCP instances to AWS?

I'm trying to migrate GCP instances to AWS. I have been searching for a solution but didn't find any references. Could you please help me with this?
The OS images are customized differently in each cloud, so you can only migrate files and applications, using Google Cloud Storage or AWS S3 as the bridge for migrating the files (a sketch follows below).
As for the applications, the recommendation is to rebuild them from scratch, but it highly depends on what exactly you want to migrate.
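A minimal sketch of the file-bridge idea, assuming the google-cloud-storage client and the AWS SDK for Java v2, with hypothetical bucket and object names. For large files you would stream rather than buffer the whole object in memory:

```java
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class GcsToS3Copy {
    public static void main(String[] args) {
        // Read the object out of Google Cloud Storage...
        Storage gcs = StorageOptions.getDefaultInstance().getService();
        Blob blob = gcs.get(BlobId.of("my-gcp-bucket", "data/export.tar.gz")); // hypothetical
        byte[] content = blob.getContent();

        // ...and write it into S3.
        S3Client s3 = S3Client.create();
        s3.putObject(PutObjectRequest.builder()
                        .bucket("my-aws-bucket")       // hypothetical
                        .key("data/export.tar.gz")
                        .build(),
                RequestBody.fromBytes(content));
    }
}
```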

Can files be created in a Pivotal Cloud Foundry environment?

I have deployed an application to Pivotal Cloud Foundry using Spring Integration, where it should read a file and create more files in another folder based on custom logic, and after that it has to FTP those output files to a remote directory. The scenario works perfectly fine on my local machine, but in the cloud it doesn't behave as expected. Any insights are welcome! Thanks!!
My doubt is: since it has to create files in the cloud, is that possible? Are any configurations needed?
You have to use Volume Services:
This topic describes how Pivotal Cloud Foundry (PCF) app developers can read and write to a mounted file system from their apps. In PCF, a volume service provides a volume so your app can read or write to a reliable, non-ephemeral file system.
Before you can use a volume service with your app, your Cloud Foundry administrator must add a volume service to your deployment. See the Enabling NFS Volume Services topic for more information.
Here: https://docs.pivotal.io/pivotalcf/1-10/devguide/services/using-vol-services.html
You can stand up an S3-compatible object store like Minio.
Then create an s3-service CUPS (user-provided service) and use it in your app (a sketch of the client setup follows below). Here's an article that can help with it: https://github.com/cloudfoundry-samples/cf-s3-demo.
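A minimal sketch of the client-side setup for such an s3-service CUPS, using the AWS SDK for Java v2 against a Minio endpoint. In a real app the endpoint and credentials would be read from VCAP_SERVICES; all values here are hypothetical:

```java
import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

public class MinioClientFactory {
    // In a real app these arguments would come from the s3-service entry in
    // VCAP_SERVICES rather than being hard-coded.
    public static S3Client fromCups(String endpoint, String accessKey, String secretKey) {
        return S3Client.builder()
                .endpointOverride(URI.create(endpoint)) // e.g. https://minio.example.com
                .region(Region.US_EAST_1)               // Minio ignores the region, but the SDK requires one
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKey, secretKey)))
                // Minio is usually addressed path-style rather than
                // virtual-hosted-style (bucket name in the hostname).
                .serviceConfiguration(S3Configuration.builder()
                        .pathStyleAccessEnabled(true)
                        .build())
                .build();
    }
}
```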

Amazon EC2 load balanced - how to deploy web app?

We're looking to move to the Amazon cloud using EC2 and RDS.
I'm looking at load balancing, which I would like to do with two servers, each in a different availability zone to protect against downtime.
My question is how to deploy web applications and updates to them? I assume there is a better way than individually updating the files on each EC2 server?
In systems past, I have used the vcs Puppet module to ensure that the appropriate source code is installed on my system, in addition to using Puppet to build the configuration files for the Apache/Nginx server that I'm using. Another possibility is to push your application in a deployable state (if you're not using a scripting language) to Amazon S3, and have your run-time scripts pull the latest build from your S3 bucket (a sketch follows below).
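A minimal sketch of the pull-from-S3 approach, using the AWS SDK for Java v2 with hypothetical bucket, key, and destination path; each instance would run something like this at boot or on a deploy signal, so both servers behind the load balancer end up with an identical build:

```java
import java.nio.file.Paths;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class PullLatestBuild {
    public static void main(String[] args) {
        S3Client s3 = S3Client.create();
        // Download the latest artifact to a fresh local path (the SDK fails
        // if the destination file already exists, so clean up old builds first).
        s3.getObject(GetObjectRequest.builder()
                        .bucket("my-deploy-bucket")        // hypothetical bucket
                        .key("releases/webapp-latest.war") // hypothetical key
                        .build(),
                Paths.get("/opt/app/webapp.war"));
    }
}
```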
