I have deployed an application to Pivotal Cloud Foundry using Spring Integration. It should read a file, create more files in another folder based on custom logic, and then FTP those output files to a remote directory. The scenario works perfectly fine on my local machine, but in the cloud it doesn't behave as expected. Any insights are welcome, thanks!
My doubt is: since the app has to create files in the cloud, is that possible? Are any configurations needed?
You have to use Volume Services:
This topic describes how Pivotal Cloud Foundry (PCF) app developers can read and write to a mounted file system from their apps. In PCF, a volume service provides a volume so your app can read or write to a reliable, non-ephemeral file system.
Before you can use a volume service with your app, your Cloud Foundry administrator must add a volume service to your deployment. See the Enabling NFS Volume Services topic for more information.
Here: https://docs.pivotal.io/pivotalcf/1-10/devguide/services/using-vol-services.html
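Once an NFS volume service has been created and bound, the mount path appears in the VCAP_SERVICES environment variable under the binding's volume_mounts/container_dir (see the docs linked above). Here is a minimal sketch of locating that directory from a Java app, assuming Jackson is on the classpath and the binding shows up under the "nfs" label (adjust to your actual service name):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class VolumeMountLookup {
        public static void main(String[] args) throws Exception {
            // Cloud Foundry injects VCAP_SERVICES for every bound service instance.
            JsonNode vcap = new ObjectMapper().readTree(System.getenv("VCAP_SERVICES"));

            // "nfs" is assumed to be the service label of the volume service binding.
            JsonNode mount = vcap.path("nfs").get(0).path("volume_mounts").get(0);
            Path dir = Paths.get(mount.path("container_dir").asText());

            // Files written under this directory live on the mounted volume,
            // not on the container's ephemeral disk.
            Files.write(dir.resolve("output.txt"), "hello".getBytes());
        }
    }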
You can stand up an S3-compatible object store such as MinIO.
Then create an s3-service CUPS (user-provided service) and use it in your app. Here's an article that can help with it: https://github.com/cloudfoundry-samples/cf-s3-demo.
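If you go the object-storage route, the app talks to MinIO through the regular S3 API. A rough sketch with the AWS SDK for Java v2, where the endpoint, credentials, bucket, and key are placeholders that would normally be read from the s3-service CUPS binding in VCAP_SERVICES:

    import java.net.URI;
    import java.nio.file.Paths;

    import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
    import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.S3Configuration;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    public class MinioUpload {
        public static void main(String[] args) {
            // Placeholder endpoint and keys; in a real app these come from the CUPS binding.
            S3Client s3 = S3Client.builder()
                    .endpointOverride(URI.create("https://minio.example.com:9000"))
                    .region(Region.US_EAST_1) // MinIO ignores the region, but the SDK requires one
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsBasicCredentials.create("ACCESS_KEY", "SECRET_KEY")))
                    .serviceConfiguration(S3Configuration.builder()
                            .pathStyleAccessEnabled(true) // MinIO typically needs path-style URLs
                            .build())
                    .build();

            // Push one of the generated output files to the bucket.
            s3.putObject(PutObjectRequest.builder()
                            .bucket("output-files")
                            .key("report-001.csv")
                            .build(),
                    RequestBody.fromFile(Paths.get("/tmp/report-001.csv")));
        }
    }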
We currently have an AWS Kinesis Data Analytics app that requires a .jar file to run.
We have automated the deployment for our .jar file that resides in an S3 bucket.
Our issue is that whenever the .jar file is updated, we are forced to restart the Kinesis app to pick up the new build, which causes downtime.
Does anyone have a workaround or another way of deploying the app without causing downtime?
Flink itself does not support zero-downtime deployments. While a few users have built their own solutions for this, doing so requires implementing application-specific deployment automation and tooling. See
Drivetribe's Modern Take On CQRS With Apache Flink
Zero-downtime upgrades of Flink applications
for examples.
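If you do build such tooling yourself, the usual building block is a stop-with-savepoint followed by resubmitting the new jar from that savepoint. Below is a hypothetical sketch of the first step against a Flink JobManager's REST API using plain java.net.http; the address, job id, and savepoint directory are placeholders, and a managed Kinesis Data Analytics application may not expose this endpoint to you directly.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class StopWithSavepoint {
        public static void main(String[] args) throws Exception {
            String jobManager = "http://flink-jobmanager:8081";      // placeholder address
            String jobId = "a1b2c3d4e5f60718293a4b5c6d7e8f90";       // placeholder job id

            // POST /jobs/:jobid/stop takes a savepoint and then gracefully stops the job.
            String body = "{\"targetDirectory\": \"s3://my-bucket/savepoints\", \"drain\": false}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(jobManager + "/jobs/" + jobId + "/stop"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The response carries a trigger id to poll until the savepoint completes;
            // the new jar is then submitted with that savepoint as its starting state.
            System.out.println(response.body());
        }
    }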
I have a single Java application. We developed the application in Eclipse, and it is a Maven project. We already have a system for launching our application to AWS EC2. It works but is rudimentary, and we would like to learn about the more common and modern approaches other teams use to launch their Java Maven apps to EC2. We have heard of Docker, and I researched the tool yesterday. I understand the basics of building an image, tagging it, and pushing it to either Docker Hub or Amazon's ECR registry. I have also read through a few tutorials describing how to pull a Docker image into an EC2 instance. However, I don't know if this is what we are trying to do, given that I am a bit confused about the role Docker can play in making our DevOps more robust and efficient.
Currently, we are building our Maven app in Eclipse. When the build completes, we run a second Java file that uses the AWS SDK for Java to do the following (a rough sketch of this flow is shown right after the list):
launch an EC2 instance
copy the .jar artifact from the build into this instance
add the instance to a load balancer and
test the app
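(A rough sketch of that flow with the AWS SDK for Java v2; the AMI ID and target group ARN are placeholders, and the copy/test steps are only indicated as comments.)

    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.InstanceType;
    import software.amazon.awssdk.services.ec2.model.RunInstancesRequest;
    import software.amazon.awssdk.services.ec2.model.RunInstancesResponse;
    import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.RegisterTargetsRequest;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.TargetDescription;

    public class DeployToEc2 {
        public static void main(String[] args) {
            try (Ec2Client ec2 = Ec2Client.create();
                 ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {

                // 1. Launch an EC2 instance (placeholder AMI and instance type).
                RunInstancesResponse run = ec2.runInstances(RunInstancesRequest.builder()
                        .imageId("ami-0abcdef1234567890")
                        .instanceType(InstanceType.T3_MICRO)
                        .minCount(1)
                        .maxCount(1)
                        .build());
                String instanceId = run.instances().get(0).instanceId();

                // 2. Copy the .jar onto the instance (e.g. via scp or SSM) and start it -- omitted here.

                // 3. Add the instance to the load balancer's target group (placeholder ARN).
                elb.registerTargets(RegisterTargetsRequest.builder()
                        .targetGroupArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123")
                        .targets(TargetDescription.builder().id(instanceId).build())
                        .build());

                // 4. Test the running app, e.g. by polling its health endpoint -- omitted here.
            }
        }
    }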
My understanding of how we can use Docker is as follows. We would Dockerize our application and push it to an online repository according to the steps in this video.
Then we would create an EC2 instance and pull the Docker image into this new instance according to the steps in this tutorial.
If this is the typical flow, then what is the purpose of using Docker here? What is the added benefit, when we are currently ...
creating the instance,
deploying the app directly to the instance and also
testing the running app
all using a simple single Java file and functions from the AWS SDK for Java?
@GNG what are your objectives for containerization?
Amazon ECS is the best method if you want to operate only in the AWS environment.
Docker is effective in hybrid environments, i.e., on physical servers and VMs.
The Docker image is a portable and complete executable of your application: it delivers your jar, but it can also include property files, static resources, etc. You package everything you need and deploy to AWS, but you could also decide to deploy the same image on other platforms (or locally).
Another benefit is that the image contains the whole runtime (OS, JDK), so you don't rely on what AWS provides, which also ensures isolation from the underlying infrastructure.
I have created the hello world application from the SAP Cloud SDK archetypes and pushed it to the Cloud Foundry environment, binding it to an application logging service instance. My understanding is that this should already provide me with the ability to analyze all logs in the Kibana dashboard of the cloud platform, and previously it also worked this way.
However, this time the Kibana dashboard remains empty, so I am wondering if I missed a step or configuration. Looking at the documentation of the service and the respective tutorial blog, I was not able to identify any additional required steps. In the Logs view on the SCP cockpit I can definitely see the entries, but they are not replicated to the ELK stack in the background.
The problem was not SDK-related but seems to have been an incident on the SCP; it now works correctly without any changes.
We are trying to spin up a stateful MQ manager with the Azure File System mounted as persistent storage for data in an Azure Kubernetes cluster. Here is the link we followed. We exposed the service type as LoadBalancer, as shown in the command below.
helm install stable/ibm-mqadvanced-server-dev --version 3.0.1 --set service.type=LoadBalancer,security.initVolumeAsRoot=true,license=accept
By default, it takes the default storage class, which is Azure Disk. Here I want to use the Azure File System as the persistent storage, so how should I pass my Azure File System name? The other thing is, we are able to run the pod successfully without any restarts, but we are unable to access its web interface, so we don't know exactly where the issue arises while accessing the service.
The GitHub repo you've linked specifically mentions dataPVC.storageClassName under configuration. This is used to define the storage class; if you don't have a storage class for Azure Files (I think it doesn't exist by default), you'd need to create it and then reference it, so the chart would use that class.
How to set it up: here
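For illustration (keeping the examples in Java rather than YAML), a storage class for Azure Files could be created roughly like this with the fabric8 Kubernetes client; the class name "azurefile" is just an assumption, the provisioner/parameters shown are the classic in-tree azure-file ones, and the same fields map one-to-one onto a YAML manifest you would kubectl apply:

    import io.fabric8.kubernetes.api.model.storage.StorageClass;
    import io.fabric8.kubernetes.api.model.storage.StorageClassBuilder;
    import io.fabric8.kubernetes.client.DefaultKubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClient;

    public class CreateAzureFileStorageClass {
        public static void main(String[] args) {
            try (KubernetesClient client = new DefaultKubernetesClient()) {
                // Equivalent to a YAML manifest with kind: StorageClass,
                // provisioner: kubernetes.io/azure-file and parameters.skuName: Standard_LRS.
                StorageClass azureFile = new StorageClassBuilder()
                        .withNewMetadata().withName("azurefile").endMetadata() // assumed name
                        .withProvisioner("kubernetes.io/azure-file")
                        .addToParameters("skuName", "Standard_LRS")
                        .build();
                client.storage().storageClasses().createOrReplace(azureFile);
            }
        }
    }

Once that class exists, adding --set dataPVC.storageClassName=azurefile to the helm install command above should make the chart claim an Azure Files volume instead of the default Azure Disk.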
I have an application (A) deployed on Amazon AWS using Elastic Beanstalk. I also have another multi-threaded Java application (B), which creates some files on a periodic basis that need to be read/updated by application (A) running on Elastic Beanstalk.
If I run application (B) directly on EC2, then application (A) does not have access to those files.
What model should I use in this situation so that application (A) can access the files created by application (B)?
Upload the files created by B to S3; you can do this with the AWS API, or use S3 Fuse to mount the bucket in the filesystem. Then have A read them the same way, with either the API or S3 Fuse.
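A minimal sketch of the API route with the AWS SDK for Java v2 (bucket name and keys are placeholders): application B uploads the file it produced, and application A downloads it when needed.

    import java.nio.file.Paths;

    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    public class SharedFilesViaS3 {
        public static void main(String[] args) {
            try (S3Client s3 = S3Client.create()) {
                // Application B: push the periodically generated file to a shared bucket.
                s3.putObject(PutObjectRequest.builder()
                                .bucket("shared-app-files")       // placeholder bucket
                                .key("reports/latest.csv")
                                .build(),
                        RequestBody.fromFile(Paths.get("/data/latest.csv")));

                // Application A (on Elastic Beanstalk): pull the same object when it needs it.
                s3.getObject(GetObjectRequest.builder()
                                .bucket("shared-app-files")
                                .key("reports/latest.csv")
                                .build(),
                        Paths.get("/tmp/latest.csv"));
            }
        }
    }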