Could I use multiple package managers in Turborepo? - yarnpkg

Currently, I have 2 apps in my Turborepo.
Their package manager is yarn 1.
Now I am going to add an app for AWS Lambda, and I want to use Yarn Berry for the Lambda app only.
1. Is it good to manage AWS Lambda and web frontend code in one monorepo?
2. Could I separate the package manager by app?

Related

Migrate Spring Boot project from Amazon Linux 1 to Amazon Linux 2

I have a Spring Boot project that is running on an EC2 instance via Elastic Beanstalk. This project was created by AWS CodeStar and they also provided an out-of-the-box project template. You can see the template here: https://github.com/JanHorcicka/AWS-Codestar-Spring-Webapp-EBS-Template
The problem is that this project is automatically deployed to an EC2 instance running Amazon Linux 1, and the project template is built to work on AL1. Unfortunately, some tools I want to install (Certbot) require Amazon Linux 2. There are ways to switch from AL1 to AL2. For example here: Create Amazon Linux 2 instance via CodeStar
The problem is that after I switch to AL2, the provided project template doesn't work.
I know there are some differences in Elastic Beanstalk between the AL versions. For example, I read somewhere that AL2 no longer uses the .ebextensions folder, but I cannot find a full list of changes.
How do I have to modify the template so that it also works on an AL2 instance?

Consistently deploying a Cloud Function with Pub/Sub and Cloud Scheduler

I am trying to automate the deployment of three modules: a Cloud Function that is invoked via a Pub/Sub subscription from Cloud Scheduler. Currently I have the following script, which uses the gcloud command:
gcloud beta pubsub topics create $SCHEDULE_NAME || echo "Topic $SCHEDULE_NAME already created."
gcloud beta functions deploy $SCHEDULE_NAME \
  --region $CLOUD_REGION \
  --memory 128MB \
  --runtime nodejs10 \
  --entry-point $ENTRY_POINT \
  --trigger-topic $SCHEDULE_NAME \
  --vpc-connector cloud-function-connector
# gcloud scheduler jobs delete $JOB_NAME # does not work as it needs YES non-interactively
gcloud scheduler jobs create pubsub $SCHEDULE_NAME --message-body='RUN' --topic=$SCHEDULE_NAME --schedule='27 2 * * *' --time-zone='Europe/London' || true
This works; however, I am not sure whether it is the best way to do this. For instance, there is no way to simply update the job if it already exists. I was considering Terraform, but I am not sure it is worthwhile just for deploying these three small modules. I also discovered the Serverless framework, but it seems it can only deploy the Cloud Function, not the scheduler jobs and Pub/Sub topics.
I think your approach is straightforward and fine.
Does Terraform provide the job update capability? If so, you'll likely find that it simply deletes and then (re)creates the job. I think this delete-then-recreate approach to updating jobs is fine too, and it seems to provide more control; you can check whether the schedule is about to fire before or after updating it.
Google provides Deployment Manager as a Google-Cloud-specific deployment tool. In my experience, its primary benefit is that it runs server-side, but ultimately you're just automating the same APIs that you're using with gcloud.
If you want to learn a tool to manage your infrastructure as code, I'd recommend Terraform over Deployment Manager.
Update
The Scheduler API supports 'patching' jobs:
https://cloud.google.com/scheduler/docs/reference/rest/v1beta1/projects.locations.jobs/patch
And this mechanism is supported by gcloud:
gcloud alpha scheduler jobs update
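For example, a create-or-update variant of the scheduler step in your script might look like the sketch below. This is only an illustration: it assumes the update subcommand accepts the same flags as create pubsub, which you should verify against your gcloud version.
# Try to patch the existing job first; fall back to creating it if the update fails.
gcloud alpha scheduler jobs update pubsub $SCHEDULE_NAME \
  --message-body='RUN' \
  --schedule='27 2 * * *' \
  --time-zone='Europe/London' \
|| gcloud scheduler jobs create pubsub $SCHEDULE_NAME \
  --message-body='RUN' \
  --topic=$SCHEDULE_NAME \
  --schedule='27 2 * * *' \
  --time-zone='Europe/London'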

How do developers typically use Docker with a Java Maven project and AWS EC2?

I have a single Java application. We developed the application in Eclipse. It is a Maven project. We already have a system for launching our application to AWS EC2. It works but is rudimentary, and we would like to learn about the more common and modern approaches other teams use to launch their Java Maven apps to EC2. We have heard of Docker, and I researched the tool yesterday. I understand the basics of building an image, tagging it, and pushing it to either Docker Hub or Amazon's ECR registry. I have also read through a few tutorials describing how to pull a Docker image into an EC2 instance. However, I don't know if this is what we are trying to do, given that I am a bit confused about the role Docker can play in helping make our DevOps more robust and efficient.
Currently, we are building our Maven app in Eclipse. When the build completes, we run a second Java file that uses the AWS SDK for Java to:
launch an EC2 instance
copy the .jar artifact from the build into this instance
add the instance to a load balancer and
test the app
My understanding of how we can use Docker is as follows. We would Dockerize our application and push it to an online repository according to the steps in this video.
Then we would create an EC2 instance and pull the Docker image into this new instance according to the steps in this tutorial.
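Concretely, I imagine the commands would look roughly like the following (the image name, tag, and port below are just placeholders, not our real values):
# Build and tag the image locally, then push it to a registry (Docker Hub or Amazon ECR).
docker build -t mycompany/my-maven-app:1.0.0 .
docker push mycompany/my-maven-app:1.0.0
# On the new EC2 instance, pull and run the same image.
docker pull mycompany/my-maven-app:1.0.0
docker run -d -p 8080:8080 mycompany/my-maven-app:1.0.0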
If this is the typical flow, then what is the purpose of using Docker here? What is the added benefit, when we are currently ...
creating the instance,
deploying the app directly to the instance and also
testing the running app
all using a simple single Java file and functions from the AWS SDK for Java?
@GNG what are your objectives for containerization?
Amazon ECS is the best method if you want to operate only in an AWS environment.
Docker is effective in hybrid environments, i.e., on physical servers and VMs.
The Docker image is a portable and complete executable of your application: it delivers your jar, but it can also include property files, static resources, etc. You package everything you need and deploy it to AWS, but you could also decide to deploy the same image on other platforms (or locally).
Another benefit is that the image contains the whole runtime (OS, JDK), so you don't rely on what AWS provides, which also ensures isolation from the underlying infrastructure.
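To make that concrete, a minimal image definition for a Maven-built jar could look like the sketch below. The base image, jar name, and paths are assumptions for illustration, not taken from the question; adjust them to your build.
# Write a minimal Dockerfile that packages the Maven-built jar together with its Java runtime.
cat > Dockerfile <<'EOF'
FROM openjdk:11-jre-slim
COPY target/my-app.jar /opt/app/my-app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/my-app.jar"]
EOF
# Build the image after `mvn package` has produced target/my-app.jar.
docker build -t mycompany/my-maven-app:1.0.0 .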

How to send Lambda logs to StackDriver instead of CloudWatch?

I am considering sending my logs to Stackdriver instead of CloudWatch, but the docs only seem to describe how to do it with EC2. What about Lambda? I would prefer to send logs directly to Stackdriver, rather than having Stackdriver read from CloudWatch, so that the CloudWatch costs are removed entirely.
Stackdriver supports the metric types from AWS Lambda listed in this article.
To use these metrics in charting or alerting, your Google Cloud Platform project or AWS account must be associated with a Workspace.
After you have a Workspace, you can add more GCP projects and AWS accounts to it using the Adding monitored projects instructions.
If you plan to monitor more than just your host project, then the best practice is to use a new, empty GCP project to host the Workspace and then to add the projects and AWS accounts you want to monitor to it. This lets you choose a useful name for your host project and Workspace, and gives you a little more flexibility in moving monitored projects between Workspaces. For example, a single Workspace W can monitor GCP projects A and B as well as AWS account D.
Monitoring creates an AWS connector project when you add an AWS account to a Workspace. The connector project has a name beginning with AWS Link, and it has the same parent organization as the Workspace. To get the name and details of your AWS connector projects, see the Inspecting Workspace section.
In the GCP Console, AWS connector projects appear as regular GCP projects. Don't use connector projects for any other purpose, and don't delete them while your Workspace is still connected to your AWS account.

AWS Lambda Deployment via CodePipeline

I would like to deploy my Lambda functions by using AWS CodePipeline. However, when I follow the AWS CodePipeline creation wizard, I can't understand which option I should choose at the beta stage, because both AWS CodeDeploy and Elastic Beanstalk are concerned only with EC2 instances. There is a lack of step-by-step tutorials on creating a pipeline for Lambda and API Gateway deployments. How can I skip the beta stage without choosing one of them, or which one should I choose for my serverless architecture's deployments?
There are no direct integrations for Lambda/API Gateway -> CodePipeline at the moment. You could certainly do something with Jenkins, like @arjabbar suggested. Thanks for the feedback; we'll take this on our backlog.
CloudFormation is available in CodePipeline now. This allows you to target CloudFormation templates as actions in the pipeline.
Here's an overview (the implementation was moved to a private repository after I changed positions):
https://aws.amazon.com/blogs/compute/continuous-deployment-for-serverless-applications/
In this pipeline, we deploy a staging Lambda, test its functionality, and then deploy the production Lambda.
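As a rough illustration of what such a pipeline typically automates (the bucket, stack, and template names below are placeholders), the packaging and deployment steps boil down to:
# Package the SAM/CloudFormation template: upload the Lambda code to S3 and
# rewrite the code URIs in the template.
aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket my-artifact-bucket \
  --output-template-file packaged.yaml
# Deploy (create or update) the stack that contains the Lambda function.
aws cloudformation deploy \
  --template-file packaged.yaml \
  --stack-name my-lambda-staging \
  --capabilities CAPABILITY_IAM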