I have created a Zend Expressive application that basically exposes a few APIs. I now want to deploy this to AWS Lambda. What is the best way to refactor the code quickly and easily (or are there any alternatives) to deploy it? I am fairly new to AWS.
I assume that you have found the answer already, since the question is more than five months old, but I am posting what I found in my recent research along the same lines. Please note that you need at least some idea of how AWS IAM, Lambda and API Gateway work in order to follow the steps I have described below. Also please note that I have only deployed the laminas/mezzio skeleton app during this research; you'll need much more work to deploy a real app, because it might need database and storage support in the AWS environment, which might require adapting your application accordingly.
PHP applications can be executed on AWS Lambda using its support for custom runtimes. You could check this AWS blog article on how to get it done, but it doesn't cover any specific PHP framework.
Then I found the Bref project, which provides all the necessary tools for running a PHP application in a serverless environment. You could go through their documentation to get an understanding of how things work.
In order to get the laminas/mezzio (the new name of the Zend Expressive project) skeleton app working, I followed the Laravel tutorial given in the Bref documentation. First I installed the bref package using
composer require bref/bref
Then I created the serverless.yml file in the root folder of the project according to the documentation and made a few tweaks, after which it looked as follows.
service: myapp-serverless

provider:
  name: aws
  region: eu-west-1 # Change according to the AWS region you use
  runtime: provided

plugins:
  - ./vendor/bref/bref

package:
  exclude:
    - node_modules/**
    - data/**
    - test/**

functions:
  api:
    handler: public/index.php
    timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
    memorySize: 512 # Memory size for the AWS Lambda function. Default is 1024MB
    layers:
      - ${bref:layer.php-73-fpm}
    events:
      - http: 'ANY /'
      - http: 'ANY /{proxy+}'
Then I followed the deployment guidelines given in the Bref documentation, which use the Serverless Framework to deploy the app. You can check here how to install the Serverless Framework on your system and here to see how it needs to be configured.
To install serverless I used npm install -g serverless
To configure the tool I used serverless config credentials --provider aws --key <key> --secret <secret>. Please note that the key used here needs Administrator Access to the AWS environment.
Then the serverless deploy command will deploy your application to the AWS environment.
The result of the above command will give you an API Gateway endpoint with which your application/API will work. This is intended as a starting point for a PHP serverless application, and there may be a lot of additional work needed to get a real application running there.
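As a quick smoke test, you can hit the endpoint that serverless deploy prints with curl (the URL below is only a placeholder; use the one from your own deployment output):

# Hypothetical endpoint; serverless deploy prints the real one
curl https://abc123.execute-api.eu-west-1.amazonaws.com/dev/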
Related
No clear path to do development in a serverless environment.
I have an API Gateway backed by some Lambda functions declared in Terraform. I deploy to the cloud and everything is fine, but how do I go about setting up a proper workflow for development? It seems like a struggle to push every small code change to the cloud while developing just to run your code. Terraform has started getting some support from the SAM framework to run your Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test out your endpoints in Postman, for example.
First of all, I use the Serverless Framework plugin instead of Terraform, so my answer is based on what you provided and what I found around it.
From what I have understood so far from the provided documentation, you are able to run the SAM CLI with Terraform (cf. the chapter on local testing).
You might follow this documentation to invoke local functions.
I recommend using JSON files to create test cases instead of stdin injection.
The first step is to create your payload in a JSON file and then invoke your Lambda with that payload, like
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
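As an illustration, if your Lambda sits behind API Gateway, the payload file could be a trimmed-down sketch of the API Gateway proxy event like the one below (field values are made up; adjust to whatever your function actually expects):

{
  "httpMethod": "GET",
  "path": "/hello",
  "queryStringParameters": { "name": "world" },
  "headers": { "Accept": "application/json" },
  "body": null
}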
I have a pretty big project that I deploy to AWS with the Serverless Framework (a few Lambdas together at a time) using Windows Terminal.
I would do:
serverless deploy -s integration
and it will take all of my Lambdas and deploy them. My problem is that I need to use AWS Lambda versioning, and I don't know how to do it.
After I do the serverless deploy, do I need to use the AWS CLI and run something like this for each Lambda that I already deployed using serverless?
version=$(aws lambda publish-version --function-name test_lambda --description "updated via cli" --region eu-west-1 | jq '.Version')
I'm just confused on how to combine the 2 ways of deploying lambdas.
By default, all functions deployed with the Serverless Framework are versioned. You can also disable it or turn it on explicitly by setting:
provider:
  versionFunctions: true # or false to turn it off
Please keep in mind that the old versions are not removed automatically, so if you want to keep e.g. only a few previously deployed versions, you might need to use a plugin such as https://github.com/claygregory/serverless-prune-plugin, as sketched below.
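For instance, a minimal serverless.yml snippet for that plugin could look like this (the retention count of 3 is just an example):

plugins:
  - serverless-prune-plugin

custom:
  prune:
    automatic: true
    number: 3 # keep only the 3 most recent versions of each function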
I am looking for help to containerize a Laravel application with Docker, run it locally, and make it deployable to Google Cloud Run, connected to a Google Cloud database.
My application is an API built with Laravel, and so far I have just used the docker-compose/Sail package that comes with Laravel 8 during development.
Here is what I want to achieve:
Laravel app running on Cloud Run.
Database in Google Cloud: MySQL, PostgreSQL or SQL Server (preferably MySQL).
Environment variables stored in Google Cloud.
My problem is that I can't find any info on whether or how to use/rewrite the docker-compose file in Laravel 8, create a Dockerfile or cloudbuild file, and build it for Google Cloud.
Maybe I could add something like this in a cloudbuild.yml file:
# cloudbuild.yml
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
Any help/guidance is appreciated.
As mentioned in the comments to this question, you can check this video that explains how you can use docker-compose and Laravel to deploy an app to Cloud Run with a step-by-step tutorial.
As for the database connection of said app, the Connecting from Cloud Run (fully managed) to Cloud SQL documentation is quite complete on that matter, and for secret management I found this article that explains how to implement Secret Manager with Cloud Run.
I know this answer is basically just links to the documentation and articles, but I believe all the information you need to implement your app into Cloud Run is in those.
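To give you a rough starting point, a minimal Dockerfile for a Laravel API on Cloud Run could look like the sketch below. This is only an assumption-laden sketch: it uses php artisan serve for simplicity (a real deployment would normally run PHP-FPM behind nginx or Apache), and the PHP version and extensions are placeholders to adjust to your app:

# Hypothetical minimal Dockerfile sketch for Cloud Run; not production-hardened
FROM php:8.1-cli

# System packages and the PHP extension for MySQL (adjust to your needs)
RUN apt-get update && apt-get install -y git unzip \
    && docker-php-ext-install pdo_mysql

# Bring in Composer from the official image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

WORKDIR /app
COPY . .
RUN composer install --no-dev --optimize-autoloader

# Cloud Run injects the PORT environment variable (8080 by default)
ENV PORT=8080
CMD php artisan serve --host=0.0.0.0 --port=${PORT}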
Task: to write a simple standalone app (app1) that can subscribe to (watch) Firehose events from Pivotal Cloud Foundry. I have yet to understand which technology to use for app1.
Python is my primary skill, but I am open to Java or Go if required.
app1 needs to subscribe to (watch) "Staging complete" events of any app running across orgs in Pivotal Cloud Foundry, receive the app details, and then trigger cf env <app_name> to get the environment details of the app that just finished staging.
Any app is pushed with a manifest file having environment variables (as shown below):
---
applications:
- name: some-app
  instances: 1
  memory: 1G
  buildpack: java_buildpack_offline
  path: target/artifact.jar
  routes:
  - route: some.router.com
  services:
  - abc
  - def
  env:
    ARTIFACT_VERSION: 0.0.1
1) Which technology is more suitable (better supported) for this task, i.e. to watch Firehose events and run cf env <on_that_app>?
2) Is my code (app1) supposed to run within Pivotal Cloud Foundry to watch Firehose events, or can I run app1 outside PCF?
Please share some resources on learning about Firehose events in Pivotal Cloud Foundry (PaaS), as I am a novice.
Golang concept (please don't expect any copy/paste code; a rough sketch follows below):
Get log messages from Doppler: use cloudfoundry/noaa and watch only for the log line "Staging complete".
Call the CF client and get the env variable values: use cloudfoundry-community/go-cfclient.
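The following is only a rough, assumption-laden sketch of that idea: the API address, Doppler URL and credentials are placeholders, and the exact client method names may differ between library versions:

package main

import (
	"crypto/tls"
	"fmt"
	"strings"

	cfclient "github.com/cloudfoundry-community/go-cfclient"
	"github.com/cloudfoundry/noaa/consumer"
	"github.com/cloudfoundry/sonde-go/events"
)

func main() {
	// Placeholder API endpoint and credentials; replace with your foundation's values
	cf, err := cfclient.NewClient(&cfclient.Config{
		ApiAddress: "https://api.sys.example.com",
		Username:   "firehose-user",
		Password:   "secret",
	})
	if err != nil {
		panic(err)
	}

	token, err := cf.GetToken()
	if err != nil {
		panic(err)
	}

	// Placeholder Doppler (traffic controller) URL
	c := consumer.New("wss://doppler.sys.example.com:443", &tls.Config{InsecureSkipVerify: true}, nil)
	msgChan, errChan := c.Firehose("app1-subscription", token)

	for {
		select {
		case msg := <-msgChan:
			if msg.GetEventType() != events.Envelope_LogMessage {
				continue
			}
			lm := msg.GetLogMessage()
			if strings.Contains(string(lm.GetMessage()), "Staging complete") {
				// Equivalent of "cf env <app>": fetch the app and read its environment
				if app, err := cf.AppByGuid(lm.GetAppId()); err == nil {
					fmt.Printf("%s env: %v\n", app.Name, app.Environment)
				}
			}
		case err := <-errChan:
			fmt.Println("firehose error:", err)
		}
	}
}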
You can watch Firehose events from anywhere; you just need network connectivity to the Doppler URL, so development can be done on a localhost dev machine and the production version can run in Cloud Foundry. You can use websockets, so you can push changes directly to the browser. IMHO the final Golang implementation will need only disk_quota: 64M and memory: 16M.
I'm moving to serverless with AWS Lambda. I've gotten to "hello world" so far. I'm used to having a development codebase that I work on, test, and then promote to production. Is there an easy way to do this with Lambda?
I use different AWS accounts for dev, staging, and prod. When deploying the Lambda, I just choose which AWS profile to use so it deploys to the right environment.
If you're using a single AWS account, each deployment of a Lambda function will have a version. You can use those.
If you're using API Gateway with Lambda, you can use API Gateway's "Stages".
You should use a deployment framework such as the Serverless Framework; that will make things easier for you.
Using frameworks like Serverless makes it easy to develop, configure and deploy Lambdas, API gateways and other events to AWS. I highly recommend that you adopt the Serverless Framework. It also makes it easier to integrate serverless deployment with your current CI system.
Now, if you have all your environments within one AWS account, then you can use stages to represent each env. Using Serverless you can simply deploy the Lambdas to a different env using the --stage (-s) argument.
serverless deploy -s <env/stage name>
You can put some smarts into the serverless.yml configuration to pick up configuration files based on your stage (assuming that you will require access to different resources like databases, S3 buckets etc. for different environments), as sketched below.
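For example, here is a sketch of stage-dependent configuration in serverless.yml; the per-stage config files and the DB_HOST/dbHost names are made up for illustration:

provider:
  stage: ${opt:stage, 'dev'} # default to dev when no --stage is passed
  environment:
    # Pull values from a per-stage config file, e.g. config/dev.yml or config/prod.yml
    DB_HOST: ${file(./config/${opt:stage, 'dev'}.yml):dbHost}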
If you are using different AWS accounts for prod and non-prod (recommended), then all you need to do is provide an additional argument for the profile.
serverless deploy --profile <prod/nonprod profile> --stage <prod/nonprod stage>