I'm creating a website with NextJS and will deploy it with Vercel.
NextJS has API routes that Vercel will deploy to AWS Lambda if I understand correctly.
As the backend I will use MySQL, like here: https://vercel.com/guides/deploying-next-and-mysql-with-vercel
But what is the best place to host this?
Is AWS the fastest because AWS Lambda is used?
But what region should I use then?
What is the best way to make it fast in the EU and the US?
I tested the region with this code in an API route:
export default (req, res) => {
res.statusCode = 200;
res.json({ region: process.env.AWS_REGION || "NOT SET" });
};
The result was us-west-1 for me. (I live in Europe)
I found information about this here:
https://vercel.com/knowledge/choosing-deployment-regions
This seems to be correct for a hobby account.
For Pro you can select a region, but only one.
You need Enterprise for multiple regions.
The best place to host MySQL would then be in the AWS us-west-1 region, but I decided not to use Vercel because I prefer a different solution.
This may be an alternative: https://github.com/serverless-nextjs/serverless-next.js
In the AWS Lambda service's console, there is a Configuration tab called Database proxies.
However, in the Terraform registry's entry for an AWS Lambda Function, there does not seem to be a place to define this relationship for my lambda. It's easy enough to add manually after I deploy the Lambda, but for obvious reasons this isn't optimal. It seems like using a DB proxy is a common enough use case for serverless architectures that there would be a way to do this with the resources I've referenced.
What am I missing?
EDIT: As of 9 months ago, this feature was not included in the AWS provider, but I'm unsure how to search upcoming nightly or dev releases of Terraform for this feature...
EDIT EDIT (from my comment below): The RDS instance, its proxy, the roles they use, the Lambdas, and the VPC in which they sit all work as expected. If I go to the Database proxies configuration page for the Lambdas I am deploying, I can add a database proxy just fine using the proxy I deployed with Terraform. There are no issues with the code, nor any errors. The problem is that having to manually add the database proxy to each Lambda I deploy defeats the purpose of using Terraform.
It looks like AWS-provided layers such as AWSLambda-Python37-SciPy1x have a different account ID and latest version number in the ARN in different regions, e.g.
us-east-1: arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python37-SciPy1x:22
us-east-2: arn:aws:lambda:us-east-2:259788987135:layer:AWSLambda-Python37-SciPy1x:20
From a script I need to add the layer that pertains to the Lambda's region, but I'm not finding an AWS CLI or boto3 command that will give me the ARN of a "published" layer (i.e. one that an AWS admin has shared with all accounts); I can only find my own layers (e.g. with aws lambda list-layers).
The AWS Lambda console in the web browser shows the vended layers, so I loaded the page, looked through the JS console, and saw that the following request is made:
https://console.aws.amazon.com/lambda/services/ajax?operation=listAwsVendedLayers&locale=en
So it looks like the REST API has this operation, but I cannot find the equivalent anywhere in the AWS CLI or boto3.
Any ideas (short of using curl with the proper request headers and auth info, which is a pain)? Perhaps there is a way to run a "raw" request in boto3 so I could give it this listAwsVendedLayers operation? I looked in the docs but could not find anything.
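As far as I can tell there is no documented CLI or boto3 equivalent of listAwsVendedLayers, so one workaround is to keep the per-region ARNs in the script itself (the two above, for example) and only use boto3 to verify that a candidate ARN resolves in the target region. A minimal sketch, assuming the layer versions are shared with your account so that get_layer_version_by_arn can read them:

# Workaround sketch: there is (as far as I know) no public API that lists
# AWS-vended layers, so the per-region ARNs are hard-coded here (values taken
# from the question) and boto3 is only used to confirm they resolve.
import boto3

SCIPY_LAYER_ARNS = {
    "us-east-1": "arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python37-SciPy1x:22",
    "us-east-2": "arn:aws:lambda:us-east-2:259788987135:layer:AWSLambda-Python37-SciPy1x:20",
}

def resolve_scipy_layer(region):
    """Return a verified layer version ARN for the given region."""
    arn = SCIPY_LAYER_ARNS[region]
    # get_layer_version_by_arn works for layer versions shared with your
    # account, so it acts as a sanity check before the ARN is attached.
    boto3.client("lambda", region_name=region).get_layer_version_by_arn(Arn=arn)
    return arn

print(resolve_scipy_layer("us-east-1"))

The verified ARN could then be attached to a function with update_function_configuration (passing FunctionName and Layers), but the mapping itself still has to be maintained by hand.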
I have created a Zend Expressive application that basically exposes a few APIs. I now want to deploy this to AWS Lambda. What is the best way to refactor the code quickly and easily (or are there any other alternatives) to deploy it? I am fairly new to AWS.
I assume that you have found the answer already since the question is more than five months old, but I am posting what I found in my recent research into the same problem. Please note that you need at least some idea of how AWS IAM, Lambda, and API Gateway work in order to follow the steps described below. Also note that I have only deployed the laminas/mezzio skeleton app during this research; you'll need much more work to deploy a real app, because it might need database and storage support in the AWS environment, which might require adapting your application accordingly.
PHP applications can be executed using the support for custom runtimes in AWS Lambda. You could check this AWS blog article on how to get it done, but it doesn't cover any specific PHP framework.
Then I found this project (Bref), which provides all the necessary tools for running a PHP application in a serverless environment. You could go through their documentation to get an understanding of how things work.
In order to get the laminas/mezzio (the new name of the Zend Expressive project) skeleton app working, I followed the Laravel tutorial given in the Bref documentation. First I installed the Bref package using
composer require bref/bref
Then I created the serverless.yml file in the root folder of the project according to the documentation and made a few tweaks to it, so that it looked as follows.
service: myapp-serverless

provider:
    name: aws
    region: eu-west-1 # Change according to the AWS region you use
    runtime: provided

plugins:
    - ./vendor/bref/bref

package:
    exclude:
        - node_modules/**
        - data/**
        - test/**

functions:
    api:
        handler: public/index.php
        timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
        memorySize: 512 # Memory size for the AWS Lambda function. Default is 1024MB
        layers:
            - ${bref:layer.php-73-fpm}
        events:
            - http: 'ANY /'
            - http: 'ANY /{proxy+}'
Then I followed the deployment guidelines given in the Bref documentation, which is to use the Serverless Framework to deploy the app. You can check here how to install the Serverless Framework on your system and here to see how it needs to be configured.
To install Serverless I used npm install -g serverless
To configure the tool I used serverless config credentials --provider aws --key <key> --secret <secret>. Please note that the key used here needs administrator access to the AWS environment.
Then the serverless deploy command will deploy your application to the AWS environment.
The result of the above command will give you an API Gateway endpoint through which your application/API will work. This is intended as a starting point for a serverless PHP application, and there may be a lot of additional work needed to get a real application working there.
I have 2 repositories residing in Bitbucket: Backend (a Laravel app serving as the API and entry point) and Frontend (the main application front-end, a VueJS app). My goal is to set up continuous deployment so that whenever something is pushed to the master branch (or another branch I select) in either of the repos, it triggers something so that the whole app builds and reaches the AWS EC2 server.
I have considered/tried the following:
AWS CodePipeline and/or CodeDeploy. This looked like a great option since the servers are in AWS as well. However, there is no support for Bitbucket out of the box, so it would have to go Bitbucket Pipelines -> AWS Lambda -> AWS S3 -> AWS CodePipeline/CodeDeploy -> AWS EC2. This seems like a very lengthy journey and I am not sure if that's a good practice at all.
Using Laravel Forge to deploy the Laravel app, and adding extra steps to build the VueJS app. This seemed like a very basic solution; however, the build process seems to fail there, as it just takes a long time and crashes with no errors (whereas I can run the exact same process on my local machine or a different server hosted elsewhere). I am not sure if this is an issue with the way the server is provisioned, the way Forge runs the deployment script, or whether the server is simply too weak to handle it.
My main question would be: what are the best practices for deploying an app made of such components? I have read many tutorials/articles about deploying a NodeJS app or a Laravel app, but haven't found good information about a scenario like this.
Would it be better to build the front-end app locally and version control the built JS file? Or should I create a Pipeline in Bitbucket that would build the app and then deploy it? Or is it best to just version control and deploy the source files, and leave the whole build process as the last step of the deployment, done by the server that hosts the app itself? There are also some articles suggesting hosting the whole front-end app in an S3 bucket - would that be bad practice as well?
Appreciate any help and resources that would help!
From the sounds of things, you have two types of deployments you might want to run.
Laravel API: If you're using Laravel Forge already then this is a great way to go about deploying your Laravel app; it takes care of most of the process and makes server management easy.
Vue.js App: There are a few things you can do here. I personally prefer using a provider like Vercel or Netlify, which let you deploy your static sites/frontends for free or at low cost. You can write custom build steps, but they have great presets that should work out of the box.
If you really want to keep everything on AWS then look into how to host static sites on AWS (for example in an S3 bucket, optionally fronted by a CDN), as sketched below.
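If you go the S3 route, the upload step can be scripted; here is a minimal sketch, assuming a bucket that already exists and is configured for static website hosting (the bucket name and the dist directory are placeholders for your own values):

# Minimal sketch: sync a built Vue.js app (the dist/ folder) to an S3 bucket.
# Assumptions: "my-frontend-bucket" already exists with static website hosting
# enabled; both names below are placeholders.
import mimetypes
import os
import boto3

BUCKET = "my-frontend-bucket"  # placeholder bucket name
BUILD_DIR = "dist"             # default Vue CLI build output directory

s3 = boto3.client("s3")

for root, _dirs, files in os.walk(BUILD_DIR):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, BUILD_DIR).replace(os.sep, "/")
        # Set the content type so browsers render HTML/CSS/JS correctly.
        content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
        s3.upload_file(path, BUCKET, key, ExtraArgs={"ContentType": content_type})
        print("uploaded", key)

In a Bitbucket Pipeline something like this would typically run right after the front-end build step.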
Hi, I'm trying to connect to a MySQL server hosted on AWS using an AWS Lambda function. I'm very new to this, so it would be of great help if someone could provide some sample code.
The objective is to develop an Alexa skill which retrieves certain data from the DB and provides it as output.
Please read the Lambda documentation on creating a Lambda deployment package, which will answer your question. Ensure the packaging environment is the same as the Lambda environment (Amazon Linux).
http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html
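To give a rough idea of what the function code could look like once the MySQL client library is bundled into the deployment package, here is a minimal sketch using pymysql; the host, credentials, table, and query are placeholders you would replace with your own, read from environment variables rather than hard-coded:

# Minimal sketch of a Lambda handler that reads one value from MySQL.
# pymysql must be bundled into the deployment package (see the link above).
# DB_HOST / DB_USER / DB_PASSWORD / DB_NAME are placeholder environment
# variables; the table and column names are placeholders as well.
import os
import pymysql

def lambda_handler(event, context):
    connection = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
        connect_timeout=5,
    )
    try:
        with connection.cursor() as cursor:
            # Replace with whatever query the Alexa skill actually needs.
            cursor.execute("SELECT item_name FROM items LIMIT 1")
            row = cursor.fetchone()
    finally:
        connection.close()

    # A real Alexa skill returns a structured response; this just returns the value.
    return {"result": row[0] if row else None}

For the Alexa skill itself, the returned value would then be wrapped in the Alexa response format (for example via the ASK SDK) before being spoken back to the user.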