Next.js on Vercel: is server code on a single Lambda? - aws-lambda

I found this quote on the Vercel website:
When using Next.js, Vercel optimizes Serverless Function output for server-rendered pages and API Routes. All functions will be created inside the same bundle (or in multiple chunks if greater than 50mb). This helps reduce cold starts since the bundled function is more likely warm.
But is this also true for getServerSideProps?
I have a small project with an API route and another page that loads its data with getServerSideProps. Once the first API call is done, I would expect the page with getServerSideProps to be fast, but it also seems to have a cold start.
The second time everything is fast.

Based on the two following comments from Vercel in related GitHub discussions:
One thing to note — in Vercel (a recent update), Next.js SSR pages and
pages/api routes get bundled into two separate Lambdas (λ), so you
should not have to worry about the Serverless Function limit.
Source: https://github.com/vercel/vercel/discussions/5093#discussioncomment-57628
API Routes will all be bundled, each SSR page will also be bundled -
both of these will be creating another function should they exceed
maximum capacity (50MB) although this is handled a little before the
limit to avoid it being too close.
Source: https://github.com/vercel/vercel/discussions/5458#discussioncomment-614662
My understanding is that in your scenario the API route (or any other API routes you may have) will be bundled into one function, and getServerSideProps into another function. If either exceeds the 50MB size limit then an additional function would be created.
What you're experiencing seems to be expected. The API route and getServerSideProps are bundled into different functions, and thus have separate cold starts.

You can also use caching headers inside getServerSideProps and API Routes for dynamic responses. For example, using stale-while-revalidate.
If you use Vercel to host your frontend, with one line of code you can take advantage of the stale-while-revalidate caching strategy, which behaves very similarly to getStaticProps with revalidate.
// This value is considered fresh for ten seconds (s-maxage=10).
// If a request is repeated within the next 10 seconds, the previously
// cached value will still be fresh. If the request is repeated within
// the next 59 seconds, the cached value will be stale but still render
// (stale-while-revalidate=59).
//
// In the background, a revalidation request will be made to populate the
// cache with a fresh value. If you refresh the page, you will see the new value.
export async function getServerSideProps({ req, res }) {
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=10, stale-while-revalidate=59'
  )
  return {
    props: {},
  }
}
Check the docs: https://nextjs.org/docs/going-to-production#caching
But it is important to know when to use getServerSideProps versus API routes in Next.js.
When to use getServerSideProps
The getServerSideProps function fetches data each time a user requests the page. It fetches the data before sending the page to the client, and if the client makes a subsequent request, the data is fetched again. It is helpful to use when:
SEO is important for the page
The content that is fetched is important and updates frequently
The data relates to the user's cookies/activity and consequently cannot be cached. Websites with user authentication features end up relying on getServerSideProps a lot to make sure users are properly signed in before fulfilling a request.
If the page is not SEO-critical (for example, fetching the current users for an admin page), you do not need getServerSideProps. However, if you are fetching the shopping items for your home page, it is good practice to use it.
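For the shopping-items case above, a minimal sketch could look like the following. The file name, getShoppingItems, and the item shape are illustrative placeholders for your real data source:

```javascript
// pages/products.js (hypothetical file name)

// Stand-in for your real data source: a database query,
// a CMS fetch, an upstream API call, etc.
async function getShoppingItems() {
  return [
    { id: 1, name: 'Mug' },
    { id: 2, name: 'T-shirt' },
  ];
}

// Runs on the server for every request, before the HTML is sent,
// so crawlers always receive the fully rendered, up-to-date list.
export async function getServerSideProps() {
  const items = await getShoppingItems();
  return { props: { items } }; // handed to the page component as props
}
```

The page component then receives `items` as a prop and renders it; because the fetch happens server-side on every request, search engines always see current content.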
When to use api functions
It is important to understand that some of the websites we build do not just serve HTML pages back to users on request. You might, for example, have a feature that allows users to submit feedback or sign up for a newsletter. So when a user clicks such a button, think about what happens behind the scenes.
We need to send the entered email address to some server to store it in a database. That request is not about fetching a page; it is about storing data. We do not want an HTML page back, we want to send the user-entered data to some database. That is what APIs are for. There are different types of APIs, but the main idea is the same: we have a server that exposes certain URLs. Those URLs do not just send HTML; they accept data and send back responses with any kind of data.
So API routes are a special kind of URL that you can add to your Next.js application. They are not about getting a standard browser request and sending back a prerendered HTML page, but about receiving data, using it, maybe storing it in a database, and sending back data in any form of your choice.
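The newsletter signup described above could be an API route along these lines. This is a sketch: the file name, the in-memory subscribers array, and the minimal validation are placeholders for a real database write and real validation:

```javascript
// pages/api/newsletter.js (hypothetical)

// In-memory stand-in for a real database table.
const subscribers = [];

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }
  const { email } = req.body || {};
  if (!email || !email.includes('@')) {
    return res.status(422).json({ error: 'Invalid email address' });
  }
  subscribers.push(email); // store the data...
  // ...and send back JSON, not an HTML page.
  return res.status(201).json({ message: 'Subscribed' });
}
```

The client POSTs the email to /api/newsletter and gets JSON back; no page render is involved at any point.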

Related

Dynamically Update Page in Application Requiring Authentication Via Azure AD

I am curious if anyone has a solution to this unique situation as I have a solution currently, though I feel it is not the most optimal.
The Situation.
I have built an MVC style web application that talks to a web API through http (authenticating via JWT). My web application is secured by appending authorization to its view controllers and redirecting to a Microsoft login endpoint - then directing back to the view where whichever given controller/function handles the request, connects to the API, appends data to the view, etc.
Preferably I would like to use jQuery/AJAX to submit HTTP requests client-side and update a given view with whatever data the user may wish to see relative to the webpage they're on. This way I could control exactly what the user sees when landing on the page and submit requests from the web app to the API. It would also enable better continuity between requests, as there isn't actually a full refresh of the view. All in all, it is my line of thought that this execution would lead to a nice user experience.
The Problem.
So the big issue that I have had to circumvent is CORS Policy. I initially attempted to use JS just as I said above but requests would be redirected to the login endpoint and blocked due to there being no CORS header appended to the request.
'So include a policy in your application and append an authorized header to your Ajax outgoing request' you might say, well... you cannot override CORS security around Microsoft's login endpoint.
My Solution.
What I have done instead is create HTML forms around fields the user picks and chooses to specify what data they want from the API, then carry the input data over to the returned view via 'ViewData'. Using Razor pages, of course, I can actually initialize JS variables via C# input.
Side Note
I use JS to transform the API data into graphs for the user to see. I am doing this with a JavaScript Library.
My Question to you.
This all leads me to ask then, is there a way to dynamically update a view without using JS? I require a method that can hit the login redirect without being blocked because the request initiated client-side.
Every solution I am aware of in some way, shape, or form utilizes JS to make the request. So I am at a loss for how to truly get the functionality I am after without having my requests blocked by CORS policy.
Thanks in advance y'all.

Laravel Vue SPA project insecure middlewares localstorage

I'm trying to create an SPA with auth and roles.
I've been reading a lot of tutorials that explain how to do it, and they all describe the same strategy:
Save the permissions on localstorage, for example accessToken and is_admin=0|1.
So when you login the backend response fills this data.
Then the vue routing is just checking these 2 fields for granting or preventing the access.
This is so insecure: anyone can open the developer tools, see this data, and change it. Just writing a random accessToken grants access to the site... and then set is_admin = 1 and voilà.
Okay, it is difficult to actually KNOW these values, but still... And every single API call checks this accessToken on the backend.
So is there something we can do to prevent this? Or, if we want this navigation "agility", can we never protect the route-navigation middleware 100%?
One of the guides I followed:
https://scotch.io/tutorials/vue-authentication-and-route-handling-using-vue-router
In the questions section many people are commenting on this, and the response is that it is a frontend demo... but how can I rely on the backend for this? If I do, every navigation click will refresh the page.
An SPA will only load once. After that, navigation is handled by the frontend. However, the data needed for the next page is loaded via AJAX from the backend, meaning you can still validate access in the backend before exposing the data you want to protect.
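To make that concrete, here is a sketch of the backend check, written as Express-style middleware purely for illustration (the question's backend is Laravel, where Sanctum/Passport plays this role; verifyToken and the hard-coded user map are placeholders for a real server-side token lookup):

```javascript
// Placeholder: look the token up server-side and return the user,
// or null if the token is unknown or expired. In a real app this
// would hit your sessions/tokens table, not a hard-coded map.
function verifyToken(token) {
  const users = { 'valid-token': { id: 1, is_admin: true } };
  return users[token] || null;
}

// Express-style middleware guarding admin-only API endpoints.
// The frontend's localStorage flags are only UX hints; this
// server-side check is the real gate.
function requireAdmin(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  const user = verifyToken(token);
  if (!user) return res.status(401).json({ error: 'Unauthenticated' });
  if (!user.is_admin) return res.status(403).json({ error: 'Forbidden' });
  req.user = user;
  next();
}
```

Tampering with is_admin in localStorage changes what the SPA renders, but every admin API call still fails with 401/403 unless the token actually maps to an admin on the server, so no protected data leaks.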

Securely accessing a private API from my own consumer website using AJAX

Could use some suggestions for how best to secure an API that for the time being will remain private. The backend API has been developed and will lie on its own system. The front end consumer website will have access to this API via a private API key. This is all in server side code. However, a new requirement has been made known: our website will also need to make AJAX requests to generate the code. I don't want to expose the API calls or token in the javascript code, so I'm trying to figure out options. One would be to create a REST controller on the front-end server-side which could then be called by javascript code, but this would effectively circumvent the API key security measure and therefore is not a true solution.
So what are the general practices for this? I think ideally (and I'm moving toward this, it's just not feasible time-wise currently) I would use OAuth tokens to validate requests and have some API calls(pulling in general information) not require any form of authentication etc, but even that would have some issues given the AJAX requirements. Is there perhaps some way to have client-side javascript and associated AJAX calls which will remain secure?
All this is to say - I'm currently at a loss of what to do here.
Thanks.
Edit: Current thought is to create controllers on the front end which can be accessed via ajax, which sends non-risky fetches to the API, and for risky ones relies on current user validation (e.g. user being logged in). Furthermore, logging in will not be an AJAX style request, so logging in should be a reliable security test.
You could develop a handler to accept the AJAX requests and pass them along to the private API using the normal access-token approach you would take elsewhere in non-public facing code.
That way, you don't expose the token or the API in javascript. You can build a whitelist of API calls in your handler so that it only deals with (presumably) benign AJAX requests from the front-end. This handler is both a firewall for bad requests and a way to protect the real mechanics of the private API.
If any of the API methods are potentially dangerous or destructive to data, this can (and should) be used in conjunction with the public website's authentication mechanisms.
A mockup (in PHP):
$whitelist = array(
'SomeApiCallPublicAlias'=>'RealApiMethod',
'AnotherPublicAlias'=>'SomeSafeApiMethod'
);
$call = $_POST['call']; // <-- SomeApiCallPublicAlias
if (!array_key_exists($call, $whitelist))
die('permission denied');
$data= $_POST['data'];
// hook in to the private API, pass the data, return the response
$response = make_private_api_call($whitelist[$call], $data);
die(json_encode($response));
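On the front end, the call into that handler could be as simple as the following sketch. The '/api-proxy.php' URL is an assumption, and the alias names match the mockup above:

```javascript
// POST an aliased call to the server-side handler, which holds the
// real API token and forwards only whitelisted methods.
async function callApi(publicAlias, data) {
  const response = await fetch('/api-proxy.php', {
    method: 'POST',
    body: new URLSearchParams({
      call: publicAlias,
      data: JSON.stringify(data),
    }),
  });
  if (!response.ok) throw new Error('Request failed');
  return response.json();
}
```

The browser only ever sees the public alias; the private API token and real method names stay on the server.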

Proposing an alternative way of securing ajax calls with a stored token

Securing AJAX calls, and sometimes normal forms, with a token is pretty common. It works like this: 1) the user requests a page, 2) a token is put into the HTML and into the session, 3) on submit these values are checked.
Now one major obstacle I am facing with this is caching. I do not have a lot of changing content, so I want to be able to cache for at least 24 hours. On the other hand, I do some ajax calls on the front-end, and good practice is to have them a little secured.
Now I was thinking of this, but I do not know if it will work. Maybe you can help.
user requests a site, and the cached site is given.
On the site, the first ajax call is made, which only asks a token
In the backend, a token is generated, stored in the session and sent to the front-end
The token is stored in a var in the frontend, and now sent with every call
On every call we check the session and the given token
If they match we do our DB stuff, if not we make a call to the FBI
The FBI takes over the case
Just kidding about the last part. But will this work, given that you are not sending a piece of the actual website?
Maybe you can make it a little smarter by storing an identifier of the form the user requests.
Actually, I have no idea if this will work, I actually doubt it. Maybe someone can explain to me why this will not work.
In order to prevent CSRF with a token, each user must have a unique token that an attacker cannot guess. If you serve the same cached page to everyone, then the token isn't a secret and an attacker can forge requests.
That being said, you could have some JavaScript use an XHR to pull that user's token from the user's session data store and populate a form or include it in AJAX calls.
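Sketched out, that token-fetch flow might look like this on the front end. The '/csrf-token' endpoint is a hypothetical uncached URL that reads or creates the token in the user's session and returns it:

```javascript
// Cached page loads with no token; the token is fetched once per
// page load from an uncached, session-backed endpoint.
let csrfToken = null;

async function getCsrfToken() {
  if (!csrfToken) {
    const res = await fetch('/csrf-token', { credentials: 'include' });
    csrfToken = (await res.json()).token;
  }
  return csrfToken;
}

// All subsequent AJAX calls attach the per-user token, which the
// backend compares against the session before touching the DB.
async function securePost(url, data) {
  const token = await getCsrfToken();
  return fetch(url, {
    method: 'POST',
    credentials: 'include',
    headers: { 'Content-Type': 'application/json', 'X-CSRF-Token': token },
    body: JSON.stringify(data),
  });
}
```

This keeps the cached HTML identical for every user while the token itself stays per-session; the '/csrf-token' endpoint must of course be excluded from the cache.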

Secure Token Passing in Ajax in a Highly Cached Environment

Greetings SO Community!
I'm trying to think through a security issue with an ajax process in a highly cached environment, and could use some advice. My situation is this:
The users of the site are not logged in.
The site pages are highly cached (via akamai).
I have a service API that is to be accessed via AJAX from pages within my domain.
I need to protect that API from being used outside of my domain.
I can check the incoming "host" in the headers to see if the ajax request came from my domain, but that seems insecure, as such headers could be spoofed. Also, it seems to me that the usual token passing scheme will not work for me because my pages are cached, so I don't have the opportunity to inject tokens unique to the user/request (e.g. as described here: How can I restrict access to some PHP pages only from pages within my website?). Clearly, it's insecure to make a token request via ajax after page load, so I'm not sure how to make this happen. I suppose I could generate a shared use token that loads with the page and has a lifetime twice that of my maximum page cache life, but it seems like there must be a better way!
What are you trying to accomplish? Are you trying to prevent cross-site request forgery, or to stop someone/something other than the JavaScript you served to the user from using your API?
The former is accomplished via tokens that are stored in the source of the page. You can make it hard to conduct a CSRF attack by having tokens in the source (or some code that creates tokens). Unfortunately, unless you can get unique data per user/request into the source, someone can always just grab your source and reverse-engineer the token. Then they can forge requests. The general rule is: don't worry about it unless the user is logged in, because otherwise the attacker could just go to the page themselves.
The latter (preventing unauthorized use) is impossible in any case. An attacker can always make an account, strip out the tokens/keys/credentials she needs, and then use your API from their own server.
