Our Next.js React server makes frequent queries to a backend for collections of data that change infrequently: site configuration, i18n labels, and so on. It occurred to me that we could cache those collections on the SSR side, so that at the request level the SSR code checks the cache and only queries the backend when the cache has expired. This would reduce calls against our backend API considerably. (This is server-side-only behavior; it would not affect the browser side.)
The point is that a request-level cache isn't sufficient - it would defeat the purpose. It needs to be an app-level cache that is initialized at server startup (I'm thinking of an in-memory cache like node-cache). And it can't be a cache that operates only at the custom-server level outside of Next.js, since its usage depends on request-level information: the app-level cache needs to be available while Next.js SSR processes the request.
But I'm not sure how to supply the cache to the request level so that functions like getServerSideProps can access it. In the Next.js custom server, I'm tempted to create some sort of context object that can be passed to Next.js, but in the request handler:
const handle = nextApp.getRequestHandler();
...
handle(req, res);
There doesn't seem to be a way to pass along additional parameters like context.
What would be the best practice here? For a global store like this, should I extend or add to the request object itself? More generally, this is a question about supplying application-scoped singletons to Next.js SSR requests.
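One possible approach, sketched below under stated assumptions (node-cache as the store, and a hypothetical fetchSiteConfig backend call): because Node caches modules, a cache instance created in its own module acts as an application-scoped singleton for the whole server process, and getServerSideProps can import it directly, so nothing needs to be threaded through the custom server's request handler.

// lib/cache.js - created once per server process; Node's module cache makes
// every import of this file share the same instance.
import NodeCache from 'node-cache';

export const appCache = new NodeCache({ stdTTL: 300 }); // entries expire after 5 minutes

// pages/example.js - illustrative page; fetchSiteConfig is an assumed backend call
import { appCache } from '../lib/cache';
import { fetchSiteConfig } from '../lib/backend';

export async function getServerSideProps() {
  let config = appCache.get('siteConfig');
  if (config === undefined) {
    config = await fetchSiteConfig(); // only hit the backend on a cache miss
    appCache.set('siteConfig', config);
  }
  return { props: { config } };
}

The caveat is that in a serverless deployment each function instance gets its own copy of the cache, so this only behaves as a true app-level cache when the app runs as a long-lived Node process, as it does with a custom server.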
Related
I found this quote on the Vercel website:
When using Next.js, Vercel optimizes Serverless Function output for server-rendered pages and API Routes. All functions will be created inside the same bundle (or in multiple chunks if greater than 50mb). This helps reduce cold starts since the bundled function is more likely warm
But is this also true for getServerSideProps?
I have a small project with an API and another page that loads data with getServerSideProps. Once the first API call is done, I would expect the page using getServerSideProps to be fast, but it also seems to have a cold boot.
The second time everything is fast.
Based on the two following comments from Vercel in related GitHub discussions:
One thing to note — in Vercel (a recent update), Next.js SSR pages and pages/api routes get bundled into two separate Lambdas (λ), so you should not have to worry about the Serverless Function limit.
Source: https://github.com/vercel/vercel/discussions/5093#discussioncomment-57628
API Routes will all be bundled, each SSR page will also be bundled - both of these will be creating another function should they exceed maximum capacity (50MB), although this is handled a little before the limit to avoid it being too close.
Source: https://github.com/vercel/vercel/discussions/5458#discussioncomment-614662
My understanding is that in your scenario the API route (or any other API routes you may have) will be bundled into one function, and getServerSideProps into another function. If either exceeds the 50MB size limit then an additional function would be created.
What you're experiencing seems to be expected. The API route and getServerSideProps end up in different functions, and thus have separate cold boots.
You can also use caching headers inside getServerSideProps and API Routes for dynamic responses. For example, using stale-while-revalidate.
If you use Vercel to host your frontend, with one line of code you can take advantage of the stale-while-revalidate cache strategy, which behaves very similarly to getStaticProps with revalidate.
// This value is considered fresh for ten seconds (s-maxage=10).
// If a request is repeated within the next 10 seconds, the previously
// cached value will still be fresh. If the request is repeated before 59 seconds,
// the cached value will be stale but still render (stale-while-revalidate=59).
//
// In the background, a revalidation request will be made to populate the cache
// with a fresh value. If you refresh the page, you will see the new value.
export async function getServerSideProps({ req, res }) {
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=10, stale-while-revalidate=59'
  )

  return {
    props: {},
  }
}
Check the docs: https://nextjs.org/docs/going-to-production#caching
But it is important to know when to use getServerSideProps and when to use API routes in Next.js.
When to use getServerSideProps
The getServerSideProps function fetches data each time a user requests the page; the data is fetched before the page is sent to the client, and if the client makes a subsequent request, the data is fetched again. It is helpful to use it when:
SEO is important for the page
The content that is fetched is important and updates frequently
The data relates to the user's cookies/activity and consequently cannot be cached. Websites with user authentication features end up relying on getServerSideProps a lot to make sure users are properly signed in before fulfilling a request.
If the page is not SEO-critical - for example, if you need to fetch the current users for an admin page - you do not need getServerSideProps. However, if you are fetching the shopping items for your home page, it is good practice to use getServerSideProps, as sketched below.
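For illustration only, a minimal sketch of the shopping-items case (the https://example.com/api/items endpoint and the shape of its response are assumptions):

export async function getServerSideProps() {
  // Runs on every request, before the HTML is sent to the client.
  const response = await fetch('https://example.com/api/items'); // hypothetical endpoint
  const items = await response.json();

  return { props: { items } };
}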
When to use API routes
It is important to understand that some of the websites we build do not just serve HTML pages back to users on request. You might, for example, have a feature on your website that lets users submit feedback or sign up for a newsletter. When a user clicks such a button, think about what happens behind the scenes.
We need to send the entered email address to some server so it can be stored in a database, and that request is not about fetching a page - it is about storing data. We do not want an HTML page back; instead we want to send the user-entered data to a database. That is what APIs are for. There are different types of API, but the main idea is the same: a server exposes certain URLs that do not just return HTML, but accept data and send back responses in any form.
So API routes are a special kind of URL that you can add to your Next.js application. They are not about receiving a standard browser request and sending back a pre-rendered HTML page; they are about receiving data, using it (perhaps storing it in a database), and sending back data in whatever form you choose.
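A minimal sketch of such an API route for the newsletter example (the saveEmail persistence helper and its module path are hypothetical):

// pages/api/newsletter.js
import { saveEmail } from '../../lib/db'; // hypothetical helper that writes to the database

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  const { email } = req.body;
  if (!email) {
    return res.status(400).json({ error: 'Email is required' });
  }

  await saveEmail(email); // store the address - no HTML page is involved
  return res.status(201).json({ ok: true });
}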
I have an API written in Go, and I am using the gin-gonic framework to implement my endpoints. I am following clean architecture for my project, which means the application is divided into multiple layers, namely Controller, Service, Repository, and Session. The endpoints are secured by Auth0, and validation is carried out in a gin middleware. In the middleware I can extract the subject from the JWT (set in the header).
Now, here's my question. I want to use this subject value in my queries. I was wondering if I can store the subject (sub) in the context and use it in other parts of my code WITHOUT PASSING CONTEXT AROUND. Is this possible? Or do I simply have to update all my functions and add a new "sub" parameter to all downstream calls?
I am alluding to using a global variable of sorts to access request-specific data (the sub from the JWT). I know it's bad practice - I am just wondering if there is any way to accomplish this other than passing request-specific data around. Any help is appreciated.
Passing it around is really the whole point of the context - it exists to hold exactly these kinds of things and to be passed along the call chain. That matters because you want the data scoped to the request: if you start using globals, you could run into contention, with multiple requests trampling the same data. Likewise if the token is invalidated between requests.
If your authentication middleware runs before your query (which it sounds like it does), then it should simply be a matter of having it put the subject in the context in a way you're happy with.
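A minimal sketch of that pattern with gin's request-scoped context (the JWT validation is omitted, and the hard-coded subject stands in for the claim your middleware already extracts):

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// AuthMiddleware would validate the JWT (omitted here) and then store the
// subject on gin's request-scoped context, so it never leaks across requests.
func AuthMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		sub := "auth0|example-user" // in real code: the "sub" claim from the validated token
		c.Set("sub", sub)
		c.Next()
	}
}

// GetOrders reads the subject back out of the context; from here it can be
// passed down explicitly to the service and repository layers.
func GetOrders(c *gin.Context) {
	sub := c.GetString("sub") // empty string if the middleware did not run
	c.JSON(http.StatusOK, gin.H{"sub": sub})
}

func main() {
	r := gin.Default()
	r.GET("/orders", AuthMiddleware(), GetOrders)
	r.Run(":8080")
}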
I have created a Web API 2 app with an Angular 2 application, and an Angular service calls the Web API. If the Web API goes down, the data has to be loaded from the cache, and the cache expires after 20 minutes.
I am using MemoryCache to cache the data in the Web API. It works fine in the local application.
When the app moves to IIS in production, will the cache be available to all users, or only to a particular user?
It is the home page of the app and there is no authentication, so everyone can see the page.
Otherwise, which caching mechanism will work in the above scenario - output cache, MemoryCache, or some other caching strategy for Web API 2?
About caching in Web API
When you cache data in a web application, it gets cached for the application as a whole - it is, in effect, common data, public to all. That means the cached data is available to every request made to your application, which in turn makes it available to all users of the application.
One thing you can do is create whatever type of cache you want, but put the [Authorize] attribute on the method that serves the cached data; that restricts access to the cache to only those users who are logged in to your application.
I have created and used file-based caching in Web API. The advantage is that the cache expires when the file is modified, so the database is not queried until the file changes. As to when to modify the file: I modify it whenever an insert, update, or delete is performed on the data I am caching.
Suppose you formatted your URLs so that you could make direct model queries with an Ajax request.
Making a query in Django:
MyModel.objects.get(id=4)
Making the same query via an Ajax request to a URL:
/ajax/my-model/get/id/4/
The problem is that this presents a huge security issue: any user who knows how to make Ajax requests could query the database by recognizing that the URL corresponds to a query on a specific model. However, if this kind of query could be made secure, it would allow for much better structured, more reusable client-side code, in my opinion.
But I simply don't see how this can be made secure. I just want to confirm whether this suspicion is correct.
Never trust input from the client. This is a very general rule in web development and applies to any request the client makes. You have a couple of options here:
Use Django's internal authorization mechanism - this is not authentication! With it you can restrict resources so that only specific users can access them. Also look into reusable Django apps, which can take some of the complexity out of that topic.
Validate every input from the client. This applies mostly to requests that are supposed to change data.
Use an API framework like django-tastypie or django-restframework, which plug into your models easily and offer authentication and authorization out of the box.
In Django, such views are protected by its authentication mechanism, and it is possible to design a view so that it only allows specific users to run specific queries.
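As a minimal sketch of that idea (the MyModel owner field, the returned field names, and the URL are illustrative assumptions):

# views.py - illustrative only; assumes MyModel has an "owner" foreign key to the user
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse
from django.shortcuts import get_object_or_404

from .models import MyModel

@login_required
def my_model_detail(request, pk):
    # Only objects owned by the requesting user, and only whitelisted fields,
    # ever leave the server - the URL never maps to an arbitrary query.
    obj = get_object_or_404(MyModel, pk=pk, owner=request.user)
    return JsonResponse({"id": obj.pk, "name": obj.name})

# urls.py - wiring for the view above
from django.urls import path
from .views import my_model_detail

urlpatterns = [path("ajax/my-model/<int:pk>/", my_model_detail)]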
I want to use caching in the following way in my application.
On the first request, the data comes from the database, is passed from the controller to the view, and is stored in the cache.
On the next request, I want to get the data from the cache and put it on the page, then asynchronously fetch the data from the database again, check whether anything new has arrived, and update the cache if so.
Because the data comes from the cache on subsequent requests, the page loads faster, and the cache is updated for future requests.
How can I implement this in my application (ASP.NET MVC 3)?