I have a question about the proper or recommended way of loading a lot of data from a database. I have a Laravel 5.4 project with Vue.js 2.0. I need to display an employees table from the database on a page. On the client, a Vue component requests this data with a promise and builds a grid with vue-tables-2.
The problem is that I can't find the right way to structure the logic. There are already 50,000+ records and there will be more, so using Employees::all() is a really bad idea. The data is requested with axios from an API URL, and there is no possibility of using Redis or Memcached. It looks like I need some kind of lazy-loading request from the client, maybe with Laravel pagination: I would request the first chunk of data, then ask the paginator for the next chunk, and so on... and end up with a flood of requests.
If I use the default caching mechanism, I don't understand how to build the caching logic: how do I detect that a model has changed and the cache needs rebuilding?
Maybe there is a way to organize lazy loading of the data, dynamically adding it to the table, and if the user starts searching or filtering before loading has finished, to make a request to the server for a direct database search. But again, in that case I could end up with a lot of database requests.
So the question is: is there a recommended way to organize something like this? Below is a rough sketch of the client side I have in mind.
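A rough sketch of what I'm imagining (the /api/employees URL is made up; the response shape is what Laravel's paginate() returns by default):

```typescript
// Client-side sketch: pull one page at a time from a hypothetical
// /api/employees endpoint backed by Laravel's paginate(), which returns
// JSON shaped like { data, current_page, last_page, total, ... }.
import axios from "axios";

interface Employee {
  id: number;
  name: string;
}

interface LaravelPage<T> {
  data: T[];
  current_page: number;
  last_page: number;
  total: number;
}

async function fetchEmployeesPage(page: number): Promise<LaravelPage<Employee>> {
  const res = await axios.get<LaravelPage<Employee>>("/api/employees", {
    params: { page }, // Laravel's paginator reads ?page=N by default
  });
  return res.data;
}

// Lazily load page after page, feeding each chunk into the grid -- this is
// exactly the "spam of requests" I'm worried about.
async function loadAll(onChunk: (rows: Employee[]) => void): Promise<void> {
  let page = 1;
  let lastPage = 1;
  do {
    const chunk = await fetchEmployeesPage(page);
    onChunk(chunk.data);
    lastPage = chunk.last_page;
    page += 1;
  } while (page <= lastPage);
}
```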
I'm using Next.js with TanStack Query (formerly React Query). TanStack Query acts as a cache between my Next.js app and the data stored in the backend.
I was previously doing SSR only, but I'm complementing it with TanStack Query for optimistic updates. I used to need a fetch in getServerSideProps for every "detail" page, but now I'm thinking I could skip some of those fetches, since I already have the data in the cache and it's still fresh.
For example, let's say we have a TODO app. When I visit /todo/id_1 for the first time, it's nice to have SSR send the page already rendered to the client. If I go somewhere else and come back to /todo/id_1, I know for a fact that the contents of that TODO haven't changed, but I still have to go through SSR.
Would there be a way to skip SSR in that case?
I was hoping I could do something like the following:
<Link href={`/todo/${id}`} skipSsr={cachedTodo[id].notStale} />
Next.js will always call getServerSideProps if the URL changes. As mentioned in the comments, shallow routing does not work if the URL actually changes.
I think there are a couple of ways around this:
set a cache-control: max-age on the response with the caching time you want. That way the request will still be made, but it will not hit your server; it will be served from the browser cache instead (see the sketch after this list). As an advantage, those fetches will also succeed while you're offline.
instruct Next.js not to make the query if the request comes from a client-side transition. There is an open discussion about this:
Add option to disable getServerSideProps on client-side navigation
shallow routing doesn't really solve it, so it seems you have to make the request and then check whether it comes from SSR or not. This comment has the best workaround, I guess.
use Incremental Static Regeneration. It basically makes your page static, but it revalidates after a certain time if a request comes in.
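A minimal sketch of the first option (the backend URL and Todo shape are placeholders, not part of the question):

```typescript
// pages/todo/[id].tsx -- sketch of the cache-control approach. The header
// also applies to the /_next/data JSON requests made on client-side
// transitions, so repeat visits within the max-age window come from the
// browser cache instead of re-running getServerSideProps on the server.
import type { GetServerSideProps } from "next";

interface Todo {
  id: string;
  title: string;
}

export const getServerSideProps: GetServerSideProps<{ todo: Todo }> = async ({
  params,
  res,
}) => {
  // Let the browser cache this response for 60 seconds; tune max-age to
  // however long you consider the TODO "not stale".
  res.setHeader("Cache-Control", "private, max-age=60");

  const todo: Todo = await fetch(
    `https://api.example.com/todos/${params?.id}` // hypothetical backend
  ).then((r) => r.json());

  return { props: { todo } };
};

export default function TodoPage({ todo }: { todo: Todo }) {
  return <h1>{todo.title}</h1>;
}
```

For the last option you would switch to getStaticProps/getStaticPaths and return a revalidate interval instead of setting headers.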
I am now about knee-deep in a new Gatsby 4 project and may have hit a wall. I can't figure out how to do CRUD in Gatsby with Sanity as my backend. Can I do GraphQL mutations in Gatsby's GraphQL layer? I can't find any examples or tutorials anywhere.
I have done this in Next.js in a different project, but the backend in my new project is very different.
Any help is great.
I do not agree with #fstr's answer. Everything you can do in React can be done in Gatsby, since it's a React-based framework.
if you mean using Gatsby data layer and by that the graphql portion of it to apply a crud functionality in your site, then no. Gatsby's data layer is a "one way street" it's designed to pull in data, if you want to use mutations and/or subscriptions, you'll have to use something like Apollo to handle the mutations aspects of the data.
This means that Gatsby fetches and gathers the data from the sources (like Strapi) when it builds the site (that's the one-way street). The only thing you need to take into account is that, because of Gatsby's staticity, you won't be able to create new pages on demand (or on the fly) based on, for example, the user's input. If that's your scenario, you will need to trigger webhooks or serve the site from a server, as Next.js and Gatsby v4 do.
If your use case is, for example, displaying some database (or Strapi) fields, manipulating/mutating them based on some user action/input, and fetching them again to display them on the screen, you can do that easily using any Apollo implementation, as this article, among many others, explains (Implementing CRUD in web application using React and GraphQL); see the sketch after the summary below. In this case you won't get many benefits from using Gatsby over any other CRA or React site, but it can be done either way.
So summarizing:
If you want to create dynamic pages based on some user action, Gatsby won't help you much, because you'll need to rebuild the site again; there's no bypassing the staticity.
If you want to mutate some data, send it back to the server, and display it again, that can be done easily.
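A minimal sketch of that second case with @apollo/client inside a Gatsby page (the GraphQL endpoint and mutation schema are made up; a separate runtime GraphQL server is assumed, since Gatsby's build-time data layer can't run mutations):

```typescript
// src/pages/todo.tsx -- illustrative only; endpoint and schema are assumed.
import * as React from "react";
import {
  ApolloClient,
  ApolloProvider,
  InMemoryCache,
  gql,
  useMutation,
} from "@apollo/client";

// A runtime GraphQL server is assumed here; Gatsby's own data layer
// exists only at build time and cannot execute mutations.
const client = new ApolloClient({
  uri: "https://example.com/graphql", // hypothetical endpoint
  cache: new InMemoryCache(),
});

const UPDATE_TODO = gql`
  mutation UpdateTodo($id: ID!, $title: String!) {
    updateTodo(id: $id, title: $title) {
      id
      title
    }
  }
`;

function TodoEditor({ id }: { id: string }) {
  const [updateTodo, { loading }] = useMutation(UPDATE_TODO);
  return (
    <button
      disabled={loading}
      onClick={() => updateTodo({ variables: { id, title: "Done!" } })}
    >
      Save
    </button>
  );
}

export default function TodoPage() {
  return (
    <ApolloProvider client={client}>
      <TodoEditor id="1" />
    </ApolloProvider>
  );
}
```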
If you mean modifying your data stored in Sanity via Gatsby and then getting Gatsby to show the updates in real time (faster than rebuilding the whole site), this isn't something Gatsby will help you with, as far as I know. Relevant:
https://github.com/gatsbyjs/gatsby/issues/22012
if you mean using Gatsby data layer and by that the graphql portion of it to apply a crud functionality in your site, then no. Gatsby's data layer is a "one way street" it's designed to pull in data, if you want to use mutations and/or subscriptions, you'll have to use something like Apollo to handle the mutations aspects of the data.
and
This is a good question! Gatsby data layer exists only during build time. After that, it is transformed into HTML and plain JSON files which you can deploy as a static site.
So at the moment if you want to use Apollo on the client, you need a separate GraphQL server.
Sanity does have an API you can use to modify data though:
https://www.sanity.io/docs/http-mutations
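For example, a minimal sketch using @sanity/client (a recent version with the createClient export is assumed; the project id, dataset, token, and document _id are placeholders):

```typescript
// Sketch of a Sanity mutation via @sanity/client, which talks to the same
// HTTP mutations API linked above. All identifiers here are placeholders.
import { createClient } from "@sanity/client";

const client = createClient({
  projectId: "your-project-id", // placeholder
  dataset: "production",
  apiVersion: "2021-10-21",
  token: process.env.SANITY_WRITE_TOKEN, // keep write tokens out of client bundles
  useCdn: false, // writes must not go through the CDN
});

client
  .patch("todo-id-1") // hypothetical document _id
  .set({ title: "Updated from the app" })
  .commit()
  .then((updated) => console.log("Saved", updated._id))
  .catch((err) => console.error("Mutation failed", err.message));
```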
I'm developing a personal project using Laravel with Inertia.js. I tried passing data from the back end to the front end through the HandleInertiaRequests middleware. I was wondering how malicious people could take advantage of the data I expose to the front end. The Inertia.js documentation discourages passing sensitive data this way, but I can't figure out how it could be exploited. I apologize if my question sounds a little naive; I'm still pretty new to Laravel and have never used Inertia before. Thanks for your time!
The HandleInertiaRequests Middleware is nothing but a way to merge data into the array that will be available to your JavaScript components (Vue, React, Svelte) on the client side.
It is just a way to avoid repeating yourself on every controller for things that you'll probably need on a lot of pages. For example, instead of getting the data for a Menu component on every controller, you do this only in the HandleInertiaRequests Middleware.
So they only warn you to be careful with what type of data you put into it. For example, you probably don't want to pass the user's password through this, in the same way you wouldn't do that in a Blade view instead of Inertia.
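To make the risk concrete: everything you share that way is serialized into the page and readable by anyone with browser devtools. A minimal sketch, assuming the Vue 3 adapter:

```typescript
// Inside a <script setup> block of any page component (Vue 3 adapter
// assumed): everything merged in HandleInertiaRequests::share() is
// serialized into the initial HTML and into subsequent Inertia responses.
import { usePage } from "@inertiajs/vue3";

const page = usePage();

// If the middleware shared e.g. { auth: { user } }, every field of that
// object, including anything sensitive you accidentally included, is
// plainly visible here and in the page source:
console.log(page.props.auth);
```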
I am in the process of creating a simple application with Spring MVC and Thymeleaf, and I am currently thinking about what functionality I want to implement, but I don't know exactly how to do it.
Let's say I have a model class Person. Normally I would have a form and a controller to which I pass the new Person object and persist it with JPA.
No problem there, but what if I want a page where I enter some of the person's basic info and then hit the "Next" button to enter some additional information? Then hit "Next" again, review the data, and hit "Save"?
You can do it by integrating Spring Web Flow into your project.
Web Flow is basically an extension of Spring Web MVC. A flow configuration defines where you start and where you go next. If you have five pages and you want all of their data written to the database in one process, Web Flow will help you. One more advantage is that you can add validation to particular pages, meaning you can have five models and all of them will work within one flow.
Read more: https://projects.spring.io/spring-webflow/
I have not used Thymeleaf, but usually this kind of problem can be solved using one of the following methods or something similar:
1.) Save the unfinished data to the database, using the same schema or a separate one (or in the session; in general, save it somewhere on the server side). The problem with this is how to get rid of abandoned data when the user never finishes.
2.) Drag the data from page to page with request parameters: in the POST body for POST requests, as query parameters for GET requests. The problem with this is that it's not very clean.
3.) Don't do full page requests; solve it on the front end with JavaScript. Depending on the app, this might or might not be possible.
4.) Do full page requests but keep the in-progress state on the front end using local storage or session storage (sketched below). This has problems similar to keeping the data in a server-side session.
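A minimal sketch of method 4 (the storage key, draft shape, and /person endpoint are placeholders):

```typescript
// Persist multi-step form state in sessionStorage between full page loads;
// all names here are illustrative, not part of the original answer.
interface PersonDraft {
  firstName?: string;
  lastName?: string;
  email?: string;
}

const DRAFT_KEY = "person-wizard-draft"; // hypothetical storage key

function loadDraft(): PersonDraft {
  const raw = sessionStorage.getItem(DRAFT_KEY);
  return raw ? (JSON.parse(raw) as PersonDraft) : {};
}

// Each "Next" click merges the current step's fields into the stored draft.
function saveDraft(patch: Partial<PersonDraft>): void {
  sessionStorage.setItem(DRAFT_KEY, JSON.stringify({ ...loadDraft(), ...patch }));
}

// The final "Save" step submits the whole draft once and clears it.
async function submitDraft(): Promise<void> {
  await fetch("/person", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(loadDraft()),
  });
  sessionStorage.removeItem(DRAFT_KEY);
}
```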
I am currently designing an application that will have a few different pages, and each page will have components that update through AJAX. The layout is similar to the new Twitter design where 'Home', 'Discover', and 'Connect' are separate pages, but interacting within the page (such as clicking 'Followers' or 'Following') uses AJAX.
Since the design requires an initial page load with several components (in the context of Twitter: tweets, followers, following), each of which can be updated individually through AJAX, I thought it'd be best to have a default controller for serving pages, and other controllers with actions that, rather than serving full pages, strictly handle querying the database and returning JSON objects. This way, on initial page load several HMVC requests can be made to gather the data for each component, and AJAX calls can also be made to update each component individually.
My idea is to have a Controller_Default that handles serving pages. In the context of Twitter, Controller_Default would contain:
action_home()
action_connect()
action_discover()
I would then have other Controllers that don't deal with serving full pages, but rather components of pages. For instance, in the context of Twitter Controller_Tweet may have:
action_get()
which returns a JSON object containing tweets for a specific user. action_home() could then make several HMVC requests to get the data for the several different components of the page (i.e. make requests to 'tweet/get', 'followers/get', 'following/get'). While on the page, however, AJAX calls could be made to the function-specific controllers (i.e. 'tweet/get') to update the content.
My question: is this a good design? Does it make sense to have the pages served through a default controller, with page components served (in JSON format) through other function specific controllers?
If there is any confusion regarding the question please feel free to ask for clarification!
One of the strengths of the HMVC pattern is that employing this type of layered application doesn't lock you into a workflow that might be difficult to change later on.
From what you've indicated above, this would be perfectly acceptable as a way of serving content to a client; the default controller wraps sub-requests, which avoids multiple AJAX calls from the client to achieve the same goal.
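To illustrate the shape of the pattern outside Kohana, here's a minimal sketch in TypeScript with Express (purely illustrative; in your case the handlers would be Kohana controllers and the internal call an HMVC sub-request):

```typescript
// Component controllers return JSON for AJAX; the "page" controller reuses
// the same data access internally, so the initial load needs no extra
// round-trips from the client. All routes and data here are placeholders.
import express from "express";

const app = express();

async function getTweets(userId: string) {
  return [{ id: 1, text: "hello" }]; // stand-in for the real query
}

// Function-specific controller: JSON only, called by client-side AJAX.
app.get("/tweet/get/:userId", async (req, res) => {
  res.json(await getTweets(req.params.userId));
});

// Default "page" controller: aggregates component data server-side
// (the sub-request idea) and renders the full page.
app.get("/home", async (_req, res) => {
  const tweets = await getTweets("me");
  res.send(`<ul>${tweets.map((t) => `<li>${t.text}</li>`).join("")}</ul>`);
});

app.listen(3000);
```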
Two suggestions I might make:
Ensure that your Twitter back-end requests are abstracted out and managed in a library to make the application DRY'er and easier to maintain.
Consider whether the default controller is making only the absolutely necessary calls on each request. Employ caching to avoid pulling infrequently changed data on every request (e.g., followers might only be updated every 30 seconds). This of course depends entirely on your application requirements, but if you get heavily loaded you could quickly find your Twitter API request limit being reached.
One final observation: if you do find the server is experiencing high load and Twitter API requests are taking a long time to return, consider provisioning another server and installing a copy of your application. You can then "point" sub-requests from the default gateway application to your second application server, which should help improve response times if the two servers are connected by a high-speed link.