I have a web application that uses many subdomains as the main page for each -- subdomain1.example.com, subdomain2.example.com, ... Each page is unique (a separate web app), but uses similar HTML and JS source generated server-side (from common PHP/DB) for the specific subdomain.
I would now like to add two features: caching (for offline viewing) and push notifications. I would like the push notification approval/denial to be unique for each subdomain. In other words, denying push for subdomain1 would not deny it for other subdomains. So it seems I can use a service worker for each subdomain.
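Roughly, I imagine each subdomain's page doing something like this (sw.js and VAPID_PUBLIC_KEY are placeholders of mine; permission and push subscriptions are scoped per origin, so each subdomain gets its own):

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(async (registration) => {
    // Permission is remembered per origin, so denying it here
    // does not affect the other subdomains.
    const permission = await Notification.requestPermission();
    if (permission === 'granted') {
      await registration.pushManager.subscribe({
        userVisibleOnly: true,
        applicationServerKey: VAPID_PUBLIC_KEY, // placeholder, defined elsewhere
      });
    }
  });
}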
However, I am concerned that the common JS and CSS assets (like jQuery) will be cached separately in each subdomain's cache. This seems quite inefficient and wasteful.
If I redirect subdomainX.example.com to something like example.com/?SD=X, I can have a single Service Worker for all the pages and thus a common cache. However, the push notification approval/denial would then apply to all subdomains at once, not individually as I want.
Is there any way to have separate push notification approval/denial and a common cache?
Thanks for any advice.
I want to develop my own single-page web application (SPA) to get to grips with the modern and highly fluid world of web development. At the same time, I would like to use server-side rendering (SSR) with the data embedded into the HTML. However, there is an authorization problem.
Suppose the user has already logged into their account before. Here is how I imagine re-opening the site:
First request: the client makes a request to the frontend server along with identification and authorization data (for example, a user ID and token; the only option is to store them in cookies). The frontend server passes this data in a request to the API server, the API server returns the user information and the content of the current page (in the same JSON), and the frontend server renders this into a finished page and delivers it to the client.
Subsequent requests: the client addresses the API server directly, passing the same (or updated after the first request) authorization data, receives JSON, and processes it independently.
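In code, I picture a subsequent request looking roughly like this (the endpoint, cookie name, and helper are made up for illustration):

// Hypothetical helper: read a cookie value by name.
function getCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

async function loadCurrentPage() {
  // The token saved in a cookie is sent straight to the API server,
  // which returns the page content as JSON for the SPA to render itself.
  const res = await fetch('https://api.example.com/pages/current', {
    headers: { Authorization: 'Bearer ' + getCookie('token') },
  });
  return res.json();
}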
Now to the actual question. Do I understand this interaction correctly? Can it be done differently or better? Are there tools that allow, for example, using the components of a frontend framework as components of an MVC backend framework, so that one server does the rendering without extra requests? Or a unified tool that uses the same code for frontend and backend to solve these problems? I will say right away that I would prefer not to write the backend in JS.
I can roughly imagine how to get by with a single request using AngularJS (with a module for single-page applications) and any backend MVC framework; there would not be a full-fledged render, but search robots would not have to wait for my first fetch, since the data would be delivered initially, for example through a data attribute. In this case, though, I plan to use Svelte (Sapper) and Ruby on Rails as the stack, although I don't think that matters.
Thank you for your attention to the question!
Are there tools that allow, for example, using the components of a frontend framework as components of an MVC backend framework, so that one server does the rendering without extra requests?
If that's what you want, you can install a frontend framework in Rails using Webpacker. After that you will have a folder in your Rails project that contains your Svelte components. You then import the Svelte components in ERB templates and pass data as props.
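For example, the pack entry that mounts a Svelte component might look roughly like this (the paths and the data-props attribute are just one possible convention, not the only way):

// app/javascript/packs/profile.js
import Profile from '../components/Profile.svelte';

document.addEventListener('DOMContentLoaded', () => {
  const el = document.getElementById('profile');
  // The ERB template renders this element with a data-props JSON attribute.
  const props = JSON.parse(el.dataset.props);
  new Profile({ target: el, props });
});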
I have tried that approach, but personally I prefer a separate frontend and backend talking through API calls. In that case your frontend needs something like Sapper if you need SSR. With Webpacker you don't (assuming you mostly use Rails for routing).
If you are worried about authorization, it's not really hard to implement. After login you can store user info in local storage, for instance, for subsequent requests. But of course if you install with Webpacker it's all done within Rails, hence it's easier.
From my experience, Webpacker is easy and quick in the beginning, but you are more likely to get headaches in the future. A separate backend and frontend takes a bit more work, especially at the start, but it's smoother in the long run.
This helped me set up authentication between a Rails API and a Vue frontend.
So, if you wish to separate them, just install Rails as API-only, and I suggest you use Jbuilder to build your JSON and serve it to the frontend as you need it.
Developing a SPA in the frontend (with Vue.js) which consumes endpoints from a (Laravel) API in the backend introduces some challenges that need to be tackled:
1. How to sync deployment when introducing new backend/frontend code
If the code is separated into two VCS repositories (frontend/backend), it can be challenging to sync the deployment of frontend and backend so that both finish at exactly the same time. Otherwise this can lead to unexpected behaviour (e.g. calling endpoints that are not yet deployed or have changed). Has anyone come up with a great solution for this? What is your best practice for tackling this problem? What if versioning every little change is not an option?
2. How to make sure that the frontend code of the SPA is being refreshed after deployment?
So you managed to keep your deployments in sync (see problem 1), but how do you make sure that the SPA code of every currently active end user is refreshed? With webpack code splitting enabled, the application might break immediately for users who are using your app during a deployment.
How do you make sure that your users are served the latest JS without having them reload the entire application on every request? What are the best practices (besides forcing a full page refresh via websockets)? Are there solutions that let currently active users keep using the application without being forced to refresh, when they might have just finished something that's ready to be saved?
I am very interested in your findings, learnings and solutions!
1. How to sync deployment when introducing new backend/frontend code
The best practice here is to keep the backend and frontend in the same repo. You can, of course, extract some reusable code out of them to use in other projects but the code base should ideally be in the same repo or you will keep facing these frustrating code sync issues. Even if you look at popular Laravel libraries - they all have the frontend and backend in the same repo.
If that's not an option, I would suggest that you use a versioning system that can link the versions of both repos. Yep, that means versioning every little change!
2. How to make sure that the frontend code of the SPA is being refreshed after deployment?
Usually I'd avoid forcing a refresh of the client codebase, but if you have long user sessions it may actually make sense.
To do that, you can use any web socket implementation (such as Pusher) and have your CI notify the frontend through web sockets of any deployment. The frontend can then queue a page refresh. Check out this article on how to implement it.
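A rough sketch of the client side (the app key, channel, and event names are placeholders):

import Pusher from 'pusher-js';

// Your CI triggers a 'deployed' event on this channel after each deployment.
const pusher = new Pusher('YOUR_APP_KEY', { cluster: 'eu' });
const channel = pusher.subscribe('deployments');

channel.bind('deployed', () => {
  // Queue the refresh instead of reloading immediately, so the user
  // doesn't lose in-progress work; apply it on the next navigation.
  sessionStorage.setItem('refreshPending', '1');
});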
The two questions are tightly coupled and can't be answered separately, in my opinion. Here are some possible strategies to deal with such a scenario:
1. Never introduce breaking changes in the API
API deployments should be incremental and not break anything for users on the previous version. That way you can simply push the changes to your backend and, once the backend deployment is complete, deploy the frontend. This is easily achieved if you have separate projects.
This can be performed for major releases by prefixing the API with the version:
https://website.url/api/v${version}/${endpoint}
while minor deployments should only be minor adjustments/bugfixes that do not break frontend functionality.
This approach is the best because it ensures absolutely no downtime in the user's activity, but it requires additional work and may not be feasible in many projects. If the backend does not introduce breaking changes, you can implement a simple polling system (with a long interval, such as minutes) in the frontend that detects whether a reload is necessary to load the new frontend deployment.
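To illustrate the version prefix, here is a sketch in Express (your stack may differ; the routes are made up):

const express = require('express');
const app = express();

// Old clients keep hitting /api/v1 while new ones use /api/v2.
const v1 = express.Router();
v1.get('/users', (req, res) => res.json({ version: 1, users: [] }));

const v2 = express.Router();
v2.get('/users', (req, res) => res.json({ version: 2, users: [], total: 0 }));

app.use('/api/v1', v1);
app.use('/api/v2', v2);
app.listen(3000);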
2. Standard response for outdated requests
Each request from the frontend includes information about the version in use by the frontend. It could be a standard header, a param, whatever. You should wrap your requests in a function that adds this information before sending the request itself.
If the server detects a request from an outdated frontend, it returns a standard response, such as:
{
"error": "update required"
}
The frontend detects the error and reloads the page.
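A sketch of such a wrapper (the header name and version constant are arbitrary choices):

const FRONTEND_VERSION = '1.4.2'; // baked into the bundle at build time

async function apiFetch(url, options = {}) {
  // Attach the frontend version to every request.
  const headers = { ...options.headers, 'X-Frontend-Version': FRONTEND_VERSION };
  const res = await fetch(url, { ...options, headers });
  const body = await res.json();
  if (body.error === 'update required') {
    window.location.reload(); // pick up the newly deployed frontend
    return;
  }
  return body;
}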
I honestly don't like this approach, because the request may be a POST request with some form data, and a page reload may lose all of the user's input, which is annoying.
1. How to sync deployment when introducing new backend/frontend code
With a staging environment where you run both test suites before deploying to production.
2. How to make sure that the frontend code of the SPA is being refreshed after deployment?
Don't just break your API. Implement a grace period. For example, you could check for updates on every request, then notify the user that a new version is available so that they can click a button at their earliest convenience. Record the client version in use in your DB. Once all your users have updated, you can delete the old endpoints.
My single-page app is hosted on Google Cloud Storage. I love that I don't have to worry about a server. The app is, naturally, JavaScript heavy.
Now I would like to add a feature where users can store some data, generate a link to be shared with others and retrieve stored data. Think of a pastebin where some snippet of text is saved and a unique link is generated to be shared with others.
In fact, if it helps, think of this as my attempt to create a pastebin without having to setup a server.
It looks like Google's Cloud Datastore NoSQL solution is what I want. Given a key, it will return a snippet of text. However, all the examples on the documentation page imply that I have to set up a backend service using Python, Node, etc.
Questions:
Can't I just read and write from the web page, perhaps using an AJAX-style HTTP call (since I need to get and put text snippets after the page has already loaded)? I believe I can take care of cross-origin issues by changing some configs on the Cloud Storage static website server.
Obviously I don't want to serve any encryption keys from the web page. I'm hoping that, since my site is served from Google as well, I can configure the NoSQL service to handle permissions intelligently for this scenario.
Is there any documentation which shows how to do this correctly?
Google Datastore is not supposed to be used from the client side; it's a server-side database. You cannot do that without server-side code to authenticate, authorize, and validate DB-related requests.
But there is an alternative. Firebase is a ready-to-use backend for client-side applications, including JavaScript apps. It's a separate product that belongs to Google but is not (yet?) part of Google Cloud. Take a look: https://www.firebase.com/
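A minimal pastebin-style sketch with the Firebase JS client of that era (the app URL and node names are placeholders):

// Store a text snippet and get back a key to embed in a shareable link.
var ref = new Firebase('https://YOUR-APP.firebaseio.com/snippets');
var snippetRef = ref.push({ text: 'hello world' });
var shareKey = snippetRef.key();

// Later, anyone with the key can read the snippet back.
ref.child(shareKey).once('value', function (snapshot) {
  console.log(snapshot.val().text);
});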
Although the REST API is still in beta, it is now possible to connect from a web client or anything with RESTful capabilities: https://cloud.google.com/datastore/reference/rest/
I am working on a project built on Spring MVC and Google App Engine with Objectify.
The major functionality of this app: if someone posts something new to the Datastore, it should be automatically pushed to the connected browsers without refreshing the page content. Basically it is a news-like site. The data sent to the browser is JSON from REST APIs.
To implement this functionality, I thought of the following approaches:
AJAX: I thought of making an AJAX call every 2-3 minutes to get updates (roughly as in the sketch after this list). But this doesn't seem feasible, as many AJAX calls from many browsers would cause many Datastore read operations.
Web Socket: This concept is pretty new to me; I am not familiar with it. Services like pusher.com use this technology to establish such connections.
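For the AJAX option, what I have in mind is roughly this (the endpoint, query param, and renderItem helper are made up):

let lastSeen = 0;

setInterval(async () => {
  // Only ask for items newer than the last one we saw.
  const res = await fetch('/api/news?since=' + lastSeen);
  const items = await res.json();
  if (items.length > 0) {
    lastSeen = items[items.length - 1].timestamp;
    items.forEach(renderItem); // hypothetical UI helper
  }
}, 3 * 60 * 1000); // every 3 minutes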
Now I need your suggestions on which of the two to use; I am also open to other solutions.
Google App Engine does not support WebSockets; however, it supports something similar called the Channel API, which works on older browsers as well. This may not be feasible depending on how many people you will have connected (channels cost 1c per 100). Channels also have some caveats: https://developers.google.com/appengine/docs/python/channel/overview#Caveats
As for using AJAX: if you cache the response in memcache and flush the key every 3 minutes, then you won't be doing any Datastore reads unless a new instance is fired up or the key expires.
I am trying to make a web app using Express.js and CoffeeScript that pulls data from the Amazon, Last.fm, and Bing web APIs.
Users can request data such as the prices for a specific album by a specific band, upcoming concert times and locations for a band... stuff like that.
My question is: should I make these API calls client-side using jQuery and getJSON, or should they be server-side? I've done client-side requests; how would I even make an API call from the server side?
I just want to know what the best practice is, and also if someone could point me in the right direction for making server-side API requests, that would be very helpful.
Thanks!
There are two key considerations for this question:
Do calls incur any data access? Are the results just going to be written to the screen?
How & where do you plan to handle errors? How do you handle throttling?
Item #2 is really important here because web services go down all the time for a whole host of reasons. Your calls to Bing, Amazon & Last.fm will probably fail 1% or 0.1% of the time (based on my experience here).
To make requests using server-side JS, you probably want to take a look at the Request package on npm.
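For example (the Last.fm URL and API key are placeholders):

var request = require('request');

// Server-side call to a JSON web API; json: true parses the body for you.
request(
  {
    url: 'https://ws.audioscrobbler.com/2.0/?method=artist.getinfo' +
         '&artist=Radiohead&api_key=YOUR_KEY&format=json',
    json: true,
  },
  function (err, response, body) {
    if (err) return console.error('Last.fm call failed:', err);
    console.log(body); // hand this to your route handler
  }
);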
It's often good to abstract away your storage and dependent services to isolate changes and offer a consolidated, consistent web API for your application. But sometimes, if you have a good hypermedia web API (RESTful responses link to other resources), you could reference a resource link from another service in the response from your service (e.g. an SO request could reference the Gravatar image/resource of a user). There's no one-size-fits-all: it depends on whether you want to encapsulate the dependency or integrate with it.
It might be beneficial to make the web API requests from your service and expose them via Express.js as your own web APIs.
Making HTTP web API requests is easy from Node. Here's another SO post covering that:
HTTP GET Request in Node.js Express
Well, the way you describe it, I think you may want to fetch data from Amazon, Last.fm and so on, process it with Node, save it in your database, and provide your own API.
You can use Node's http.request() to fetch the data and build your own REST API with Express.js.
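A bare-bones sketch of that idea (the upstream URL and route are placeholders; real code would also want error handling and caching):

var express = require('express');
var http = require('http');

var app = express();

// Your own REST endpoint, backed by an upstream web API.
app.get('/api/albums/:artist', function (req, res) {
  var url = 'http://upstream.example.com/albums/' +
            encodeURIComponent(req.params.artist);
  // http.get is the GET shorthand for http.request.
  http.get(url, function (upstream) {
    var data = '';
    upstream.on('data', function (chunk) { data += chunk; });
    upstream.on('end', function () {
      res.json(JSON.parse(data)); // process/store it here if needed
    });
  });
});

app.listen(3000);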