I have a use case where I have created server-side views on Sync Gateway based on a rolling time window of 10 days. Is there a way to pull those directly to the device side?
When I look at the documentation, I see that there's no way these can be replicated directly and one needs to make REST calls:
http://developer.couchbase.com/documentation/mobile/1.2/develop/guides/sync-gateway/accessing-cb-views/index.html
Is that assumption correct?
The other approach I saw was to let all the data be replicated to the client side and then write Couchbase Lite views on the client side using map/reduce functions. Which of the two is the correct approach?
Yes, I believe your assumption is correct: views have to be queried directly via the public REST API. I also believe that your alternative of syncing the data and then querying it on the client side will work.
In order to find the "correct approach", I would consider your app's needs and deployment workflow:
Using views on the server will require:
Managing (CRUD) the views in SG, similar to managing functions in a database. These would ideally be handled by some deployment/management code.
Clients being able to make the API call to the public interface to access view data (see the sketch after this list). Working offline then requires a local cache.
Slicing data locally means that sync will bring down all the data, and the device will have to perform the search/slice/aggregation previously carried out by the server. This will:
Work offline.
Put potential extra strain on the device.
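For context, a view query against Sync Gateway's REST interface looks much like a CouchDB view query. Here is a minimal sketch in TypeScript, assuming the CouchDB-style view endpoint that the linked docs describe; the database, design document, and view names are hypothetical:

```typescript
// Hypothetical names throughout: database "db", design doc "stats", view "recent".
async function queryRecentDocs(sinceIso: string): Promise<unknown[]> {
  const url =
    "http://sync-gateway:4984/db/_design/stats/_view/recent" +
    `?startkey=${encodeURIComponent(JSON.stringify(sinceIso))}`;
  const res = await fetch(url, { credentials: "include" }); // session-cookie auth
  if (!res.ok) throw new Error(`View query failed: ${res.status}`);
  const body = await res.json();
  // CouchDB-style responses wrap results in a "rows" array.
  return body.rows.map((row: { key: string; value: unknown }) => row.value);
}
```

This is exactly the call that needs a local cache if the app must keep working offline.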
I don't think that there are any easy answers here - ideally views would be synced to the device, but I don't know if that's even possible with the current SG implementation.
(Note 1: the views must be created in Sync Gateway via the admin REST interface, not through the Couchbase web interface.)
(Note 2: I'm a server-side programmer, so this view is tainted.)
What I ended up doing was writing webhooks, which basically let me have the same docs replicated onto a Couchbase Server. Then I did all the needed aggregations and pushed the results to Sync Gateway (which got replicated to the app); a sketch of the webhook handler is below.
It may or may not be the right approach, but it works for my case...
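For anyone curious what that looks like, here is a minimal sketch of such a webhook receiver in TypeScript/Express, assuming Sync Gateway is configured with a webhook event handler that POSTs changed documents to this URL; the endpoint path and the Couchbase helper are hypothetical:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Sync Gateway's webhook handler POSTs each changed document body here.
app.post("/sg-webhook", async (req, res) => {
  const doc = req.body; // the changed document
  try {
    await saveToCouchbaseServer(doc); // hypothetical helper, see stub below
    res.sendStatus(200);
  } catch (err) {
    console.error("webhook processing failed", err);
    res.sendStatus(500); // report the failure
  }
});

// Stand-in for a Couchbase SDK upsert; replace with a real SDK call.
async function saveToCouchbaseServer(doc: { _id?: string }): Promise<void> {
  // e.g. bucket.upsert(doc._id, doc) with the Couchbase Node.js SDK
}

app.listen(3000);
```

The aggregation then runs server-side against that bucket, and the results are written back through Sync Gateway for replication.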
I'm trying to build a small site that gets its data from a database (currently I use Firebase's Cloud Firestore).
I've built it using Next.js and thought to host it on Vercel. It looks very nice and was working well.
However, the site needs to handle ~1000 small documents: serve them, search them, and rarely update them. To reduce calls to the database on every request, which are costly both in time and in database pricing, I thought it would be better if the server could fetch the full list of items when it starts (or on the first request), hold them in memory, and serve data requests from memory.
It worked well on the local dev server, but when I deployed it to Vercel, it didn't work. Vercel seems to force serverless mode, where each request is separate, and I can't use a common in-memory cache for the data.
Am I missing something, or is there a way to achieve something like that with Next.js on Vercel?
If not, can you recommend other free cloud services that can provide what I'm looking for?
One option could be using FaunaDB and Netlify, as described in this post, but I ended up opening a free Wix site and using Wix Data to store the data. I built an http-functions module to provide access to the data via REST, which also caches highly used data in memory. Currently it seems to work like a charm!
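The caching pattern is roughly the following, shown here as a generic serverless-style HTTP function in TypeScript (the Wix http-functions API differs in its details, so treat the names as illustrative). Module-scope state survives only per warm instance, which is also why the same trick is unreliable as a shared cache on Vercel:

```typescript
type Item = { id: string; name: string };

let cache: Item[] | null = null; // lives as long as this instance stays warm
let loadedAt = 0;
const TTL_MS = 5 * 60 * 1000; // refresh at most every 5 minutes

async function loadItems(): Promise<Item[]> {
  const now = Date.now();
  if (cache && now - loadedAt < TTL_MS) return cache;
  cache = await fetchAllItemsFromDb(); // one-shot read of all ~1000 docs
  loadedAt = now;
  return cache;
}

// Generic HTTP handler: search is served from memory, not from the database.
export async function handler(req: { query: { q?: string } }): Promise<Item[]> {
  const items = await loadItems();
  const q = (req.query.q ?? "").toLowerCase();
  return items.filter((it) => it.name.toLowerCase().includes(q));
}

// Stand-in for the real database call.
async function fetchAllItemsFromDb(): Promise<Item[]> {
  return [];
}
```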
I am mapping users to connections as described in the following link https://learn.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/mapping-users-to-connections so I can find which users to send messages to.
I was wondering if there is any additional work required for this to work smoothly on multi-node servers / behind load balancing. I'm not experienced on the infrastructure side, but I'm assuming that if multiple servers are spun up, there would be multiple static hashmaps storing the mappings of users to connections, i.e., one per server.
Would this mean users who have made a connection from their browser to node A will not be able to communicate with users who've connected to node B?
If this is the case, how would we go about making this possible?
In that same link, just below the Introduction section, four different mapping methods are discussed:
The User ID Provider (SignalR 2)
In-memory storage, such as a dictionary
SignalR group for each user
Permanent, external storage, such as a database table or Azure table storage
And after that there is a table that shows which of these works in different scenarios, one of those scenarios being "More than one server".
Since you don't mention which method you follow, the answer depends on your mapping method: per that table, in-memory storage does not work with more than one server, while options such as permanent, external storage do.
From there, you can check out "scaling out" on the same site you noted, which covers several methods you can follow depending on what suits your needs. That is where sending messages to clients, regardless of which server they connected to, is handled.
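To make the "permanent, external storage" method concrete, here is a language-neutral sketch (TypeScript with Redis; in a real ASP.NET SignalR app the equivalent logic would be C# on the server). Every node writes its user-to-connection mappings to the shared store, so any node can look up connections opened on other nodes; the Redis address and key names are assumptions:

```typescript
import Redis from "ioredis";

const redis = new Redis("redis://shared-redis:6379"); // hypothetical address

// Called when a client connects to *this* node.
export async function onConnected(userId: string, connectionId: string) {
  await redis.sadd(`connections:${userId}`, connectionId);
}

// Called when that client disconnects.
export async function onDisconnected(userId: string, connectionId: string) {
  await redis.srem(`connections:${userId}`, connectionId);
}

// Any node can answer this, regardless of where the user connected.
export async function connectionsFor(userId: string): Promise<string[]> {
  return redis.smembers(`connections:${userId}`);
}
```

Note that a shared mapping only solves the lookup problem; actually delivering a message to a connection held by another node is what the scale-out/backplane mechanisms are for.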
I am trying to create a microservice architecture for a hobby project and I am confused about some decisions. Can you please help me, as I have never worked with microservices before?
One of my requirements is that my AngularJS GUI will need to show some drop-downs or lists of values (for example, a list of countries). These can be fetched via a microservice REST call, but where should the values come from? Can I fetch them from my Config Server, or should they come from a database? If the latter, should each microservice have its own database for lookup values, or can it be a common one?
How would server-side validation work in this case? There will certainly be a microservice call the GUI makes for validation, but should the validation service be a single common microservice for all use cases/screens, should there be one per GUI page, or should the CRUD microservice be reused for validation as well?
How do I deal with a use case where the back end is not a database but a web-service call? Will I still need some local DB to maintain state between these calls (especially to handle the scenario where the web-service call fails) and finally pass the status on to the GUI?
First of all, there is no single way to design a microservice; one has to choose according to the use case and project requirements.
Can I keep these in a Config Server, or should they come from a database?
Again, it depends upon the use case and requirements. That said, since every MS should have its own DB, you can use a DB even if the countries are only names; and if they have relationships with cities/states, then a DB is the only sensible choice.
If DB, should each of the microservices have their own DB for lookup values, or can it be a common one?
No; IMO multiple MS should not depend on a single DB, because if that DB fails then all the MS will fail, which should not happen. Each MS should be able to work alone, without depending on another DB or MS.
Should the validation service be a common microservice for all use cases/screens?
Same as point 2
How do I deal with a use case where the back end is not a database call but another web-service call? Will I still need some local DB to maintain state in between these calls and finally pass on the status to the GUI?
If you are using HTTP then you should not save the state of any request. If you want to hand the request off to another MS, you can use a Feign client, which provides a very good way to call REST APIs along with other important features such as load balancing.
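Feign itself is a Java/Spring Cloud client, so as a language-neutral sketch, here is the same stateless idea in TypeScript with plain fetch; the downstream URL and payload shape are assumptions:

```typescript
interface OrderStatus {
  orderId: string;
  status: string;
}

// The calling service keeps no per-request state: it forwards the call,
// returns the downstream response, and on failure simply reports upstream.
export async function getOrderStatus(orderId: string): Promise<OrderStatus> {
  const res = await fetch(`http://order-service/orders/${orderId}`);
  if (!res.ok) {
    throw new Error(`order-service responded ${res.status}`); // GUI can retry
  }
  return (await res.json()) as OrderStatus;
}
```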
Microservice architecture is simple: we divide each task into separate services (e.g., Spring Boot applications).
For example, in every application there will be a login function, a registration function, and so on; each of these becomes a separate service in a microservice architecture.
1. You can store those in a database, since if you want to add more values in the future it is easy to do so.
You can maintain a separate DB per microservice or a single one: a single DB with separate collections or tables for each microservice.
2. By validation, do you mean who can use which microservice (role-based access)?
3. I think you have to use a local DB.
Microservices are a collection of loosely coupled services. For example, if you are creating an e-commerce application, user management can be a service, order management can be a service, and refund & chargeback management can be another service. Each of these services can be further divided into smaller units; let's call them API endpoints. For example, user management can have login as one endpoint and signup as another.
If you want to leverage the power of microservice architecture in its true sense, here is what I would suggest. For the above example, create three Spring Boot applications, one for each service. The first thing you should do after that is establish trust between those applications; I would prefer JWTs for trust establishment. After that, everything is a piece of cake. Here are the answers you are looking for:
You should ideally use a database, as opposed to keeping the values in the config server, for fetching a list of countries, so that you need not recompile your code every time a new country is added.
You can easily restrict access using @PreAuthorize if role-based access is what you are referring to.
You can use OkHttp or any other HTTP client in this use case, and you certainly need not maintain any local DB. However, you can cache the output of the web-service call if that is a requirement.
P.S.: Establishing trust between microservices can be a complex task if you don't understand all the subtleties. In that case, I would recommend going ahead with a single Spring Boot application, which is a monolithic architecture. I would still recommend JWTs, though.
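For a feel of what "establishing trust with JWTs" amounts to, here is a minimal sketch in TypeScript using the jsonwebtoken package (the Spring Boot apps in the answer would do the equivalent with a Java JWT library). A shared secret is the simplest scheme; asymmetric keys are common in practice, and all names here are illustrative:

```typescript
import jwt from "jsonwebtoken";

const SHARED_SECRET = process.env.SERVICE_JWT_SECRET ?? "dev-only-secret";

// Caller: mint a short-lived token identifying the calling service.
export function mintServiceToken(): string {
  return jwt.sign({ svc: "user-service" }, SHARED_SECRET, {
    expiresIn: "5m",
    audience: "order-service",
  });
}

// Callee: verify the token before handling the request.
export function verifyServiceToken(token: string): boolean {
  try {
    jwt.verify(token, SHARED_SECRET, { audience: "order-service" });
    return true;
  } catch {
    return false;
  }
}
```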
I am using a react-native app with relay modern.
Currently our app's fetchQuery implementation just does a fetch on the network (like in https://facebook.github.io/relay/docs/en/network-layer.html).
However, there is also the possibility of a local network layer, like https://github.com/relay-tools/relay-local-schema, which returns data from a local DB such as SQLite/Realm.
Is there a way to set up an offline-first response from the local network layer, followed by an automatic request to the real network that also populates the store with fresher data (along with writing to the local DB)?
Also should/can they share the same store?
From the requirements of Network.create(), the fetch function should return a promise containing the payload; there does not seem to be a way to return multiple values.
Any ideas/help/suggestions are appreciated.
What you are trying to achieve is complex, so I'll go for the easy approach, which is a long-lived cache.
As you might know, Relay Modern uses a local store that is an exact copy of the data you are fetching; you can configure this store cache as per your needs, e.g., no caching on mutations.
To see how this is achieved, the best library around for customising the Relay Modern or Classic network layer is https://github.com/nodkz/react-relay-network-modern
My recommendation: set up your cache and watch your requests... (you are going to love it)
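As a starting point, a cache-middleware setup with that library looks roughly like this; it is a sketch based on the library's documented middlewares, so check the repo for current option names, and the endpoint URL and cache sizes are assumptions:

```typescript
import {
  RelayNetworkLayer,
  cacheMiddleware,
  urlMiddleware,
} from "react-relay-network-modern";
import { Environment, RecordSource, Store } from "relay-runtime";

const network = new RelayNetworkLayer([
  cacheMiddleware({
    size: 100,           // max number of cached requests
    ttl: 15 * 60 * 1000, // cache lifetime in ms
  }),
  urlMiddleware({ url: () => "https://example.com/graphql" }),
]);

export const environment = new Environment({
  network,
  store: new Store(new RecordSource()),
});
```

Queries are then answered from the middleware cache while fresh, and mutations go straight to the network.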
Thinking in Relay: https://facebook.github.io/relay/docs/en/thinking-in-relay.html
I'm new to CouchDB and am trying to understand how to use it properly. I'm coming from MongoDB, where I would always write a web layer to put in front of Mongo so that I could let users access the data inside it; in fact, this is how I've used every database for every web site I've ever written.

Looking at Couch, I see that its native API is HTTP and that it has built-in things like OAuth support and other features that hint that perhaps I should no longer have my code layer sitting in front of Couch, but instead write views and such and just give my users accounts on Couch directly? I'm thinking in terms of an HTTP-based API for a site of mine, or something through which users would consume my data. Opening up Couch like this seems odd to me, though. Is OAuth, in Couch's sense, meant more for remote access by software that I'd write and run "officially" inside my own network, or is it literally meant for end users?
I know there might be things that could only be done through a code layer on top of CouchDB, for example if you wanted additional non-database-related things to happen during API requests. So, thinking along those lines, I suspect I will still need a code layer anyway.
Dealer's choice.
Nodejitsu has a great writeup on this sort of topic here.
Not knowing your application specifics I'll take a broad approach...
Back-end
If you want to prevent users from ever seeing your database, then keep it on the back end. You can pipe everything through something like Node.js and present only what the user needs to see; they'll never know anything about the database.
See Resource View Presenter
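A minimal sketch of that back-end arrangement, assuming Express and the nano CouchDB client; the database, design document, and view names are hypothetical:

```typescript
import express from "express";
import nano from "nano";

const couch = nano("http://localhost:5984");
const db = couch.db.use("mydb"); // hypothetical database
const app = express();

app.get("/articles", async (_req, res) => {
  try {
    // Query a (hypothetical) CouchDB view through the server only.
    const result = await db.view("app", "published", { include_docs: true });
    // Present only what the user needs to see; the database stays hidden.
    res.json(
      result.rows.map((row: any) => ({ id: row.id, title: row.doc?.title }))
    );
  } catch {
    res.status(502).json({ error: "upstream database error" });
  }
});

app.listen(3000);
```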
Front-end
If you are not concerned about data security, you can host an entire app on CouchDB; see CouchApp. This approach has the benefit of using the replication mechanism to control publishing your site/data. The drawback here is that you will almost certainly run into some technical limitations that will require moving CouchDB closer to the backend.
Bl-end
Have the app server present the interface while the client pulls the data from the database separately. This gives the most flexibility, but it can be a bag of hurt: even with good design it can lead to supportability and scalability issues.
My recommendation
Use CouchDB on the backend. If you need mobile clients to synchronize then use a secondary DB publicly exposed for this purpose and selectively sync this data to wherever it needs to go.
Simply put, no.
There's no way to secure Couch properly on a public-facing site. There's no way to discriminate access at a fine enough level of granularity: if someone has access to any of the data, they have access to all of the data.
Not all data on a site is meant for public consumption, save for the most trivial of sites.