Twitter typeahead.js remote and search on client - ajax

As I understand it, typeahead.js has three ways of fetching data:
Local: hardcoded data
Prefetch: load data from a local JSON file or from a URL
Remote: send a query to the backend, which responds with matching results
I want to fetch all the data from the backend once and then process it on the client.
The data my server responds with has the following structure:
[{id: 2, courseCode: "IDA530", courseName: "Software Testing", university: "Lund University"},
 {id: 1, courseCode: "IDA321", courseName: "Computer Security", university: "Uppsala University"}, ...]
I want it to search on all fields in each entry (id, courseCode, courseName, university).
I want to do more on the client while still fetching only once per user (instead of every time a user types). I have probably misunderstood something here, so please correct me.

You should re-read the docs. Basically, there are two things you need:
Use the prefetch option to bring all the data from the backend to the client only once (that's what you are looking for, if I understand correctly).
Use a filter function to transform those results into datums. The returned datums can have a tokens field, which is what typeahead searches by, and it can be built from all your data.
Something along the lines of:
$('input.twitter-search').typeahead([{
  name: 'courses',
  prefetch: {
    url: '/url-path-to-server-ajax-that-returns-data',
    filter: function(data) {
      var retval = [];
      for (var i = 0; i < data.length; i++) {
        retval.push({
          value: data[i].courseCode,
          tokens: [data[i].courseCode, data[i].courseName, data[i].university],
          courseCode: data[i].courseCode,
          courseName: data[i].courseName,
          template: '<p>{{courseCode}} - {{courseName}}</p>'
        });
      }
      return retval;
    }
  }
}]);


how to get the Graphql request body in apollo-server [duplicate]

I have written a GraphQL query like the one below:
{
  posts {
    author {
      comments
    }
    comments
  }
}
I want to know how I can get details about the requested child fields inside the posts resolver.
I want to do this to avoid nested resolver calls. I am using ApolloServer's DataSource API.
I can change the API server to get all the data at once.
I am using ApolloServer 2.0 and any other ways of avoiding nested calls are also welcome.
You'll need to parse the info object that's passed to the resolver as its fourth parameter. This is the type for the object:
type GraphQLResolveInfo = {
  fieldName: string,
  fieldNodes: Array<Field>,
  returnType: GraphQLOutputType,
  parentType: GraphQLCompositeType,
  schema: GraphQLSchema,
  fragments: { [fragmentName: string]: FragmentDefinition },
  rootValue: any,
  operation: OperationDefinition,
  variableValues: { [variableName: string]: any },
}
You could traverse the AST of the field yourself, but you're probably better off using an existing library. I'd recommend graphql-parse-resolve-info. There are a number of other libraries out there, but graphql-parse-resolve-info is a pretty complete solution and is actually used under the hood by postgraphile. Example usage:
posts: (parent, args, context, info) => {
  const parsedResolveInfo = parseResolveInfo(info)
  console.log(parsedResolveInfo)
}
This will log an object along these lines:
{
  alias: 'posts',
  name: 'posts',
  args: {},
  fieldsByTypeName: {
    Post: {
      author: {
        alias: 'author',
        name: 'author',
        args: {},
        fieldsByTypeName: ...
      },
      comments: {
        alias: 'comments',
        name: 'comments',
        args: {},
        fieldsByTypeName: ...
      }
    }
  }
}
You can walk through the resulting object and construct your SQL query (or set of API requests, or whatever) accordingly.
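For example, a rough sketch of using the parsed object inside the posts resolver to fetch only what was requested (the dataSources.postsAPI.getPosts call here is hypothetical; substitute your own data access):

const { parseResolveInfo } = require('graphql-parse-resolve-info')

const resolvers = {
  Query: {
    posts: async (parent, args, context, info) => {
      const parsedResolveInfo = parseResolveInfo(info)
      // Child fields requested on Post, e.g. ['author', 'comments']
      const requestedFields = Object.keys(parsedResolveInfo.fieldsByTypeName.Post)
      // Hypothetical data source call that fetches posts together with
      // only the relations the client actually asked for
      return context.dataSources.postsAPI.getPosts({ include: requestedFields })
    },
  },
}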
Here are a couple of points you can use to optimize your queries for performance.
In your example it would help a lot to use https://github.com/facebook/dataloader. If you load comments in your resolvers through DataLoader, you ensure that they are batched and fetched just once. This significantly reduces the number of calls to the database, since your query demonstrates the N+1 problem.
I am not sure exactly what information you need to obtain in posts ahead of time, but if you know the post ids you can consider doing a "look ahead" by passing the already known ids into comments. This ensures you do not need to wait for posts, so you avoid the GraphQL tree calls and can resolve comments without waiting for posts. This article on optimizing GraphQL request waterfalls gives a good idea of how to optimize your queries with DataLoader and look-ahead:
https://blog.apollographql.com/optimizing-your-graphql-request-waterfalls-7c3f3360b051
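To illustrate the DataLoader point, a minimal sketch (batchGetCommentsByPostIds is a hypothetical helper that runs one bulk database query and returns the comment lists in the same order as the given post ids):

const DataLoader = require('dataloader')

// All commentsLoader.load() calls made while resolving one request are
// batched into a single call to the batch function below.
const commentsLoader = new DataLoader(async (postIds) => {
  const commentsByPostId = await batchGetCommentsByPostIds(postIds)
  return postIds.map((id) => commentsByPostId[id] || [])
})

const resolvers = {
  Post: {
    comments: (post) => commentsLoader.load(post.id),
  },
}

In a real app you would typically create the loader per request (for example in the context) so its cache does not outlive a single operation.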

Abstract relay-style connections in Apollo client

I am building an app that will use GraphQL on the backend and Apollo Client on the front-end. I am going to use Relay-style connection types, as they allow us to put metadata on relationships.
However, we don't want our React components to have to deal with the additional complexity added by connections. For legacy reasons, and also because it seems cleaner, I would prefer that my React components not have to deal with nodes and edges. I prefer to pass around:
Snippet 1:
const ticket = {
  title: 'My bug',
  authors: [{ login: 'user1' }, { login: 'user2' }]
}
rather than
Snippet 2:
const ticket = {
  title: 'My bug',
  authors: {
    nodes: [
      { login: 'user1' },
      { login: 'user2' },
    ]
  }
}
Also, in TypeScript, I really don't see myself defining a Ticket type that contains nodes and metadata such as nextPage, lastPage, etc.
I am trying to come up with an abstraction, maybe at the Apollo Client level, that would allow me to automatically convert Snippet 2 into Snippet 1 while still allowing access to Snippet 2 when I actually need that metadata.
Has this problem been solved by someone else? Do you have suggestions for a possible solution? Am I heading in the wrong direction?
Rather than trying to solve this client-side, you can simply expose additional fields in your schema. You can see this done with the official SWAPI example:
query {
  allFilms {
    # edges
    edges {
      node {
        ...FilmFields
      }
    }
    # nodes exposed directly
    films {
      ...FilmFields
    }
  }
}
This way you can query the nodes with or without the connection as needed without having to complicate things on the client side.
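Applied to the ticket example above, the connection type could expose the authors directly alongside the edges; a sketch (the type and field names here are made up for illustration):

type TicketAuthorsConnection {
  # Relay-style edges, with room for per-relationship metadata
  edges: [TicketAuthorEdge]
  # Authors exposed directly, for clients that don't care about edges
  nodes: [Author]
  pageInfo: PageInfo!
}

type Ticket {
  title: String
  authors(first: Int, after: String): TicketAuthorsConnection
}

Clients that just want the plain list can then select nodes and ignore edges entirely.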

Resolve to the same object from two incoherent sources in graphql

I have a problem I don't know how to solve properly.
I'm working on a project where we use a GraphQL server to communicate with different APIs. These APIs are old and very difficult to update, so we decided to use GraphQL to simplify our communications.
For now, two APIs allow me to get user data. I know it's not coherent, but sadly I can't change anything about that and I need to use both of them for different actions. So for the sake of simplicity, I would like to abstract this from my front-end app, so it only asks for user data, always in the same format, no matter which API the data comes from.
With only one API, the resolver system of GraphQL helped a lot. But now that I access user data from a second API, I find it very difficult to always send back the same object to my front end. The two APIs, even though they hold mostly the same data, have different response formats. So in my resolvers, depending on where the data is coming from, I have to do one thing or another.
Example:
API A
type User {
  id: String
  communication: Communication
}
type Communication {
  mail: String
}
API B
type User {
  id: String
  mail: String
}
I've heard a bit about apollo-federation, but I can't put a GraphQL server in front of every API of our system, so I'm kind of lost on how I can achieve transparency for my front-end app when the data comes from two different sources.
If anyone has already encountered the same problem or has advice on something I can do, I'm all ears :)
You need to decide what "shape" of the User type makes sense for your client app, regardless of what's being returned by the REST APIs. For this example, let's say we go with:
type User {
  id: String
  mail: String
}
Additionally, for the sake of this example, let's assume we have a getUser field that returns a single user. Any arguments are irrelevant to the scenario, so I'm omitting them here.
type Query {
  getUser: User
}
Assuming I don't know which API to query for the user, our resolver for getUser might look something like this:
async () => {
  const [userFromA, userFromB] = await Promise.all([
    fetchUserFromA(),
    fetchUserFromB(),
  ])
  // transform response
  if (userFromA) {
    const { id, communication: { mail } } = userFromA
    return {
      id,
      mail,
    }
  }
  // response from B is already in the correct "shape", so just return it
  if (userFromB) {
    return userFromB
  }
}
Alternatively, we can utilize individual field resolvers to achieve the same effect. For example:
const resolvers = {
  Query: {
    getUser: async () => {
      const [userFromA, userFromB] = await Promise.all([
        fetchUserFromA(),
        fetchUserFromB(),
      ])
      return userFromA || userFromB
    },
  },
  User: {
    mail: (user) => {
      if (user.communication) {
        return user.communication.mail
      }
      return user.mail
    },
  },
}
Note that you don't have to match your schema to either response from your existing REST endpoints. For example, maybe you'd like to return a User like this:
type User {
  id: String
  details: UserDetails
}
type UserDetails {
  email: String
}
In this case, you'd just transform the response from either API to fit your schema.
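A minimal sketch of that transformation, reusing the same hypothetical fetchUserFromA/fetchUserFromB helpers as above:

getUser: async () => {
  const [userFromA, userFromB] = await Promise.all([
    fetchUserFromA(),
    fetchUserFromB(),
  ])
  // Map either API's response onto the nested schema shape
  if (userFromA) {
    return {
      id: userFromA.id,
      details: { email: userFromA.communication.mail },
    }
  }
  if (userFromB) {
    return {
      id: userFromB.id,
      details: { email: userFromB.mail },
    }
  }
  return null
}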

Firestore - Using cache until online content updates

I am starting with Firestore. I've read the docs and tutorials about offline data persistence, but it's not really clear to me whether Firestore downloads the data again even if the content hasn't been modified.
For example, if I have a query whose results will only be updated once a week and I don't need the app to download the content again until changes are made, what is the most efficient way to write the code?
Thanks!
You want to use the "snapshot listener" API to listen to your query:
https://firebase.google.com/docs/firestore/query-data/listen#listen_to_multiple_documents_in_a_collection
Here's some JavaScript as an example:
db.collection("cities").where("state", "==", "CA")
.onSnapshot(function(querySnapshot) {
var cities = [];
querySnapshot.forEach(function(doc) {
cities.push(doc.data().name);
});
console.log("Current cities in CA: ", cities.join(", "));
});
The first time you attach this listener, Firestore will access the network to download all of the results of your query and provide you with a query snapshot, as you'd expect.
If you attach the same listener a second time and you're using offline persistence, the listener will be fired immediately with the results from the cache. Here's how you can detect whether your result came from the cache or from the server:
db.collection("cities").where("state", "==", "CA")
.onSnapshot({ includeQueryMetadataChanges: true }, function(snapshot) {
snapshot.docChanges.forEach(function(change) {
if (change.type === "added") {
console.log("New city: ", change.doc.data());
}
var source = snapshot.metadata.fromCache ? "local cache" : "server";
console.log("Data came from " + source);
});
});
After you get the cached result, Firestore will check with the server to see if there are any changes to your query result. If yes you will get another snapshot with the changes.
If you want to be notified of changes that only involve metadata (for example if no documents change but snapshot.metadata.fromCache changes) you can use QueryListenOptions when issuing your query:
https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/QueryListenOptions
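In the JavaScript SDK used in the snippet above, the equivalent is the includeQueryMetadataChanges option; a rough sketch (option names have changed across SDK versions, so check the docs for your version):

db.collection("cities").where("state", "==", "CA")
  .onSnapshot({ includeQueryMetadataChanges: true }, function(snapshot) {
    // Fires even when only metadata changes, e.g. fromCache flipping
    // from true to false once the server confirms the cached result.
    console.log("From cache:", snapshot.metadata.fromCache);
  });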

Bloodhound does not cache data from remote fetches in local storage

I am trying to load autocompletion information for people's names for typeahead, and then not have to query the server again if I already have a result.
For example, if I search for a person's name and the data for that person (among others) gets retrieved from a remote query, then when I delete the name and search for the surname instead, I want the previously cached names with that surname to show up. What actually happens is that the results are retrieved from the server again and then suggested.
Caching only works while typing a single word ("Mic" -> "Mich" -> "Micha" -> "Michael").
TL;DR: I want to cache results from Bloodhound in local storage, not only from prefetch (which cannot be applied to my situation) but from remote as well, and use that cache before querying remote again.
What I currently have is:
function dispkey(suggestion_object){
  console.log(suggestion_object);
  return suggestion_object["lastname"] + ", " + suggestion_object["firstname"];
}

var engine = new Bloodhound({
  name: 'authors',
  local: [],
  remote: 'http://xxxxxx.xxx/xxxx/xxxxxxxxxx?query=%%QUERY',
  datumTokenizer: function(d) {
    return Bloodhound.tokenizers.whitespace(d.val);
  },
  queryTokenizer: function (s){
    return s.split(/[ ,]+/);
  },
});

engine.initialize();

$('.typeahead').typeahead({
  highlight: true,
  hint: true,
  minLength: 3,
},
{
  displayKey: dispkey,
  templates: {
    suggestion: Handlebars.compile([
      '<p id="author_autocomplete_email_field">{{email}}</p>',
      '<p id="author_autocomplete_name_field">{{lastname}} {{firstname}}</p>'
    ].join(''))
  },
  source: engine.ttAdapter(),
});
I haven't found anything similar, and I am afraid there is no trivial solution to this.
P.S.: I also noticed that datumTokenizer never gets called:
datumTokenizer: function(d) {
  console.log("Lalalalala");
  return Bloodhound.tokenizers.whitespace(d.val);
},
When I used this, "Lalalalala" was never printed in the Chrome debug console.
As jharding mentioned, it's not possible to have remote suggestions pulled from localStorage at this point.
However, I recently worked on a small project where I needed to store previous form inputs for future use in typeahead.js. To do this I saved an array of form input values to localstorage.
var inputs = ['val1', 'val2', 'val3', 'val4'];
localStorage.setItem('values', JSON.stringify(inputs));
I then retrieved the array for use in the typeahead field.
var data = JSON.parse(localStorage.getItem('values'));

$('input').typeahead({
  minLength: 3,
  highlight: true,
},
{
  name: 'data',
  displayKey: 'value',
  source: this.substringMatcher(data)
});
You can view my full source here.
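For completeness, substringMatcher is the simple local matcher from the typeahead.js examples; a minimal version, returning datums with a value key so it matches the displayKey above, looks roughly like this:

function substringMatcher(strs) {
  return function findMatches(q, cb) {
    var matches = [];
    // Case-insensitive substring test against each stored value
    var substrRegex = new RegExp(q, 'i');
    $.each(strs, function(i, str) {
      if (substrRegex.test(str)) {
        matches.push({ value: str });
      }
    });
    cb(matches);
  };
}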
