I have some Neo4j graphs and I want to export their data as JSON that's compatible with D3.js.
I found a fairly reasonable tutorial for doing so at this link:
https://neo4j.com/developer/example-project/
However, the one thing I don't understand is how the following data was generated:
// JSON object for whole graph viz (nodes, links - arrays)
curl http://localhost:8080/graph[?limit=50]
{"nodes":
[{"title":"Apollo 13","label":"movie"},{"title":"Kevin Bacon","label":"actor"},
{"title":"Tom Hanks","label":"actor"},{"title":"Gary Sinise","label":"actor"},
{"title":"Ed Harris","label":"actor"},{"title":"Bill Paxton","label":"actor"}],
"links":
[{"source":1,"target":0},{"source":2,"target":0},{"source":3,"target":0},
{"source":4,"target":0},{"source":5,"target":0}]}
I don't understand how the JSON payload above was generated.
All of my Neo4j graphs are exported as Neo4j's own JSON (which has a more complex payload structure than the one above). That's fine, but I specifically want to generate the JSON shown above. A curl command is only going to fetch existing data, so at the very least I need existing data formatted properly, which I don't have.
Related
I'm using protobuf to deserialize JSON from an API for a Flutter app.
However, I'm having an issue where I need to deserialize a list like this one, for example:
"value_array": [ "",
"",
null
]
If I use the usual:
repeated string value_array = 6;
I get an exception while parsing the JSON.
Sadly, I can't have the JSON changed on the API side. Even worse, I can't just manually remove the null from the JSON before parsing it, as this element appears in many different API calls.
PS: I don't need to differentiate the empty string from null; I just want to avoid the exception.
Thanks in advance for any help.
Protobuf has a very opinionated view of JSON, and not all JSON concepts map cleanly to protobuf concepts; for example, protobuf has no notion of null.
It might be fine and reasonable to use the protobuf JSON variant if you're always talking protobuf-to-protobuf and want readability (hence text over binary), but if you're working with an external (non-protobuf) JSON tool, honestly: don't use protobuf. Use any relevant JSON-specific tool for your platform - it will do a better job of handling the JSON and supporting your needs. You can always re-map that data to your protobuf model after you have deserialized it, if you need to.
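As a rough illustration of that idea (a TypeScript sketch; the same approach applies with Dart's dart:convert, and normalizeNulls is a hypothetical helper, not part of any protobuf library): parse the payload with a plain JSON parser, normalize the nulls, and only then hand the result to your protobuf code.

```typescript
// Sketch: parse with a plain JSON parser first, then replace nulls before the
// data ever reaches the protobuf JSON parser. This blanket replacement assumes
// nulls only appear where an empty string is acceptable, per the question.
function normalizeNulls(payload: unknown): unknown {
  if (payload === null) return "";                      // treat null as empty string
  if (Array.isArray(payload)) return payload.map(normalizeNulls);
  if (typeof payload === "object") {
    return Object.fromEntries(
      Object.entries(payload as Record<string, unknown>).map(
        ([key, value]) => [key, normalizeNulls(value)]
      )
    );
  }
  return payload;
}

const raw = '{"value_array": ["", "", null]}';
const cleaned = normalizeNulls(JSON.parse(raw));
// JSON.stringify(cleaned) === '{"value_array":["","",""]}'
// cleaned (or its re-serialized form) can now be mapped onto the protobuf model.
```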
I'm writing client-side code for an app that will query a GraphQL server. In a couple of places in my code, I'm passing around data that will eventually get turned into a query variable, so it needs to validate against a specific GraphQLInputType in my schema. On looking into some of the utilities that graphql-js provides, it looks like the isValidJSValue checker is exactly what I'm looking for, and its comments even mention that it's intended to be used to do just that.
The issue is that I don't have access to the GraphQL type I want to validate against as a JS object, which is what I'm pretty sure that function expects. I'm importing my schema (as an npm dependency) as JSON, or I also have it in the schema notation. Is there some other utility I can use to get the JS type I need from one of those sources, and then use that to check my data with isValidJSValue? Or is there some other way I could go about this that I just haven't thought of?
You can use the schema JSON you have imported to construct an actual GraphQL schema instance using buildClientSchema: https://github.com/graphql/graphql-js/blob/master/src/utilities/buildClientSchema.js
Then, it should be a simple matter of looking in the types field of the resulting schema to find your input type, and then calling isValidJSValue on it.
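Roughly, that could look like the following sketch. It assumes the imported JSON is a standard introspection result, a graphql-js version that still ships isValidJSValue, and placeholder names like ./schema.json and MyInputType:

```typescript
// Sketch only: rebuild a GraphQLSchema from introspection JSON and validate a value.
import { GraphQLInputType } from "graphql";
import { buildClientSchema, isValidJSValue } from "graphql/utilities";
import introspectionResult from "./schema.json"; // hypothetical path to the imported schema

// buildClientSchema expects the contents of the "data" field of an introspection
// query response, i.e. an object with a __schema key.
const schema = buildClientSchema(introspectionResult.data);

// Look up the input type you want to validate against (placeholder name).
const inputType = schema.getType("MyInputType") as GraphQLInputType;

// isValidJSValue returns an array of error messages; an empty array means valid.
const errors = isValidJSValue({ someField: 42 }, inputType);
if (errors.length > 0) {
  console.error("Value does not match MyInputType:", errors);
}
```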
I'm curious, though - why validate the value on the client before sending it, rather than just relying on the validation the server will do?
I'm a beginner with Elasticsearch. One thing I noticed is that Elasticsearch documents are expressed in JSON. I googled for a while but could not find the reason for that.
Can someone explain why JSON and not XML or another format?
It is because a JSON document has a key-value structure, which helps Elasticsearch index on the basis of keys. With XML, a lot of effort would be required just to parse the data, whereas with JSON, Elasticsearch can directly index the required data according to its keys.
Basically, there are two main standard ways to transport data between a server and a client: XML and JSON. Old services use XML as well as JSON to transfer data, since most of their older consumers are tied to XML parsers, but recent services use JSON as the standard, mainly because of the simplicity that comes with it. JSON parsers are easy to build and use, while XML parsers need to be customized per field. Although there are some great libraries for parsing an XML response, like the SAX parser in Java, it's still not that straightforward. Also, JSON can be used directly in JavaScript. I hope I have answered your question.
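To illustrate that last point, a small browser-side sketch (the document contents are made up):

```typescript
// JSON: the parsed result is immediately a plain object keyed by field names.
const doc = JSON.parse('{"title": "Apollo 13", "year": 1995}');
console.log(doc.title); // "Apollo 13"

// XML: the same data needs a parser plus explicit traversal to reach each field.
const xml = new DOMParser().parseFromString(
  "<doc><title>Apollo 13</title><year>1995</year></doc>",
  "application/xml"
);
console.log(xml.querySelector("title")?.textContent); // "Apollo 13"
```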
I make a (very simple) AJAX call that currently returns a string of HTML. Depending on who is calling it, this string can become very long at times. What I'd like to know is why it's better to return a JSON result and build my HTML afterwards, rather than just returning one long string.
Some advantages of returning JSON instead of HTML:
The data can be used as data for analysis or other uses, not just used for presentation.
JSON data is usually much smaller than the full presentation HTML, so you are transferring less data over the internet.
You create a separation between data and presentation rather than mix them both into one single API. Your server returns the data which a separate piece of code then turns into presentation.
The JSON data can be processed or modified more easily before presentation (such as filtered, sorted, tagged, expanded/collapsed, etc...).
You can use the same JSON data for many different types of presentation. If you return HTML, the presentation is already baked in so if you want a different presentation, you then have to create a whole new AJAX call.
If you want an extreme way to think about this, ask yourself why a database returns raw data and not an HTML view of the data. It's because you can do so many more kinds of things with the actual data; the data is much more useful when the database gives you just the data and lets different pieces of code do something with it (analyze it, combine it with other data, make decisions based on it, present it for viewing, etc.). If the database only returned an HTML view of the data, it would be far harder to do all these other things with it. The same is true of an AJAX call, which is really just the client's access to data.
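A minimal sketch of the JSON-plus-client-side-rendering approach (the /api/movies endpoint, the Movie shape, and the #movie-list element are made up):

```typescript
// Fetch data as JSON and build the presentation on the client.
interface Movie {
  title: string;
  year: number;
}

async function renderMovies(container: HTMLElement): Promise<void> {
  const response = await fetch("/api/movies"); // hypothetical endpoint
  const movies: Movie[] = await response.json();

  // The same data could instead be sorted, filtered, or charted here:
  // presentation is decided by the client, not baked into the response.
  container.innerHTML = movies
    .map((movie) => `<li>${movie.title} (${movie.year})</li>`)
    .join("");
}

renderMovies(document.querySelector<HTMLElement>("#movie-list")!);
```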
Is there another way to view the profiling results of MiniProfiler (I'm specifically interested in the EF5 version)?
Every tutorial that I've seen uses MiniProfiler.RenderIncludes(); but since my MVC app mostly returns JSON, that is not an option for me.
Is there a way to write results to file or something like that?
You can read and write results to just about anywhere by changing MiniProfiler.Settings.Storage to a different IStorage implementation from the default (which stores to the HTTP cache). If you wanted to, this could store to and read from a file pretty easily (you would have to write your own custom implementation for that).
The files served by RenderIncludes are the HTML templates for displaying the results and the script to retrieve the results from the server and render them on the client (all found here). But you are by no means obliged to use this mechanism. If you want to write your own logic for retrieving and displaying results, you should base it on the logic found in MiniProfilerHandler.GetSingleProfilerResult. This function roughly performs the following (listing the significant steps for your purposes):
Gets Id of next results to retrieve (through MiniProfiler.Settings.Storage.List())
Retrieves the actual results (MiniProfiler.Settings.Storage.Load(id))
Marks the results as viewed so that they won't be retrieved again (MiniProfiler.Settings.Storage.SetViewed(user, id))
Converts these to ResultsJson and returns it
With access to MiniProfiler.Settings.Storage, you should be able to retrieve, serve and consume the profile results in any way that you want. And if you are interested in using the RenderIncludes engine but want to mess around with the html/js being served, you can provide your own custom ui templates that will replace the default behavior.