I have a JSON structure something like this:
{
  Id: 1,
  Data: [
    {
      X: 1,
      Content: [...]
    },
    {
      X: 2,
      Content: [...]
    }
  ]
}
Where Content can be very large byte arrays. The problem I have is that serialising and deserialising this structure using JSON.NET takes up a large amount of memory on the LOH.
Is there some way I can get Web API to stream the byte arrays to files?
The only other solution I can see is to abandon JSON and use multipart/form-data, but that seems kind of ugly. Is there some other way?
I'm doing a simple transaction with a single transfer instruction that sends 0.1 SOL from one account to another. Then I want to fetch the transaction data and use it to verify what it carries - in this case, that a transfer of 0.1 SOL was made.
I use getTransaction with the tx signature and get a response like this:
{
  message: Message {
    header: {
      numReadonlySignedAccounts: 0,
      numReadonlyUnsignedAccounts: 1,
      numRequiredSignatures: 1
    },
    accountKeys: [ [PublicKey], [PublicKey], [PublicKey] ],
    recentBlockhash: '9S44wiNxXZSdP5VTG6nvaumUJBiUW1DGUXmhVfuhwTMh',
    instructions: [ [Object] ],
    indexToProgramIds: Map(1) { 2 => [PublicKey] }
  },
  signatures: [
    '8ykRq1XtgrtymXVkVhsWjaDrid5FkKzRPJrarzJX9a6EArbEUYMrst6vVC6TydDRG4sagSciK6pP5Lw9ZDnt3RD'
  ]
}
So, I dig into message.instructions and find the following object:
{ accounts: [ 0, 1 ], data: '3Bxs411Dtc7pkFQj', programIdIndex: 2 }
Ok, so data is the base58-encoded string '3Bxs411Dtc7pkFQj'. I decode that from base58 (using bs58), but that only gives me a Uint8Array, which I am not really sure how to convert into a JS object.
I can see in Solscan that the tx's instruction data is decoded into hex, and I can also get this info in my script:
---> Instructions:
{ accounts: [ 0, 1 ], data: '3Bxs411Dtc7pkFQj', programIdIndex: 2 }
---> Instructions Data:
3Bxs411Dtc7pkFQj
0200000000e1f50500000000
-----
But I'm not sure what to do next. How do I get the actual amount out of it?
So, I guess the question is: How to decode the instruction data into a JS object?
So, for your second question:
To know how to deserialize the array returned from bs58 decoding, you need to know how the sender serialized the instruction.
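In this case the instruction was built by SystemProgram.transfer, whose data layout is a little-endian u32 instruction index (2 = transfer) followed by a little-endian u64 lamports amount - which matches the hex above: 02000000 followed by 00e1f50500000000. A minimal sketch of a manual decode, assuming Node and the bs58 package:

const bs58 = require('bs58');

// '3Bxs411Dtc7pkFQj' is the base58 string from the instruction in the question.
const raw = bs58.decode('3Bxs411Dtc7pkFQj'); // Uint8Array (a Buffer in Node)
const view = new DataView(raw.buffer, raw.byteOffset, raw.byteLength);

const instructionIndex = view.getUint32(0, true); // 2 -> "transfer"
const lamports = view.getBigUint64(4, true);      // 100000000n

console.log(instructionIndex, lamports); // 100,000,000 lamports = 0.1 SOL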
I was redirected to solana.stackexchange.com and there I found the answer. Basically, I could use getParsedTransaction instead of just getTransaction.
The parsed function would give me exactly what I need, since the instructions prop is an array containing one object (in my case), which looks like this:
{
  parsed: {
    info: {
      destination: 'foo',
      lamports: 100000000,
      source: 'bar'
    },
    type: 'transfer'
  },
  // ... other props
}
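A minimal sketch of that approach, assuming @solana/web3.js, a known transaction signature, and devnet (swap in whichever cluster you actually use):

const { Connection, clusterApiUrl } = require('@solana/web3.js');

async function checkTransfer(signature) {
  const connection = new Connection(clusterApiUrl('devnet')); // assumed cluster
  const tx = await connection.getParsedTransaction(signature);

  // For a System Program transfer, the parsed instruction carries the amount directly.
  const ix = tx.transaction.message.instructions[0];
  console.log(ix.parsed.type);          // 'transfer'
  console.log(ix.parsed.info.lamports); // 100000000 (= 0.1 SOL)
}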
I am trying to boil down a pretty complicated problem into its essence so I can get some help on how to model or architect it. Here it goes.
Say we are compiling functions in this order:
function test() {
  sum(mul(2, 3), mul(3, 4))
}

function sum(a, b) {
  return a + b
}

function mul(a, b) {
  return a * b
}
We end up with an AST something like this:
{
  type: 'Program',
  blocks: [
    {
      type: 'Function',
      name: 'test',
      args: [],
      body: [
        {
          type: 'Call',
          function: 'sum',
          args: [
            {
              type: 'Call',
              function: 'mul',
              ...
            },
            ...
          ]
        }
      ]
    },
    {
      type: 'Function',
      name: 'mul',
      args: ...,
      body: ...
    },
    {
      type: 'Function',
      name: 'sum',
      args: ...,
      body: ...
    }
  ]
}
Now we start compiling this AST into more easily manipulated objects, with direct pointers to functions and such. The final result might look like this:
{
  type: 'Program',
  blocks: [
    {
      type: 'Function',
      name: 'test',
      args: [],
      body: [
        {
          type: 'Call',
          pointer: 2,
          args: [
            {
              type: 'Call',
              pointer: 1,
              ...
            },
            ...
          ]
        }
      ]
    },
    {
      type: 'Function',
      name: 'mul',
      args: ...,
      body: ...
    },
    {
      type: 'Function',
      name: 'sum',
      args: ...,
      body: ...
    }
  ]
}
The main difference is that the "final" version has a pointer to the index where the function is defined. This is a very rough sketch. In reality, multiple passes may be required to resolve some context sensitivity, so you end up with multiple partial/intermediate data structures in the transition from the AST to the final compiled object.
How do you define types to deal with this situation? Ideally there would be an "initial" and a "final" type. In reality, on our first pass we have a "placeholder type" for the function calls, which we can't resolve until the pass is complete. So on the first pass, we have:
function: String
On the second pass we change it to:
pointer: Int
How do you reconcile this? How do you architect the algorithm so as to allow for these "placeholder" types for the final data structure?
I have tried searching the web for these sorts of topics but haven't found anything:
partial types
intermediate types
placeholder types
virtual types
temporary types
transitional types
how to have temporary placeholders in data structures
etc.
Create a hashmap.
In a first pass write name/index pairs to the hashmap without modifying the AST itself. For the example that would result in this hashmap (represented in JSON format):
{
  "mul": 1,
  "sum": 2
}
In a second pass you can use the hashmap to replace references to the keys of this hashmap with a pointer property that gets the corresponding value.
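A minimal sketch of those two passes over the AST shape from the question (the function name is illustrative):

function resolvePointers(program) {
  // Pass 1: record the block index of every function by name.
  const table = new Map();
  program.blocks.forEach((block, i) => {
    if (block.type === 'Function') table.set(block.name, i);
  });

  // Pass 2: rewrite Call nodes, replacing `function: name` with `pointer: index`.
  const rewrite = (node) => {
    if (!node || typeof node !== 'object') return;
    if (node.type === 'Call') {
      node.pointer = table.get(node.function);
      delete node.function;
    }
    (node.args || []).forEach(rewrite);
    (node.body || []).forEach(rewrite);
  };
  program.blocks.forEach(rewrite);

  return program;
}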
I would suggest not trying to figure out how to store intermediate data types, but rather how to store "references" or "holes". Go look up how a typical serialization/deserialization algorithm works (especially one that can deal with something like repeated substructure or circular references): http://www.dietmar-kuehl.de/mirror/c++-faq/serialization.html
It may give you helpful ideas.
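As a rough sketch of the "holes" idea (names here are illustrative, not tied to any particular serializer): every unresolved reference is recorded as a hole object, and a later pass patches each hole in place once the target is known.

// A hole stands in for a pointer we cannot compute yet.
function makeHole(name) {
  return { resolved: false, name, pointer: null };
}

// First pass: every call gets a hole instead of a final pointer.
function firstPass(callNames) {
  return callNames.map((name) => ({ type: 'Call', target: makeHole(name) }));
}

// Later pass: patch every hole using the name -> index table.
function patchHoles(calls, table) {
  for (const call of calls) {
    if (!call.target.resolved) {
      call.target.pointer = table.get(call.target.name);
      call.target.resolved = true;
    }
  }
}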
From the Redux docs:
This [normalized] state structure is much flatter overall. Compared to the original nested format, this is an improvement in several ways...
From https://github.com/paularmstrong/normalizr:
Many APIs, public or not, return JSON data that has deeply nested objects. Using data in this kind of structure is often very difficult for JavaScript applications, especially those using Flux or Redux.
It seems like normalized, database-ish data structures are better to work with on the front end. Then why is GraphQL so popular, when its whole query language revolves around quickly fetching arbitrarily nested data? Why do people use it?
This kind of discussion is off-topic on SO ...
It's not only about [normalized] structures ...
A GraphQL client (like Apollo) also takes care of all the data-fetching nuances (error handling, caching, refetching, data conversion, and many more), which is hard to replicate with Redux.
They serve different use cases, and you can use both:
keep (complex) app state in redux,
handle data fetching in apollo (you can use it for local state, too).
Let's look at why we want to normalize the cache and what kind of work we have to do to get a normalized cache.
For the main page we fetch a list of TODOs and a list of high-priority TODOs. Our two endpoints return the following data:
{
  all: [{ id: 1, title: "TODO 1" }, { id: 2, title: "TODO 2" }, { id: 3, title: "TODO 3" }],
  highPrio: [{ id: 1, title: "TODO 1" }]
}
If we stored the data like this in our cache, we would have a hard time updating a single todo, because we would have to update it in every array we have in our store now or might have in our store in the future.
We can normalize the data and only store references in the array. This way we can easily update a single todo in a single place:
{
  queries: {
    all: [{ ref: "Todo:1" }, { ref: "Todo:2" }, { ref: "Todo:3" }],
    highPrio: [{ ref: "Todo:1" }]
  },
  refs: {
    "Todo:1": { id: 1, title: "TODO 1" },
    "Todo:2": { id: 2, title: "TODO 2" },
    "Todo:3": { id: 3, title: "TODO 3" }
  }
}
The downside is that this shape of data is now much harder to use in our list component. We have to transform the cache back, roughly like so:
function denormalise(cache) {
  return {
    all: cache.queries.all.map(({ ref }) => cache.refs[ref]),
    highPrio: cache.queries.highPrio.map(({ ref }) => cache.refs[ref]),
  };
}
Notice how updating Todo:1 inside the cache now automatically updates every query that references that todo, provided we run this function inside the React component (it is often called a selector in Redux).
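For example, a usage sketch with react-redux (the component and the state shape here are illustrative):

import { useSelector } from 'react-redux';

// Re-renders whenever a referenced todo changes in the cache.
function TodoList() {
  const cache = useSelector((state) => state.cache); // assumes the cache lives at state.cache
  const { all } = denormalise(cache);
  return (
    <ul>
      {all.map((todo) => <li key={todo.id}>{todo.title}</li>)}
    </ul>
  );
}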
The magical thing about GraphQL is that it is a strict specification with a type system. This allows GraphQL clients like Apollo to globally identify objects and normalize the cache. At the same time, it can automatically denormalize the cache for you and update objects in the cache after a mutation. This means that most of the time you don't have to write any caching logic at all. And this should explain why it is so popular: the best code is no code!
const { data, loading, error } = useQuery(gql`
  { all { id title } highPrio { id title } }
`);
This code automatically fetches the query on load, normalizes the response, and writes it into the cache. It then denormalizes the cache back into the shape of the query using the cached data. Updates to elements in the cache automatically update all subscribed components.
I'm writing a web socket service for my Ember app. The service will subscribe to a URL, and receive data. The data will push models into Ember Data's store.
The URL scheme does not represent standard RESTful routes; it's not /posts and /users, for example - it's something like /inbound. Once I subscribe, it will just be a firehose of various events.
For each of these routes I subscribe to, I will need to implement data munging specific to that route to get the data into a format Ember Data expects. My question is: where is the best place to do this?
An example event object I'll receive:
event: {
  0: "device:add",
  1: {
    device: {
      devPath: "/some/path",
      label: "abc",
      mountPath: "/some/path",
      serial: "abc",
      uuid: "5406-12F6",
      uniqueIdentifier: "f5e30ccd7a3d4678681b580e03d50cc5",
      mounted: false,
      files: [ ],
      ingest: {
        uniqueIdentifier: 123,
        someProp: 123,
        anotherProp: 'abc'
      }
    }
  }
}
I'd like to munge the data into a standardized form like this:
device: {
  id: "f5e30ccd7a3d4678681b580e03d50cc5",
  devPath: "/some/path",
  label: "abc",
  mountPath: "/some/path",
  serial: "abc",
  uuid: "5406-12F6",
  mounted: false,
  files: [ ],
  ingestId: 123
},
ingest: {
  id: 123,
  someProp: 123,
  anotherProp: 'abc'
}
and then hand that off to something that knows how to add both the device model and the ingest model to the store. I'm just getting confused by all the abstractions in Ember Data.
Questions:
Which method should I pass that final, standardized JSON to in order to add the records to the store? store.push?
Where is the appropriate place for the initial data munging, i.e. getting the event data out of the array? The application serializer's extractSingle? pushPayload? Most of the munging will be non-standard across the different routes.
Should per-type serializers be used for each key in the data after I've done the initial munging? I.e. should I hand the initial "blob" to the application serializer, which then delegates each key to the per-model serializers?
References:
Docs on the store
RESTSerializer
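For reference, a hedged sketch of just the munging step described above, independent of which Ember Data hook it ends up living in (the function name is illustrative):

// Flatten the raw socket event into the { device, ingest } shape shown earlier.
function normalizeDeviceEvent(event) {
  const { ingest: rawIngest, uniqueIdentifier, ...device } = event[1].device;
  const { uniqueIdentifier: ingestId, ...ingest } = rawIngest;
  return {
    device: { id: uniqueIdentifier, ...device, ingestId },
    ingest: { id: ingestId, ...ingest },
  };
}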
How do I pass a 2-dimensional array from JavaScript to Ruby, please? I have this on the client side:
function send_data() {
  var testdata = {
    "1": {
      "name": "client_1",
      "note": "bigboy"
    },
    "2": {
      "name": "client_2",
      "note": "smallboy"
    }
  }
  console.log(testdata);
  $.ajax({
    type: 'POST',
    url: 'test',
    dataType: 'json',
    data: testdata
  });
}
and this on the server side:
post '/test' do
  p params
end
but I can't get it right. The best I could get on server side is something like
{"1"=>"[object Object]", "2"=>"[object Object]"}
I tried adding JSON.stringify on the client side and JSON.parse on the server side, but the former resulted in
{"{\"1\":{\"name\":\"client_1\",\"note\":\"bigboy\"},\"2\":{\"name\":\"client_2\",\"note\":\"smallboy\"}}"=>nil}
while the latter threw a TypeError - can't convert Hash into String.
Could anyone help, or maybe post a short snippet of correct code, please? Thank you
You may want to build up the JSON manually on the JavaScript side:
[[{'object':'name1'},{'object':'name2'}],[...],[...]]
This will build an array of arrays with objects.
It may look like this:
testdata = [
  [{
    "1": {
      "name": "client_1",
      "note": "bigboy"
    }
  }],
  [{
    "2": {
      "name": "client_2",
      "note": "smallboy"
    }
  }]
]
I may have something off here, but this should be close to what it would look like.
I'm not sure if this will help, but I've got two thoughts: serialize fields and/or iterate the array.
I managed to get a JSON array into ActiveRecord objects by serializing the fields which had to store sub-arrays:
class MyModel < ActiveRecord::Base
  serialize :tags
end
and using an iterator to process the JSON array:
f = File.read("myarrayof.json")
jarray = JSON.load(f)
jarray.each { |j| MyModel.create.from_json(j.to_json).save }
The back-and-forth conversion seems a bit cumbersome, but I found it the most obvious way to handle the array.
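As a side note on the JSON.stringify attempt mentioned in the question: a hedged sketch of sending the structure as a raw JSON body, so that the Sinatra handler can read and parse request.body itself (e.g. JSON.parse(request.body.read)) instead of relying on params:

$.ajax({
  type: 'POST',
  url: 'test',
  contentType: 'application/json', // the body is JSON, not form-encoded params
  data: JSON.stringify(testdata),  // serialize the whole structure once on the client
  dataType: 'json'                 // expect a JSON response
});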