How to send byte array (Blob) to GraphQL mutation - graphql

We have a GraphQL mutation with a byte array (Blob) field. How can I use tools like Insomnia or GraphQL playground to send byte array data to test the API?
mutation {
  saveSomething(something: {
    contentByteArray: [97, 110, 103, 101, 108, 111]
  }) {
    content
  }
}

You can send data like that; however, I understand that GraphQL sends a List of bytes rather than an array. In C# such a field (I'm uploading a photo) would be described as
Field<ListGraphType<ByteGraphType>>("photo");
and it would need to be converted back into an array in order to be saved to the database,
e.g.
IDictionary<string, object> dicPlayer = (IDictionary<string, object>)context.Arguments["input"];
...
if (dicPlayer.ContainsKey("photo")) {
    if (dicPlayer["photo"] == null) {
        playerInput.Photo = null;
    } else {
        List<byte> lstB = new List<byte>();
        foreach (var objB in (List<object>)dicPlayer["photo"]) {
            byte b = (byte)objB;
            lstB.Add(b);
        }
        byte[] arrB = lstB.ToArray();
        playerInput.Photo = arrB;
    }
}
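If you prefer something more compact, the same conversion can be done with LINQ. This is a minimal sketch, assuming the same List<object> value that the GraphQL library hands you for a ListGraphType<ByteGraphType> field (the helper name is mine):
// Hypothetical helper: convert the List<object> from the GraphQL input
// dictionary back into a byte[] (a null value stays null).
// Requires: using System; using System.Collections.Generic; using System.Linq;
static byte[] ToByteArray(object graphQlListValue)
{
    return graphQlListValue == null
        ? null
        : ((IEnumerable<object>)graphQlListValue).Select(o => Convert.ToByte(o)).ToArray();
}
// usage:
// if (dicPlayer.ContainsKey("photo")) { playerInput.Photo = ToByteArray(dicPlayer["photo"]); }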

Related

Convert non-typical JSON string to JSON

I'm fetching from a random API URL and I'm getting a response like this one:
"key='jio3298', age=24, key='oijf032', age=62". How can I turn this non-JSON string into a list of JSON objects (i.e. [{'key': 'jio3298', 'age': 24}, {'key': 'oijf032', 'age': 62}]) in an efficient way using JavaScript? I got this problem in an interview (it was one part of the problem; I needed that list so I could loop over it and filter based on a condition) and it seems my answer was, at the very least, slow.
You should use
String.prototype.split()
to create an array and then loop over that array to build key/value pairs.
Here is an example:
const parser = (input) => {
  input = input.split(",");
  let keyAgeList = [],
    output = [];
  for (let i of input) {
    i = i.replace("'", "");
    i = i.replace(" ", "");
    i = i.split("=");
    if (i[0] === "key") {
      keyAgeList[0] = i[1];
    } else {
      keyAgeList[1] = i[1];
    }
    if (keyAgeList.length > 1) {
      let x = {
        key: keyAgeList[0].replace("'", ""),
        age: keyAgeList[1].replace("'", ""),
      };
      output.push(x);
      keyAgeList = [];
    }
  }
  console.log(JSON.stringify(output));
  return JSON.stringify(output);
};
parser("key='jio3298', age=24, key='oijf032', age=62")

How to get full data using aqueduct and socket?

My Flutter app connects to a socket over HTTPS and I am using Aqueduct to fetch secure data. The socket data is one long string such as:
2_7#a_b_c_d_e_f_g#h_i_j_k_l_m_n#
I convert the data to JSON, so my JSON data looks like:
"{data:2_7#a_b_c_d_e_f_g#h_i_j_k_l_m_n#}"
and send it to the Flutter app. The 2_7# prefix means I have 2 rows and 7 columns.
The original server socket data starts with 152_7#, which means I have 152 rows with 7 columns.
When I try to get this data (152_7#) through the socket in Aqueduct I only get 12 rows, or sometimes 25 rows.
If the server data is short I get all of it, but I cannot get the big string data.
My question is: how do I get the full data using Aqueduct and a socket?
import 'package:aqueduct/aqueduct.dart';
import 'package:ntmsapi/ntmsapi.dart';

Socket socket;
String _reply;
var _secureResponse;
var _errorData;

class NtmsApiController extends Controller {
  @override
  Future<RequestOrResponse> handle(Request request) async {
    try {
      String _xRequestValue = "";
      _reply = "";
      _errorData = "Server_Error";
      if (request.path.remainingPath != null) {
        _xRequestValue = request.path.remainingPath;
        // returns a string like 152 rows with 11 columns
        socket = await Socket.connect('192.168.xxx.xxx', 8080);
        socket.write('$_xRequestValue\r\n');
        socket.handleError((data) {
          _secureResponse = "$_errorData";
        });
        await for (var data in socket) {
          // can't get all the data from the reply; it only shows about 25 rows
          _reply = new String.fromCharCodes(data).trim();
          _secureResponse = "$_reply";
          socket.close();
          socket.destroy();
        }
      } else {
        _secureResponse = "$_errorData";
      }
    } catch (e) {
      _secureResponse = "$_errorData";
    }
    return new Response.ok("$_secureResponse")
      ..contentType = ContentType.json;
  }
}
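Note that a TCP socket delivers the reply in arbitrary chunks, and the await for loop above overwrites _reply and closes the socket after the first chunk arrives. Below is a minimal sketch of accumulating all chunks instead, assuming the server closes the connection once it has finished sending (the helper name is mine):
import 'dart:io';

// Hypothetical helper: collect every chunk the socket emits and only then
// build the full reply, instead of treating the first chunk as the whole message.
Future<String> readFullReply(String host, int port, String request) async {
  final socket = await Socket.connect(host, port);
  socket.write('$request\r\n');
  final buffer = StringBuffer();
  await for (final chunk in socket) {
    buffer.write(String.fromCharCodes(chunk));
  }
  socket.destroy();
  return buffer.toString().trim();
}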

How should I handle very large projections in an event-sourcing context?

I wanted to explore the implications of event-sourcing vs. active-record.
Suppose I have events with payloads like this:
{
  "type": "userCreated",
  "id": "4a4cf26c-76ec-4a5a-b839-10cadd206eac",
  "name": "Alice",
  "passwordHash": "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
}
... and...
{
  "type": "userDeactivated",
  "id": "39fd0e9a-1025-42e6-8793-ed5bfa236f40"
}
I can reach the current state of my system with a reducer like this:
const activeUsers = new Map();
for (const event of events) {
  // userCreated
  if (event.payload.type == 'userCreated') {
    const { id, name, passwordHash } = event.payload;
    if (!activeUsers.has(id)) {
      activeUsers.set(id, { name, passwordHash });
    }
  }
  // userDeactivated
  if (event.payload.type == 'userDeactivated') {
    const { id } = event.payload;
    if (activeUsers.has(id)) {
      activeUsers.delete(id);
    }
  }
}
However, I cannot have my entire user table in a single Map.
So it seems I need a reducer for each user:
const userReducer = id => // filter events by user id...
But this will lead to slow performance because I need to run a reducer over all events for each new user.
I could also shard the users by a function of their id:
const shard = nShards => id => {
  let hash = 0, i, chr;
  if (id.length === 0) {
    return hash;
  }
  for (i = 0; i < id.length; i++) {
    chr = id.charCodeAt(i);
    hash = ((hash << 5) - hash) + chr;
    hash |= 0; // Convert to 32bit integer
  }
  return Math.abs(hash) % nShards; // abs so the shard index is never negative
};
Then the maps will be less enormous.
How is this problem typically solved in event-sourcing models?
As I understand it, you think you need to replay all the events through a reducer in order to query all the users, correct?
This is where CQRS comes into play, together with read models/denormalizers.
What almost everyone does is keep a read model (stored, for example, in a SQL database or something else that is good at querying data). This read model is constantly updated as new events are created.
When you need to query all users, you query this read model instead of replaying all the events.
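Here is a minimal sketch of such a read model, in the same style as the question's reducer and assuming events can be applied one at a time as they are appended (the Map simply stands in for a real queryable store such as a SQL table):
// Hypothetical read model: apply each new event to a persistent store as it
// arrives, instead of replaying the whole event history per query.
const usersReadModel = new Map();
function applyEvent(event) {
  const { type, id, name, passwordHash } = event.payload;
  if (type === 'userCreated' && !usersReadModel.has(id)) {
    usersReadModel.set(id, { name, passwordHash });
  }
  if (type === 'userDeactivated') {
    usersReadModel.delete(id);
  }
}
// Subscribe applyEvent to the event stream once; queries for "all active users"
// then read usersReadModel directly rather than folding over every event.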

One field in Protocol Buffers is always missing when reading from SequenceFile

Something mysterious is happening here:
What I wanted to do:
1. Save a Protocol Buffers object as SequenceFile format.
2. Read this SequenceFile text and extract the field that I need.
The mysterious part is:
One field that I wanted to retrieve is always null.
Product_Perf is the field that I wanted to extract from the SequenceFiles, and it is always missing.
Here's my protocol buffers schema:
message ProductJoin {
  Signals signals = 1;
  int64 id = 2;
}
message Signals {
  ProductPerf product_perf = 1;
}
message ProductPerf {
  int64 impressions = 1;
}
Here's how I save the protocol buffers as SequenceFiles:
JavaPairRDD<BytesWritable, BytesWritable> bytesWritableJavaPairRdd =
    flattenedPjPairRdd.mapToPair(
        new PairFunction<Tuple2<Long, ProductJoin>, BytesWritable, BytesWritable>() {
            @Override
            public Tuple2<BytesWritable, BytesWritable> call(Tuple2<Long, ProductJoin> longProductJoinTuple2) throws Exception {
                return new Tuple2<>(
                    new BytesWritable(String.valueOf(longProductJoinTuple2._2().getId()).getBytes()),
                    new BytesWritable(longProductJoinTuple2._2().toByteArray()));
            }
        });
// dump SequenceFiles
bytesWritableJavaPairRdd.saveAsHadoopFile(
    "/tmp/path/",
    BytesWritable.class,
    BytesWritable.class,
    SequenceFileOutputFormat.class
);
Below is the code showing how I read the SequenceFile:
sparkSession.sparkContext()
    .sequenceFile("tmp/path", BytesWritable.class, BytesWritable.class)
    .toJavaRDD()
    .mapToPair(
        bytesWritableBytesWritableTuple2 -> {
            Method parserMethod = clazz.getDeclaredMethod("parser");
            Parser<T> parser = (Parser<T>) parserMethod.invoke(null);
            return new Tuple2<>(
                Text.decode(bytesWritableBytesWritableTuple2._1().getBytes()),
                parser.parseFrom(bytesWritableBytesWritableTuple2._2().getBytes()));
        }
    );

Windows 8.1 store xaml save InkManager in a string

I'm trying to save what I have drawn with the pencil as a string. I do this with the SaveAsync() method to put it into an IOutputStream, then convert that IOutputStream to a stream using the AsStreamForWrite() method. From this point things should go fine; however, I get a lot of problems after this part. If I use, for example, this code block:
using (var stream = new MemoryStream())
{
    byte[] buffer = new byte[2048]; // read in chunks of 2KB
    int bytesRead = (int)size;
    while (bytesRead < 0)
    {
        stream.Write(buffer, 0, bytesRead);
    }
    byte[] result = stream.ToArray();
    // TODO: do something with the result
}
I get this exception:
"Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection."
Or if I try to convert the stream into an image using InMemoryRandomAccessStream like this:
InMemoryRandomAccessStream ras = new InMemoryRandomAccessStream();
await s.CopyToAsync(ras.AsStreamForWrite());
my InMemoryRandomAccessStream variable is always zero in size.
I also tried StreamReader.ReadToEnd(), but it returns an empty string.
I found the answer here:
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/2359f360-832e-4ce5-8315-7f351f2edf6e/stream-inkmanager-strokes-to-string
private async void ReadInk(string base64)
{
    if (!string.IsNullOrEmpty(base64))
    {
        var bytes = Convert.FromBase64String(base64);
        using (var inMemoryRAS = new InMemoryRandomAccessStream())
        {
            await inMemoryRAS.WriteAsync(bytes.AsBuffer());
            await inMemoryRAS.FlushAsync();
            inMemoryRAS.Seek(0);
            await m_InkManager.LoadAsync(inMemoryRAS);
            if (m_InkManager.GetStrokes().Count > 0)
            {
                // You would do whatever you want with the strokes
                // RenderStrokes();
            }
        }
    }
}
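For the other direction (serialising the current strokes into a base64 string that ReadInk can load), here is a rough sketch assuming the same m_InkManager field; the DataReader/base64 plumbing is my own illustration, not a copy of the linked forum code:
// Hypothetical save-side counterpart to ReadInk.
// Requires System.Threading.Tasks (Task) and Windows.Storage.Streams
// (InMemoryRandomAccessStream, DataReader).
private async Task<string> SaveInkAsString()
{
    using (var inMemoryRAS = new InMemoryRandomAccessStream())
    {
        // InkManager.SaveAsync writes the strokes into the output stream
        await m_InkManager.SaveAsync(inMemoryRAS);
        // Read the stream contents back into a byte array
        var reader = new DataReader(inMemoryRAS.GetInputStreamAt(0));
        var bytes = new byte[inMemoryRAS.Size];
        await reader.LoadAsync((uint)inMemoryRAS.Size);
        reader.ReadBytes(bytes);
        // This base64 string round-trips through ReadInk above
        return Convert.ToBase64String(bytes);
    }
}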
