Does Avro schema evolution require access to both old and new schemas?

If I serialize an object using a schema version 1, and later update the schema to version 2 (say by adding a field) - am I required to use schema version 1 when later deserializing the object? Ideally I would like to just use schema version 2 and have the deserialized object have the default value for the field that was added to the schema after the object was originally serialized.
Maybe some code will explain better...
schema1:
{"type": "record",
"name": "User",
"fields": [
{"name": "firstName", "type": "string"}
]}
schema2:
{"type": "record",
"name": "User",
"fields": [
{"name": "firstName", "type": "string"},
{"name": "lastName", "type": "string", "default": ""}
]}
using the generic non-code-generation approach:
// serialize
ByteArrayOutputStream out = new ByteArrayOutputStream();
Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(schema1);
GenericRecord datum = new GenericData.Record(schema1);
datum.put("firstName", "Jack");
writer.write(datum, encoder);
encoder.flush();
out.close();
byte[] bytes = out.toByteArray();
// deserialize
// I would like to not have any reference to schema1 below here
DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema2);
Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
GenericRecord result = reader.read(null, decoder);
results in an EOFException. Using the jsonEncoder results in an AvroTypeException.
I know it will work if I pass both schema1 and schema2 to the GenericDatumReader constructor, but I'd like to not have to keep a repository of all previous schemas and also somehow keep track of which schema was used to serialize each particular object.
I also tried the code-gen approach, first serializing to a file using the User class generated from schema1:
User user = new User();
user.setFirstName("Jack");
DatumWriter<User> writer = new SpecificDatumWriter<User>(User.class);
FileOutputStream out = new FileOutputStream("user.avro");
Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
writer.write(user, encoder);
encoder.flush();
out.close();
Then updating the schema to version 2, regenerating the User class, and attempting to read the file:
DatumReader<User> reader = new SpecificDatumReader<User>(User.class);
FileInputStream in = new FileInputStream("user.avro");
Decoder decoder = DecoderFactory.get().binaryDecoder(in, null);
User user = reader.read(null, decoder);
but it also results in an EOFException.
Just for comparison's sake, what I'm trying to do seems to work with protobufs...
format:
option java_outer_classname = "UserProto";
message User {
optional string first_name = 1;
}
serialize:
UserProto.User.Builder user = UserProto.User.newBuilder();
user.setFirstName("Jack");
FileOutputStream out = new FileOutputStream("user.data");
user.build().writeTo(out);
add optional last_name to format, regen UserProto, and deserialize:
FileInputStream in = new FileInputStream("user.data");
UserProto.User user = UserProto.User.parseFrom(in);
as expected, user.getLastName() is the empty string.
Can something like this be done with Avro?

Avro and Protocol Buffers have different approaches to handling versioning, and which approach is better depends on your use case.
In Protocol Buffers you have to explicitly tag every field with a number, and those numbers are stored along with the fields' values in the binary representation. Thus, as long as you never change the meaning of a number in a subsequent schema version, you can still decode a record encoded in a different schema version. If the decoder sees a tag number that it doesn't recognise, it can simply skip it.
Avro takes a different approach: there are no tag numbers, instead the binary layout is completely determined by the program doing the encoding — this is the writer's schema. (A record's fields are simply stored one after another in the binary encoding, without any tagging or separator, and the order is determined by the writer's schema.) This makes the encoding more compact, and saves you from having to manually maintain tags in the schema. But it does mean that for reading, you have to know the exact schema with which the data was written, or you won't be able to make sense of it.
While knowing the writer's schema is essential for decoding Avro, the reader's schema is a layer of niceness on top of it. If you're doing code generation in a program that needs to read Avro data, you can do the codegen off the reader's schema, which saves you from having to regenerate the code every time the writer's schema changes (assuming it changes in a way that can be resolved). But it doesn't save you from having to know the writer's schema.
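For the example in the question, this resolution step can be exercised directly by handing the reader both schemas; a minimal sketch, reusing schema1, schema2 and bytes from the question:

// schema1 is the writer's schema, schema2 is the reader's schema;
// Avro fills in lastName from the default declared in schema2
DatumReader<GenericRecord> reader =
    new GenericDatumReader<GenericRecord>(schema1, schema2);
Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
GenericRecord result = reader.read(null, decoder);
// result.get("lastName") now holds the empty-string default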
Pros & Cons
Avro's approach is good in an environment where you have lots of records that are known to have the exact same schema version, because you can just include the schema in the metadata at the beginning of the file, and know that the next million records can all be decoded using that schema. This happens a lot in a MapReduce context, which explains why Avro came out of the Hadoop project.
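For the file-based case in the question, this is what the Avro object container format gives you out of the box; a rough sketch, reusing the schemas and datum from above (the file name is just the one used in the question):

// write: DataFileWriter embeds the writer's schema in the file header
DataFileWriter<GenericRecord> fileWriter =
    new DataFileWriter<GenericRecord>(new GenericDatumWriter<GenericRecord>(schema1));
fileWriter.create(schema1, new File("user.avro"));
fileWriter.append(datum);
fileWriter.close();

// read: the writer's schema is taken from the file itself,
// so only the reader's schema (schema2) has to be supplied here
DataFileReader<GenericRecord> fileReader = new DataFileReader<GenericRecord>(
    new File("user.avro"), new GenericDatumReader<GenericRecord>(schema2));
GenericRecord result = fileReader.next();
fileReader.close();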
Protocol Buffers' approach is probably better for RPC, where individual objects are being sent over the network (as request parameters or return value). If you use Avro here, you may have different clients and different servers all with different schema versions, so you'd have to tag every binary-encoded blob with the Avro schema version it's using, and maintain a registry of schemas. At that point you might as well have used Protocol Buffers' built-in tagging.
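If you do end up tagging Avro blobs yourself, one hedged sketch of that idea is to prefix each message with a schema fingerprint and keep a registry mapping fingerprints to writer schemas (the in-memory map here is purely illustrative):

// hypothetical registry: fingerprint -> writer's schema
Map<Long, Schema> schemaRegistry = new HashMap<Long, Schema>();
long fingerprint = SchemaNormalization.parsingFingerprint64(schema1);
schemaRegistry.put(fingerprint, schema1);

// reading side: recover the writer's schema from the fingerprint that was
// sent along with the bytes, then resolve it against the reader's schema
Schema writerSchema = schemaRegistry.get(fingerprint);
DatumReader<GenericRecord> reader =
    new GenericDatumReader<GenericRecord>(writerSchema, schema2);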

To do what you are trying to do, you need to make the lastName field optional by allowing null values: its type should be ["null", "string"] instead of "string".
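With that change, schema2 would look something like the following (note that for a union the default value has to match the first branch, so the default becomes null rather than ""):

{"type": "record",
"name": "User",
"fields": [
{"name": "firstName", "type": "string"},
{"name": "lastName", "type": ["null", "string"], "default": null}
]}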

I have tried to work around this problem and am putting my approach here:
I also tried using two schemas, where one schema is just the other with an additional column, using the reflection API of Avro. I have the following schemas:
Employee (having name, age, ssn)
ExtendedEmployee (extending Employee and adding a gender column)
Assuming a file that earlier held Employee objects now also holds ExtendedEmployee objects, I tried to read that file as:
RecordHandler rh = new RecordHandler();
Object record = rh.readObject(employeeSchema, dbLocation);
// check the subclass first, since an ExtendedEmployee is also an Employee
if (record instanceof ExtendedEmployee) {
    ExtendedEmployee e = (ExtendedEmployee) record;
    System.out.print(e.toString());
} else if (record instanceof Employee) {
    Employee e = (Employee) record;
    System.out.print(e.toString());
}
This solves the problem here. However, I would love to know if there is an API wherein we can give the ExtendedEmployee schema to read the objects of Employee as well.
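There is such an API for the reflect case: like the generic reader, ReflectDatumReader accepts a writer's schema and a reader's schema pair. A hedged sketch, assuming dbLocation is a file path and that the extra gender field is nullable or has a default so the two schemas can be resolved:

// derive both schemas from the classes via the reflect API
Schema employeeSchema = ReflectData.get().getSchema(Employee.class);
Schema extendedSchema = ReflectData.get().getSchema(ExtendedEmployee.class);

// read data that was written as Employee, resolving it into ExtendedEmployee
ReflectDatumReader<ExtendedEmployee> reader =
    new ReflectDatumReader<ExtendedEmployee>(employeeSchema, extendedSchema);
Decoder decoder = DecoderFactory.get().binaryDecoder(new FileInputStream(dbLocation), null);
ExtendedEmployee e = reader.read(null, decoder);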

Related

How to parse a json with dynamic property name in OIC?

I need to consume and parse incoming JSON from a third-party system in my code. I used RestTemplate to do it. So the response from the system looks like below.
{ "data": { "05AAAFW9419M11Q": { "gstin": "05AAAFW9419M11Q", "error_cd": "SWEB_9035", "message": "Invalid GSTIN / UID" } } }
Now the problem is that the property name ("05AAAFW9419M11Q" in this case) is dynamic, and in the next response it will be another string. In this case, how can I parse this JSON, given that it is not fixed, in Oracle Integration Cloud? The response wrapper is not capturing the data apart from the one used for configuring the adapter, which is fair enough as the field name itself is changing.
Is there any workaround for this?
You will have to go to PL/SQL and dynamic SQL, and if it is always the value of the gstin entry, you can get the path of the key with
select '$.data.' || json_query(js_column, '$.data.*.gstin')
into v_key_path
from table_with_json_column
where ... conditions ... ;
(assuming there is only one "data" entry per JSON payload) to later build a dynamic query based on json_table.
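On the Java/RestTemplate side mentioned in the question, the dynamic key can also be handled outside OIC by reading data as a generic tree and iterating its fields instead of binding to a fixed POJO. A small illustrative sketch using Jackson, assuming json holds the response body string:

// parse the payload into a tree and walk the dynamic keys under "data"
ObjectMapper mapper = new ObjectMapper();
JsonNode data = mapper.readTree(json).path("data");
Iterator<Map.Entry<String, JsonNode>> fields = data.fields();
while (fields.hasNext()) {
    Map.Entry<String, JsonNode> entry = fields.next();
    String dynamicKey = entry.getKey();                      // e.g. "05AAAFW9419M11Q"
    String errorCode = entry.getValue().path("error_cd").asText();
    String message = entry.getValue().path("message").asText();
    System.out.println(dynamicKey + ": " + errorCode + " - " + message);
}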

How to query in GraphQL with no fixed input object contents?

I want to query to get a string, for which I have to provide an input object; however, the input data can have multiple optional keys and can grow quite large as well. So I just wanted something like Object as the data type when defining the schema.
// example of supposed schema
input SampleInput {
id: ID
data: Object // no such type exists; wanted something like this
}
type Query {
myquery(input: SampleInput!): String
}
Here the data input can be quite large, so I do not want to define a type for it. Is there a way around this?

kafka streams DSL: add an option parameter to disable repartition when using `map` `selectKey` `groupBy`

According to the documentation, a stream will be marked for repartitioning when map, selectKey, or groupBy is applied, even if the new key is already partitioned appropriately. Is it possible to add an option parameter to disable repartitioning?
Here is my use case:
there is a topic that has been partitioned by user_id.
# topic 'user', format '%key,%value'
partition-1:
user1,{'user_id':'user1', 'device_id':'device1'}
user1,{'user_id':'user1', 'device_id':'device1'}
user1,{'user_id':'user1', 'device_id':'device2'}
partition-2:
user2,{'user_id':'user2', 'device_id':'device3'}
user2,{'user_id':'user2', 'device_id':'device4'}
I want to count user_id-device_id pairs using the DSL as follows:
stream
    .groupBy((key, value) -> {
        JSONObject event = new JSONObject(value);
        String userId = event.getString("user_id");
        String deviceId = event.getString("device_id");
        return String.format("%s&%s", userId, deviceId);
    })
    .count();
Actually, the new key is already partitioned indirectly, so there is no need to repartition again.
If you use .groupBy(), it always causes data re-partitioning. If possible use groupByKey instead, which will re-partition data only if required.
In your case, you are changing the keys anyway, so that will create a re-partition topic.
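For comparison, a minimal sketch of the groupByKey variant, under the assumption that the incoming records were already keyed by the user_id&device_id composite (only then is no repartition topic created):

// grouping by the existing key does not mark the stream for repartitioning
KTable<String, Long> counts = stream
    .groupByKey()
    .count();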

Realm Xamarin LINQ Select

Is there a way to restrict the "columns" returned from a Realm Xamarin LINQ query?
For example, if I have a Customer RealmObject and I want a list of all customer names, do I have to query All<Customer> and then enumerate the results to build the names list? That seems cumbersome and inefficient. I am not seeing anything in the docs. Am I missing something obvious here? Thanks!
You have to remember that Realm is an object-based store. In an RDBMS like SQLite, restricting the returned results to a subset of "columns" of a "record" makes sense, but in an object store you would be removing attributes from the original class and thus creating a new dynamic class to instantiate these new objects from.
Thus, if you want just a List of strings representing the customer names, you can do this:
List<string> names = theRealm.All<Customer>().ToList().Select(customer => customer.Name).ToList();
Note that you take the Realm.All<> results to a List first and then, using a LINQ Select, "filter" just the property that you want. Using .Select directly on a RealmResults is not currently supported (v0.80.0).
If you need to return a complex type that is a subset of attributes from the original RealmObject, assuming you have a matching POCO, you can use:
var custNames = theRealm.All<Customer>().ToList().Select((Customer c) => new Name() { firstName = c.firstName, lastName = c.lastName } );
Remember, once you convert a RealmResults to a static list of POCOs, you lose the live-updating nature of the RealmObjects.
Personally, I avoid doing this whenever possible, as Realm is so fast that using a RealmResults and thus the RealmObjects directly is more efficient in processing time and memory overhead than converting them to POCOs every time you need a new list...

How to upsert into elasticsearch in spark?

With HTTP POST, the following script can insert a new field createtime or update lastupdatetime:
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
  "doc": {
    "lastupdatetime": "2015-09-16T18:00:00"
  },
  "upsert": {
    "createtime": "2015-09-16T18:00:00",
    "lastupdatetime": "2015-09-16T18:00"
  }
}'
But in the Spark script, after setting "es.write.operation": "upsert", I don't know how to insert createtime at all. There is only es.update.script.* in the official documentation... So, can anyone give me an example?
UPDATE: In my case, I want to save information about Android devices from logs into one Elasticsearch type, and set each device's first appearance time as createtime. If a device appears again, I only update its lastupdatetime but leave the createtime as it was.
So the document id is the Android ID: if the id exists, update lastupdatetime; otherwise insert createtime and lastupdatetime. So the settings here are (in Python):
conf = {
"es.resource.write": "stats-device/activation",
"es.nodes": "NODE1:9200",
"es.write.operation": "upsert",
"es.mapping.id": "id"
# ???
}
rdd.saveAsNewAPIHadoopFile(
path='-',
outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
keyClass="org.apache.hadoop.io.NullWritable",
valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
conf=conf
)
I just don't know how to insert a new field if the id does not exist.
Without seeing your Spark script, it is hard to give a detailed answer. But in general you will want to use elasticsearch-hadoop (so you'll need to add that dependency, e.g. to your build.sbt file), and then in your script you can do:
import org.elasticsearch.spark._
val documents = sc.parallelize(Seq(
  Map(
    "id" -> 1,
    "createtime" -> "2015-09-16T18:00:00",
    "lastupdatetime" -> "2015-09-16T18:00"),
  Map(<next document>), ...))
documents.saveToEs("test/type1", Map("es.mapping.id" -> "id"))
as per the official docs. The second argument to saveToEs specifies which key in your RDD of Maps to use as the Elasticsearch document id.
Of course, if you're doing this with Spark it implies you've got more rows than you'll want to type out by hand, so for your case you'd need to transform your data into an RDD of Maps from key -> value within your script. But without knowing the data sources I can't go into much more detail.
Finally, I got a solution, though it is not perfect:
1. add createtime to all source docs;
2. save to ES with the create method and ignore "already created" errors;
3. remove the createtime field;
4. save to ES again with the update method.
For now (2015-09-27), step 2 can be implemented by this patch.
