Converted tfjs model throwing model_config null errors

I am trying to use a converted Keras model (https://github.com/idealo/image-quality-assessment/tree/master/models/MobileNet) in tfjs.
All of the Keras conversion examples mention .h5 files, while these are .hdf5 files. Is there a difference between them?
When I try to use the converted model, I get this error:
TypeError: Cannot read property 'model_config' of null
Is there a way to fix this?
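For what it's worth, .h5 and .hdf5 are just two extensions for the same HDF5 container format, so the extension itself should not matter. The 'model_config' of null error usually means the HDF5 file holds only weights (i.e. it was written with model.save_weights()), so the converted model.json has no topology for tfjs to read. If that is the case here, one workaround is to rebuild the architecture in Keras, load the weights, and re-export; a minimal sketch, where build_model() is a hypothetical stand-in for whatever code defines the repo's architecture:
# Sketch: rebuild the model, load the weights-only .hdf5, then export a
# full model (topology + weights) for TensorFlow.js.
import tensorflowjs as tfjs

model = build_model()  # hypothetical: must recreate the exact architecture the weights expect
model.load_weights("weights_mobilenet.hdf5")  # path to the repo's weights file
tfjs.converters.save_keras_model(model, "tfjs_artifacts")  # writes model.json + weight shards
The exported model.json can then be loaded in the browser with tf.loadLayersModel.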

Convert object from one assembly to another (C++/CLI)

I have a multi-module project. We are passing a System::Object^ from one EXE to code in another DLL.
When I try to convert that object to its type in the DLL (the definition is the same on both sides), I get the error below:
[A] cannot be cast to [B]; A originates from one assembly, B from another
I went through some documents but could not crack it.
Both of the lines below give that conversion error:
LocalClassType ^x1 = (LocalClassType ^)(x);
LocalClassType ^x2 = cli::safe_cast<LocalClassType ^>(x);
Can anyone suggest how to do this conversion correctly, or point me to a document that explains it?
I was able to resolve the issue and cast it using static_cast:
LocalClassType ^x2 = static_cast<LocalClassType ^>(x);

How to save fasttext model in binary and text formats?

The documentation is a bit unclear about how to save the fastText model to disk. How do you specify a path in the argument? I tried doing so and it failed with an error.
Example from the documentation:
>>> from gensim.test.utils import get_tmpfile
>>>
>>> fname = get_tmpfile("fasttext.model")
>>>
>>> model.save(fname)
>>> model = FastText.load(fname)
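(For context, get_tmpfile just builds a path inside the system temp directory, which is why the documented example never specifies an explicit location; a quick sketch of its behavior:)
from gensim.test.utils import get_tmpfile

print(get_tmpfile("fasttext.model"))
# e.g. /tmp/fasttext.model -- the exact directory is platform-dependent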
Furthermore, how can I save the model in text format, as can be done with word2vec models?
word2vecmodel.wv.save_word2vec_format("D:\w2vmodel.txt")
EDIT
After trying the suggestion to make a file first, I keep getting the same error as before when I run this code:
savepath = os.path.abspath('D:\fasttextmodel.v3.bin');
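# nb: '\f' in 'D:\fasttextmodel...' is Python's form-feed escape; a raw string (r'D:\fasttextmodel.v3.bin') would avoid mangling the path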
from gensim.test.utils import get_tmpfile
fname = get_tmpfile(savepath)
fasttext_model.save(fname)
TypeError: file must have a 'write' attribute
The documentation's FastText save()/load() example is misleading: it suggests using get_tmpfile. I am able to save the model if I pass the file name as a plain string and do not wrap it in get_tmpfile:
model.save("fasttext.model")
Then you can load the same way, passing the string directly:
model = FastText.load("fasttext.model")
Note that this will save multiple files for models that are large. However, when you load the model, you only need to specify the main fasttext.model file; the additional files, if any, are loaded automatically.
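As for the text-format part of the question, the same word2vec-style export works on a FastText model via its .wv attribute, though it writes plain word vectors only and drops fastText's subword information. A minimal sketch, assuming a trained model (parameter names vary across gensim versions):
# Train a toy model, save/load in gensim's native format, then export text.
from gensim.models import FastText
from gensim.test.utils import common_texts

model = FastText(common_texts, min_count=1)   # tiny corpus, so keep every word
model.save("fasttext.model")                  # native format; may write sidecar .npy files
model = FastText.load("fasttext.model")       # sidecar files are picked up automatically
model.wv.save_word2vec_format("fasttext.txt", binary=False)  # word2vec text format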
Did you try creating a file in your local directory called "fasttext.model" before trying to save it?
Also, I'm assuming you trained the model before this, correct?

Parquet-MR AvroParquetWriter - how to convert data to Parquet (with Specific Mapping)

I'm working on a tool for converting data from a homegrown format to Parquet and JSON (for use in different settings with Spark, Drill and MongoDB), using Avro with Specific Mapping as the stepping stone. I have to support conversion of new data on a regular basis and on client machines, which is why I am writing my own standalone conversion tool with an (Avro|Parquet|JSON) switch instead of using Drill, Spark, or other tools as converters, as I probably would if this were a one-time job.
I used Specific Mapping to profit from static type checking, wrote an IDL, converted that to a schema.avsc, generated classes and set up a sample conversion with specific constructor, but now I'm stuck configuring the writers. All Avro-Parquet conversion examples I could find [0] use AvroParquetWriter with deprecated signatures (mostly: Path file, Schema schema) and Generic Mapping.
AvroParquetWriter has only one non-deprecated constructor, with this signature:
AvroParquetWriter(
    Path file,
    WriteSupport<T> writeSupport,
    CompressionCodecName compressionCodecName,
    int blockSize,
    int pageSize,
    boolean enableDictionary,
    boolean enableValidation,
    WriterVersion writerVersion,
    Configuration conf
)
Most of the parameters are not hard to figure out, but WriteSupport<T> writeSupport throws me off. I can't find any further documentation or an example.
Staring at the source of AvroParquetWriter, I see GenericData model pop up a few times, but only one line mentioning SpecificData: GenericData model = SpecificData.get();
So I have a few questions:
1) Does AvroParquetWriter not support Avro Specific Mapping? Or does it, by means of that SpecificData.get() method? The comment "Utilities for generated Java classes and interfaces." over SpecificData.class seems to suggest that, but how exactly should I proceed?
2) What's going on in the AvroParquetWriter constructor? Is there an example or some documentation to be found somewhere?
3) More specifically: the signature of the WriteSupport method asks for Schema avroSchema and GenericData model. What does GenericData model refer to? Maybe I'm not seeing the forest for all the trees here...
To give an example of what I'm aiming for, my central piece of Avro conversion code currently looks like this:
DatumWriter<MyData> avroDatumWriter = new SpecificDatumWriter<>(MyData.class);
DataFileWriter<MyData> dataFileWriter = new DataFileWriter<>(avroDatumWriter);
dataFileWriter.create(schema, avroOutput);
The Parquet equivalent currently looks like this:
AvroParquetWriter<SpecificRecord> parquetWriter = new AvroParquetWriter<>(parquetOutput, schema);
but this is no more than a beginning and is modeled after the examples I found using the deprecated constructor, so it will have to change anyway.
Thanks,
Thomas
[0] Hadoop: The Definitive Guide, O'Reilly; https://gist.github.com/hammer/76996fb8426a0ada233e; http://www.programcreek.com/java-api-example/index.php?api=parquet.avro.AvroParquetWriter
Try AvroParquetWriter.builder:
MyData obj = ...; // must be an Avro-generated object (implements SpecificRecord)
ParquetWriter<Object> pw = AvroParquetWriter.builder(file)
        .withSchema(obj.getSchema())
        .build();
pw.write(obj);
pw.close();
Thanks.

I can't import com.parse.ParseImageView

I've been trying to import com.parse.ParseImageView so I can display an image queried from Parse, but I get this error in my XML file:
The following classes could not be found:
- com.parse.ParseImageView
Is ParseImageView no longer supported? What is the alternative way to display an image from the Parse database? Thanks.

Mongoengine Django Rest Framework - Serializer Error - ReferenceField is not JSON serializable

Everything works great until the ObjectId value of the ReferenceField no longer points to a valid document. The raw ObjectId is then left as the field's value, and json doesn't know how to serialize it.
How do I deal with invalid ReferenceFields?
E.g.
class Food(Document):
    name = StringField()
    owner = ReferenceField("Person")

class Person(Document):
    first_name = StringField()
    last_name = StringField()

...
p = Person(...)
apple = Food(name="apple", owner=p)
p.delete() # might be the wrong method, but you get the idea
At this point, attempting to fetch a list of foods via the REST API will fail with the "is not JSON serializable" error, since apple.owner no longer points to an owner that exists.
Since you are using DRF with mongoengine, you must be using django-rest-framework-mongoengine.
Apparently, it's a bug in django-rest-framework-mongoengine. Check this open issue on GitHub, which was reported recently regarding the same:
https://github.com/umutbozkurt/django-rest-framework-mongoengine/issues/91
One way is to write your own JSONEncoder for this. This link might help.
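As a sketch of that first option (assuming PyMongo's bson package, which mongoengine depends on), a custom encoder can stringify any ObjectId it encounters, so a dangling reference serializes as its hex id instead of raising:
# Fall back to str() for BSON ObjectIds that the stock encoder rejects.
import json
from bson import ObjectId

class MongoJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, ObjectId):
            return str(obj)  # dangling references become their hex string
        return super().default(obj)

json.dumps({"owner": ObjectId("507f1f77bcf86cd799439011")}, cls=MongoJSONEncoder)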
Another option is to use PyMongo's json_util module, which provides explicit BSON conversion to and from JSON.
As per the json_util docs:
This module provides two helper methods dumps and loads that wrap the native json methods and provide explicit BSON conversion to and from json. This allows for specialized encoding and decoding of BSON documents into Mongo Extended JSON's Strict mode. This lets you encode / decode BSON documents to JSON even when they use special BSON types.
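A short usage sketch of the json_util route:
# json_util round-trips BSON types through MongoDB Extended JSON.
from bson import json_util
from bson.objectid import ObjectId

doc = {"name": "apple", "owner": ObjectId("507f1f77bcf86cd799439011")}
s = json_util.dumps(doc)   # owner becomes {"$oid": "507f1f77bcf86cd799439011"}
doc2 = json_util.loads(s)  # parses back into an ObjectId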
