On one of the applications that I am writing, I was asked to provide a "pencil and eraser" feature that lets the user doodle freely on a document (for proofreading, note-taking, etc.).
What would be the best way to store such data?
I was thinking of using an image with transparency for each doodle (so that I can also support multiple colors of "doodles"), but it seems like that would very quickly make any saved project with doodles grow large in file size.
I am looking for a better (existing) alternative (e.g. is there a DoodleXML spec out there?) or just any suggestions.
I think the "DoodleXML" spec you're looking for might just be SVG. Simply save the doodles as a series of lines. You don't need a full SVG engine as long as you're only supporting the subset that you generate in the first place.
Could anybody give me pointers on how to process the Switchboard dataset for training with RETURNN? I did see the BlissDataset class, which seems to be designed for Switchboard, but it's not clear to me what I should include in the paths given in the example:
Example:
./tools/dump-dataset.py "
{'class':'BlissDataset',
'path': '/u/tuske/work/ASR/switchboard/corpus/xml/train.corpus.gz',
'bpe_file': '/u/zeyer/setups/switchboard/subwords/swb-bpe-codes',
'vocab_file': '/u/zeyer/setups/switchboard/subwords/swb-vocab'}"
The Switchboard dataset has several folders with audio files, i.e. swb1_d2/data/*.sph, and transcripts under swb1_LDC97S62/swb_ms98_transcriptions/**/*
I'm not quite sure how to proceed with this to get a dataset that can be used to train RETURNN.
At our group (RWTH Aachen University), we use the config as it was published on GitHub. As you can see, it uses ExternSprintDataset. That dataset uses Sprint (publicly called RWTH ASR (RASR), see here) as an external tool (run in a subprocess) to handle the data (feature extraction, etc.). Sprint gets a Bliss XML file which describes all the segments, with paths to the audio, audio offsets, and transcriptions, and it also gets further configs for the feature extraction and maybe other things. There is an open-source version of RASR which should work, but it might be a bit involved to get it running.
The BlissDataset was planned as a simpler replacement for that. However, the implementation is incomplete. Also, you would still need to generate the Bliss XML yourself in some way (we have used some of our own internal scripts to prepare that, based on the official LDC data).
So, unfortunately, there is no simple way yet. Actually, I think the easiest way would be to come up with yet another custom format, which might be similar to the LibriSpeechDataset implementation, or maybe even the same, so that you could reuse LibriSpeechDataset, or at least parts of it. That dataset implementation takes the data in a zip format which contains the transcripts in txt files and the audio in ogg or wav files. It uses librosa to do MFCC feature extraction (or other feature types). I planned to implement that for Switchboard and then reproduce the results, but I have not had time yet and am not sure when I will get to it. If you want to try it on your own, I will be happy to help however I can. The starting point would be to look at LibriSpeechDataset and understand what its format looks like.
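Just to sketch the general direction (this is not the exact layout LibriSpeechDataset expects; check its implementation for that), packaging pre-segmented audio plus transcripts into a zip could look roughly like this in Python, where all paths and the archive layout are assumptions:

import glob
import os
import zipfile

# Hypothetical layout: audio under audio/ plus one transcript txt per subset.
# Check LibriSpeechDataset in RETURNN for the layout it actually expects.
audio_files = sorted(glob.glob("swb_segments/*.wav"))  # pre-segmented audio (assumption)
transcripts = "swb_segments/transcripts.txt"           # "<segment-name> <orthography>" per line (assumption)

with zipfile.ZipFile("switchboard-train.zip", "w", zipfile.ZIP_STORED) as zf:
    zf.write(transcripts, arcname="train/transcripts.txt")
    for path in audio_files:
        zf.write(path, arcname="train/audio/" + os.path.basename(path))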
I'm starting to work with the Google Tango tablet, hopefully to create (basic) 2D/3D maps from scanned areas. But first I would like to read as much about the Tango (sensors/API) as I can, in order to make a plan and be as time-efficient as possible.
I immediately noticed the ability to learn areas, which is a very interesting concept; nevertheless, I couldn't find anything about these so-called Area Description Files (ADF).
I know that ADF files can be geographically referenced, and that they contain metadata and a unique UUID. I also know their basic functionality, but that's about it.
In some parts of the modules, ADF files are referred to as 'maps'; in other parts they are just called 'descriptions'.
So what do these files look like? Are they already basic 2D (grid) maps, or are they just descriptions?
I know there are people who have already extracted ADF files, so any help would be greatly appreciated!
From the Tango ADF documentation:
Important: Saved area descriptions do not directly record images or video of the location, but rather contain descriptions of images of the environment in a very compressed form. While those descriptions can't be directly viewed as images, it is in principle possible to write an algorithm that can reconstruct a viewable image. Therefore, you must ask the user for permission before saving any of their learned areas to the cloud or sharing areas between users to protect the user's privacy, just as you would treat images and video.
Other than that, there doesn't seem to be much info about the file internals. I use a lot of them, but I've never been compelled to look inside; curious, yes, but not compelled.
Without any direct info from the Project Tango folks, anything we provide would be mere speculation. I'm with Mark: there's not much compelling reason to get the details. My speculation: it probably contains a set of image descriptors, like SIFT, plus whatever other known device settings are available, like GPS location, orientation (gravity), time(?), etc.
I got hold of an ADF file; it's basically binary-encoded and seems difficult to decode.
I will be happy to share the file if anyone is still interested.
I have built a SOLR index which contains the image thumbnail URLs, and I want to render an image along with each search result. The problem is that those images can run into the millions, and I think storing the images in the index as binary data would make the index humongous.
I am seeking guidance on how to efficiently store those images after retrieving them from the URLs: should I use the plain file system and have them served by Tomcat, or should I use a JCR repository like Apache Jackrabbit?
Any guidance would be greatly appreciated.
Thank You.
I would evaluate the actual requirements before finally deciding how to persist the images.
Do you require versioning?
Are you planning to store only the images, or additional metadata as well?
Do you have any requirements regarding horizontal scaling?
Do you require any image processing or scaling?
Do you need access to the image metadata?
Do you require additional tooling for managing the images?
Are you willing to invest time in learning an additional technology?
Storing the images on the file system and making them available through an image spooler implementation is the simplest way to persist them.
But if you identify some of the above-mentioned requirements (which are typical for a content repository or a DAM system), then you would end up reinventing the wheel with the file-system approach.
The other option is to use some kind of content repository. A JCR repository such as Jackrabbit, or its commercial implementation CRX, is one option; Alfresco (which supports CMIS) would be another valid one.
Features like versioning, post-processing (scaling, ...), and metadata extraction and management are supported by both of the mentioned repository solutions. But this requires you to learn a new technology, which can be time-consuming, and both repository technologies can get complex.
If horizontal scaling is a requirement, I would consider a commercially supported repository implementation (CRX or Alfresco Enterprise), because the community releases lack this functionality.
Personally, I would base any decision on the requirements mentioned above.
I have worked extensively with Jackrabbit, CRX, and Alfresco CE and EE, and I would personally go for Alfresco, as I have found it to scale better with larger amounts of data.
I'm not aware of an image spooler solution that fits your needs exactly, but it shouldn't be too difficult to implement one, apart from the fact that recurring scaling operations may be very resource-intensive.
I would go for the following approach if the file system is enough for you:
Separate images and thumbnails into two locations: the images root folder will remain, while the thumbnails folder is temporary.
Create a temporary thumbnail folder for each indexing run. All thumbnails for that run are stored under that location; scaling can be achieved with e.g. ImageMagick (see the sketch after this list).
The temporary thumbnail folder can then easily be dropped as soon as the next run has completed.
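A rough sketch of the per-run thumbnail folder, assuming ImageMagick's convert binary is on the PATH (paths and sizes are made up):

import datetime
import os
import subprocess

run_id = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
thumb_root = os.path.join("/data/thumbnails", run_id)   # temporary folder for this indexing run
os.makedirs(thumb_root)

def make_thumbnail(image_path):
    # Scale with ImageMagick; -thumbnail also strips most metadata to keep files small.
    target = os.path.join(thumb_root, os.path.basename(image_path))
    subprocess.check_call(["convert", image_path, "-thumbnail", "200x200", target])
    return target

Once the next run has completed, the previous run's folder can simply be deleted (e.g. with shutil.rmtree).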
If you are planning to store millions of images, then avoid putting all the files in the same directory; browsing flat hierarchies with too many entries will be a nightmare.
It is better to create a tree structure, e.g. by splitting the current datetime into a path (year/month/day/hour/minute, e.g. 2013/06/01/08/45).
This makes sure that the number of files inside the last folder doesn't get too big (Alfresco uses the same pattern for storing binary objects on the FS, and it has proven to work nicely).
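A small sketch of that datetime-based tree in Python (the root path is made up):

import datetime
import os
import shutil

def store_image(src_path, root="/data/images"):
    # Split the current datetime into year/month/day/hour/minute folders,
    # so no single directory ends up with too many files.
    subdir = datetime.datetime.now().strftime("%Y/%m/%d/%H/%M")
    target_dir = os.path.join(root, subdir)
    os.makedirs(target_dir, exist_ok=True)
    target = os.path.join(target_dir, os.path.basename(src_path))
    shutil.copy2(src_path, target)
    return target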
I spend most of my time plotting data, but unfortunately I haven't found a decent solution for my plotting needs. At the moment, the most powerful and pleasant plotting library I have found is matplotlib. The results are stunning, but I mostly spend my time fighting the library when trying to do simple things, like getting an arrow to look the way I want. Similar programs like R and gnuplot produce visually less appealing results, and they are not GUI-based.
On the other hand, programs like xmgrace (or better) allow direct manipulation of the plotted objects and direct feedback, but they fail on two important points:
if my dataset (normally stored in CSV files) changes for some reason, I have to re-import it and perform the manipulations again, by hand
once I obtain a nice plot setup, the only way I have to recreate the plot is to use the graphical, interactive program. I would like to be able to run a command-line utility on my CSV files and get a .pdf as a result, with no human intervention.
I have yet to find something that gives me both worlds at an affordable price. Ideally, I would like an interactive GUI program (a la Origin) that generates matplotlib-based Python scripts.
Does anyone have any hints on software that could address my needs on OS X (preferably) or Linux?
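For context, the kind of non-interactive pipeline I have in mind is roughly the following matplotlib sketch (the CSV layout and file names are just placeholders):

import csv
import sys

import matplotlib
matplotlib.use("Agg")           # no GUI needed, render straight to file
import matplotlib.pyplot as plt

def csv_to_pdf(csv_path, pdf_path):
    # Assumes a two-column CSV with a header row: x,y
    xs, ys = [], []
    with open(csv_path) as f:
        reader = csv.reader(f)
        next(reader)            # skip header
        for row in reader:
            xs.append(float(row[0]))
            ys.append(float(row[1]))
    fig, ax = plt.subplots()
    ax.plot(xs, ys)
    fig.savefig(pdf_path)

if __name__ == "__main__":
    csv_to_pdf(sys.argv[1], sys.argv[2])

So whenever the data changes I could just run something like python plot.py data.csv data.pdf from a Makefile.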
You may want to check out Igor Pro. It's quite old and quirky, but it provides the most advanced plotting system I've found yet on the Mac. You can modify anything graphically, at a command line, or in script files. The most powerful feature (IMO) is the ability to automatically generate a script that recreates a figure, or to use a figure to create a script that generates figures like it (in style etc.). I use Igor for all publication figures I produce.
Data is stored in "waves" (translation: vectors) which encapsulate data and information about the delta between data points (e.g. time step). Figures reference waves as their data source. When you update a wave (e.g. by re-importing a CSV file and specifying that the data overwrite specific waves), all figures that reference that wave are automatically updated.
You can create "layouts" which are page-layouts containing multiple graphs. These layouts are also automatically updated whenever any of the figures in the layout are updated (see above). You can add drawing/text/annotations to either graphs or layout.s
Be warned: Igor Pro's scripting language is something like the bastard child of VB and Matlab. It makes my eyes bleed. It makes me pray to whatever God that the pain just end. But the entire system is so powerful that it's worth it.
I have always used Matlab or R for this sort of thing. While you may not like how the generic plots look, I find that once I familiarize myself with the libraries I can make them as fancy as I want them to be.
R being free, I would try to stick it out with that. It is extremely powerful and perfectly suited to what you need (generating charts on the fly directly from data files). I bet that as you get more comfortable with it, you'll find yourself using R for a wide range of tasks beyond plotting data.
MathGL is a cross-platform GPL library which meets all your criteria. It can produce nice graphics, it can read CSV files, it has a window for displaying graphics (you don't need to know any widget libraries), and it can plot in the console (no window or X needed at all). You can use C/C++/Fortran/Python/... for your own code, or MGL scripts for simplicity (see the UDAV front-end in the latter case).
Finally, it can produce bitmap (PNG/JPEG/GIF/...) or vector (EPS/SVG) output. The latter can be converted to PDF easily. Or you can create a PDF with U3D directly; you'll need the HPDF and U3D libraries in this case.
I am in the process of selecting an image format that will be used as the storage format for all in-house textures.
The format will be used as a source format from which compressed textures for different platforms and configurations will be generated, so it needs to cover all possible texture types (2D, cube, volumetric, varying numbers of mip-maps, floating-point pixel formats, etc.) and be completely lossless.
In addition, the format has to be able to hold a bit of metadata.
Currently a custom format is used for this, but a commonly available format would be easier for the artists to work with, since it would be viewable in most image editors.
I have thought of using DDS, but as far as I can see this format does not support metadata.
All suggestions appreciated!
With your requirements, you should stay with your self-made format. I don't know of any image format besides DDS that supports volumetric and cube textures, and unfortunately DDS does not support metadata.
The closest thing you can find is TIFF. It does not directly support cube maps or volumetric textures, but it supports any number of sub-images, so you could use the sub-images as slices or cube faces.
TIFF also has very good support for custom metadata. The libtiff image reading/writing library works pretty well; it looks a bit archaic if you come from an OO background, but it gets the job done.
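For example, using Pillow in Python instead of libtiff (just a sketch; the face file names and the description string are made up), the six faces of a cube map could be stored as sub-images of one TIFF with a custom ImageDescription tag:

from PIL import Image

# Hypothetical face files for a cube map, one sub-image per face.
face_names = ["posx.png", "negx.png", "posy.png", "negy.png", "posz.png", "negz.png"]
faces = [Image.open(name) for name in face_names]

# Tag 270 is the standard TIFF ImageDescription tag; put your metadata string there.
faces[0].save(
    "cubemap.tiff",
    save_all=True,
    append_images=faces[1:],
    compression="tiff_deflate",          # lossless
    tiffinfo={270: "type=cubemap;order=posx,negx,posy,negy,posz,negz"},
)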
When peeking inside various games' resources, I found that most of them store textures (I don't know whether they're compressed or not) as TGA.
TIFF would probably be your best bet for a format which supports arbitrary metadata and multiple frames, but I think you are better off keeping the assets (in this case, images) separate from how they are converted and utilized in your engine.
Keep images in 32-bit PNG format, and put type and meta information in XML. That keeps your data human-viewable, readable, and editable. Obscure custom formats are for engines, not people.
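A sketch of such a sidecar written with Python's ElementTree (the field names are only an example of what the metadata might contain):

import xml.etree.ElementTree as ET

def write_sidecar(png_path, texture_type, mip_levels, color_space):
    # One small XML file next to each PNG, e.g. diffuse.png -> diffuse.png.xml
    root = ET.Element("texture", {"source": png_path})
    ET.SubElement(root, "type").text = texture_type        # e.g. "2d", "cube", "volume"
    ET.SubElement(root, "mipLevels").text = str(mip_levels)
    ET.SubElement(root, "colorSpace").text = color_space   # e.g. "srgb", "linear"
    ET.ElementTree(root).write(png_path + ".xml", encoding="utf-8", xml_declaration=True)

write_sidecar("diffuse.png", "2d", 10, "srgb")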
Stick with whatever your artists work with.
If you are a Windows/Mac shop and use Photoshop, stick with .psd. If you are a Unix shop and use GIMP, stick with .xcf. These formats will store layers and all the stuff your artists need and are used to.
Since your artists will be creating loads of assets, make their life as easy as possible, even if it means writing some extra code.
Put the metadata (whatever it may be) somewhere "alongside" the images if the native format (psd/xcf) doesn't support it.
For stuff like cube maps and mipmaps (if not generated by the converter), stick to naming guidelines, or guidelines on how to put them into one file.
Depending on what tool you use to create the volumetric stuff, just stick with that tool's native format.
While writing custom formats for the target is usually a good idea, writing custom formats for artists results in mayhem...
My experience with DDS is that it is a poorly documented and difficult format to work with, and it offers few advantages. It is generally simpler to just store a master file for each image class that references the source images it is made of (i.e. 6 faces for a cube map, an arbitrary number of slices for a volume texture) as well as any other useful metadata. It is always a good idea to keep the metadata in a separate file (or in a database), as you do not want to be loading large numbers of images when carrying out searches, populating browsers, etc. It also makes sense to separate your source image format (tiff, tga, jpeg, dds, ...) from your "meta-format" (cube, volume, ...), since you may well find that you need to use lossy compression to support HDR formats or very large source volume data.
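A minimal example of what such a master file could look like, written from Python (the schema here is purely illustrative):

import json

# Hypothetical "meta-format": a master file describing a cube map built from six source images.
cube_map = {
    "type": "cube",
    "colorSpace": "linear",
    "faces": {
        "+x": "env_posx.tif", "-x": "env_negx.tif",
        "+y": "env_posy.tif", "-y": "env_negy.tif",
        "+z": "env_posz.tif", "-z": "env_negz.tif",
    },
}

with open("environment.cube.json", "w") as f:
    json.dump(cube_map, f, indent=2)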
Have you tried PNG? http://java.sun.com/javase/6/docs/api/javax/imageio/metadata/doc-files/png_metadata.html
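That link describes the Java ImageIO PNG metadata API; the same idea in Python with Pillow (a sketch, with made-up key names) is to write tEXt chunks into the PNG itself:

from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("diffuse.png")
meta = PngInfo()
meta.add_text("texture-type", "2d")      # arbitrary key/value pairs stored as PNG tEXt chunks
meta.add_text("color-space", "srgb")
img.save("diffuse_tagged.png", pnginfo=meta)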
As an alternative solution, maybe spend some time writing a plugin for a free image editor for your file format? I've never done it before, so I don't know the work involved, but there is a boatload of example code out there for you.