It seems that neither LOINC nor SNOMED-CT has codes for air quality measures (such as the PM10 and PM2.5 particulate measures, or nitrogen oxide and nitrogen dioxide).
Are there extensions which may cover these, or alternative coding dictionaries which may work?
I'm writing a screen recording app for a specific game, with a focus on performance. Support for older versions of Windows is irrelevant, so either of the two APIs is valid compatibility-wise, but I can't find much data on performance. As far as I know, NVIDIA deprecated NVFBC in favor of DDA (https://developer.download.nvidia.com/designworks/capture-sdk/docs/NVFBC_Win10_Deprecation_Tech_Bulletin.pdf), and according to that document, DDA with GPU encoding (whether NVENC or an equivalent) performs quite well. Where does Windows.Graphics.Capture fit into all of this? I know it's newer, but is it more performant?

I can't run my own benchmarks, because I'm not familiar with either API: I could introduce an error, or simply by coding either one suboptimally I might end up comparing the performance of my implementations rather than of the APIs. The game is mostly CPU-bound on most systems, so for most users the GPU would have spare capacity for encoding. Aside from performance, what are the benefits and drawbacks of using either of the aforementioned APIs?
I'm trying to come up with a platform-independent way to render Unicode text to some platform-specific surface, but assuming that most platforms support something at least vaguely similar, maybe we can talk in terms of the Win32 API. I'm most interested in rendering LARGE buffers and supporting rich text, so while I definitely don't want to ever look inside a Unicode buffer, I'd like to be told what to draw and hinted where to draw it, so that if the buffer is modified, I can properly ask for updates on partial regions of the buffer.
So, the actual questions. GetTextExtentExPointW clearly allows me to get the width of each character, but how do I get the width of a non-breaking extent of text? If I have some long word, it should probably be put on a new line rather than split. How can I tell where to break the text? Will I need to actually look inside the Unicode buffer? That seems very dangerous. Also, how do I figure out how far apart the baselines should be while rendering?
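To make that concrete, here is a minimal sketch of the measuring side as I currently understand it; the device context (with a font already selected into it) and the width limit are placeholders, and the "back up to a break opportunity" part is exactly what I don't know how to do:

```cpp
// Minimal sketch: measure how much of a string fits in a given pixel width,
// and get the baseline-to-baseline distance for the selected font.
#include <windows.h>
#include <vector>

int MeasureFit(HDC hdc, const wchar_t* text, int len, int maxWidthPx)
{
    std::vector<int> dx(len); // cumulative extent of each prefix, in pixels
    int fit = 0;              // number of characters that fit in maxWidthPx
    SIZE size = {};
    GetTextExtentExPointW(hdc, text, len, maxWidthPx, &fit, dx.data(), &size);

    // Baseline-to-baseline distance for this font:
    TEXTMETRICW tm = {};
    GetTextMetricsW(hdc, &tm);
    int lineHeight = tm.tmHeight + tm.tmExternalLeading;
    (void)lineHeight;

    return fit; // the caller still has to back up to a valid break opportunity
}
```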
This is already looking like it's going to be extremely complicated. Are there alternative strategies for doing what I'm trying to do? I really would not like to re-render a HUGE buffer each time I change tiny chunks of it. Something between looking at individual glyphs and just handing over a box for text to be spat into.
Finally, I'm not interested in using anything like Knuth's line-breaking algorithm, and no hyphenation. Ideally I'd like to render justified text, but that's on me once the word positioning is given; a ragged right-hand margin is fine by me.
What you're trying to do is called shaping in Unicode jargon. Don't bother writing your own shaping engine; it's a full-time job that requires continuous updates to keep up with changes in the Unicode and OpenType standards. If you want to spend any time on the rest of your app, you'll need to delegate shaping to a third-party engine (harfbuzz-ng, Uniscribe, ICU, etc.).
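For a sense of what delegating to such an engine looks like, here is a minimal harfbuzz-ng sketch. The font path is a placeholder, error handling is omitted, and hb_blob_create_from_file requires a reasonably recent HarfBuzz:

```cpp
// Minimal sketch: shape a UTF-8 string with harfbuzz-ng and read back
// the resulting glyph IDs and advances.
#include <hb.h>
#include <stdio.h>

int main()
{
    hb_blob_t* blob = hb_blob_create_from_file("font.ttf"); // placeholder path
    hb_face_t* face = hb_face_create(blob, 0);
    hb_font_t* font = hb_font_create(face);

    hb_buffer_t* buf = hb_buffer_create();
    hb_buffer_add_utf8(buf, "Hello, world!", -1, 0, -1);
    hb_buffer_guess_segment_properties(buf); // script, language, direction

    hb_shape(font, buf, NULL, 0); // text in, positioned glyphs out

    unsigned int count = 0;
    hb_glyph_info_t* info = hb_buffer_get_glyph_infos(buf, &count);
    hb_glyph_position_t* pos = hb_buffer_get_glyph_positions(buf, &count);
    for (unsigned int i = 0; i < count; i++)
        printf("glyph %u advances by %d\n", info[i].codepoint, pos[i].x_advance);

    hb_buffer_destroy(buf);
    hb_font_destroy(font);
    hb_face_destroy(face);
    hb_blob_destroy(blob);
}
```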
As others wrote:
– Unicode font rendering is hellishly complex, much more than you expect
– the Win32 API is not cross-platform at all
The three common strategies for rendering unicode text are:
1. write one backend per system (plugging into the system's native text stack), or
2. select one set of cross-platform libs (for example FriBidi + harfbuzz-ng + freetype + fontconfig, or a framework like Qt) and recompile them for each target system, or
3. take compliance shortcuts
The only strategy I strongly advise against is the last one. You cannot control unicode.org normalization changes (such as the addition of the capital ß to German), you do not understand worldwide script uses (African languages and Vietnamese are both written in Latin variants, yet they exercise unexpected Unicode properties), and you will underestimate font creators' ingenuity (oh, Indic users requested this OpenType property, but it will be really handy for this English use case…).
The first two strategies have their own drawbacks. It's simpler to maintain a single text backend, but deploying a complete text stack on a foreign system is far from hassle-free. Almost every project that tries to go cross-platform has to get rid of MSVC first, since it targets only Windows, its language dialect won't work on other platforms, and cross-platform libs will typically only compile easily with GCC or LLVM.
I think harfbuzz-ng has reached parity with Uniscribe even on Windows, so that's the lib I'd target if I wanted cross-platform today (Chrome, Firefox, and LibreOffice use it on at least some platforms). However, LibreOffice at least uses the multi-backend strategy; I have no idea whether that reflects the current state of the libraries or some past historical assessment. There are not many cross-platform apps with heavy text usage to look at, and most of them carry the burden of legacy choices.
Unicode rendering is surprisingly complicated. Line breaking is just the beginning; there are many other subtleties that I don't think you've appreciated (vertical text, right-to-left text, glyph combining, and many more). Microsoft has several teams dedicated to doing nothing but implementing text rendering, for example.
Sounds like you're interested in DirectWrite. Note that this is NOT platform independent, obviously. It's also possible to do a less accurate job if you don't care about being language independent; many of the more unusual features only occur in rarer languages. (Chinese being a notable exception.)
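If DirectWrite does fit, a minimal sketch of handing it a paragraph to wrap might look like this; the font name, sizes, and the 300-pixel wrap width are arbitrary placeholder choices, and error handling is omitted:

```cpp
// Minimal sketch: let DirectWrite lay out and wrap a paragraph, then read
// back the overall metrics (line count, height, etc.).
#include <windows.h>
#include <dwrite.h>
#include <wchar.h>
#pragma comment(lib, "dwrite.lib")

int main()
{
    IDWriteFactory* factory = nullptr;
    DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                        reinterpret_cast<IUnknown**>(&factory));

    IDWriteTextFormat* format = nullptr;
    factory->CreateTextFormat(L"Segoe UI", nullptr,
                              DWRITE_FONT_WEIGHT_NORMAL, DWRITE_FONT_STYLE_NORMAL,
                              DWRITE_FONT_STRETCH_NORMAL, 16.0f, L"en-us", &format);

    const wchar_t* text = L"Some long paragraph that DirectWrite will wrap for you.";
    IDWriteTextLayout* layout = nullptr;
    // 300 x 1000: the box DirectWrite wraps the text into.
    factory->CreateTextLayout(text, (UINT32)wcslen(text), format,
                              300.0f, 1000.0f, &layout);

    // The layout has done line breaking for you; query the result.
    DWRITE_TEXT_METRICS metrics = {};
    layout->GetMetrics(&metrics); // metrics.lineCount, metrics.height, ...

    layout->Release();
    format->Release();
    factory->Release();
}
```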
If you want perfectly identical multi-platform output, there will be problems. If you draw one sentence with GDI, one with GDI+, one with Direct2D, one on Linux, and one on a Mac, all with the same font size into the same buffer, you'll see differences: some round positions to integers, others use floats, for example.
There is not one problem here, but at least two. Drawing text and computing text positions (line breaks, etc.) are very different tasks. Some libraries do both; some do only the computing part or only the rendering part. A very simplified explanation: drawing only renders one single character at the position you ask for, with transformations (zoom, rotation) and anti-aliasing. Computing does everything else: choosing each character's position within a word, handling sentences, line breaks, paragraphs, and so on.
If you want to be platform-independent, you could use FreeType to read font files and get all the information about each character. That library produces exactly the same results on every platform, and predictability in fonts is a good thing. The main problem with fonts is the amount of bad, missing, or even wrong information in character descriptions. Nobody does text perfectly, because it's very hard (tipping my hat to Word, Acrobat, and every team that deals directly with fonts).
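As a rough illustration, here is a minimal FreeType sketch of pulling per-character metrics from a font file; the font path and pixel size are placeholders, and error handling is omitted:

```cpp
// Minimal sketch: read per-character metrics with FreeType.
#include <ft2build.h>
#include FT_FREETYPE_H
#include <stdio.h>

int main()
{
    FT_Library library;
    FT_Init_FreeType(&library);

    FT_Face face;
    FT_New_Face(library, "font.ttf", 0, &face); // placeholder path
    FT_Set_Pixel_Sizes(face, 0, 16);            // 16 px tall

    // Load the glyph for 'A' and read its metrics.
    FT_Load_Char(face, 'A', FT_LOAD_DEFAULT);
    FT_GlyphSlot slot = face->glyph;
    // Metrics are in 26.6 fixed point: shift right by 6 for whole pixels.
    printf("advance: %ld px, left bearing: %ld px\n",
           slot->advance.x >> 6, slot->metrics.horiBearingX >> 6);

    FT_Done_Face(face);
    FT_Done_FreeType(library);
}
```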
Once your text computation is good (and there is a lot of work to do everything you see in a good word processor: spacing between characters, spacing between words, line breaks, alignment, rotation, weight, anti-aliasing...), then you can do the rendering. That should be the easier part. With the same computation you can target a GDI, Direct2D, PDF, or printing rendering path.
I was going through the wiki article on SQALE (Software Quality Assessment based on Lifecycle Expectations). The software quality assessment part of it is clear, but I am unable to understand the "based on Lifecycle Expectations" part of the model. Can someone please explain the Lifecycle Expectations part in an easy-to-understand way?
Ever since I first encountered SQALE, I've suspected that the creators were working backward from the acronym, and "lifecycle expectations" was the best that they could come up with that wasn't already taken. (There are other similarly acronymed guidelines or methodologies in the software quality space, such as Squale and SQuaRE).
All but the most rudimentary models of software quality incorporate the notion of quality over the full lifecycle of a software product (from the "gleam in someone's eye" phase all the way through eventual decommissioning), not just its state at the time of initial shipping.
Such models also acknowledge that the desirable investment in any given aspect of software quality (e.g., maintainability) is not equal for all software products. For example, one would tend to make very different maintainability investment decisions for a boxed-style software product one hopes to sell (and upgrade) for years vs. a short-term marketing promo site that will be decommissioned one month after it goes live.
So... all that the "lifecycle expectations" bit probably implies is something along the lines of covering software quality aspects that affect multiple parts of the lifecycle of a product, as well as allowing adjustment/calibration to different expectations for the various quality characteristics.
If you're interested in how the SQALE authors might have meant this (assuming they weren't only trying to shoehorn the name into the acronym), there are a few hints in the SQALE manual that can be downloaded from http://www.sqale.org/download. They seem to feel that they've done something novel and useful in projecting the ISO/IEC 9126 quality characteristics over a product lifecycle in a chronological order, and perhaps this is what the name is meant to reflect.
BTW, if you're interested in software quality and want to understand this sort of quality modeling in more depth, I'd recommend taking a look at SQuaRE (incl. the parts of ISO/IEC 9126 for which there are no SQuaRE replacements yet). It's not necessarily the most exciting reading, but I find that it provides excellent background for evaluating the utility of various quality methodologies and tools.
I am a total beginner with Xcode and Objective-C, but I have some experience with OOP in C++. I bought this book, read about how to make a simple app, and skimmed the rest. What I want to do is make an iPhone app people can use to look up math equations such as the quadratic equation, Pythagorean identity, etc. I plan to include a lot of material and do a lot of things better than other apps I have seen. However, before I pay Apple $99 to become a full-fledged iOS developer, I want to know that it isn't too hard to produce the Greek letters and math notation that we see in math books. So, for example, what code is needed to make an iPhone app that displays such an equation? Of course, I want to use the features I understand are included in Xcode for doing this sort of thing, rather than make a graphic with another program that my app would load when needed. Besides that specific example, where is the Apple documentation for making the other math symbols and notation that my iPhone app will display? If this is the wrong place to ask, it would be great if you could tell me of a better place to post my question.
It's going to require a lot of writing to get good layouts using the system frameworks. All the building blocks are there, but your program will need significant rendering customization to get the layouts you expect. In detail: the characters you need are available, but you will need to write a bunch of supporting code in order to resize, position, and lay out those characters correctly.
You may want to look for a suitably licensed library you can use which specializes in this purpose. Perhaps a LaTeX renderer would offer some good leads.
Use Core Animation layers to construct the elements of a parsed equation, and use Quartz to draw the lines and symbols that make up the visual elements of the equation. Also look at Core Plot. Then, once the equation is parsed into a hierarchical data structure, you can eventually output it to LaTeX. Also check out Graham Cox's GCMathParser.
Similar question: Drawing formulas with Quartz 2d
For my job I've been using a Java version of ARToolkit (NyARToolkit). So far it has proven good enough for our needs, but my boss is starting to want the framework ported to other platforms such as the web (Flash, etc.) and mobile. While I suppose I could use other ports, I'm increasingly annoyed by not knowing how the kit works and, beyond that, by some of its limitations. Later I'll also need to extend the kit's abilities to add things like interaction (virtual buttons on cards, etc.), which as far as I've seen isn't supported in NyARToolkit.
So basically, I need to replace ARToolkit with a custom marker detector (and in the case of NyARToolkit, try to get rid of JMF and use a better solution via JNI). However, I don't know how these detectors work. I know about 3D graphics and I've built a nice framework around it, but I need to know how to build the underlying tech :-).
Does anyone know any sources on how to implement a marker-based augmented reality application from scratch? When searching on Google I only find "applications" of AR, not the underlying algorithms :-/.
'From scratch' is a relative term. Truly doing it from scratch, without using any pre-existing vision code, would be very painful and you wouldn't do a better job of it than the entire computer vision community.
However, if you want to do AR with existing vision code, this is more reasonable. The essential sub-tasks are:
Find the markers in your image or video.
Make sure they are the ones you want.
Figure out how they are oriented relative to the camera.
The first task is keypoint localization. Techniques for this include SIFT keypoint detection, the Harris corner detector, and others. Some of these have open-source implementations; I think OpenCV has the Harris corner detector in the function GoodFeaturesToTrack.
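As a rough illustration, here is a minimal sketch of that first task using OpenCV's C++ API; the image path and tuning values are placeholders:

```cpp
// Minimal sketch: Harris corner detection via OpenCV's goodFeaturesToTrack.
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE); // placeholder

    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            100,   // max corners to return
                            0.01,  // quality level
                            10,    // min distance between corners (px)
                            cv::noArray(),
                            3,     // block size
                            true,  // use the Harris detector
                            0.04); // Harris free parameter k
    // 'corners' now holds candidate marker keypoints.
}
```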
The second task is making region descriptors. Techniques for this include SIFT descriptors, HOG descriptors, and many many others. There should be an open-source implementation of one of these somewhere.
The third task is also done by keypoint localizers. Ideally you want an affine transformation, since this will tell you how the marker is sitting in 3-space. The Harris affine detector should work for this. For more details go here: http://en.wikipedia.org/wiki/Harris_affine_region_detector
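To tie the three sub-tasks together, here is a hedged OpenCV sketch that detects and describes keypoints, matches them against a known marker image, and estimates the marker's placement. ORB stands in here for SIFT/HOG descriptors, and a homography stands in for the affine estimate (reasonable since the marker is planar); the image paths are placeholders:

```cpp
// Minimal sketch: find a known planar marker in a frame by matching
// keypoint descriptors, then estimate a homography for its placement.
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat marker = cv::imread("marker.png", cv::IMREAD_GRAYSCALE); // placeholders
    cv::Mat frame  = cv::imread("frame.png",  cv::IMREAD_GRAYSCALE);

    // Tasks 1 + 2: keypoints and region descriptors for both images.
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpM, kpF;
    cv::Mat descM, descF;
    orb->detectAndCompute(marker, cv::noArray(), kpM, descM);
    orb->detectAndCompute(frame,  cv::noArray(), kpF, descF);

    // Match descriptors (Hamming distance suits ORB's binary descriptors).
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descM, descF, matches);

    // Task 3: a homography tells you how the planar marker sits in the frame.
    std::vector<cv::Point2f> src, dst;
    for (const cv::DMatch& m : matches) {
        src.push_back(kpM[m.queryIdx].pt);
        dst.push_back(kpF[m.trainIdx].pt);
    }
    cv::Mat H = cv::findHomography(src, dst, cv::RANSAC);
}
```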