I have an app on iOS and Android, and I'm getting the user's location on both of them: with CoreLocation on iOS and Google Maps on Android. After retrieving the location, we apply reverse geocoding to get the locality out of it. Once we have it, we perform equalTo queries to find photos of a specific location.
The thing is, there are occasions where the locality on iOS is slightly different from Android's! For example, "Palaiochori" on iOS and "Paleochori" on Android. Notice that two letters are different!
So, even though the location is the same on both devices, the equalTo query will obviously fail!
What I want to know is whether there is any way to create a type of query where, instead of checking for equality, we check for similarity.
Note that we do use Cloud Code, so any server-side solution is acceptable, and even preferred!
Clearly the location name cannot guarantee uniqueness. Two possible solutions:
Ensure consistency of your database
Store a GeoPoint for these locations. When you already have Palaiochori in your database and a user finds himself in Paleochori, run a whereNear() query before saving a new object, to see whether you have nearby places for that location.
If you have results within a reasonable radius, ask the user: "Did you mean <list of near places>?" The user will likely recognize the place with the similar name and tap on it. This way you avoid duplication.
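The whereNear() idea can be sketched without the Parse SDK: assuming each stored place carries a lat/lng, a plain haversine check plays the role of the GeoPoint query (all names and coordinates below are illustrative):

```javascript
// Sketch of the "check for nearby places before saving" step. In Cloud Code
// you would run a whereNear() query on a GeoPoint column instead; this is a
// self-contained stand-in for that behavior.

// Haversine distance between two lat/lng points, in kilometers.
function distanceKm(a, b) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Return stored places within radiusKm of the candidate, nearest first.
function nearbyPlaces(stored, candidate, radiusKm) {
  return stored
    .map((p) => ({ place: p, d: distanceKm(p, candidate) }))
    .filter((x) => x.d <= radiusKm)
    .sort((x, y) => x.d - y.d)
    .map((x) => x.place);
}

const stored = [{ name: "Palaiochori", lat: 40.05, lng: 23.65 }];
const candidate = { name: "Paleochori", lat: 40.051, lng: 23.651 };
// A hit within 1 km suggests the same locality under a different spelling,
// so the app can offer the existing place instead of saving a duplicate.
console.log(nearbyPlaces(stored, candidate, 1).map((p) => p.name));
```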
Use a consistent location database
As soon as you have lat/lng, you could use Google Places API to ask for places near that location. Google will return a Place object with a unique placeId that you can store in your database. The id is guaranteed to remain the same and can be used for your queries reliably.
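A sketch of that flow, using the documented Places Nearby Search web endpoint; the API key, the mock response, and the helper names are placeholders:

```javascript
// Build a Places API "Nearby Search" request URL for the device's lat/lng.
// The endpoint and parameter names follow the public Places Web Service;
// "YOUR_API_KEY" and the response handling are illustrative.
function nearbySearchUrl(lat, lng, radiusMeters, apiKey) {
  const base = "https://maps.googleapis.com/maps/api/place/nearbysearch/json";
  const params = new URLSearchParams({
    location: `${lat},${lng}`,
    radius: String(radiusMeters),
    key: apiKey,
  });
  return `${base}?${params}`;
}

// Pick the stable identifier out of a (mocked) response: the place_id is
// what you would store and query on, instead of the locale-dependent name.
function stablePlaceId(response) {
  return response.results.length ? response.results[0].place_id : null;
}

const url = nearbySearchUrl(40.05, 23.65, 500, "YOUR_API_KEY");
const mock = { results: [{ place_id: "ChIJabc123", name: "Palaiochori" }] };
console.log(stablePlaceId(mock)); // → "ChIJabc123"
```

Because both devices resolve the same coordinates to the same place_id, the equalTo query works regardless of how each platform spells the locality.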
Related
I am building an app that will require extensive use of the autocomplete function, and I have currently implemented it using Nearby Search. I recently learned, however, that this is the priciest option, given its high cost plus the associated Contact and Atmosphere data costs.
I am therefore looking for a good option to get relevant autocomplete search results based on the user's location without the need for Nearby Search. I care about the UX and thus want to avoid people scrolling too much to find a place near them. The only fields I need are name and, potentially, address.
I tried Nearby Search; if I understand correctly, this is the only way to get predictions based on where you are physically located. I have now learned that it is too expensive, however.
Autocomplete and Nearby Search are entirely different operations and APIs. You can combine both to build a user-friendly experience, but they each play a very different role.
Place Autocomplete provides predictions of places based on the user's input, i.e. the characters they enter into an input field. These predictions can be biased, or even restricted, to a small area around the user's location, to increase the chances that they represent places near the user. Depending on whether places far away from the user are acceptable or useful, you can use one or the other:
locationbias if predictions far away are acceptable and useful, e.g. a user searching for a place that is not necessarily where they are, or in situations where the user location is either not available or not very precise, e.g.:
user wants to find a place to go to
user location is obtained from geolocating their IP address
user location is obtained from geolocating their cell towers
locationrestriction if only very nearby predictions are acceptable and user location is known to be very precise (e.g. GPS or other high-precision sources). This would make sense in mobile applications when the user location is provided (by the phone's OS) with a small radius (e.g. under 100 m.) and the user really just wants to find places that describe where they are now. Even then, beware that some places can be bigger than you'd expect, e.g. airports include runways.
Note on billing: Place Autocomplete can be free under specific conditions: when your application implements session tokens and there is a Place Details request at the end of the session, in which case Place Details is billed and Autocomplete is not. However, even if your application implements session tokens, each time a user doesn't pick a prediction, Autocomplete is billed as a session without Details. And in the simpler case, if your application does not implement session tokens, all Autocomplete is billed as per-request (and Place Details is billed separately, on top of that).
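To make the two parameters concrete, here is a sketch of how a biased Autocomplete request with a session token might be assembled. The parameter names follow the Places Autocomplete web service; the token value and API key are made up, and in practice you would generate a fresh token (e.g. a UUID) per typing session:

```javascript
// Build an Autocomplete request URL biased to a circle around the user.
// The sessiontoken ties the per-keystroke requests and the final Place
// Details request into one billable session.
function autocompleteUrl(input, lat, lng, radiusMeters, sessionToken, apiKey) {
  const base = "https://maps.googleapis.com/maps/api/place/autocomplete/json";
  const params = new URLSearchParams({
    input,                                            // what the user typed so far
    locationbias: `circle:${radiusMeters}@${lat},${lng}`, // soft preference for nearby results
    sessiontoken: sessionToken,
    key: apiKey,
  });
  return `${base}?${params}`;
}

// For the restricted variant, the docs use locationrestriction with the
// same circle syntax; only the parameter name changes.
const url = autocompleteUrl("Palaio", 40.05, 23.65, 500, "tok-1", "YOUR_API_KEY");
console.log(url);
```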
Nearby Search can provide nearby places (and can rankby=distance) based only on the user's location, without any user input. This can be used to show an initial list of places (e.g. the nearest 5 places) even before the user starts typing. There are a few caveats:
results depend heavily on the user location being very precise
results will only include establishment places, i.e. businesses, parks, transit stations
If you want addresses instead of businesses, you could use reverse geocoding instead of Nearby Search, with the caveat that it can return results that are near-ish and don't necessarily represent the exact place where the user is. This is more useful when you want to find addresses around a location; they may include the actual address of that location, but that is not guaranteed.
We've built a system for generating anesthesia records.
We're now trying to model them as FHIR documents.
I understand that a Document (in FHIR terms) is supposed to end up being kind of a self-contained resource.
But, in our case, we have a process where this document will be gradually assembled.
What's the best way to handle this while we're gathering resources, before we're ready to create a document?
We want to use FHIR to create and save various resources as we go, and then at the very end, assemble a document.
Assume the following:
A patient
A provider
A health history
Some info about the procedure being performed
An extensive set of vitals observations
An extensive set of drug doses administered
Various procedure, and recovery notes
A final signature by the provider that will "finalize" the report
I understand we can create and save various resources throughout. But we want to kind of keep them all lumped together so we can easily fetch everything related to what will ultimately become that document.
How would this work in terms of RESTful operations?
POST /Bundle of type "document" with a composition as first element (to create document)
Use resulting ID from bundle? Will I also get an ID for the composition?
Then, how do I add/update/remove individual items from the composition? Do I need to PUT the entire composition to add something?
I have an entire series of checkpoints every 5 minutes with full vitals (BP, SpO2, temp, respiratory rate, etc.). Would I first create those observations with a POST, and then do a PUT to update the composition with a reference to them?
As I'm sure you can tell, I just want to understand how FHIR expects you to do this kind of thing in terms of HTTP operations.
Thanks in advance for any guidance!
You'd start by posting a Composition to have a focal point (table of contents) to update as you gather your data. You would then POST your individual Observations, Procedures, etc. and either PUT or PATCH the Composition to add references to the relevant data. Once you've got all of the relevant information gathered and tied into the Composition, you would then generate the document Bundle. You could create the Bundle earlier in the process and update it each time the Composition changes if you wanted to be able to render the draft document using a FHIR document rendering tool, but otherwise there's no real reason for the Bundle to exist until you're ready to lock down the document.
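A rough sketch of that assemble-as-you-go flow, using simplified FHIR JSON shapes. The resource IDs and section titles here are invented, a real server assigns IDs on POST, and the PUT/PATCH plumbing is omitted:

```javascript
// Draft Composition: the "table of contents" you keep updating.
function newComposition(patientId, authorId) {
  return {
    resourceType: "Composition",
    status: "preliminary", // draft until signed
    type: { text: "Anesthesia record" },
    subject: { reference: `Patient/${patientId}` },
    author: [{ reference: `Practitioner/${authorId}` }],
    section: [],
  };
}

// After POSTing an Observation etc., add its reference to a section.
// This is the change you would then PUT/PATCH back to the server.
function addToSection(composition, title, reference) {
  let section = composition.section.find((s) => s.title === title);
  if (!section) {
    section = { title, entry: [] };
    composition.section.push(section);
  }
  section.entry.push({ reference });
  return composition;
}

// Final step: fold the Composition and every referenced resource into a
// document Bundle, with the Composition as the first entry.
function toDocumentBundle(composition, resourcesByRef) {
  composition.status = "final"; // provider signature "finalizes" the record
  const entries = [{ resource: composition }];
  for (const s of composition.section)
    for (const e of s.entry) entries.push({ resource: resourcesByRef[e.reference] });
  return { resourceType: "Bundle", type: "document", entry: entries };
}

const comp = newComposition("pat-1", "dr-1");
addToSection(comp, "Vitals", "Observation/obs-1");
const bundle = toDocumentBundle(comp, {
  "Observation/obs-1": { resourceType: "Observation", id: "obs-1" },
});
console.log(bundle.type, bundle.entry.length); // "document" 2
```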
NTFS files can have object ids. These ids can be set using FSCTL_SET_OBJECT_ID. However, the msdn article says:
Modifying an object identifier can result in the loss of data from portions of a file, up to and including entire volumes of data.
But it doesn't go into any more detail. How can this result in loss of data? Is it talking about potential object id collisions in the file system, and does NTFS rely on them in some way?
Side note: I did some experimenting with this before I found that paragraph, and set the object IDs of some newly created files; here's hoping that my file system's still intact.
I really don't think this can directly result in loss of data.
The only way I can imagine it being possible is if e.g. a backup program assumes that (1) every file has an Object Id, and (2) that the program is keeping track of all IDs at all times. In that case it might assume that an ID that is not in its database must refer to a file that should not exist, and it might delete the file.
Yeah, I know it sounds ridiculous, but that's the only way I can think of in which this might happen. I don't think you can lose data just by changing IDs.
They are used by the Distributed Link Tracking Service, which enables client applications to track link sources that have moved. The link tracking service maintains its link to an object only by using these object identifiers (IDs).
So coming back to your question,
Is it talking about potential object ID collisions in the file system?
I don't think so. Windows does provide us the option to set object IDs using FSCTL_SET_OBJECT_ID, but that doesn't bring the risk of ID collision:
Attempting to set an object identifier on an object that already has an object identifier will fail.
.. and does NTFS rely on them in some way?
Yes. Object identifiers are used to track files and directories. An index of all object IDs is stored on the volume. Rename, backup, and restore operations preserve object IDs. However, copy operations do not preserve object IDs, because that would violate their uniqueness.
How can this result in loss of data?
You won't get into serious problems if you change (or rather set) the object ID of user-created files (as you did). However, if a user (knowingly or unknowingly) sets an object ID used by a shared object file/library, the change will not be reflected as is.
Since Windows doesn't want everyone (except developers) to play with crucial library files, it issues a generic warning:
Modifying an object identifier can result in the loss of data from
portions of a file, up to and including entire volumes of data.
Bottom line: Change it if you know what you are doing.
There's another MSDN article on distributed link tracking and object identifiers.
Hope it helps!
EDIT:
Thanks to @Mehrdad for pointing this out. I didn't mean the object identifiers of DLLs themselves, but the ones they use internally.
OLEACC (a DLL) provides the Active Accessibility runtime and manages requests from Active Accessibility clients [source]. It uses the OBJID_QUERYCLASSNAMEIDX object identifier [source].
I have a question about getting 'random' chunks of available content from a RESTful service, without duplicating what the client has already cached. How can I do this in a RESTful way?
I'm serving up a very large number of items (little articles with text and urls). Let's pretend it's:
/api/article/
My (software) clients want to get random chunks of what's available. There's too many to load them all onto the client. They do not have a natural order, so it's not a situation where they can just ask for the latest. Instead, there are around 6-10 attributes that the client may give to 'hint' what type of articles they'd like to see (e.g. popular, recent, trending...).
Over time the clients get more and more content, but at the server I have no idea what they have already, and because they're sent randomly, I can't just pass in the 'most recent' one they have.
I could conceivably send up the GUIDs of what's stored locally. The clients only store 50-100 items locally. That's small enough to stuff into a POST variable, but not into the GET query string.
What's a clean way to design this?
Key points:
Data has no logical order
Clients must cache the content locally
Each item has a GUID
Want to avoid pulling down duplicates
You'll never be able to make this work satisfactorily if the data is truly kept in a random order (bear in mind the Dilbert RNG Effect); you need to fix the order for a particular client so that they can page through it properly. That's easy to do though; just make that particular ordering be a resource itself; at that point, you've got a natural (if possibly synthetic) ordering and can use normal paging techniques.
The main thing to watch out for is that you'll be creating a resource in response to a GET when you do the initial query: you probably should use a resource name that is a hash of the query parameters (including the client's identity if that matters) so that if someone does the same query twice in a row, they'll get the same resource (so preserving proper idempotency). You can always delete the resource after some timeout rather than requiring manual disposal…
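As a sketch, the "resource name that is a hash of the query parameters" idea might look like this. The FNV-1a hash and the URL scheme are illustrative choices, not part of any standard:

```javascript
// 32-bit FNV-1a hash of a string, rendered as 8 hex digits.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16).padStart(8, "0");
}

// Derive a stable resource name from the query parameters (and the client's
// identity, since the same query from two clients may freeze different
// orderings). Keys are sorted so parameter order doesn't matter.
function orderingResourceName(params, clientId) {
  const normalized = Object.keys(params)
    .sort()
    .map((k) => `${k}=${params[k]}`)
    .join("&");
  return `/api/ordering/${fnv1a(`${clientId}|${normalized}`)}`;
}

const a = orderingResourceName({ tag: "popular", lang: "en" }, "client-42");
const b = orderingResourceName({ lang: "en", tag: "popular" }, "client-42");
console.log(a === b); // true: same query, same resource, so GET stays idempotent
```

Once the client holds this resource name, the server can pin a shuffled ordering to it and the client pages through it with ordinary offset or cursor parameters, never seeing a duplicate.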
I am kind of a newbie at programming (have worked a bit with Delphi years back) but have started to build an application for Windows Phone 7.5 Mango, as I have a great idea for an app :D
In the application the user should be able to pick different locations from a list (a very large list, 5k+ items). To make sure that all users always get the latest list, I have created a SQL query on my website that generates the list as XML, which I load into the application via HttpWebRequest. I am not quite sure what best practice is when dealing with a large list that will be updated frequently, etc.
That is not the main question though, because this seems to work pretty okay. My real question is: how do I add a search function to my application, so the user can search for a location instead of scrolling through the entire list?
My SQL table is built up with ID, Country, State, Region, City (and a few more columns irrelevant to a search function).
I do not know the best way to approach this. Should I run a query on my website, generate the result as XML, and use HttpWebRequest to get the result to the phone, or should it be a search function on the device that searches the entire list? And if so, how do I do that?
Thank you ;-)
First of all, I have to point out that fetching a list with 5k+ items on a smartphone that is not on a wireless network will take a while. So, in my opinion, it would be a huge waste of traffic to download the whole list if the user is only interested in a few items. This basically means you are downloading a bunch of data but only using 0.01% of it, which is not the way you should build a program.
So, in my opinion, you should create a web service that the user can call with an HTTP request, passing a search parameter. Then you basically just use the parameter in a SQL search query, which could be a stored procedure or just inline SQL; I don't know how your server/database is built and structured, so you can choose whichever fits.
Here is an example:
SELECT * FROM TABLE_NAME WHERE (ID=@ID) OR (Country=@Country) OR (State=@State) OR (Region=@Region) OR (City=@City)
If I were building this application, I would have two parameters: one representing the user input, in other words the search text, and one describing which field to search (ID, Country, State, Region, or City).
You have to register your handler for TextBox.TextChanged with some logic to filter out queries that happen too often (for example, typing the name John may cause 4 requests: J, Jo, Joh, John). It can be done by using a System.Threading.Timer with a delayed start, resetting its start time each time the user types a new character (if you need an example, ask).
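As an illustration of that timer logic (the real app would use System.Threading.Timer in C#; this sketch uses a manually advanced clock so the behavior is easy to trace, and all names are made up):

```javascript
// Debounce text input: only fire a search request after the user has been
// quiet for delayMs. typed() mirrors the TextChanged handler; tick() stands
// in for the passage of time.
class Debouncer {
  constructor(delayMs, fire) {
    this.delayMs = delayMs;
    this.fire = fire;    // called with the final text, i.e. the web request
    this.pending = null; // { text, dueAt }: the rescheduled timer
    this.now = 0;        // injected clock, in ms
  }
  // Call on every TextChanged: (re)schedule the request.
  typed(text) {
    this.pending = { text, dueAt: this.now + this.delayMs };
  }
  // Advance the clock; fire the request if the quiet period has elapsed.
  tick(ms) {
    this.now += ms;
    if (this.pending && this.now >= this.pending.dueAt) {
      this.fire(this.pending.text);
      this.pending = null;
    }
  }
}

const sent = [];
const d = new Debouncer(300, (text) => sent.push(text));
for (const t of ["J", "Jo", "Joh", "John"]) {
  d.typed(t);
  d.tick(100); // the user keeps typing faster than the 300 ms quiet period
}
d.tick(300); // the user pauses, so exactly one request goes out
console.log(sent); // ["John"]
```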
Then I recommend you use a WCF service to "talk" to the SQL database. On the WCF service, use any ORM (Entity Framework is the simplest) to query your database.