I have an application that allows users to book a class in a particular time slot. I am supposed to capture the user's current location, check whether he has been near the booked class during the time slot he booked, and alert him to rate the class at that time. Can someone let me know how to proceed with this scenario?

Do I need to continuously capture the location and check whether it matches the location of the booked class? What do I do if the user has closed the app? Should I capture the location in the background, and how do I compare it with the desired location while the app is in the background? Continuously capturing the location will also affect the battery; can that lead to rejection of my app by Apple?

I'm a bit of a newbie, so any help in this matter will be appreciated. I have checked a couple of links such as "Update user's location in background (iOS)" and "How to track user location in background?" but haven't found satisfactory answers.
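Roughly, this is the kind of check I have in mind (a minimal sketch, with a hypothetical bookedClassLocation and a placeholder distance threshold; I am using significant-change updates rather than continuous updates to limit battery impact, and the time-slot check is left out):

```swift
import CoreLocation
import UserNotifications

final class ClassProximityMonitor: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    // Hypothetical values: the coordinate of the booked class and how close
    // (in metres) the user must be to count as "near" it.
    private let bookedClassLocation = CLLocation(latitude: 52.37, longitude: 4.89)
    private let proximityThreshold: CLLocationDistance = 200

    func start() {
        locationManager.delegate = self
        // "Always" authorisation is needed for updates while the app is in the
        // background or has been terminated and relaunched by the system.
        locationManager.requestAlwaysAuthorization()
        // Significant-change updates are much cheaper on battery than
        // continuous updates and can wake a closed app.
        locationManager.startMonitoringSignificantLocationChanges()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        // Compare the current position with the booked class location.
        // (A real implementation would also check that the current time
        // falls inside the booked time slot.)
        if current.distance(from: bookedClassLocation) <= proximityThreshold {
            promptToRateClass()
        }
    }

    private func promptToRateClass() {
        // A local notification reaches the user even when the app is not in
        // the foreground (notification permission must have been requested).
        let content = UNMutableNotificationContent()
        content.title = "How was your class?"
        content.body = "Tap to rate the class you just attended."
        let request = UNNotificationRequest(identifier: "rate-class", content: content, trigger: nil)
        UNUserNotificationCenter.current().add(request)
    }
}
```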
We have a screen that allows users to create a row for each of their addresses. Each row has a query we want to remain in scope for the app's life.
detachInactiveScreens={false} doesn't seem to have the effect I was hoping for.
How can we keep this screen mounted?
I'm working on a project which has settings for each display. I want my application to support a display being removed and later re-added, possibly with another display used in between, with the original settings for each display being applied when the display is seen again.
As far as I can tell there is no way, through NSScreen, to uniquely identify one outside of the context of the current display configuration. I can't just use screen dimensions/properties as the user could have multiple displays of the same model in different locations; this problem applies to all persistent properties of a screen as far as I can tell.
Is there a good, known way to do this?
Thanks for your time. Any help is greatly appreciated.
The documentation for -[NSScreen deviceDescription] talks about getting the CGDirectDisplayID, and the documentation for CGDirectDisplayID says:
When a monitor is attached, Quartz assigns a unique display identifier (ID). A display ID can persist across processes and system reboot, and typically remains constant as long as certain display parameters do not change.
When assigning a display ID, Quartz considers the following parameters:
Vendor
Model
Serial number
Position in the I/O Kit registry
This sounds pretty close to what you are looking for.
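A rough sketch of how that might look (assuming Swift; note that some displays report a serial number of 0, so treat the combined key as a best effort rather than a guaranteed unique identifier):

```swift
import AppKit
import CoreGraphics

// Reads the CGDirectDisplayID out of the screen's device description
// (the "NSScreenNumber" entry mentioned in the documentation).
func displayID(for screen: NSScreen) -> CGDirectDisplayID? {
    let key = NSDeviceDescriptionKey("NSScreenNumber")
    return (screen.deviceDescription[key] as? NSNumber)?.uint32Value
}

// Builds a key from the same parameters Quartz itself considers (vendor,
// model, serial number), which can be used to store per-display settings
// that survive the display being unplugged and later re-attached.
func persistentKey(for screen: NSScreen) -> String? {
    guard let id = displayID(for: screen) else { return nil }
    // Some displays report a serial number of 0, so the key may not
    // distinguish two otherwise identical monitors.
    return "\(CGDisplayVendorNumber(id))-\(CGDisplayModelNumber(id))-\(CGDisplaySerialNumber(id))"
}
```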
For this question I don't need a full explanation of all the code, but help getting insight into the process for achieving this result would be very welcome, as would pointers to information sources that will lead me to where I want to be.
Don't hesitate to give your opinion or make suggestions on how you would make it better in case you have better ideas - we just want to move beyond the regular photo album system.
In the attached screenshot I have added a painted mock-up that makes the purpose clear.
Albums are created by tapping the "+" sign. (This shows a popup window in which the user can tag a bar/event to which the picture applies; the bar/event profile picture will appear on the album cover).
Newly taken pictures should appear in a separate band on the screen. They will float there until the user drags and drops them into an album. Note that the picture is also taken from within the app (using the native camera of the smartphone).
When the user adds them to an album that was tagged, they will also be displayed automatically in the gallery of the tagged bar/event profile. (Of course, personal profiles will be available in the app as well.)
Which technologies / workflow would you advise?
What I need to create now is just an empty shell for the app that demonstrates the visual workflow (the data flows are not important at this point).
I have read about some libraries such as Three20 and UIImagePicker, but I don't know whether they are easy to customize to our needs.
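For reference, the stock camera flow with UIImagePickerController that I believe we would start from looks roughly like this (a minimal sketch; CaptureViewController and the delegate handling are just placeholders):

```swift
import UIKit

final class CaptureViewController: UIViewController,
                                   UIImagePickerControllerDelegate,
                                   UINavigationControllerDelegate {

    // Present the built-in camera UI so the picture is taken inside the app.
    // (NSCameraUsageDescription must be present in Info.plist.)
    func presentCamera() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // The newly taken picture; in the app it would be added to the
        // "floating" band of pictures not yet assigned to an album.
        let photo = info[.originalImage] as? UIImage
        _ = photo
        picker.dismiss(animated: true)
    }
}
```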
Thanks!
I cannot comment on the likes of Three20 as I have never actually used them.
One method I can suggest is using a number of scroll views. Based on your example, you would require 2 individual scroll views (for ease, let's call them AlbumsSV and PicturesSV).
The AlbumsSV would dynamically load content based on your backing store. One approach I have used in the past is to load custom views into a scroll view, as this allows for maximum control: you can specify any requirements as properties of the view (e.g. primary key), and you can also load a 'preview image' based on the data held in your data store.
Assuming you always want the ability to add new items to be the last element in the AlbumsSV, you can simply add another custom view to the AlbumsSV after all other items have been processed.
The PicturesSV would simply load content based upon what is in the user's camera roll. Again I would recommend using a custom view, as you can set properties such as the file URL on it, which will help when it comes to dragging items into a specific album.
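A rough sketch of that approach (hypothetical names like Album and AlbumTileView standing in for whatever your backing store provides; fixed tile sizes are just for illustration):

```swift
import UIKit

// Hypothetical model object coming from your backing store.
struct Album {
    let primaryKey: Int
    let previewImage: UIImage?
}

// Custom view per album, so extra data (e.g. the primary key) can live
// on the view itself and be read back when handling taps or drags.
final class AlbumTileView: UIImageView {
    var primaryKey: Int?
}

func populateAlbumsScrollView(_ albumsSV: UIScrollView, with albums: [Album]) {
    let tileSize = CGSize(width: 100, height: 100)
    let spacing: CGFloat = 10
    var x: CGFloat = spacing

    for album in albums {
        let tile = AlbumTileView(frame: CGRect(origin: CGPoint(x: x, y: spacing), size: tileSize))
        tile.primaryKey = album.primaryKey
        tile.image = album.previewImage      // the 'preview image' from the data store
        albumsSV.addSubview(tile)
        x += tileSize.width + spacing
    }

    // The "+" tile is always the last element, added after all albums.
    let addTile = AlbumTileView(frame: CGRect(origin: CGPoint(x: x, y: spacing), size: tileSize))
    addTile.image = UIImage(systemName: "plus")
    albumsSV.addSubview(addTile)
    x += tileSize.width + spacing

    albumsSV.contentSize = CGSize(width: x, height: tileSize.height + 2 * spacing)
}
```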
Hope this helps :)
I have some doubts regarding secondary tiles.
How many secondary tiles can be pinned to the start screen for an application (is there any limit)?
Secondary tiles are pinned to the start screen programmatically. Do we have to ask the user whether he wants the changes to a particular section to be pinned to the start screen as a secondary tile, or can we pin secondary tiles without getting any confirmation from the user?
There is currently no published restriction on the number of tiles that can be added.
However, given the screen real estate available on the start screen, the value of updated/animating live tiles that aren't always visible (or at least not within the first few "screens" worth of tiles) becomes decreasingly useful.
While pinning tiles can be done without a user-initiated action, when a new tile is added the app closes and the phone returns to the start screen to show the new tile. As such, it's not possible to add lots of live tiles without the user being aware.
If a marketplace requirement regarding the behaviour around the creation of additional tiles doesn't exist yet, I would expect one to be added quite quickly.
The potentially bad user experience of an app which repeatedly creates additional tiles without a specific instruction from the user would, I strongly expect, lead to that app not getting used much. ;)
In summary:
You can add as many secondary tiles as you wish, but only do so when the user requests it, and only make the option available when it will add real value.
I want to implement an application on my website where, once the users are connected, they share a text editor. If one user enters anything in the text editor on his screen, the same text appears on the second user's screen at the same coordinates, and vice versa. There would also be pointer-shaped images on both users' screens to represent mouse pointers: when user A moves his mouse pointer, the image on user B's screen should move according to the movement of user A's mouse, and similarly, when user B moves his mouse, the image on user A's screen should move accordingly.
The problem is that I am using a database to store the coordinates of each user, and this approach results in a lot of lag and delay. What should I use in place of the database? Please help!
You probably don't want the clients to request the updates but rather push them to your clients: http://en.wikipedia.org/wiki/Comet_%28programming%29. This reduces the delay between one client sending an update to the server and the other clients checking for updates again.
How about using Redis: http://code.google.com/p/redis/
An example of a similar collaborative text editor using this: http://www.web2media.net/laktek/2010/05/25/real-time-collaborative-editing-with-websockets-node-js-redis/