I've been experimenting with running location services in WP7 using background periodic tasks. I've been testing the accuracy values (Default and High) I can specify on GeoCoordinateWatcher and I was hoping I could get some feedback from people who have dealt with similar issues. I was running the background agent with location services under Default accuracy and was able to get maybe 3 or 4 location updates throughout the day. I would have preferred getting more frequent position updates. Would a higher accuracy (by using GPS with High accuracy) help with this?
I'm concerned that by increasing accuracy, the 25-second time limit on periodic tasks could become a problem. Has anyone run location services in the background with high accuracy? Did it affect location update frequency? Any problems staying under the 25 seconds? Is there a penalty on the app if the OS has to shut down the periodic task several times for taking longer than 25 seconds? Will I need to relaunch my app to get the periodic task running again?
Any advice or feedback on the subject will be greatly appreciated. Thanks in advance.
See this MSDN link:
GeoCoordinateWatcher: This API, used for obtaining the geographic
coordinates of the device, is supported for use in background agents,
but it uses a cached location value instead of real-time data. The
cached location value is updated by the device every 15 minutes.
Given that the location is cached and can be up to 15 minutes old, I'm pretty sure specifying high accuracy isn't going to help.
I need to show usage stats at any point in time for the last 3 months, 6 months and 1 year. I am planning to use KStream sliding windows for the durations mentioned above. Most of the examples I see use durations in minutes or seconds. Is it OK to use much larger time durations for sliding windows? Is there any performance impact? Is there any specific configuration I should use to get optimum performance?
Thanks,
Jinu
It will really depend on the density of the data and what kind of aggregations you are doing. You could end up with a very large number of windows updating and not closing, since the end time is so far out. Also, if the load is too heavy, I am not sure the state stores could handle it. But with the correct load and retention times I don't see an obvious reason it wouldn't work.
Edit: If you do end up trying it I would be very interested in seeing how it works out.
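To get a feel for the scale involved, here is a back-of-envelope sketch in plain Python (all numbers are assumptions for illustration, treating the sliding window as a hopping window with a fixed advance interval): a long window with a short advance means each record updates many overlapping windows, and the state store has to retain many window instances per key.

```python
from datetime import timedelta

def windows_per_record(window_size: timedelta, advance: timedelta) -> int:
    # Each incoming record falls into window_size / advance
    # overlapping windows, so it triggers that many updates.
    return int(window_size / advance)

def windows_retained_per_key(retention: timedelta, advance: timedelta) -> int:
    # The state store keeps roughly retention / advance window
    # instances per key until they expire.
    return int(retention / advance)

# Assumed numbers: a 90-day window advancing once per day,
# retained for a full year.
size, advance, retention = timedelta(days=90), timedelta(days=1), timedelta(days=365)
print(windows_per_record(size, advance))         # 90 window updates per record
print(windows_retained_per_key(retention, advance))  # ~365 windows held per key
```

With a coarser advance interval (say, one week) the update fan-out and state-store footprint both shrink proportionally, which is one knob to turn if the store becomes a bottleneck.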
I am able to collect performance counters every two seconds on a Windows Server 2008 machine using a PowerShell script. But when I go to Task Manager and check the CPU usage, powershell.exe is taking 50% of the CPU. So I am trying to get those performance counters using third-party tools instead. I have searched and found this and this. Both of those need to be refreshed manually rather than updating automatically every two seconds. Can anyone please suggest a tool which collects the performance counters every two seconds, computes the maximum and average of those counters, and stores the results in text/xls or some other format?
I found some performance tools, listed below:
Apache JMeter
NeoLoad
LoadRunner
LoadUI
WebLOAD
WAPT
Loadster
LoadImpact
Rational Performance Tester
Testing Anywhere
OpenSTA
QEngine (ManageEngine)
Loadstorm
CloudTest
Httperf
There are a number of tools that do this -- Google for "server monitor". Off the top of my head:
PA Server Monitor
Tembria FrameFlow
ManageEngine
SolarWinds Orion
GFI Max Nagios
SiteScope. This tool leverages either the perfmon API or the SNMP interface to collect the stats without having to run an additional non-native app on the box. If you go the open source route then you might consider Hyperic. Hyperic does require an agent to be on the box.
In either case I would look to your sample window as part of the culprit for the high CPU, not PowerShell. The higher your sample rate, the higher you will drive the CPU, independent of the tool. You can see this yourself just by running perfmon: use the same set of stats and watch what happens to the CPU as you adjust the sample rate from once every 30 seconds, to once every 20, then 10, 5 and finally 2 seconds. When engaged in performance testing we rarely go below ten seconds on a host, as sampling faster will cause the sampling tool itself to distort the performance of the host. For a particularly long-term test, say 24 hours, an interval of once every 30 seconds is enough to spot long-term trends in resource utilization.
If you are looking to collect information over a long period of time, 12 hours or more, consider a longer interval. If you are going for a short period of sampling, an hour for instance, you may want to run a couple of one-hour runs at lesser and greater sampling rates (2 seconds vs. 10 seconds) to make sure the shorter interval is generating additional value for the additional overhead on the system.
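The experiment above boils down to counting samples: halving the interval roughly doubles the work the collector does. A quick sketch (plain Python) of samples per host per hour at each interval mentioned in the text:

```python
# Samples collected per host per hour at each candidate interval;
# collection overhead grows roughly in proportion to the sample count.
rates = {interval_s: 3600 // interval_s for interval_s in (30, 20, 10, 5, 2)}
for interval_s in sorted(rates, reverse=True):
    print(f"{interval_s:>2}s interval -> {rates[interval_s]:>4} samples/hour")
```

Going from a 30-second to a 2-second interval is a 15x increase in collection work per counter, which is consistent with powershell.exe showing up so prominently in Task Manager.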
To repeat, tools just to collect OS stats:
Commercial: SiteScope (Agentless). Leverages native interfaces
Open Source: Hyperic (Agent)
Native: Perfmon. Can dump data to a file for further analysis
This should be possible without third party tools. You should be able to collect the data using Windows Performance Monitor (see Creating Data Collector Sets) and then translate that data to a custom format using Tracerpt.
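Once the data is exported to CSV (Tracerpt and Relog can both produce one), the maximum/average analysis asked for above is a few lines of scripting. A sketch in Python; the counter name and values here are made up for illustration (real perfmon headers contain the full counter path):

```python
import csv
import io
import statistics

def summarize_counters(csv_text: str) -> dict:
    """Compute max and average for each counter column in a
    perfmon-style CSV export (first column is the timestamp)."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    series = {name: [] for name in header[1:]}
    for row in reader:
        for name, value in zip(header[1:], row[1:]):
            try:
                series[name].append(float(value))
            except ValueError:
                pass  # skip blank or invalid samples
    return {name: {"max": max(vals), "avg": statistics.mean(vals)}
            for name, vals in series.items() if vals}

# Hypothetical two-second samples of a CPU counter
sample = (
    '"Time","CPU % Processor Time"\n'
    '"10:00:00","40.0"\n'
    '"10:00:02","60.0"\n'
    '"10:00:04","50.0"\n'
)
print(summarize_counters(sample))  # max 60.0, avg 50.0
```

The same output dictionary can then be written to a text file or spreadsheet in whatever format you need.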
If you are still looking for other tools, I have compiled a list of Windows server performance monitoring tools that also includes third-party solutions.
I have a site running on amazon elastic beanstalk with the following traffic pattern:
~50 concurrent users normally.
~2000 concurrent users for 1-2 minutes when a post is made to the Facebook page.
Amazon Web Services claims to be able to scale rapidly to challenges like this, but the "Greater than x for more than 1 minute" setup in CloudWatch doesn't appear to be fast enough for this traffic pattern.
Usually within seconds all the EC2 instances crash, killing all CloudWatch metrics, and the whole site is down for 4-6 minutes. So far I've yet to find a configuration that works for this scenario.
Here is the graph of a smaller event that also killed the site:
Are these links posted predictably? If so, you can use Scaling by Schedule; alternatively you might change the DESIRED-CAPACITY value of the Auto Scaling group, or even trigger as-execute-policy to scale out right before your link is posted.
Note that you can have multiple scaling policies in one group, so you might have a special Auto Scaling policy for your case, something like SCALE_OUT_HIGH which adds, say, 10 more instances at once. Take a look at the as-put-scaling-policy command.
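The as-* command-line tools mentioned above have since been folded into the AWS CLI and SDKs. Here is a sketch of the scale-by-schedule idea as boto3 request parameters; the group name, schedule and capacities are all assumptions, and the actual API call is left as a comment so the snippet stays self-contained:

```python
# Parameters for a scheduled scale-out just before a regular Facebook post.
# Group name, times and capacities are illustrative assumptions.
scheduled_action = {
    "AutoScalingGroupName": "my-web-asg",
    "ScheduledActionName": "pre-facebook-post-scale-out",
    "Recurrence": "55 17 * * MON",  # cron (UTC): 17:55 every Monday, just before the post
    "MinSize": 10,
    "DesiredCapacity": 12,
}

# With boto3 this would be applied roughly as:
#   import boto3
#   boto3.client("autoscaling").put_scheduled_update_group_action(**scheduled_action)

print(scheduled_action["ScheduledActionName"])
```

A second scheduled action can scale the group back down once the spike has passed, so you only pay for the extra instances around the posting window.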
Also, you need to check your code and find the bottlenecks.
Which HTTP server do you use? Consider switching to Nginx, as it is much faster and consumes fewer resources than Apache. Try using Memcached; a NoSQL store like Redis is a fine option for heavy reads and writes as well.
The suggestion from AWS was as follows:
We are always working to make our systems more responsive, but it is
challenging to provision virtual servers automatically with a response
time of a few seconds as your use case appears to require. Perhaps
there is a workaround that responds more quickly or that is more
resilient when requests begin to increase.
Have you observed whether the site performs better if you use a larger
instance type or a larger number of instances in the steady state?
That may be one method to be resilient to rapid increases in inbound
requests. Although I recognize it may not be the most cost-effective,
you may find this to be a quick fix.
Another approach may be to adjust your alarm to use a threshold or a
metric that would reflect (or predict) your demand increase sooner.
For example, you might see better performance if you set your alarm to
add instances after you exceed 75 or 100 users. You may already be
doing this. Aside from that, your use case may have another indicator
that predicts a demand increase, for example a posting on your
Facebook page may precede a significant request increase by several
seconds or even a minute. Using CloudWatch custom metrics to monitor
that value and then setting an alarm to Auto Scale on it may also be a
potential solution.
So I think the best answer is to run more instances at lower traffic and use custom metrics to predict traffic from an external source. I am going to try, for example, monitoring Facebook and Twitter for posts with links to the site and scaling up straight away.
I want to send my current location to a PHP web service every 5 minutes, even while my application is running in the background. This works fine while the application is in the foreground, but once it moves to the background it stops sending data. Can anyone tell me how to keep my application running in the background?
By "running in background", do you mean running when under the lock screen? If this is the case, then you need to set PhoneApplicationService.Current.ApplicationIdleDetectionMode = IdleDetectionMode.Disabled;
The post Running a Windows Phone Application under the lock screen by Jaime Rodriguez covers the subject well.
However, if you're talking about running an application that continues to run while the user uses other applications on the device, then this is not possible. In the Mango build of the operating system you can create background agents, but these only run every 30 minutes and only for 15 seconds as described on MSDN.
There is a request on the official UserVoice forum for Windows Phone development to Provide an agent to track routes, but even if adopted, this would not be available for quite some time.
Tracking applications are the bulk of what I do for a living, and the prospect of using WP7 like this is the primary reason I acquired one.
From a power consumption perspective, transmitting data is the single most expensive thing you can do, followed closely by sampling the GPS and accelerometers.
To produce a trace that closely conforms to roads, you need a higher sampling rate. WP7 won't let you sample more than once per second. This is (just barely) fast enough to track a motor vehicle, and at this level of power consumption the battery will last for about an hour assuming you log the data on the phone and don't attempt to transmit it.
You will also find that if you transmit for every sample, your sampling interval will be at least 15 seconds. Running the web call on another thread won't help because it will take more than one second to complete and you will run out of sockets in less than a minute with a one second sample interval.
There are solutions to all of these problems. For example, in a motor vehicle you can connect to vehicle power and run hot. You can batch and burst your data on a background thread.
These, however, are only the basic problems faced by every tracker designer. More interesting are the questions of proximity in space and time, measurement of deviation from a route, how to specify routes and geofences in a time dependent manner, how to associate them into named sets for rule evaluation purposes and how to associate rules with named sets of routes and geofences.
And then there is periodic clustering, which introduces all the calendrical nightmares that are too much for your average developer of desktop software. To apply the speed limit for a school zone you need to know the time zone, daylight savings, two start and two stop times and the start and end dates for school holidays in that region.
If you are just doing this for fun or as some kind of hiking trace then a five minute interval will impose much milder power demands than one second sampling, but I still suggest batch and burst because it means you can track locations that don't have comms.
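Here is a minimal sketch of the batch-and-burst idea in Python; the buffer thresholds and the `send` callable are assumptions standing in for real transmit code (on the phone, the equivalent would buffer fixes locally and transmit on a background thread):

```python
import time

class BatchAndBurst:
    """Buffer location fixes locally and upload them in bursts,
    so tracking keeps working where there is no connectivity and
    the radio is used far less often than once per sample."""

    def __init__(self, send, batch_size=20, max_age_s=300):
        self.send = send              # callable that uploads a list of fixes
        self.batch_size = batch_size
        self.max_age_s = max_age_s
        self.buffer = []
        self.first_ts = None

    def add_fix(self, lat, lon, ts=None):
        ts = ts if ts is not None else time.time()
        if self.first_ts is None:
            self.first_ts = ts
        self.buffer.append((ts, lat, lon))
        # Burst when the batch is full or the oldest fix is too stale.
        if len(self.buffer) >= self.batch_size or ts - self.first_ts >= self.max_age_s:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(self.buffer)    # one transmission carries many samples
            self.buffer, self.first_ts = [], None

# Demonstration with a fake sender that just records each burst.
sent = []
tracker = BatchAndBurst(send=sent.append, batch_size=3)
for i in range(7):
    tracker.add_fix(51.5 + i * 0.001, -0.12, ts=i)
tracker.flush()
print([len(batch) for batch in sent])  # → [3, 3, 1]
```

The same structure also gives you a natural place to persist the buffer to storage before transmitting, so fixes survive an app shutdown or a dropped connection.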
Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good).
How would you decide on which hardware to deploy it? What rules of thumb exist when determining how many machines you need?
How would you formulate parameters such as concurrent users, simultaneous connections, daily hits and DB read/write ratio to a decision on how much, and which, hardware you need?
Any resources on this issue would be very helpful...
Specifically - any hard numbers from real world experience and case studies would be great.
Capacity Planning is quite a detailed and extensive area. You'll need to accept an iterative model with a "Theoretical Baseline > Load Testing > Tuning & Optimizing" approach.
Theory
The first step is to decide on the business requirements: how many users are expected at peak usage? Remember, these numbers are usually inaccurate by some margin.
As an example, let's assume that all the peak traffic (worst case) will occur over 4 hours of the day. So if the website expects 100K hits per day, we don't divide that over 24 hours, but over 4 hours instead. So my site now needs to support a peak traffic of 25K hits per hour.
This breaks down to 417 hits per minute, or 7 hits per second. This is on the front end alone.
Add to this the number of internal transactions such as database operations, any file i/o per user, any batch jobs which might run within the system, reports etc.
Tally all these up to get the number of transactions per second, per minute etc that your system needs to support.
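The arithmetic above, folded into a small helper (plain Python, using the example's numbers):

```python
def peak_rates(daily_hits: int, peak_window_hours: float):
    """Fold a day's traffic into the assumed peak window and
    derive per-hour / per-minute / per-second rates."""
    per_hour = daily_hits / peak_window_hours
    per_minute = per_hour / 60
    per_second = per_minute / 60
    return per_hour, per_minute, per_second

per_hour, per_minute, per_second = peak_rates(100_000, 4)
print(round(per_hour))    # 25000 hits/hour
print(round(per_minute))  # 417 hits/minute
print(round(per_second))  # 7 hits/second
```

The same helper can then be reused with the internal transaction counts (database operations, file I/O, batch jobs) to get the back-end rates the system must sustain.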
This gets further complicated when you have requirements such as "average response time must be 3 seconds", which means you have to factor in network latency, firewalls, proxies, etc.
Finally, when it comes to choosing hardware, check out the published datasheets from each manufacturer (Sun, HP, IBM, etc.). These detail the maximum transactions per second under test conditions. We usually accept 50% of those peaks under real conditions :)
But ultimately the choice of the hardware is usually a commercial decision.
Also you need to keep a minimum of 2 servers at each tier : web / app / even db for failover clustering.
Load testing
It's recommended to have a separate reference testing environment throughout the project lifecycle and post-launch, so you can come back and run dedicated performance tests on the app. Scale this as a smaller version of production: if Prod has 4 servers and Ref has 1, then you test for 25% of the peak transactions, etc.
Tuning & Optimizing
Too often, people throw some expensive hardware together and expect it all to work beautifully. You'll need to tune the hardware and OS for various parameters such as TCP timeouts; these are published by the software vendors, and the tuning has to be done once the software is finalized. Set these tuning parameters on the Ref environment, test, and then decide which ones you need to carry over to Production.
Determine your expected load.
Set up a machine and run some tests against it with a load-testing tool.
How close are you? If you only accomplished 10% of the peak load, with some margin for error, then you know you are going to need some load balancing. Design and implement a solution and test again. Make sure your solution is flexible enough to scale.
Trial and error is pretty much the way to go. It really depends on the individual app and usage patterns.
Test your app with a sample load and measure performance and load metrics. DB queries, disk hits, latency, whatever.
Then get an estimate of the expected load when deployed (go ask the domain expert) (you have to consider average load AND spikes).
Multiply the two and add some just to be sure. That's a really rough idea of what you need.
Then implement it, keeping in mind you usually won't scale linearly and you probably won't get the expected load ;)
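The multiply-and-pad estimate can be sketched as follows; the headroom and scaling-efficiency factors here are illustrative assumptions, not benchmarks:

```python
import math

def machines_needed(expected_peak_rps: float,
                    measured_rps_per_machine: float,
                    headroom: float = 1.5,
                    scaling_efficiency: float = 0.8) -> int:
    """Very rough sizing: take the measured single-machine throughput,
    pad the expected peak with a safety headroom factor, and discount
    per-machine capacity because clusters rarely scale linearly."""
    effective_per_machine = measured_rps_per_machine * scaling_efficiency
    return math.ceil(expected_peak_rps * headroom / effective_per_machine)

# Assumed inputs: a measured 150 req/s per box and an expected 700 req/s peak.
print(machines_needed(expected_peak_rps=700, measured_rps_per_machine=150))  # → 9
```

Treat the result as a starting point for the next load test, not a final answer; as noted above, real usage patterns will almost certainly differ from the estimate.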