I'm using a virtual server in Texas, USA, whose time is synchronized with "time.windows.com". I also have 3 PCs in Quebec, Canada, synchronized against the same Internet time server. Unfortunately, my server in Texas is 40 seconds behind my 3 other PCs in Canada.
All machines use the same time zone (UTC-5). The only difference is the country set in "Region and Language/Location".
Can someone explain how this is possible?
Thanks
See the following article; it explains how time synchronization works (and doesn't):
http://www.pretentiousname.com/timesync/
In short:
The time synchronization job only runs once a week, and because the internal clocks of computers are unreliable, you get a time difference.
The solution is to create a special sync task that runs more often to keep the time up to date (all explained in the article).
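To see how far a machine's clock has actually drifted (before and after setting up that task), a minimal sketch like this can help. It is Python with only the standard library, assumes outbound UDP to port 123 is allowed, and uses time.windows.com because that is the server both machines sync against:

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def clock_offset(server="time.windows.com"):
    """Send one SNTP request and return (server time - local time) in seconds."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # The transmit timestamp's seconds field sits at bytes 40-43 of the reply
    server_secs = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
    return server_secs - time.time()

print("Local clock is off by %.1f seconds" % clock_offset())

Run on both the Texas server and the Quebec PCs, this shows each machine's individual drift and makes the 40-second gap directly visible.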
I am trying to simulate 5000 virtual users using Locust, with each user sending a message every 5 seconds. What EC2 specifications are needed in order to achieve this with some level of concurrency?
The number of users is not so important (in my experience, at least when talking about less than a couple of thousand users per worker); the only thing that matters is the number of requests per second.
Because performance depends on exactly what your tests do, it is impossible to give a hard number. But the manual gives some best-case figures for what you can do with FastHttpUser/HttpUser:
https://docs.locust.io/en/stable/increase-performance.html#increase-locust-s-performance-with-a-faster-http-client
It is impossible to say what your particular hardware can handle, but in a best case scenario you should be able to do close to 5000 requests per second per core, instead of around 850 for the normal HttpUser (tested on a 2018 MacBook Pro i7 2.6GHz)
If your test plan is reasonably simple, you should be fine running at ~50% of that load. At 5000 users sending one message every 5 seconds, you are only generating about 1000 requests per second, which should fit comfortably on a single core with FastHttpUser.
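As a rough sketch of what such a test plan could look like (the /messages endpoint and payload below are placeholders, not anything from your setup):

from locust import FastHttpUser, constant, task

class MessageSender(FastHttpUser):
    wait_time = constant(5)  # each simulated user sends one message every 5 seconds

    @task
    def send_message(self):
        # Placeholder call -- substitute your real message endpoint and body
        self.client.post("/messages", json={"body": "ping"})

Run it with something like locust -f locustfile.py --users 5000 --spawn-rate 100 --host https://your-target and watch worker CPU; add one worker per core if a worker saturates.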
While using Nagios with multiple hosts spread over the network, host status shows a noticeable lag and takes a long time to be reflected in the Nagios server CGI. What is the optimal NRPE/Nagios configuration to speed up status updates for a distributed host environment?
In my case I use Nagios Core 4.1
NRPE 1.5
server/clients: Amazon EC2
The GUI is usually only updated once each minute (automatically), though clicking refresh can provide you with 'nearly' the latest information. I say nearly because there is a distinct processing loop inside the Nagios core that causes it to never be real time. NRPE is going to run at the speed of your network connection - it does little else besides sending and receiving tiny amounts of data. About the only delay here is the time it takes to actually perform the check and send back the response - which, of course, has way too many factors to mention. Try looking at the output of
[nagioshome]/bin/nagiostats
There are several entries that tell you:
'Latency' - the time between when the check was scheduled to start, and the actual start time.
'Execution Time' - the amount of time checks are actually taking to run.
These entries will have three numbers, which are: Min / Max / Avg.
High latency numbers (in my book that means Avg is greater than 1 second) usually mean your Nagios server is overworked. There are a few things you can do to improve latency times, and these are outlined in the 'nagios.cfg' file. This latency has nothing to do with network speed or the speed of NRPE - it is primarily hardware speed. If you're already using the optimal values specified in nagios.cfg, then it's time to find some faster hardware.
High execution times (for me an Avg greater than 5 seconds) can be blamed on just about everything except your Nagios system. They can be caused by faulty networks (improper packet routing), overloaded networks, faulty and/or poorly designed checks, slow target systems, ... the list is endless. Nothing you do with the Nagios and/or NRPE configs will help lower these values. Well, you could disable NRPE's encryption to improve wire time; but if you have encryption enabled in the first place, it's not likely you'd want it disabled.
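If you would rather trend those two numbers than eyeball them, nagiostats' MRTG mode can be wrapped in a few lines. The sketch below is Python and assumes the AVGACTSVCLAT / AVGACTSVCEXT data variables (average active service check latency and execution time, reported in milliseconds) and a default install path - adjust both to your setup:

import subprocess

NAGIOSTATS = "/usr/local/nagios/bin/nagiostats"  # your [nagioshome]/bin/nagiostats

def check_stats(warn_latency_ms=1000, warn_exec_ms=5000):
    """Fetch avg active service check latency/execution time via MRTG mode."""
    out = subprocess.run(
        [NAGIOSTATS, "--mrtg", "--data=AVGACTSVCLAT,AVGACTSVCEXT"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    latency_ms, exec_ms = float(out[0]), float(out[1])
    if latency_ms > warn_latency_ms:
        print("High latency (%.0f ms): the Nagios server may be overworked" % latency_ms)
    if exec_ms > warn_exec_ms:
        print("High execution time (%.0f ms): look at the network/target hosts" % exec_ms)
    return latency_ms, exec_ms

check_stats()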
I want to send my current location to a PHP web service every 5 minutes, even if my application is running in the background. I tried to implement this, and it works well while my application is in the running state, but when I put the application in the background it stops sending data. Can anybody tell me how I can keep my application running in the background?
By "running in background", do you mean running when under the lock screen? If this is the case, then you need to set PhoneApplicationService.Current.ApplicationIdleDetectionMode = IdleDetectionMode.Disabled;
The post Running a Windows Phone Application under the lock screen by Jaime Rodriguez covers the subject well.
However, if you're talking about running an application that continues to run while the user uses other applications on the device, then this is not possible. In the Mango build of the operating system you can create background agents, but these only run every 30 minutes and only for 15 seconds as described on MSDN.
There is a request on the official UserVoice forum for Windows Phone development to Provide an agent to track routes, but even if adopted, this would not be available for quite some time.
Tracking applications are the bulk of what I do for a living, and the prospect of using WP7 like this is the primary reason I acquired one.
From a power consumption perspective, transmitting data is the single most expensive thing you can do, followed closely by sampling the GPS and accelerometers.
To produce a trace that closely conforms to roads, you need a higher sampling rate. WP7 won't let you sample more than once per second. This is (just barely) fast enough to track a motor vehicle, and at this level of power consumption the battery will last for about an hour assuming you log the data on the phone and don't attempt to transmit it.
You will also find that if you transmit for every sample, your sampling interval will be at least 15 seconds. Running the web call on another thread won't help because it will take more than one second to complete and you will run out of sockets in less than a minute with a one second sample interval.
There are solutions to all of these problems. For example, in a motor vehicle you can connect to vehicle power and run hot. You can batch and burst your data on a background thread.
These, however, are only the basic problems faced by every tracker designer. More interesting are the questions of proximity in space and time, measurement of deviation from a route, how to specify routes and geofences in a time dependent manner, how to associate them into named sets for rule evaluation purposes and how to associate rules with named sets of routes and geofences.
And then there is periodic clustering, which introduces all the calendrical nightmares that are too much for your average developer of desktop software. To apply the speed limit for a school zone you need to know the time zone, daylight savings, two start and two stop times and the start and end dates for school holidays in that region.
If you are just doing this for fun or as some kind of hiking trace, then a five minute interval will impose much milder power demands than one second sampling, but I still suggest batch and burst because it means you can keep tracking in locations that don't have comms (see the sketch below).
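As a platform-neutral illustration of that batch-and-burst pattern (a Python sketch; the upload callback, batch size and flush interval are all assumptions to adapt to your platform):

import time
from collections import deque

BATCH_SIZE = 60          # e.g. one minute's worth of 1 Hz samples
FLUSH_INTERVAL = 300.0   # force a burst at least every 5 minutes

class BatchTracker:
    """Queue position fixes locally and transmit them in occasional bursts."""

    def __init__(self, upload):
        self.upload = upload           # callable that sends a list of fixes
        self.buffer = deque()
        self.last_flush = time.monotonic()

    def record(self, lat, lon, timestamp):
        self.buffer.append({"lat": lat, "lon": lon, "ts": timestamp})
        full = len(self.buffer) >= BATCH_SIZE
        stale = time.monotonic() - self.last_flush >= FLUSH_INTERVAL
        if full or stale:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        try:
            self.upload(list(self.buffer))  # one radio burst instead of many small sends
            self.buffer.clear()
            self.last_flush = time.monotonic()
        except OSError:
            pass  # no comms right now: keep the batch and retry on the next flush

The radio powers up once per burst instead of once per sample, and fixes logged while out of coverage are simply delivered late rather than lost.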
Ok, so the situation is as follows.
I have a server with services for a game; a particular command from the server sends a timestamp for when the next game round should commence. To get this perfectly synced on all connected clients, I also have a web service that returns a timestamp of the server's current time.
What I know: the time between request sent and answer received.
What I don't know: where the latency lies - in client processing, server processing, or bandwidth issues.
What is the best practice to get a reasonable result here? I guess GPS must have solved this in some fashion, but I've been unable to find a good pattern.
What I do now is add half the latency of the request to the server timestamp, but it's not quite good enough. This may be because the time between send and receive can be as high as 11 seconds.
Suggestions?
There are many common solutions for syncing time between machines, including the proper PLL implementation done by ntpd. This is useful to you if you can change the machine's local time. If not, you should probably do more or less what you did, but drop sync points where the latency is unreasonable.
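Combining the half-the-latency estimate with dropping bad sync points looks roughly like this (a Python sketch; get_server_time stands in for a call to your timestamp web service):

import time

MAX_ROUNDTRIP = 2.0  # discard sync points slower than this (seconds)

def estimate_offset(get_server_time, samples=8):
    """Estimate (server clock - client clock) from several low-latency samples."""
    offsets = []
    for _ in range(samples):
        t0 = time.time()               # local time when the request is sent
        server_ts = get_server_time()  # timestamp returned by the web service
        t1 = time.time()               # local time when the answer arrives
        roundtrip = t1 - t0
        if roundtrip > MAX_ROUNDTRIP:
            continue                   # latency unreasonable: drop this point
        # Assume the delay is split evenly between request and response
        offsets.append(server_ts - (t0 + roundtrip / 2.0))
    if not offsets:
        raise RuntimeError("no usable sync points")
    offsets.sort()
    return offsets[len(offsets) // 2]  # the median resists remaining outliers

# local_round_start = round_start_server_ts - estimate_offset(get_server_time)

With round trips of up to 11 seconds, the asymmetry of a single sample can be huge, so taking the median of several filtered samples matters more than any single correction.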
The best practice is usually not to synchronise the absolute times but to work with relative times instead.
Curious as to what 99.95% uptime REALLY means; is it really going to go down 7 minutes a month? Please post your longest/average uptimes on EC2, thanks.
Usually uptime is calculated on a yearly basis. So if you have a Service Level Agreement for 99.95%, this means:
365 * 0.0005 = 0.1825 days, or about 4.38 hours, of allowed downtime
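The same arithmetic for other billing periods (a quick back-of-the-envelope in Python):

# Allowed downtime under a 99.95% SLA for different period lengths
for label, hours in [("day", 24), ("month", 30 * 24), ("year", 365 * 24)]:
    allowed_minutes = hours * 60 * (1 - 0.9995)
    print(label, round(allowed_minutes, 1), "minutes")
# -> day 0.7 / month 21.6 / year 262.8 minutes (the ~4.38 hours above)

So 99.95% works out to about 21.6 minutes per month, not 7.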
If during a year of service there is an outage and your system is down for more than that, then you are entitled to compensation.
As for your question: I have a server running non-stop on EC2 for about 3 months now. I would say that their uptime is good, but if you have a mission-critical application you definitely need a fail-over solution. A good uptime only means that they will be able to respond to an outage quickly. Even a 99.9999% uptime won't save you if you aren't prepared for an outage.
Read the SLA carefully (http://aws.amazon.com/ec2-sla/): they only count "Region Unavailable" as downtime, and what's more, they only count it as downtime if the region is down for 5 consecutive minutes.
“"Annual Uptime Percentage” is calculated by subtracting from 100% the percentage of 5 minute periods during the Service Year in which Amazon EC2 was in the state of “Region Unavailable.”
By my count this means any downtime of less than 4 minutes is not countable. Also, if they do break the SLA, they are only on the hook for 10% of the bill for the month in which you had the largest downtime.
So if they were down for all of January and your bill was $100, they would apply a $10 credit to your account.
I would have a hard time convincing my boss that this is a serious product with an SLA like that.
SLAs are useless. They only measure how much risk the company is willing to take on and have no bearing on actual uptime. I've seen SLAs with heavy penalties offered when the company knew they could not meet them, just to land the sale.
I have one client with 400+ days of EC2 uptime and another with 300+ days, as measured by web pulse; this is by far the most reliable service I've worked with.
For my single instance running in the US-East availability zone, 9 months, 0 downtime.
Since Amazon switched to provide an SLA, I've never had an instance go down on me. When I've had instances go down in the past, Amazon has always sent a message informing me that the instance is degraded before it actually disappeared, so I've had time to start up a new instance.
The previous answer makes a good point, though; EC2's service model dictates that you write your apps to handle failover to a new server if you're not prepared for extended down time.
conrad#papa ~ $ uptime
04:42:36 up 495 days, 8:51, 8 users, load average: 0.02, 0.02, 0.00
Checking out the AWS Service Health Dashboard will give you a good idea of any current or past issues. My experience is that AWS uptime is better than most "traditional" hosting options (even a full-blown redundant RackSpace setup...).
However, simply going with AWS for uptime is like getting a car for the keychain (ok, almost... ;)). With an architecture utilizing AWS, the big benefit is scaling (without upfront costs).
SLA... Guaranteed uptime...
These are all very nice taglines. But when the servers aren't available for an hour (March 1, 2012, in the EU region) and the clients start calling, it won't help you that they still have 300 days of uptime.
And when lightning struck 1 of their 3 datacenters in the EU, we all found out that they have no off-site redundancy, and that having 3 datacenters doesn't mean a thing.
One must love the phrase "degraded performance", which actually means: "cross your fingers and pray that your data will still be available after the catastrophe passes".
I'm still trying to look for any official/non-official statistics about the availability percentages of all of their datacenters.
No luck thus far...
I've never had downtime on EC2; however, I do keep local backups, make daily images of my machines, and port them to another availability zone, just in case. I use Twilio to alert me with a phone call to all my devices if a machine is unreachable. Then I can just log in to EC2 and fire up a machine in another availability zone; worst case, I'll be down for a few minutes.
Which in my case, is potentially pretty sucky, because my machines are doing 24/7 Forex trading.
My rule: know the potential cost of downtime, and be willing to invest that much in redundancy assuming it will happen - because it will.
That said, EC2 has never let me down. It probably helps that my servers are not in an area of the country where natural disasters are common. If you're in an earthquake zone, tornado alley, or a potential hurricane path, downtime truly is an inevitability.