I've been using EC2 to create SSH tunnels to test the rendering of our site from different geographical regions. I now need a node in Canada, which EC2 doesn't support. Are there any well-known (preferably free) SSH shell providers in Canada I could use for this?
Thanks,
Rich
Canada is a very big place.
People in Vancouver will get similar results to the US west coast, those in Toronto or Montreal will get east coast results, and geoip will often kick in and block or enable content for anywhere in Canada.
You could try OVH.ca. They're a French company, and have a big data centre on the outskirts of Montreal, Quebec, in a town called Beauharnois, QC (the facility goes by the code BHS).
They generally have cheap dedicated servers, starting at 29 CAD/month. Be advised that BHS only seems to have one big fibre pipe running into it, which gets damaged every now and then; while it's down, the most you'd get from them is something on the order of 10 kbps or less. Otherwise, I have no problem getting 200,000 kbps (and above) from them.
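Once you have a shell account on a box like that, the same trick you're already using with EC2 applies. A minimal sketch (the host name and target URL are hypothetical placeholders; assumes OpenSSH on the client and `pip install requests[socks]` for SOCKS support in requests):

```python
# Minimal sketch: open a dynamic (SOCKS) SSH tunnel to a shell host in the
# target region, then fetch the site through it to see what local visitors
# are served. Host and URL below are hypothetical placeholders.
import subprocess
import time

import requests

HOST = "you@shell.example.ca"   # hypothetical Canadian shell account
tunnel = subprocess.Popen(["ssh", "-N", "-D", "1080", HOST])
time.sleep(3)   # crude wait for the tunnel to come up

try:
    proxies = {
        "http": "socks5h://127.0.0.1:1080",
        "https": "socks5h://127.0.0.1:1080",
    }
    html = requests.get("https://www.example.com/", proxies=proxies, timeout=30).text
    print(html[:200])
finally:
    tunnel.terminate()
```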
Related
I have an AWS server in the EU West (Paris) region.
IP: 35.180.120.0
Public DNS of server: ec2-35-180-120-0.eu-west-3.compute.amazonaws.com
When using a visual traceroute tool, the final country shows as USA, not France.
There also seems to be a large number of hops.
Test results: https://www.monitis.com/traceroute/index.jsp?url=quickbus.com&testId=2439438
Any ideas why?
IP geolocation is hard. This is simply a mistake, since hops 13 through 25 are clearly not in the US.
But whatever IP geolocation service Monitis is using simply returns a wrong location. It's probably returning a US location because these IPs belong to Amazon, and Amazon itself is registered in the US.
If you want better geolocation, use a service like db-ip. They're pretty good.
Also, Mat is right. The fact that the locations are approximate is clearly written in the fine print in the screenshot you posted.
I want to geolocate people from their IP. I know there are services online with an API.
I want to know if there is any way to calculate this data on my own (in other words, is there any way to manually map 192.168.0.1 to, say, New York)?
First, "calculate" is not the right word for it. There is nothing to calculate, it is simple mapping of IP to location. First, you would have to map IP blocks to ISPs. Then get the location of the ISPs. Manually. This is a huge task, and you need to constantly update the list, especially with IPv6 fast growing.
I see no reason to do the job yourself. There are APIs; use them. MaxMind does a great job at providing accurate information such as country, region, and ISP name.
MaxMind's free GeoLite databases are probably your best bet.
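By way of illustration, a minimal lookup sketch against one of those databases (assumes you've downloaded GeoLite2-City.mmdb from MaxMind and installed the geoip2 package with `pip install geoip2`; the IP is just an example):

```python
# Minimal sketch: look up a public IP in a local GeoLite2 City database.
import geoip2.database

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    # Must be a public IP; private ranges like 192.168.0.1 won't resolve.
    response = reader.city("8.8.8.8")
    print(response.country.iso_code, response.city.name)
```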
I am using the API to get EC2 spot price history, but I cannot get anything except for the last 90 or so days, and I cannot specify the frequency of observations. Is there a way to get a complete history of spot prices, preferably at minute or hourly frequency?
While not explicitly documented for the DescribeSpotPriceHistory API action, this restriction is at least mentioned for the AWS Management Console (which uses that API in turn); see Viewing Spot Price History:
You can view the Spot Price history over a period from one to 90 days based on the instance type, the operating system you want the instance to run on, the time period, and the Availability Zone in which it will be launched.
Since anybody could have retrieved and logged the entire spot price history ever since this API became available (and no doubt quite a few users and researchers have done just that; the AWS blog even listed some dedicated Third-Party AWS Tracking Sites, though these all appear defunct at first sight), this restriction admittedly seems a bit arbitrary. It is certainly pragmatic from a strictly operational point of view, though: you have all the information you need to base future bids on (especially given that AWS has so far only ever reduced prices, and regularly does so, much to the delight of its customers).
Likewise, there's no option to change the frequency, so you'd need to resort to client-side code for the hourly aggregation.
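For the retrieval itself, a minimal boto3 sketch that pages through whatever history the API still holds; the region, instance type, and product description are arbitrary example choices:

```python
# Minimal sketch: pull the available spot price history (at most ~90 days)
# via DescribeSpotPriceHistory, using a paginator to walk all result pages.
from datetime import datetime, timedelta

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_spot_price_history")

records = []
for page in paginator.paginate(
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
    InstanceTypes=["m1.small"],
    ProductDescriptions=["Linux/UNIX"],
):
    records.extend(page["SpotPriceHistory"])

print(len(records), "price observations")
```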
This website has resampled EC2 spot price histories for some regions; you may access them via a simple API directly from your Python script:
http://ec2-spot-prices.ai-mmo-games.de/
I hope this helps.
AWS only provides 90 days of history. And the data is raw, i.e., not normalized by hour or even minute, so there are sometimes holes in the data.
One approach would be to pull the data into an IPython notebook and use pandas' excellent time-series tools to resample by minute, 5 minutes, etc. Here's a short tutorial:
https://medium.com/cloud-uprising/the-data-science-of-aws-spot-pricing-8bed655caed2
Here are more details on using pandas for time-series resampling:
http://pandas.pydata.org/pandas-docs/stable/timeseries.html
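A minimal sketch of that resampling step; the two inline observations stand in for the records the API would normally return:

```python
# Minimal sketch: turn raw spot-price observations into an hourly series.
# `records` would normally come from DescribeSpotPriceHistory; two made-up
# observations stand in for it here.
import pandas as pd

records = [
    {"Timestamp": pd.Timestamp("2017-06-01 00:12", tz="UTC"), "SpotPrice": "0.0312"},
    {"Timestamp": pd.Timestamp("2017-06-01 03:47", tz="UTC"), "SpotPrice": "0.0290"},
]

df = pd.DataFrame(records)
df["SpotPrice"] = df["SpotPrice"].astype(float)   # the API returns prices as strings
hourly = (
    df.set_index("Timestamp")
      .sort_index()["SpotPrice"]
      .resample("1H")
      .mean()
      .ffill()   # carry the last price through hours with no observations
)
print(hourly)
```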
hope that helps...
I live in city X, but when I try to get my location via IP, all the "find location by IP" websites point to city Y. Yet some ads ("Hang out tonight with girls in city X") know my location precisely. How is this possible? Is there some kind of data, a database of IPs, that those ad sites have?
There is no such thing as a precise location from an IP... the quality of any such service never reaches 100%... as you write, there are several different databases out there, each with some very good and some rather weak spots... some databases are updated regularly, some aren't, etc.
Those ads use databases which just happen to have their weak spots somewhere you don't live...
I have never come across any such service that told my city correctly (although it is not small)... they are off by 20-400 miles, sometimes even claiming that I am in a very small city far away...
Mostly you can tell the country correctly... although even that can be fooled by a proxy/VPN/anonymizer...
For some insight see:
http://www.private.org.il/IP2geo.html
http://ipaddressextensions.codeplex.com/
http://software77.net/geo-ip/
http://jquery-howto.blogspot.com/2009/04/get-geographical-location-geolocation.html
A rather special and different case is this:
One rather precise way to tell the location is when you use a device (usually a mobile phone)... these have several sources available (like tower locations and GPS)... another point is the databases Google and Apple build from anonymized data collected by phones... they basically aggregate data on the towers, GPS coordinates, and WLAN hotspots/access points reachable... this way they can (with a small margin of error) tell the location from the WLAN data (like MAC addresses) alone...
The use will be to serve dynamic content from data on S3. You can make up whatever definition of "normal" you think is reasonable.
What about small, medium, and large instances?
Ok. People want some data to work with, so here:
The web service is about 100 KB at the start, and uses AJAX, so it doesn't have to reload the whole page much, if at all. When it loads the page, it will send between 20 and 30 requests to the database (S3) to get small chunks of text (like comments). The average user will stay on the page for 10 minutes, translating to about 100 KB at the outset and about 400 KB more through requests. Assume that hit volume is the same at night and day.
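Back-of-the-envelope, those figures already pin down the bandwidth side of the sizing (a rough sketch; the concurrent-user count is a made-up assumption, not something stated above):

```python
# Rough sizing arithmetic from the figures above; the concurrent-user
# count is a hypothetical assumption used only to illustrate.
kb_per_visit = 100 + 400            # initial load + AJAX traffic, in KB
visit_seconds = 10 * 60             # average time on page
kb_per_sec_per_user = kb_per_visit / visit_seconds   # ~0.83 KB/s

concurrent_users = 1_000            # hypothetical load
total_kbps = concurrent_users * kb_per_sec_per_user * 8   # ~6,700 kbps
print(f"{kb_per_sec_per_user:.2f} KB/s per user, ~{total_kbps:,.0f} kbps total")
```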
It depends on what you're serving the content with and how, not to mention how often those users will be accessing it, the size and type of the content, etc. There's essentially not one bit of information you've provided that lets us answer your question in any sort of meaningful way.
As others have said, this might require testing under your exact conditions. Fortunately, if you're willing to go as far as setting up a test version of your server setup, you can spawn instances that simulate users. Create a bunch of these test instances, and run Apache's ab benchmarking tool on them, directing them at your test site. If the instances are within the same availability zone as your test site, you won't be charged for bandwidth, just by the hour for the running instances. Run a test for under an hour, shutting down the test instances afterward, and it will cost you very little to organize this stress test.
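If you'd rather script the load generation than shell out to ab, a rough equivalent in Python (the URL, request count, and concurrency are made-up placeholders):

```python
# Minimal load-test sketch in the spirit of `ab -n 200 -c 20`: fire a fixed
# number of requests at the test site from a pool of concurrent workers and
# report the throughput. The URL is a hypothetical test target.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://ec2-test-instance.example.com/"   # hypothetical test target
N_REQUESTS = 200
CONCURRENCY = 20

def fetch(_):
    return requests.get(URL, timeout=10).status_code

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(fetch, range(N_REQUESTS)))
elapsed = time.time() - start

print(f"{N_REQUESTS / elapsed:.1f} requests/sec, {statuses.count(200)} returned 200")
```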
As one data point, running the Apache ab tool locally on my small instance, which is serving up a database-heavy Drupal site, it reported the ability of the server to handle 45-60 requests per second. I'm assuming that ab is a reasonable tool for benchmarking, and I might be wrong there, but this is what I'm seeing.
As a suggestion, not knowing too much about your particular case, I'd move your database to an Elastic Block Store (EBS) volume. S3 is not really intended to host databases, and the latency it has might kill your performance. EBS volumes can easily be snapshotted to S3 for backup, if that's what you're worried about.
One can argue that, properly designed, it doesn't matter how many users an instance can support. Ideally, when your instance is saturated, you fire up a new instance to manage the traffic.
Obviously, this grossly complicates the deployment and design.
But beyond that, an EC2 instance is effectively a low-end Linux box (depending on which model you choose).
Let's rephrase the question: how many users do you want to support?