We have ArcGIS Server 9.3.1, which already has load balancing (SOM), and F5 hardware load balancing. Is it good practice to have both of them working, or to use only one of them?
The SOM already does load balancing, such as round robin. The F5 will never know about a SOC failure, so you need some F5 configuration to check that the SOC is working properly; see the F5 manual for how this can be done.
I want to simulate a high traffic load on my web app on localhost. I found lots of simulators for Java and Python, but none for PHP. Does anybody know of such (open-source) simulators that could help me develop my application?
I don't see why you need a load tester "for PHP". Any load tester like Apache Bench, Siege, etc. will simulate your desired request/connection rates and probably do a better job than anything written in PHP.
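For example, either of these will hammer a local endpoint at whatever rate you like (the URL and numbers are placeholders):

    # Apache Bench: 1,000 requests total, 50 concurrent
    ab -n 1000 -c 50 http://localhost/index.php
    # Siege: 50 concurrent users for 30 seconds
    siege -c 50 -t 30S http://localhost/index.php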
So my web app is hosted on Amazon using OpsWorks.
Currently I have one dedicated instance for PostgreSQL, one instance as my web server, and another dedicated instance running Redis for caching purposes.
I would like to improve performance by adding Varnish. Given my architecture, where should I install Varnish? Also take into account that I may soon outgrow this solution and be using more web servers behind a load balancer.
Any help would be appreciated!
Varnish will always be quickest if you run it with memory storage, so the machine with the most free memory would be a good pick. Even if you don't have enough to spare for the storage, it also uses quite a bit of memory for connection handling once you reach somewhat higher traffic.
Further along the road, when you want a load balancer, a good start would be to use a dedicated server for Varnish, which can do load balancing just fine as well. It's not as efficient as a lightweight dedicated load balancer, but until you need multiple Varnish servers (way down the road) there is generally no point in putting anything in front of it.
You should put Varnish in front of your Apache web server. It's fine for it to reside on the web server itself; just point your load balancers at Varnish.
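As a minimal sketch (the ports and cache size here are assumptions, not your actual setup), Varnish on the web server could be started with memory storage like this:

    # Listen on :80, forward cache misses to Apache on :8080, keep a 1 GB cache in RAM
    varnishd -a :80 -b 127.0.0.1:8080 -s malloc,1G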
It takes about 20 seconds to log in to the admin backoffice. During this time the load spikes (from 0.02 to 0.20).
Yet I see no "slow queries" in the MySQL log, and I've optimized all the tables.
Is there some way to hunt down the source of this slowdown?
Which browser are you using? The Magento admin can be very slow in some browsers, e.g. Internet Explorer. I recommend using Chrome or Firefox for admin work, especially when adding/editing products. If you are generally looking at ways to speed up Magento, then here is a good place to start:
https://web.archive.org/web/20140327173649/http://www.gxjansen.com/101-ways-to-speed-up-your-magento-e-commerce-website/
The admin panel pulls in a lot of files and is heavily JavaScripted. Turning on things like mod_deflate and mod_expires can tell your browser not to download these files as often.
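On a Debian-style Apache install, something like this is all it takes (the config filename and cache lifetimes are just assumptions):

    # Enable compression and client-side expiry headers
    a2enmod deflate expires
    cat > /etc/apache2/conf.d/admin-static.conf <<'EOF'
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
    ExpiresActive On
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
    EOF
    service apache2 reload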
Magento is extremely resource-intensive, and the admin interface does not use any caching. A lot depends on the hardware your site is on. For any site with decent traffic and heavy admin use, the baseline should be a dedicated dual quad-core server with 24-32 GB of RAM, especially if you have the DB and web on the same server. Any shared server just will not cut it. Cloud-based solutions vary; we have had some good results splitting the DB from load-balanced web in a cloud environment. Make sure you have Apache and MySQL properly tuned based on the memory of the server, and use a PHP accelerator like APC.
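For APC, a minimal sketch looks like this (the config path and cache size are assumptions; some older APC builds want the size as a bare number of megabytes):

    # Give APC a 128 MB shared-memory cache for opcodes
    cat >> /etc/php5/conf.d/apc.ini <<'EOF'
    extension=apc.so
    apc.shm_size=128M
    EOF
    service apache2 reload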
We've been fighting with HAProxy for a few days now in Amazon EC2; the experience has so far been great, but we're stuck on squeezing more performance out of the software load balancer. We're not exactly Linux networking whizzes (we're a .NET shop, normally), but we've so far held our own, attempting to set proper ulimits, inspecting kernel messages and tcpdumps for any irregularities.
So far though, we've reached a plateau of about 1,700 requests/sec, at which point client timeouts abound (we've been using and tweaking httperf for this purpose). A coworker and I were listening to the most recent Stack Overflow podcast, in which the Reddit founders note that their entire site runs off one HAProxy node, and that it so far hasn't become a bottleneck. Ack! Either they're somehow not seeing that many concurrent requests, we're doing something horribly wrong, or the shared nature of EC2 is limiting the network stack of the EC2 instance (we're using a large instance type). Considering that both Joel and the Reddit founders agree that the network will likely be the limiting factor, is it possible that's the limitation we're seeing?
Any thoughts are greatly appreciated!
Edit: It looks like the actual issue was not, in fact, with the load balancer node! The culprit was the nodes running httperf. As httperf builds and tears down a socket for each request, it spends a good amount of CPU time in the kernel. As we bumped the request rate higher, the TCP FIN timeout (60 s by default) was keeping sockets around too long, and the default ip_local_port_range was too low for this usage scenario. Basically, after a few minutes of the client (httperf) node constantly creating and destroying new sockets, it ran out of unused ports, and subsequent 'requests' errored out at that stage, yielding low request/sec numbers and a large number of errors.
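For reference, the kind of httperf run we were doing looks roughly like this (the host and rates are placeholders); each connection is opened and torn down again, which is exactly what eats the client's ephemeral ports:

    # 20,000 connections total, opened at 2,000 per second, one request each
    httperf --server lb.example.com --port 80 --uri /health --num-conns 20000 --rate 2000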
We had also looked at nginx, but we've been working with RightScale, and they've got drop-in scripts for HAProxy. Oh, and we've got too tight a deadline [of course] to switch out components unless it proves absolutely necessary. Mercifully, being on AWS allows us to test out another setup using nginx in parallel (if warranted), and make the switch overnight later on.
This page describes each of the sysctl variables fairly well (ip_local_port_range and tcp_fin_timeout were tuned, in this case).
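As a sketch, the tuning on the client node looks like this (the values here are illustrative, not necessarily what we ended up with):

    # Widen the ephemeral port range and recycle FIN-WAIT sockets sooner
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"
    sysctl -w net.ipv4.tcp_fin_timeout=30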
Not answering the question directly, but EC2 now supports load balancing through Elastic Load Balancing rather than running your own load balancer in an EC2 instance.
EDIT: Amazon's Route 53 DNS service now offers a way to point a top-level domain at an ELB with an "alias" record. Since Amazon knows the current IP address of the ELB, it can return an A record for that current IP rather than having to use a CNAME record, while still being free to change the IP from time to time.
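With today's AWS CLI, creating such an alias record looks roughly like this (all IDs and names below are placeholders; the ELB's own hosted zone ID comes from Amazon):

    aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "example.com.",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z35EXAMPLE",
            "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'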
Not really an answer to your question, but nginx and Pound both have good reputations as load balancers. WordPress just switched to nginx with good results.
But more specifically, to debug your problem: if you aren't seeing 100% CPU usage (including I/O wait), then yes, you are network bound.
EC2 internally uses a gigabit network; try using an XL instance so you have the underlying hardware to yourself and don't have to share that gigabit network port.
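One quick way to check, assuming the standard Linux tools are on the instance:

    # Watch user/system CPU and I/O wait once per second during the test
    vmstat 1
    # Per-core view; the %steal column matters on shared EC2 hosts
    mpstat -P ALL 1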
Yes, you could use an off-site load balancer, and on bare metal LVS is a great choice, but your latency will be awful! Rumour has it that Amazon is going to fix the CNAME issue. However, they are unlikely to add HTTPS, in-depth or custom health checks, feedback agents, URL matching, or cookie insertion (and some people with good architecture would say quite right too). That's why Scalr, RightScale and others are using HAProxy, usually two of them behind a round-robin DNS entry. Here at Loadbalancer.org we are just about to launch our own EC2 load balancing appliance:
http://blog.loadbalancer.org/ec2-load-balancer-appliance-rocks-and-its-free-for-now-anyway/
We are planning on using SSH scripts to integrate with autoscaling in the same way RightScale does; any comments on the blog are appreciated.
Thanks
I would look at switching to an off-site load balancer, not in the cloud, and running something like IPVS on top of it. [The reason it would be off Amazon's cloud is kernel stuff.] If Amazon doesn't limit the source IP of packets coming out of their network, you could go with a unidirectional load balancing mechanism. We do something like this, and it gets us about 800,000 simultaneous requests [though we don't deal with latency]. I would also say use "ab2" (Apache Bench), as it is a little more user-friendly and easier to use, in my humble opinion.
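A minimal sketch of that kind of unidirectional (direct-routing) setup with ipvsadm; all addresses are placeholders, and the real servers additionally need the VIP configured locally with ARP suppressed:

    # Virtual service on the balancer, round-robin scheduling
    ipvsadm -A -t 203.0.113.10:80 -s rr
    # Real servers via direct routing (-g): replies go straight back to the client
    ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -g
    ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.12:80 -g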
Even though your issue is solved: KEMP Technologies now have a fully blown load balancer for AWS. It might save you some hassle.
Can you suggest how to create a test environment to simulate various types of bandwidths and traffic in a web app?
Or maybe an open source program which does this against localhost?
I think this is a very important subject when programming web apps, but it is not a usual topic. The only way I can imagine creating such an environment is to use some kind of proxy in a local network, but before I start digging into the Squid documentation I would like to hear your suggestions.
If you're using Apache, you may want to take a look at Apache's ab.
There are two approaches to shaping network traffic to simulate a network link:
1. Run some software on the client or server that sits somewhere in the networking stack and shapes the traffic between the app and the network interface.
2. Run the traffic shaping software on a dedicated machine with two network interfaces through which your traffic is routed.
(2) is a better solution if you don't want to install software on the client or server (and possibly impact performance), but requires more hardware fiddling.
Something else to think about is which shaping parameters can be simulated. Most tools do delay and packet loss; some do jitter and bandwidth limiting as well. Some solutions can selectively filter traffic (for instance by port number, or TCP vs. UDP).
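As a sketch of approach (2) on Linux, the middle box just needs forwarding enabled plus an impairment rule on its egress interface (the interface name and numbers are assumptions):

    # Let the box route between its two interfaces
    sysctl -w net.ipv4.ip_forward=1
    # Impair everything leaving toward the server side: 100 ms delay, 1% loss
    tc qdisc add dev eth1 root netem delay 100ms loss 1%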
Here is a list of some of the systems I've found:
Open Source or Freeware
DummyNet is an open source BSD Unix-based system for dedicated devices. It is not clear whether the software is being actively maintained.
NistNet is an open source Linux-based system for dedicated devices. The software has not been actively maintained for several years.
Commercial
Apposite Technologies sell dedicated hardware solutions for simulating WAN links, with a web-based GUI for configuring the settings and collecting traffic measurements.
East Coast DataCom sell dedicated hardware simulators for simulating routers and modems.
Itrinegy offer both dedicated device solutions, and solutions for running on clients or servers.
Network FX offer several dedicated device products for simulating network impairments between the client & server
NetLimiter is a client side system that allows throttling of individual applications, and includes a firewall.
Shunra Software offer a range of products, from high end enterprise WAN simulation and testing, to a simple client-resident emulator.
The closest I can think of is doing something similar with VE Desktop from Shunra.
Simulating High Latency and Low Bandwidth in Testing of Database Applications
Shunra VE Desktop Standard is a Windows-based client software solution that simulates a wide area network link so that you can test applications under a variety of current and potential network conditions – directly from your desktop.
I wrote a PHP script a while back which used cURL to run a sequence of page requests against my server representing a typical usage scenario. I had it output the time the server took to respond to each request. I then had another script which spawned a bunch of these test-case scripts simultaneously for a sustained period and collected the results into a file, which I could then look at in a spreadsheet to see average times. This way I could simulate the desired number of users hitting the site. The limitations are that you need to run the test script on a different server from the web server, and that the client machine can become too loaded to give meaningful results past a certain point. I've since left the job, otherwise I would paste the scripts here.
If you are running a Linux box as your server, a Linux box as your client, or have the capability to put a Linux router (perhaps a VM) between your client and server, you can use NetEm.
NetEm is a Linux TC (Traffic Control) discipline which can delay (i.e. add latency to) packets leaving a host. Although it's tricky to set up clever rules (e.g. add latency to some traffic but not others), it's easy to add a simple "delay everything leaving the interface by 50 ms" type of rule, and some recipes are provided.
By sticking a Linux VM between your client and server, you can simulate as much latency as you like, and you can turn it on and off dynamically. Linux has other TC disciplines which can be combined with NetEm to restrict bandwidth (though the script to set this up can be somewhat complicated). NetEm can also randomly drop packets.
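For example, a session on such a box might look like this (eth0 stands in for your egress interface):

    # Delay everything leaving eth0 by 50 ms
    tc qdisc add dev eth0 root netem delay 50ms
    # Later, also drop 1% of packets at random
    tc qdisc change dev eth0 root netem delay 50ms loss 1%
    # Turn it all off again
    tc qdisc del dev eth0 root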
I use it and it works a treat :)
Web Application Stress Tool (WAST) from Microsoft is what you need.
http://www.microsoft.com/downloads/details.aspx?familyid=e2c0585a-062a-439e-a67d-75a89aa36495&displaylang=en
I haven't used it for years (lack of need, not because I'd found anything else), but xat webspeed would be the first thing I would point toward.
As other people have mentioned, Apache's ab (comes with Apache, so you probably have it already) is good.
Other good options are:
HP's LoadRunner
Apache Jakarta's JMeter
Tsung (if you want to get your Erlang on)
I personally like ab and JMeter the best.
We use LoadRunner to do bandwidth and traffic simulation in our app. LoadRunner can start agents on various machines, and you can simulate one machine as running on a dial-up modem vs. another on DSL vs. another on cable internet.
We also use LoadRunner to simulate various kinds of traffic conditions, from 10-user runs to 500-user runs. We can also insert think times in the script to simulate a real user executing the HTTP requests. The best part is that it comes with a recording studio which plugs into Internet Explorer, so you can record a whole scenario/use case, from as simple as hitting one page up to a full-blown script of 50-60 pages or more.
I found this little Java program that works great: sloppy.
It's not a professional solution, but it works for simple tests; I guess it uses Java streams and buffers to slow down the connection.
Have you looked at Tsung? It's a great utility for seeing if your website will scale in the event of attack, I mean massive popularity. We use it for our web frontend, and for our internal systems too.
If you're interested in performing your tests out of your browser, there is also a really great Firefox plug-in.
Do not forget about Wanulator (http://www.wanulator.de/).
The name Wanulator comes from "WAN" and "simulator". This pretty much describes what the software does: it simulates different Internet conditions such as delay or packet loss. Furthermore, it simulates user access line speeds, e.g. modem, ISDN or ADSL.
Wanulator is currently packaged as a Linux boot CD based on SLAX. This gives you a full out-of-the-box experience: you can turn any PC into a test system in a blink, just by booting the Wanulator CD. The package already includes useful client software such as a web browser and a network sniffer (Wireshark). If the PC has two network interfaces, the system can run as an intermediate system between your server and your client, like a switch, without any configuration hassles.