Testing Server & Client - bash

I want to start testing my server with a couple of clients before moving on to higher numbers.
The current plan is to test latency with a script that takes a number of seconds as an argument, which determines how long the test runs. The script will launch the server and the clients; the ping-testing code on their end is already in place.
My problem is that while the server is easy to launch via a bash script, I don't know of an easy way to also launch Chrome and have it go to the webpage (the client).
Hoping this is an easy script; my scripting experience is limited :]

Chrome will accept URLs on its command line, like so:
chrome http://www.google.com/
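For the timed test described in the question, a minimal sketch might look like the following. It assumes the server is started by a hypothetical ./server binary, the test page lives at http://localhost:8080/, and google-chrome is on the PATH; all three are placeholders for whatever the real project uses.

#!/bin/bash
# Usage: ./run_test.sh <duration_seconds> <num_clients>
# Launches the server, opens N Chrome clients on the test page,
# waits for the requested duration, then shuts everything down.
duration="${1:-60}"              # how long the test runs, in seconds
clients="${2:-2}"                # how many Chrome clients to launch
url="http://localhost:8080/"     # placeholder for your test page

./server &                       # placeholder for your server binary
server_pid=$!
sleep 2                          # give the server a moment to start listening

for ((i = 0; i < clients; i++)); do
  # A separate profile per client makes Chrome open independent instances
  google-chrome --user-data-dir="/tmp/chrome-client-$i" "$url" &
done

sleep "$duration"

# Tear down: stop the Chrome clients (matched by their profile dir) and the server
pkill -f "chrome-client-" 2>/dev/null
kill "$server_pid"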

Related

Running a Selenium WebDriver test in a Remote Desktop Connection is taking a very long time

I am running a Selenium WebDriver test on a Remote Desktop machine using a Maven command. The test takes a very long time to load the URL and log in to the site, whereas when I run the same test locally, both the URL loading and the user login are very quick. Can someone please tell me what the reason for that slowness might be?
In my experience, using a remote VM as the UI test host has always been slower than a local environment, mainly because dedicated VMs lack a GPU and have to render the requested browser(s) on the CPU. If you open your remote machine's monitoring tool, you will most likely see a lot of CPU spikes when the browser launches.
To optimize performance, you can use headless execution (HtmlUnitDriver, PhantomJS) or block certain content from loading, such as images, animations, and videos. When you do this, however, try to keep their placeholders so the page layout is not affected.
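If the tests run against Chrome specifically, one way to try the same idea from a shell is Chrome's headless mode with image loading switched off. The switches below are Chromium flags at the time of writing and are worth verifying against the Chrome build installed on the VM; the URL is a placeholder.

# Fetch a page with headless Chrome, no GPU, and images blocked,
# discarding the rendered DOM.
google-chrome --headless --disable-gpu \
  --blink-settings=imagesEnabled=false \
  --dump-dom "http://your.test.site/" > /dev/null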

How can I programmatically visit a website without using curl?

I'm trying to send a large number of queries to my server. When I open a certain website (with certain parameters), it sends a query to my server, and computation is done on my server.
Right now I'm opening the website repeatedly using curl, but when I do that, the website contents are downloaded to my computer, which takes a long time and is not necessary. I was wondering how I could either open the website without using curl, or use curl without actually downloading the webpage.
Do the requests in parallel, like this:
#!/bin/bash
url="http://your.server.com/path/to/page"
for i in {1..1000} ; do
  # Start curl in background, throw away results
  curl -s "$url" > /dev/null &
  # Probably sleep a bit (randomize if you want)
  sleep 0.1 # Yes, GNU sleep can sleep less than a second!
done
# Wait for background workers to finish
wait
curl still downloads the contents to your computer, but a test where the client does not download the content would not be very realistic anyway.
Obviously the above solution is limited by the network bandwidth of the test machine, which is usually worse than the bandwidth of the web server. For realistic bandwidth tests you would need multiple test machines.
However, especially with dynamic web pages, the bottleneck might not be bandwidth but memory or CPU. For such stress tests, a single test machine might be enough.
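If latency is what you actually want to measure, curl can also report per-request timing itself via --write-out while still discarding the body. The URL, log file name, and request count below are only examples.

#!/bin/bash
# Fire 100 requests and record the total time of each one.
url="http://your.server.com/path/to/page"
logfile="latency.log"

for i in {1..100} ; do
  # -s: silent, -o /dev/null: discard the body,
  # -w '%{time_total}\n': print the total request time in seconds
  curl -s -o /dev/null -w '%{time_total}\n' "$url" >> "$logfile" &
  sleep 0.1
done
wait

# Quick-and-dirty average over all recorded times
awk '{ sum += $1 } END { if (NR) print "avg:", sum / NR, "s over", NR, "requests" }' "$logfile"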

What about server response times when using CGI binaries?

I need to convert some of my Perl CGI scripts to binaries.
But when a 100 KB script is converted into a binary, it becomes about 2-3 MB. That is understandable, since the packer has to bundle in everything needed to execute the script.
The question is about page load times on the server when the scripts are binaries. Say I have a binary Perl script "script" that answers AJAX requests, and that binary weighs about 3 MB; will that affect the AJAX requests? If some users have a slow connection, will they wait for ages until all 3 MB are transferred? Or will the server NOT send the 3 MB to the user, but just the answer (a short piece of XML/JSON or whatever)?
The other case is an HTML page generated by this binary Perl script on the server. The user points the browser at the 3 MB script and should get back an HTML page. Will the user again wait until the whole script has been transferred (every single byte of those 3 MB), or only as long as it takes to load EXACTLY the HTML page (say, 70 KB), with the rest running on the server side only and never making the user wait for it?
Thanks!
Or will the server NOT send the 3 MB to the user, but just the answer (a short piece of XML/JSON or whatever)?
This.
The server executes the program. It sends the output of the program to the client.
There might be an impact on performance by bundling the script up (and it will probably be a negative one) but that has to do with how long it takes the server to run the program and nothing to do with how long it takes to send data back to the client over the network.
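As a trivial illustration of that point, here is a bash CGI script standing in for the packed Perl binary (the path and payload are made up): no matter how large the executable on disk is, the client only ever receives what the program writes to stdout.

#!/bin/bash
# cgi-bin/answer.cgi - hypothetical CGI endpoint.
# The file on disk could be 3 MB or 300 MB; the browser only receives
# the headers and body printed below, a few dozen bytes in total.
echo "Content-Type: application/json"
echo ""
echo '{"status":"ok","answer":42}'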
Wrapping/Packaging a perl script into a binary can be useful for ease of transport or installation. Some folks even use it as a (trivial) form of obfuscation. But in the end, the act of "Unpacking" the binary into usable components at the beginning of every CGI call will actually slow you down.
If you wish to improve performance in a CGI situation, you should seriously consider techniques that make your script persistent to eliminate startup time. mod_perl is an older solution to this problem. More modern solutions include FCGI or wrapping your script in its own mini web server.
Now if you are delivering the script to a customer and a PHB requires wrapping for obfuscation purposes, then be comforted that the startup performance hit only occurs once if you write your script to be persistent.

Running multiple automated load tests of a site to see if code changes / server config make it quicker

I'm trying to work out which lazy-loading techniques and server setup let me serve a page quickest. Currently I'm using this workflow:
Test ping and download speed
Open the QuickTime screen recorder (so I can review the network tab and load times if there are any anomalies in the data and see what caused them)
Open a new incognito tab with cache disabled and the network tab open
Load the website
Save the screencast
Log ping, download speed, time, date, git commit version, and website loading time into a spreadsheet
After I have another test I can compare the spreadsheet entries and make a quantified decision on what works.
Running this workflow currently takes about 4 minutes each time I run it (I'm doing all of this manually; generally I run the same test a couple of times to get an average and then change the variables: image-loading JS tweaks, different VPSs, with/without a CDN to allow sharding, etc.).
Is there an automated approach to doing this? I guess I could set up a Selenium script and run the tests, but I was wondering if there is an off-the-shelf solution.
Ideally I would be able to say "test it with the following git commits" (although I would have to do the server config changes manually), but it would be quicker even just to automate the running, screencasts, and logging of the tests.
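As a rough idea of what automating the measuring and logging steps could look like in plain bash (the URL, CSV file name, and choice of curl timing fields are all assumptions, not an off-the-shelf tool):

#!/bin/bash
# Log one test run to a CSV: timestamp, git commit, average ping, page timings.
url="http://your.site.under.test/"
csv="results.csv"

commit=$(git rev-parse --short HEAD)
stamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

# Average ping in ms to the host, 5 probes (parses the avg field of
# ping's summary line as printed on Linux/macOS)
host=$(echo "$url" | awk -F/ '{print $3}')
ping_ms=$(ping -c 5 -q "$host" | awk -F'/' 'END {print $5}')

# Page timings from curl: DNS lookup, connect, first byte, total (seconds)
timings=$(curl -s -o /dev/null \
  -w '%{time_namelookup},%{time_connect},%{time_starttransfer},%{time_total}' \
  "$url")

echo "$stamp,$commit,$ping_ms,$timings" >> "$csv"

Each run appends one row, so the spreadsheet step becomes a matter of importing the CSV.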

Scripting a major multi-file multi-server FTP upload: is smart interrupted transfer resuming possible?

I'm trying to upload several hundred files to 10+ different servers. I previously accomplished this using FileZilla, but I'm trying to make it work using just common command-line tools and shell scripts so that it isn't dependent on working from a particular host.
Right now I have a shell script that takes a list of servers (in ftp://user:pass@host.com format) and spawns a new background instance of 'ftp ftp://user:pass@host.com < batch.file' for each server.
This works in principle, but as soon as the connection to a given server times out/resets/gets interrupted, it breaks. While all the other transfers keep going, I have no way of resuming whichever transfer(s) have been interrupted. The only way to know if this has happened is to check each receiving server by hand. This sucks!
Right now I'm looking at wput and lftp, but these would require installation on whichever host I want to run the upload from. Any suggestions on how to accomplish this in a simpler way?
I would recommend using rsync. It's really good at transferring only the data that has changed, which makes it much more efficient than FTP! More info on how to resume interrupted connections, with an example, can be found here. Hope that helps!
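A minimal sketch of that approach, assuming the destinations accept rsync over SSH and that servers.txt lists one user@host:path destination per line (both the file and its format are assumptions):

#!/bin/bash
# Push the local directory ./files to every destination in servers.txt.
# --partial keeps partially transferred files, so an interrupted transfer
# can pick up where it left off on the next run instead of restarting.
while read -r dest; do
  rsync -avz --partial --timeout=30 ./files/ "$dest" &
done < servers.txt
wait

Re-running the same script after a failure only resends whatever is missing or incomplete, which is the resume behaviour the plain ftp batch was lacking.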
