I have exhausted my ability to find answers on the web for this one. I'm trying to install mod_perl on windows and there are many dead ends.
Is mod_perl even what I'm looking for?
I have a collection of web apps used within my company's local network for database and file system interface. The web server runs Apache 2.2 and ActivePerl 5.16 using DBI, DBD::mysql, and CGI. The clients get their dynamic content via AJAX calls (jQuery.getJSON) to the Perl scripts using CGI parameters. The traffic is extremely light - only 4 or so users and only a few queries at a time here and there.
The issue I'm having is that the latency is unacceptable for the nature of these apps. The delay is typically around 400ms, all of it waiting time. I have experimented with increasingly simplistic Perl scripts and believe all of the delay is the Perl interpreter starting up. I've looked into FastCGI, but as I understand it, that deals mostly with high traffic, which is not my problem: my problem is the overhead of each individual low-traffic call. So it seems like an Apache-embedded Perl interpreter (as I understand mod_perl to be) would solve my overhead-related latency issues.
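For reference, a stripped-down version of the kind of script I've been timing looks roughly like this (the table and parameter names here are made up, but the structure is the same); every request pays the full cost of launching perl.exe, loading CGI and DBI, and connecting to MySQL:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;
    use DBI;
    use JSON::PP ();   # core module in Perl 5.14+

    my $q   = CGI->new;
    my $dbh = DBI->connect('DBI:mysql:database=testdb;host=localhost',
                           'user', 'password', { RaiseError => 1 });

    # Look up whatever the AJAX call asked for and hand back JSON.
    my $rows = $dbh->selectall_arrayref(
        'SELECT name, qty FROM widgets WHERE id = ?',
        { Slice => {} },
        scalar $q->param('id'),
    );

    print $q->header('application/json');
    print JSON::PP->new->encode($rows);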
How do you install it in a post-Randy Kobes world?
All the resources I've found for installing mod_perl on my setup involve a server, theoryx5.uwinnipeg.ca, formerly run by him and now defunct after his passing. ActivePerl's PPM does not have any prebuilt mod_perl packages, and the website lists all the builds as failed.
Here is an ActiveState community post explaining why there is no ppm.
I did find this resource that seems to have all the missing pieces but for Strawberry Perl.
So I'm left to think the only way to do this is to install from source, but I have no understanding of how to do that. I have zero familiarity with Linux, and most of this material seems geared toward it. Worse yet, I have 64-bit Windows XP and a Windows Server machine to install it on.
The other thing that crossed my mind is that maybe I need to install some kind of distribution like XAMPP instead of putting together all the pieces myself. I'd be quite nervous to change course now and risk breaking my working but slow apps.
Is mod_perl even what I'm looking for?
I hope not.
There are issues with mod_perl. Your Apache, mod_perl, and perl all need to be built with compatible compilers and architectures so they can be linked at run time. There will be no running of a 32-bit Apache with a 64-bit perl when you are using mod_perl. In my experience, mod_perl should also be compiled against the header files for your specific versions of both Apache and perl. Presuming you get all this secret sauce mixed correctly, you are now running a web server that can be crashed by a poorly written perl script. But on the bright side, this is more efficient than plain CGI.
After a few years of this madness, FastCGI was invented. By running as a persistent, but separate, process, the web server was able to achieve mod_perl (or mod_PHP, or mod_python) efficiency without the need for binary compatibility or the stability risks. Just think of the freedom! An Apache module that cares only about binary compatibility with its Apache host and can farm out tasks to Perl, Python, C, or even Visual Basic. (I just had an evil thought about trying to do web services with Forth or Lisp, but that would just be crazy.)
Running on a Linux distro (or another canned stack such as XAMPP) can make setup and maintenance of mod_perl easier, because they distribute it as a package compiled to work with the Apache and perl packages they supply. Unfortunately, if you want to run a version of Apache or perl that is not "official" to your distro, get ready to DIY. Even so, a distro's packages do not mitigate the stability issues inherent in running mod_(language-of-choice).
In any case, before you're up and running in your new configuration, your existing CGI scripts will need to be modified. You can choose to rewrite them to mod_perl, FastCGI, or PSGI/Plack standards. If you choose to rewrite to PSGI/Plack standards, then you can care much less about the specifics of your web server's current or future configuration.
How do you install it in a post-Randy Kobes world?
The last link in your question appears to be spot on. Do you have a religious or PHB-based reason to prefer ActivePerl over Strawberry Perl? In the end, mod_perl requires that it be built against your specific version of Apache and your specific version of perl. That will involve either compiling it yourself, somebody else wrapping up builds for multiple Apache/perl version combos, or somebody else wrapping up a single build and asking you to use their preferred versions of Apache and perl.
If you choose the mod_perl route and believe even slightly that server software should be kept up to date (XP? Seriously?), then be prepared to either roll your own or trust your 3rd party to keep you up to date. Of course, if you're a hit-and-run developer, well that frees up your choices considerably...
tl;dr:
FastCGI is your friend, particularly if you are running Windows and like to keep server software up to date.
mod_perl works best when supported by a responsible distro or a responsible developer who is comfortable building it from its source... repeatedly.
It's been an eternity since I've installed mod_perl on Windows so I'm not sure I can help you with that.
But your understanding that FastCGI "deals mostly with high traffic" is not correct. Both FastCGI and mod_perl will offer very similar performance benefits, because both execute your scripts with a persistent interpreter, eliminating the overhead of starting up perl and compiling your code on each request. Therefore, there is no reason not to give FastCGI a shot.
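As a rough sketch of what that means in practice (the connection details, table, and parameter below are invented, not taken from your app), an existing CGI script becomes a FastCGI script largely by moving the per-request work into a loop, so expensive setup such as the database connection happens only once per process:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI::Fast;    # needs the FCGI module from CPAN
    use DBI;
    use JSON::PP ();

    # Done once when the process starts, not on every request.
    my $dbh = DBI->connect('DBI:mysql:database=testdb;host=localhost',
                           'user', 'password', { RaiseError => 1 });

    # Each iteration blocks until Apache hands this process another request.
    while (my $q = CGI::Fast->new) {
        my $rows = $dbh->selectall_arrayref(
            'SELECT name, qty FROM widgets WHERE id = ?',
            { Slice => {} },
            scalar $q->param('id'),
        );

        print $q->header('application/json');
        print JSON::PP->new->encode($rows);
    }

On the Apache side this is typically wired up with mod_fcgid or mod_fastcgi; the Perl side doesn't care which one you use.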
You might want to look at the PSGI/Plack API, which lets you write server-agnostic code that can run under vanilla CGI, FastCGI, mod_perl, or a PSGI-aware server such as Starman or uwsgi. All of these except vanilla CGI offer a persistent environment that will reduce the overhead of executing your scripts.
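To give you a feel for it, here is a minimal, hypothetical PSGI app (the app.psgi file name and the 'id' parameter are only illustrative). The same file runs unchanged under plackup during development, under Starman, or under FastCGI via Plack::Handler::FCGI:

    # app.psgi - a PSGI application is just a code reference.
    use strict;
    use warnings;
    use Plack::Request;
    use JSON::PP ();

    my $app = sub {
        my $env = shift;
        my $req = Plack::Request->new($env);

        my $data = { id => $req->parameters->{id}, status => 'ok' };

        return [
            200,
            [ 'Content-Type' => 'application/json' ],
            [ JSON::PP->new->encode($data) ],
        ];
    };

    $app;    # the last expression in a .psgi file must be the app coderef

For development you would start it with something like "plackup app.psgi" and point your jQuery calls at the port it reports.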
Related
I'm currently developing a web-app, and I alternate between Windows and Mac dev machines for this.
My problem is that pages render extremely slowly on Windows, but it's not my Ruby code that's running slowly; rather, the static files are being served slowly.
A typical page takes about 200ms to render and get served in dev (both Mac and Windows are similar here), but it includes about 50 static files (in production it's just 5 to 10, once they get minified and combined, but in dev they're still separate).
Those 50 files take about 1.5 seconds to serve on the Mac, but about 10 seconds on Windows, which makes it quite torturous to test things...
I've tried both Webrick and Thin; they are about the same.
Has anybody run into this problem and found a way to improve it?
I've tried changing the Webrick conf to ":DoNotReverseLookup => true", as suggested in this answer, but it's not helping.
Any help will be greatly appreciated
Thanks!
Daniel
You are running into two existential problems that have plagued Ruby developers for a long time:
Webrick is slow. Always. Just don't even bother.
Ruby is always slower on Windows. Sometimes by orders of magnitude as you've found.
So if you insist on doing development on Windows proper (as opposed to developing only on Linux or developing on a Linux VM running on Windows), then we need to figure out some ways of putting lipstick on a pig.
Some ideas:
Make sure you run the latest version of Ruby.
Try deploying nginx with Thin as shown in this helpful albeit dated tutorial. This will help you get the most out of Thin's multithreading and asynchronicity.
Use Capistrano to deploy to Windows via this also dated GitHub project.
If you do decide you've had enough of developing Rails on an environment it wasn't designed for, you can set up a VM in the manner described here. The author reports significant speedup.
Use an Ubuntu VM inside VirtualBox; it's probably much closer to your deployment environment than Mac or Windows, which means fewer "but it worked in development" troubles in production.
Also, you will save yourself a lot of time dealing with the quirks of different Ruby/gem implementations and various levels of headache due to native extensions.
You can:
set up an internal network so you can use a browser under Windows to browse the app running inside the VM
use something like PuTTY to open console sessions to the VM
share a Dropbox/SparkleShare folder with your Ubuntu VM so you always have the same code on your Windows box, your Mac, and the Ubuntu VM
this also enables you to use your favorite editor under Windows/OS X to edit files inside the VM
you can use that same VM under the Mac too
Ubuntu installation under VirtualBox is fast, easy, and well documented; it's pretty much just a wizard. Alternatively, you can try finding a good Vagrant recipe (see http://www.vagrantup.com/) or ask around to see if a colleague of yours would be willing to share his/her vbox.
I experienced a performance decrease in development (due to real-time compilation) when working on projects with a large number of assets, but I was not on Windows.
I assume the large performance difference may be caused by some inefficient asset compilation under Windows.
I don't have any Windows development experience, as I haven't been using a Windows machine for a long time; however, I noticed a considerable performance increase in parallel asset processing (in development) when I switched to a multi-threaded server, specifically Puma. Keep in mind that, in any case, the default Rails web server (Webrick) is very inefficient.
As Konstantine explained in this answer, there are currently several choices available. You can group them by the processing mode.
Skipping all the background history about Ruby threads, multi-process models, and so on, I'd invite you to try Puma on your machine and see if it improves the load times.
Puma works better with Ruby implementations that offer real multi-threading, but to quote the official README:
On MRI, there is a Global Interpreter Lock (GIL) that ensures only one thread can be run at a time. But if you're doing a lot of blocking IO (such as HTTP calls to external APIs like Twitter), Puma still improves MRI's throughput by allowing blocking IO to be run concurrently (EventMachine-based servers such as Thin turn off this ability, requiring you to use special libraries).
Puma is designed to provide a simple and high performance request/response pipeline to Rack apps.
I need to develop a multi-user web site which should also be able to run locally (on Windows) for a single user. I'd like to use the same code for both. The local copy should work without installing any software, the source code shouldn't be available, and a web browser should be integrated into the application. I may use SQLite as the local database, and I want to use ExtJS as the front end.
I'm aware of 2 products for PHP that claim to do this, but I'm not sure yet that they will work with the tools I'm interested in (ExtJS and Kohana PHP).
I know that Haskell provides such an option, but I don't want to go that far :).
Flex doesn't seem too promising (at least from what I remember), as its UI-building capabilities are limited, and I'd like to have the online version in HTML, not in Flash.
Are there any other technologies available (.NET, Ruby, Python, etc.)?
Well, you can use PHP itself. It's not the neatest solution available, but you can use php-win.exe to avoid a command-line window popping up. You can then use a source obfuscator or bytecode compiler to avoid revealing the source code. And AFAIK PHP works without installation (it can run out of a directory if all the required files are in it).
As I said, not the neatest solution, but will probably meet your requirements.
I'm probably missing something, because I don't hear anyone else mentioning this. But when I look at the process and file system modules I see a lot of Unix-isms that are unlikely to work on Windows. How is this going to work for Windows users? Windows users who have never used Unix may not even realize which of these are Unix-isms that are never going to work for them. I suppose this is really just a documentation issue; it would be nice to filter the documentation by Unix or Windows. Process.getuid() would be one example. chmod would be another. Even SIGUSRn is there. (Vague memories of servers mysteriously shutting down.) I do have Unix experience from way back, but many will not. I avoided Rails because it was slow on Windows, but I hear that node.js is smoking fast on Windows, so I'm hopeful!
Since v0.6 (except the latest release, v0.6.9, which has a big fs bug), Node.js has been a first-class citizen on Windows. Because of this, my expectation is that most third-party modules will eventually support Windows. A lot of the major ones do now, most notably Express.js.
This will get better too. Some third-party modules require the extra step of compiling native C++ addons (npm/node-waf automates this). However, in future versions Node.js will be ditching this in favor of binary compatibility. The developers behind Node.js are trying very hard to provide a first-class, pleasant developer experience across all platforms.
Admittedly, the Node.js experience will be better on OS X or Linux, but it's going to only get better on Windows. I highly encourage you to check it out if you haven't already.
I'm probably dreaming here, but am wondering if there's any possibility of completely embedding a minimal CouchDB engine within a Windows application, such that the app can be run without requiring installation (of CouchDB/Erlang) on the user's computer.
I already provide this slimmed-down / bundled ability - check https://github.com/dch/couchdb/downloads, and specifically the lean bundle (16 MiB of Erlang plus all the CouchDB love) at https://github.com/downloads/dch/couchdb/couchdb-1.1.0+COUCHDB-1152_otp_R14B03_lean.7z
There are some brief notes on bundling and embedding CouchDB on Windows at wiki.apache.org/couchdb/Quirks_on_Windows, including how to hide the Erlang window at startup (erl.exe -detached).
Ask on the CouchDB user mailing list if you want more info or help while you have a crack at this.
While not a code solution, you could use one of the bundling applications that can pack an application and its supporting files into a single executable. One example would be BoxedApp.
Why bother? It is so easy to install Erlang on Windows. Just bundle up the whole thing, including the erl.exe binary, and have your installer unzip it into a folder. The only thing you would need to change would be the batch files; or better yet, discard them and write your own batch file to start up CouchDB. Also, it is a good idea to use a different port than either the normal Erlang port (or the usual CouchDB port), and maybe even get Erlang to use localhost as its "shortname".
The CouchDB wiki does provide at least a few tips for Integrating CouchDB into your Windows Applications. YMMV; from what I can tell it's more or less just tips on creating a relocatable build. You'll likely want to generate a solid random admin user/password into the local.ini file during the install process and set up proper permissions on all created databases (to protect against any potential cross-site scripting vulnerabilities), in addition to ensuring the socket binding only happens on the default localhost interface.
What are the advantages and disadvantages of using the built-in Apache for local web development on Mac OS X, specifically 10.6 Snow Leopard?
Instead of using the built-in Apache, I know that options such as MAMP and XAMPP exist. However, for some reason I just haven't wrapped my head around the benefits or potential pitfalls with using the built-in Apache versus using a MAMP/XAMPP-based (or other) solution.
Is the advantage of a MAMP/XAMPP-based solution simply ease of configuration?
When not using the built-in Apache are there other benefits besides ease of configuration? For instance, is there a benefit similar to using virtualenv to avoid tainting a pristine Python install?
If you're only developing static webpages and don't need PHP or MySQL, then why not use the built-in Apache with something like virtualhost-sh or VirtualHostX to ease configuration?
Configuration and Usage Considerations
I am interested in using virtual hosts in order to simultaneously develop multiple websites
I use git for version control and have a tendency to store source files in ~/development instead of ~/Sites (this probably isn't material, but I thought I'd mention it)
Related Research
The answers to the SuperUser Question What is the best Apache PHP Setup for a Mac Developer talk about different MAMP, XAMPP, and roll your own solutions
Advantages:
It's already there, you don't have to install anything
If all you are serving is plain .html files, then it's fine.
Disadvantages:
You can't update it
(Well, you shouldn't. You can, it just feels hacky to modify stock system components.)
If you want to enable PHP/MySQL etc. later on, you will be changing things in system paths that may break between OS updates.
If this is your primary OS, you are now running extra daemons (PHP/MySQL/Apache) in the background that eat up CPU cycles.
Overall, though, I wouldn't do it. MAMP's daemons are easy to start and stop, and your changes are confined to MAMP. If you mess something up or need to quickly get different sites running with different settings, it's easier to blast things away in MAMP and start again (not that MAMP is without its hassles).
If you don't want to use MAMP, I'd suggest getting a dedicated Linux box (or using a Linux virtual machine) to do this on, having been down the OS X Apache path before. It's not pretty. OS X's built-in stuff might seem easier at first, but it's inflexible, and eventually, as your requirements grow, you'll wish you hadn't done it.
Update:
I would recommend going with XAMPP over MAMP. It has better performance and is updated more often. Plus, XAMPP is cross-platform and open source :)
I've used the stock Apache 1.x in previous versions of OS X for both local development and production web sites and have never had a problem with system updates breaking anything. I've never done anything extremely fancy, but have had plenty of vhosts, regular and reverse proxies, PHP, Python and Perl CGIs, custom cgi-bin locations, custom logs, etc, without issues. It has always worked exactly as I expect Apache to work.
This has continued to be the case with Apache 2 under 10.6. So for local development and low-key production stuff, I'd trust it.
I've had the same experience with the stock Apache installs on OS X Server, with the exception that using the provided GUI tools to edit the httpd.conf files has always been a total disaster. They simply never worked for me, overwrote previous changes, or outright crashed.