Server-side E-mail scripting - cgi-bin

I have a project idea in my head that I am considering trying to execute, but this desktop programmer doesn't really know where to begin when it comes to web server development!
I would like to have a script respond when an e-mail is received at an address; it would query a database, make some decision, and then possibly forward the e-mail to another address. I know I can accomplish the actual parsing and database interaction in Perl or PHP (or really any scripting language) if I write the script and put it into my hosted CGI-BIN directory. But beyond that, I'm a little lost.
In summary:
My specific question is: How do I in essence 'hook in' to the mailbox and execute a script when an e-mail is received?
My more general question is: How do I respond to user events beyond just requesting a URL? If I wanted to run an application for accepting license codes, for example, how would I accomplish that? (listening for messages on an open socket or port, executing actions against a database, etc.)
My (possibly too-broad) question is: What would my next step be in going deeper? What books or resources made it all click, and made you feel like you had the strength of 1000 IT (wo)men?
More about my background/what I already know:
I've never ventured too far in web development, but the world has grown so interconnected, I can't really continue to sit in the dark. I know how to code and script, write PHP/MySQL websites, and I can navigate a *nix machine with some competency.
I've spent the last 5-10 years focused on the C family of languages. I replaced a crude knowledge of ASP with a crude knowledge of PHP, with very little JavaScript; lots of VBScript and, more recently, Python.

You'll need to poll the POP3/IMAP mail server for messages. PHP has libraries for handling those protocols for you (see the PHP IMAP extension). The next step is to get that script running. The most rudimentary way of doing that is to execute it as a cron task that checks the mail server and responds to the emails. On the other hand, PHP is not necessarily the language for automating background tasks; Perl has those facilities too, as does Python.
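As a rough sketch of that cron-driven approach, here's the same idea in Python using the standard imaplib/smtplib modules (the host names, credentials, and forwarding rule below are placeholders, not anything from the question):

```python
#!/usr/bin/env python3
# Minimal sketch of a cron-driven mailbox poller using Python's
# standard imaplib/smtplib modules. Hosts, credentials, and the
# forwarding rule are placeholders.
import email
import imaplib
import smtplib

IMAP_HOST = "imap.example.com"   # placeholder
SMTP_HOST = "smtp.example.com"   # placeholder
USER, PASSWORD = "me@example.com", "secret"

def should_forward(msg):
    # Placeholder for the "query a database, make a decision" step.
    return "urgent" in (msg["Subject"] or "").lower()

def check_mailbox():
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")    # messages not yet processed
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        if should_forward(msg):
            with smtplib.SMTP(SMTP_HOST) as smtp:
                smtp.send_message(msg, USER, ["other@example.com"])
    imap.logout()

if __name__ == "__main__":
    check_mailbox()   # schedule this script from cron, e.g. once a minute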


Make software perform some action when a button on my site is clicked (theoretical)

I have been thinking about this subject for a while now, but this is my first attempt to get some help, so maybe I am still not asking the right questions, and I apologize for that. By the way, this is a theoretical question; I am not asking for code... and English is not my native language (sorry again)...
Well, I need to (programmatically) create a method to allow software inside a LAN (behind a router/firewall) to perform some action when a button on my website is clicked. The software is in a completely remote location, not on the same computer/server as the website.
I know I could create some kind of thread in the software to periodically poll a boolean database field, or maybe a text file on the website. But I think today we might have some better (more efficient) technology for this need...
I would like to know the most efficient solution(s) for this approach. By efficient I mean lightweight (not overloading my website with periodic requests) and fast (users should only have to wait a few seconds after clicking the button to get a response).
I appreciate any thoughts and ideas about a possible solution, and thanks in advance!
First... you have an uphill struggle, since doing what you want is VERY DANGEROUS and typically not allowed; the point of a web app is to stay disengaged from the hosting OS. Otherwise, hackers would have a huge front door into people's machines.
Second... you're not clear on where the 'software' is located. Will it be executed on the web server that is hosting your web site, is it on a completely remote computer on the LAN, or is it to pull the exe back and execute it on your machine that happens to be on the LAN?
With that...you can check out these links for some guidance:
https://community.serif.com/forum/8612/x7-run-exe-eg-notepad-exe-by-clicking-button-in-website
How to launch an EXE from Web page (asp.net)
Finally, you might be able to use the website button to call into a web api that will execute the exe.
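To make that last suggestion concrete, here's a minimal sketch of such an endpoint using only Python's standard library (the endpoint path and the program being launched are hypothetical, and as the warning above says, this is dangerous, so lock it down):

```python
# Minimal sketch of a web API endpoint that launches an executable,
# using only Python's standard library. The endpoint path and the
# program being run are hypothetical; restrict and authenticate
# anything like this before exposing it.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class RunHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/run-app":                      # hypothetical endpoint
            subprocess.Popen([r"C:\tools\myapp.exe"])    # hypothetical program
            self.send_response(202)                      # accepted, runs async
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    # Bound to localhost on purpose; put a real web server and
    # authentication in front of it before exposing anything.
    HTTPServer(("127.0.0.1", 8080), RunHandler).serve_forever()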
Theoretically you could create a stored procedure on a SQL Server instance that is started as a result of the button click. For example, the button click could increment a counter by querying the server and, as a result, run the procedure. You can even pass arguments into it. Within the stored procedure on the SQL Server you would then tell it to run the program (preferably a service), using PowerShell.
Your button will call a stored procedure.
This stored procedure will call 'xp_cmdshell' with parameters.
Powershell will then tell remote computer to run application.
BUTTON > T-SQL Stored Procedure > xp_cmdshell (PowerShell) > Remotely Start Executable.
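For illustration, the button's server-side handler might kick off that chain with a plain database call; here's a hedged sketch in Python with pyodbc, where the connection string and the stored-procedure name are made-up examples:

```python
# Hedged sketch of the button's server-side handler kicking off the
# chain above. The connection string and stored-procedure name are
# made-up examples; pyodbc is just one way to call SQL Server.
import pyodbc

def on_button_click(app_name: str) -> None:
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=dbhost;DATABASE=ops;Trusted_Connection=yes;"
    )
    with conn:
        cursor = conn.cursor()
        # dbo.StartRemoteApp would wrap xp_cmdshell / PowerShell,
        # as outlined in the BUTTON > ... > Executable chain.
        cursor.execute("EXEC dbo.StartRemoteApp ?", app_name)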

How to create client/server applications

I have been trying to make multiplayer applications on a website for a while. I wanted to start with a basic chat system. I made one, but it is really slow: the HTML page sends the message through AJAX to a PHP application, which saves it to a text file, and the HTML page then polls that text file every 3 seconds. This is very slow and unreliable. So I looked up better ways of doing this. I found Node.js and used it along with Socket.io and Express to create a faster chat application, but it only works on localhost and I have no idea how to implement it on a website. So I kept looking and discovered WebSockets, which are confusing to me and seem to have very little support. I am confused how websites out there have applications that can be real-time with so few options. How is this done? Am I missing a way of doing this? If you can help me that would be great.
Socket.io is using websockets under the hood already. Using raw websockets isn't necessary for your chat app.
You're on the right track using socket.io and node.js server-side.
Building a multi-player game in the browser will be a very difficult task for a beginner. But that's why it's good to learn! I suggest using a library for graphics (a quick google gave me this: http://www.pixijs.com/ ).
The overall architecture should be something like:
Users go to your server and receive a web page (.html) which contains the javascript and a canvas they need to play the game. This is the "client-side" because it's all running on each users' web browser on their computer.
The web page runs the javascript which talks to the node.js server using socket.io. This is the "server-side". The job of the server is to coordinate player data (who is who, where are they in the game, who's doing what etc) and keep track of the game state. Basically, this is where the game actually is, kind of like having the Monopoly board on the server, while the client-side is really just responsible for showing the board to the players (drawing it on the HTML5 canvas) and sending player input to the server.
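Purely to make the server's relay role concrete, here's a minimal sketch of that coordination loop in Python using the third-party websockets package, version 10 or later assumed (this thread's stack is Node.js with Socket.io, which plays the same role with reconnection and fallbacks layered on top):

```python
# Minimal relay server: every message from one client is forwarded to
# all other connected clients. Uses the third-party "websockets"
# package (version 10+ assumed); Socket.io adds reconnection,
# fallbacks, and rooms over the same idea.
import asyncio
import websockets

connected = set()

async def handler(ws):
    connected.add(ws)
    try:
        async for message in ws:
            # A real game server would also update authoritative game
            # state here before telling everyone about it.
            for peer in connected:
                if peer is not ws:
                    await peer.send(message)
    finally:
        connected.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8000):
        await asyncio.Future()  # run until interrupted

if __name__ == "__main__":
    asyncio.run(main())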
Tutorials:
USE GOOGLE. Try just literally searching for "javascript game tutorial". Try every tutorial that comes up. If something is taking a while to get up and running then ditch it and move to another one.
Do simple little things at first until you start to grok what the overall process is like. For example: https://developer.mozilla.org/en-US/docs/Games/Workflows/2D_Breakout_game_pure_JavaScript
Remember playing ultra-simple games like Pong? Try writing games like this first. Your chat system is a great start, by the way, because it covers the basics of how to get a server up and running, how to get a page, how to send data around.
As for getting something up and running on a server where others can connect to it... Check this out: have your buddies come to your house and bring their laptops, start your node.js chat server, tell them what your IP address is, and have them go to "http://YOURIPADDRESS:8000" in their browser-- they'll connect to your node.js server!
Getting it running on a hosting provider is a little more involved and probably not worth the trouble at this stage. You'll learn more about this later just by keeping on the way you're going.
Socket.IO does not only work on localhost. You will need to get a server to run your application on. I highly recommend not worrying about this piece of the puzzle just yet, as it is somewhat complicated if you are brand new at this. Come back to this part when you are ready.
As for the game development, I recommend using Phaser. It has everything you need to get started and great documentation.
http://phaser.io

Why should I avoid using CGI?

I was trying to create my website using CGI and ERB, but when I search on the web, I see people saying I should always avoid using CGI, and always use Rack.
I understand CGI will fork a lot of Ruby processes, but if I use FastCGI, only one persistent process will be created, and that approach is adopted by PHP websites too. Plus, the FastCGI interface only creates one object per request and has very good performance, as opposed to Rack, which creates 7 objects at once.
Is there any specific reason I should not use CGI? Or is it just a false assumption, and is it entirely OK to use CGI/FastCGI?
CGI, by which I mean both the interface and the common programming libraries and practices around it, was written in a different time. It has a view of request handlers as distinct processes connected to the webserver via environment variables and standard I/O streams.
This was state-of-the-art in its day, when there were not really "web frameworks" and "embedded server modules" as we think of them today. Thus...
CGI tends to be slow
Again, the CGI model spawns one new process per connection. While spawning processes per se is cheap these days, heavy web app initialization — reading and parsing scores of modules, making database connections, etc. — makes this quite expensive.
CGI tends toward too-low-level (IMHO) design
Again, the CGI model explicitly mentions environment variables and standard input as the interface between request and handler. But ... who cares? That's much lower level than the app designer should generally be thinking about. If you look at libraries and code based on CGI, you'll see that the bulk of it encourages "business logic" right alongside form parsing and HTML generation, which is now widely seen as a dangerous mixing of concerns.
Contrast with something like Rack::Builder, where right away the coder is thinking of mapping a namespace to an action, and what that means for the broader web application. (Suddenly we are free to argue about the semantic web and the virtues of REST and this and that, because we're not thinking about generating radio buttons based off user-supplied input.)
Yes, something like Rack::Builder could be implemented on top of CGI, but, that's the point. It'd have to be a layer of abstraction built on top of CGI.
CGI tends to be sneeringly dismissed
Despite CGI working perfectly well within its limitations, despite it being simple and widely understood, CGI is often dismissed out of hand. You, too, might be dismissed out of hand if CGI is all you know.
Don't use CGI. Please. It's not worth it. Back in the 1990s when nobody knew better it seemed like a good idea, but that was when scripts were infrequent, used for special cases like handling form submissions, not driving entire sites.
FastCGI is an attempt at a "better CGI" but it's still deficient in a large number of ways, especially because you have to manage your FastCGI worker processes.
Rack is a much better system, and it works very well. If you use Rack, you have a wide variety of hosting systems to choose from, even Passenger which is really simple and reliable.
I don't know what you mean when you say Rack creates "7 objects at once", unless you mean there are 7 different Rack processes running somehow or you've made a mistake in your implementation.
I can't think of a single instance where CGI would be better than a Rack equivalent.
There exists a lot of confusion about what CGI, Rack, etc. really are. As I describe here, Rack is an API, and FastCGI is a protocol. CGI is also a protocol, but in its narrow sense also an implementation, and in the sense you're speaking of it is not at all the same thing as FastCGI. So let's start with the background.
Back in the early 90s, web servers simply read files (HTML, images, whatever) off the disk and sent them to the client. People started to want to do some processing at the time of the request, and the early solution that came out was to run a program that would produce the result sent back to the client, rather than just reading the file. The "protocol" for this was for the web server to be given a URL that it was configured to execute as a program (e.g., /cgi-bin/my-script), where the web server would then set up a set of environment variables with various information about the request and run the program with the body of the request on the standard input. This was referred to as the "Common Gateway Interface."
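To make that protocol concrete, a complete CGI program can be tiny; here's a sketch in Python (any language that can read environment variables, stdin, and stdout works the same way):

```python
#!/usr/bin/env python3
# A bare-bones CGI program: the web server passes request metadata in
# environment variables and the request body on standard input; the
# response (headers, blank line, body) goes to standard output.
import os
import sys

length = int(os.environ.get("CONTENT_LENGTH") or 0)
body = sys.stdin.read(length)

print("Content-Type: text/plain")
print()                                   # blank line ends the headers
print("Method:", os.environ.get("REQUEST_METHOD"))
print("Path:  ", os.environ.get("PATH_INFO", ""))
print("Body:  ", body)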
Given that this forks off a new process for every request, it's clearly inefficient, and you almost certainly don't want to use this style of dynamic request handling on high-volume web sites. (Starting a whole new process is relatively expensive in computational resources.)
One solution to making this more efficient is, rather than starting a new process, to send the request information to an existing process that's already running. This is what FastCGI is all about; it maintains a very similar interface to CGI (you have a set of variables with most of the request information, and a stream of data for the body of the request). But instead of setting actual Unix environment variables and starting a new process with the body on stdin, it sends a request similar to an HTTP request to an FCGI server already running on the machine, specifying the values of these variables and the request body contents.
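As an illustration of that persistent-process model, here's a sketch using Python's third-party flup package (an assumption on my part; it's one of several FastCGI servers for Python, and the bind address is whatever your front web server is configured to forward to):

```python
# Persistent-process sketch: one long-lived process accepts many
# requests over the FastCGI protocol instead of being forked per
# request. Uses the third-party flup package (an assumption; it is
# one of several FastCGI servers for Python).
from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Handled without forking a new process\n"]

# The front web server (nginx, Apache mod_fcgid, ...) is configured
# to forward requests to this address.
WSGIServer(app, bindAddress=("127.0.0.1", 9000)).run()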
If the web server can have the program code embedded in it somehow, this becomes even more efficient because it just runs the code itself. Two classic examples of how you might do this would be:
Have PHP embedded in Apache, so that the "Apache server code" just calls the "PHP server code" that's part of the same process; and
Not run Apache at all, but have the web server be written in Ruby (or Python, or whatever) and load and run more Ruby code that's been custom-written to handle the request.
So where does Rack come into this? Rack is an API that lets code that handles web requests receive them in a common way, regardless of the web server. So given some Ruby code to process a request that uses the Rack API, the web server might:
Be a Ruby web server that simply makes function calls in its own process to the Rack-compliant code that it loaded;
Be a web server (written in any language) that uses the FastCGI protocol to talk to another process with FastCGI server code that, again, makes function calls to the Rack-compliant code that handles the request; or
Be a server that starts a brand new process that interprets the CGI environment variables and standard input passed to it and then calls the Rack-compliant code.
So whether you're using CGI, FastCGI, another inter-process protocol, or an intra-process protocol, makes no difference; you can do any of those using Rack so long as the server knows about it or is talking to a process that can understand CGI, FastCGI or whatever and call Rack-compliant code based on that request.
So:
For performance scaling, you definitely don't want to be using CGI; you want to be using FastCGI, a similar protocol (such as the Tomcat one), or direct in-process calling of the code.
If you use the Rack API, you don't need to worry at the early stages which protocol you're using between your web server and your program because the whole point of APIs like Rack is that you can change it later.
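To make the "common API, any transport" point concrete: Python's WSGI is the direct analogue of Rack, and a complete handler is just a callable; the same function could be hosted by a CGI shim, a FastCGI server, or an in-process server without changes (the stdlib reference server below is used only for demonstration):

```python
# Python's WSGI plays the same role as Rack: the handler receives a
# dict of request variables and returns status, headers, and body,
# never caring whether CGI, FastCGI, or an in-process server
# delivered the request.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"You asked for {environ.get('PATH_INFO', '')}\n".encode()]

if __name__ == "__main__":
    # Any compliant server can host the same callable; the stdlib
    # reference server is used here only for demonstration.
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 8000, app).serve_forever()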

What is a good framework for deploying a portable HTML/JavaScript Windows application?

I need to deploy an application onto some Windows machines for purposes of data collection from a group of people (i.e. the application will be used to gather responses to a series of survey questions). The process is interactive, alternating between displays of text and images with specific timing requirements. I have put together a prototype application using HTML and JavaScript that implements the survey. However, there are some unique constraints on the deployment environment that have me stuck:
While the machine is Internet-connected, the client requires that the survey application run fully locally on the PC. Therefore, sending the survey results to a remote server is not permissible. Obviously, saving to a local file from a Web browser is typically not permitted for security reasons.
Installation of applications onto the machines that will run the survey is not permitted.
The configuration of the machines is not known specifically a priori, but I can assume some recent version of Windows with IE8+.
The "no remote access" requirement was a latecomer, and has thrown a wrench into the plan of just writing a simple Web application that could post results to an HTTP server. I'm now looking for the easiest way forward. Two main approaches come to mind:
Use a GUI framework that provides a control that can display HTML/JavaScript; running a full-blown application on the PC would allow me to save the results to the filesystem. I've never done this, but it seems like in this day and age it shouldn't be too difficult. This would allow me to reuse much of my existing prototype implementation, but I would need some way of transferring the results (which would be stored in a JavaScript data structure) outside of the Web control to where the rest of the application could access it.
Reimplement the entire application using some GUI framework (I've used PyQt successfully before, although not on Windows). This approach is obviously less desirable than #1 due to the lack of reuse. However, it may be necessary if #1 isn't feasible.
Any recommendations for the best way to go? Ideally, I'm looking for a solution that can be run in a "portable" manner from a USB thumbdrive or similar.
Have you looked at HTML Applications (HTA)? They work in IE5+ and can use Windows Scripting Host to write to local drives and UNC shares...
Maybe you can use a portable web server with a scripting language on the server side, for example Mongoose (http://code.google.com/p/mongoose/), which can run PHP, CGI, etc. scripts. Then simply create a script that saves a file to the hard drive, and leave the rest of the application as it is.
Use a script to start the web server, and perhaps a portable web browser like K-Meleon (http://kmeleon.sourceforge.net/), which is highly configurable, to open the application. Or point the system browser at your localhost URL.
The only problem may be that the user has to approve a firewall prompt the first time you run the server.
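As a sketch of that portable-server idea, the same thing can be done with only Python's standard library instead of Mongoose plus PHP, assuming a portable Python interpreter can live on the USB stick (the endpoint path and output filename here are made up):

```python
# Sketch of the "tiny local web server writes the results file" idea
# using only Python's standard library (assumes a portable Python
# interpreter on the USB stick). The survey page would POST its JSON
# results to /save; the path and filename are made up.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SurveyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/save":
            length = int(self.headers.get("Content-Length", 0))
            results = json.loads(self.rfile.read(length))
            with open("survey_results.json", "a", encoding="utf-8") as f:
                f.write(json.dumps(results) + "\n")
            self.send_response(204)     # saved, no content to return
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), SurveyHandler).serve_forever()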

E-mail archiving with Ruby

I'm looking for information on any libraries or methods that would help me to build an email archiving system using Ruby (I'm open to other languages if suggested).
The application would need to do the following:
1) Sit on an incoming mail server, receiving and storing all incoming email.
2) After storing the email, push it out to our actual email server.
3) The email archive should be searchable.
Any thoughts on this are appreciated, I can't seem to find an existing project that does this.
Even though I'm a big Ruby fan, I'll point out that Zed Shaw has written a very interesting and configurable SMTP server in Python, called Lamson:
http://lamsonproject.org/
I've never used Lamson, but I think with minimal tweaking you could make it store e-mails into almost any DB you choose, and forward e-mails easily wherever you like.
Once you have all your emails in a DB, it should be a relatively easy task to build a front-end to the DB with Ruby (and/or Rails) if you wish.
Since processing e-mails can be fairly tricky stuff, using something purpose-built like Lamson as your intermediate processor might be worth a shot.
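If you do end up rolling your own in Python, here's a rough sketch of the store-then-forward idea using the third-party aiosmtpd package rather than Lamson itself (the relay host, port, and flat-file storage are placeholder assumptions; a real archiver would store into an indexed database so the archive is searchable):

```python
# Store-and-forward sketch in plain Python using the third-party
# aiosmtpd package (an assumption here; Lamson has its own routing
# API). Incoming mail is archived to disk, then relayed onward.
import os
import smtplib
import time

from aiosmtpd.controller import Controller

REAL_MAIL_SERVER = "mail.internal.example.com"  # placeholder

class ArchiveHandler:
    async def handle_DATA(self, server, session, envelope):
        # 1) Store the raw message. A real system would write to a
        #    database with a full-text index to make it searchable.
        os.makedirs("archive", exist_ok=True)
        with open(f"archive/{time.time_ns()}.eml", "wb") as f:
            f.write(envelope.original_content)
        # 2) Push the message out to the actual mail server.
        with smtplib.SMTP(REAL_MAIL_SERVER) as relay:
            relay.sendmail(envelope.mail_from, envelope.rcpt_tos,
                           envelope.original_content)
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    # Port 8025 for testing; a production archiver would sit on port 25.
    controller = Controller(ArchiveHandler(), hostname="0.0.0.0", port=8025)
    controller.start()
    input("Archiving SMTP server running; press Enter to stop.\n")
    controller.stop()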
The Lamson project looks pretty awesome. If you're looking to actually implement something yourself, I posted a blog post a while back on some of the best methods to receive email in Ruby. There are also plenty of ways to push the mail back out again fairly easily, though it's probably better to rely on a system that already has all of this functionality.
