I'm interested in having clients on iOS/OS X (using Cocoa) carry out secure transactions with a dedicated server. I'm looking for the easiest and most 'proper' use of the fancy, highly abstracted APIs that Apple has developed. An example of what I'm talking about with those "fancy" APIs is that HTTPS is implemented "for free" and could suit my purposes - except that I don't know how to implement the corresponding server portion of that.
The network messages basically need to form a secure session where a client can create an account, log in with that account, send a request to the server, and receive a response from the server. The traffic is low volume and latency is OK; the most important things are confidentiality and keeping my software effort as small as possible.
The server will be on FreeBSD and will either run Cocoa via Cocotron or can use some other technology you mention that would make development faster. The computation being done on the server is minimal; it requires a database interface, etc.
On the client side, NSURLRequest and NSURLConnection both support HTTPS. You could also try third-party libraries such as ASIHTTPRequest.
On the server side, I'm not sure what you mean by "The server will be on FreeBSD and will either run Cocoa via Cocotron". Are you saying that your server will be written in Objective-C and use the Cocoa API? I'm not really sure why you would want to do something like that. If the code on the server is minimal, why not use the Apache server combined with mod_ssl and perhaps PHP to get it done? PHP is excellent for a quick and dirty server. You can also use Django / Rails and other established frameworks (all of which support HTTPS) if those suit your needs better.
I've been reading about websockets and SaaS like Pusher and Socket.io recently, while working on my Laravel chat practice application. What I don't understand is, why do we need external software to establish a websocket connection? Can't the server code like Laravel just directly establish the connection with the front-end like Vue.js? Why does it have to go through the middleman like Pusher and Socket.io? Sorry for the noob question.
It doesn't have to.
Those pieces of software just happen to make it trivial to work with the Websocket protocol.
Remember, Laravel is an opinionated framework. This means that it will pick and choose its own libraries to abstract away these kinds of concepts for you so that you don't have to worry so much about what's going on under the hood.
Basically, there are two components that you need in order to be able to work with Websockets:
A Websocket Server
A Websocket Client
The reason Laravel doesn't communicate directly with the front-end using Websockets is because Laravel itself isn't a Websocket server. At least, not really. And while PHP does have support for working with the Websocket protocol - and even some libraries to leverage it a little more nicely - it just isn't used to handle long-lived processes as often as other languages.
Instead, Laravel uses the Pub/Sub functionality that Redis provides, via the Predis library, to listen for events that occur through Redis. The reason it does this is that Laravel is better suited as a middle-man between the Websocket server and all connected clients.
In this way, Laravel can both pass information up through to the Websocket server using Broadcasting Events, as well as receive event information from the Websocket server and determine if users have the ability or authorization to receive them.
If you don't want to use Pusher, there is a library that will allow you to run your own Websocket Server specifically for Laravel called Laravel Echo Server.
Under the hood, this library still uses Socket.io and Redis in order for all moving parts to communicate with each other seamlessly in a Laravel web application. The benefit here is that you won't need to worry about the number of messages being sent by the server.
The downside is that you now have to know how to manage and maintain this process on your server, so that the Websocket server starts back up every time you restart your server, recovers if a failure happens, etc.
Check out PM2 to learn more about running and maintaining server daemons.
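To make those moving parts a bit more concrete, here is a rough Node.js sketch of the kind of Socket.io + Redis relay that a tool like Laravel Echo Server implements for you. It assumes the socket.io and ioredis packages, and the channel naming and JSON payload shape ("event" and "data" keys) are assumptions that depend on your broadcasting configuration:
// Rough sketch only: assumes Laravel publishes broadcast events to Redis
// as JSON payloads containing "event" and "data" keys.
const http = require('http');
const Redis = require('ioredis');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);
const redis = new Redis();

// Subscribe to every channel Laravel publishes on.
redis.psubscribe('*');
redis.on('pmessage', (pattern, channel, message) => {
  const payload = JSON.parse(message);
  // Relay the broadcast event to every browser that joined this channel.
  io.to(channel).emit(payload.event, payload.data);
});

io.on('connection', (socket) => {
  // Let each browser join the channels it is interested in.
  socket.on('subscribe', (channel) => socket.join(channel));
});

httpServer.listen(6001);
A real relay would also check private-channel authorization against Laravel before letting a socket join a channel.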
If you don't agree with Laravel's opinions on how to handle Websockets, then you could theoretically use any other server-side language to handle the websocket protocol. It will just require a greater working knowledge of the protocol itself; and if Laravel needs to work with it, you'll have to know how to write the appropriate Service and Provider classes to be able to handle it.
Short answer? You don't have to use them. Knock yourself out writing your own server and client side websocket implementation.
Longer answer.
Why use Laravel? I can do all that with straight up PHP.
Why use Vue? I can do all that with vanilla javascript.
We use libraries and frameworks because they abstract away the details of implementation and make it easier to build products. They handle edge cases you don't think of or things you don't even know that you don't know about because they are used by thousands or millions of developers and all the knowledge and bugs they have encountered and fixed are baked into the implementation.
This is one of the hallmarks of software engineering, code reuse. Don't repeat yourself and don't write any software you don't have to. It allows you to focus on building a solution for your particular requirements, and not focus on building a bunch of infrastructure before you can even build your solution.
I've never used Pusher, but it looks like, yes, it is a SaaS product. But Socket.io is open source.
Web application frameworks such as Sinatra (Ruby), Play (Scala), and Lift (Scala) produce a web server listening on a specific port.
I know there are some reasons like security, clustering and, in some cases, performance, that may lead me to put an Apache web server in front of my web application's server.
Do you have any reasons for this from your experience?
Part of any web application is fully standardized and commoditized functionality. Mature web servers like nginx or Apache can do the following things in a way that is very likely more correct, more efficient, more stable, more secure, more familiar to sysadmins, and easier to configure than anything you could rewrite in your application server:
Serve static files such as HTML, images, CSS, javascript, fonts, etc
Handle virtual hosting (multiple domains on a single IP address)
URL rewriting
hostname rewriting/redirecting
TLS termination (thanks #emt14)
compression (thanks #JacobusR)
A separate web server provides the ability to serve a "down for maintenance" page while your application server restarts or crashes
Reverse proxies can provide load balancing and fault tolerance for your application framework
Web servers have built-in and tested mechanisms for binding to privileged ports (below 1024) as root and then executing as a non-privileged user. Most web application frameworks do not do this by default.
Mature web servers are battle hardened and stable. By stable, I mean that they quite literally almost never crash. Your web application is almost certainly far less stable. This gives you the ability to at least serve a pretty error page to the user saying your application is down instead of the web browser just displaying a generic "could not connect" error.
Anecdotal case in point: nginx handles attack that would otherwise DoS node.js: http://blog.nodejs.org/2013/10/22/cve-2013-4450-http-server-pipeline-flood-dos/
And just in case you want the semi-official answer: Isaac Schlueter, at the Airbnb tech talk on January 30, 2013 (around 40 minutes in), addresses the question of whether node is stable and secure enough to serve connections directly to the Internet. His answer is essentially "yes", it is fine. So you can do it and you will probably be fine from a stability and security standpoint (assuming you are using cluster to handle unexpected termination of an app server process), but as detailed above, the reality of current operations is that almost everybody still runs node behind a separate web server or reverse proxy/cache.
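As a small illustration of that split (a sketch, assuming nginx or another reverse proxy terminates TLS, serves static files, and forwards everything else), the application server itself can stay tiny and bind only to an unprivileged local port:
const http = require('http');

const server = http.createServer((req, res) => {
  // A reverse proxy typically forwards the original client address in a header.
  const client = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the app server, ' + client + '\n');
});

// Unprivileged port, bound to localhost only; the proxy listens on 80/443.
server.listen(8080, '127.0.0.1');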
I would add:
SSL handling
for some servers, like Apache, lots of modules (e.g. NTLM/Kerberos authentication)
Web servers are much better than your application at some things, like serving static content.
Quite often the frameworks do everything you need, but sometimes adding a layer on top of them can give you seemingly free functionality like compression, security, session management, load balancing, etc. Still, adding a web server may also introduce security issues; for example, chances are your web server may be compromised more easily than Lift by itself. Also, some web frameworks are extremely scalable and may even be hampered by an ill-chosen web server.
In summary, if you require web server like functionality that is not provided by the framework, then a web server may be a very good option, but keep in mind that it's one more thing to configure properly and update regularly with security patches, etc.
If, for example, you just need encryption or compression, then you may find that adding the correct library or plug-in to your framework does just that (and only that).
With a proxy HTTP server in front, the framework doesn't need to keep an HTTP connection open while delivering the computed content, and can start serving some other request instead. The proxy acts as a buffer.
It's an issue of reinventing the wheel. Most frameworks will give you a development environment but for production it's usually good practice to use a commercial/open source project that is able to deal with all issues that arise during production.
People building a framework have the framework to concentrate on (and perfect), while people building a server are doing just the same for the server.
I'm building an application that will provide a service to other applications (let's pretend like it solves differential equations). So my DifEq service will be running all the time and a client application can send it requests to solve DifEqs at any point.
This would be trivial using sockets or pipes.
The problem is some applications nefariously want to send linear equations instead of differential equations, so I want to register applications that I know are sending proper DifEqs to my application.
Traditional sockets break down here, as far as I know.
Ideally, I'd like to be able to look at some information about the application that is making a request of me and (either through some metadata on that application, through communication with my web site, or through some other, unknown method) determine it is an acceptable DifEq app. Furthermore, this ideal method would not be spoofable without a root/admin-level compromise of the underlying OS. If the linear equation app is also a rootkit, I'll concede to being broken. :)
I need to be able to do this on Windows, OS X, and Linux (and maybe Android); but I recognize that it may not be the same solution on all platforms. So, how would you accomplish this (specify the platform you are focusing on, if appropriate)? I've done a lot of server-side development, but it's been way too many years since I've done any client-side development outside the browser and the world is very different today than it was then.
I think your question is a little confusing when it comes to talking about DifEQ vs LinearEQ.
It sounds to me like you are just looking for a routine way to verify that clients are authorized to connect. There is a lot to read up on in this subject. Common methods would be to use SSL certificates to verify the identity of clients. You can also tunnel over SSH, or use OAuth, etc., etc.
You'll have to do some more digging around the web to see what kind of authentication fits your scenario. You mention 'not spoofable'. I think that people generally end up compiling a certificate or private key into their application. This will stop all but the very dedicated and experienced hackers.
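For example, here is a minimal sketch of that idea using Node's built-in tls module (the file names are placeholders): the server only accepts clients whose certificate was signed by a CA you control, which is roughly the "compiled-in credential" approach described above.
const tls = require('tls');
const fs = require('fs');

const server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ca: [fs.readFileSync('client-ca.pem')], // the CA you used to sign approved clients
  requestCert: true,          // ask every client for a certificate
  rejectUnauthorized: true,   // refuse clients whose cert is not signed by that CA
}, (socket) => {
  socket.write('authorized DifEq client connected\n');
  socket.pipe(socket); // echo; a stand-in for the real request/response protocol
});

server.listen(8443);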
I need a simple way to use XMLHttpRequest as a way for a web client to access applications in an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate those to application calls.
The situation:
The web client using Ajax (ExtJS specifically) needs to send and receive asynchronously to an existing embedded application. This isn't just to have a thick client/thin server; the client needs to run background checks on the application status.
The application can expose a socket interface, with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML since it comes from a SQLite database.
In between the client and app is a lighttpd web server running something that somehow handles the translation. This something is the problem.
What I think I want:
Lighttpd can use FastCGI to route all XMLHttpRequest to an external process. This process will understand HTML/XML, and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive XMLHttpRequest, don't respond until the next notification is available).
C/C++. I'd really like to avoid installing Java/PHP/Perl on an embedded device, so I'll need a more low-level understanding.
How do I do this?
Are there good C++ libraries for interpreting the CGI headers and HTML so that I don't have to do any syntax processing, I can just deal with the request/response contents?
Are there any good references to exactly what goes on, server side, when handling the XMLHttpRequest and CGI interfaces?
Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch?
If I understand correctly, the way I would approach this problem is as a 3-tier design (don't get hung up on the 3-tier buzzword we've all heard about):
JavaScript (ExtJS) in the browser talks HTTP to the web server (lighttpd, Apache, ...) via Ajax, using XMLHttpRequest; raw (naked) or wrapped doesn't really matter.
Since the app on the embedded device can talk over a socket, the web server would use a socket to talk to the embedded device.
You can decide to put more business logic in the JavaScript, and keep the Apache/lighttpd code really thin so it won't time out.
In this way, you can leverage all the technologies you're already familiar with: Ajax between tiers 1 and 2 is nothing new, and sockets between tiers 2 and 3.
I did not mean that you did not know sockets. I just proposed a way to take a description of a problem where I hear a lot of words (XML/HTML/Ajax/XmlHttpRequest/Java/PHP/Perl/C++/CGI and more) and simplify it into smaller, better understood problems. Let me clarify:
If you want to ultimately retrieve data from the embedded device and render it in the browser, then have the browser make a request to the web server, and have the web server use a socket to talk to the embedded device. How the data is passed between browser and server is normal HTTP, no more, no less. Same thing between web server and embedded device, except over a socket instead of HTTP.
So just take a simple problem, like adding two numbers, except that the two input numbers get passed to the web server, and the web server then passes them to the embedded device, where the addition is carried out. The result gets passed back to the web server, and back to the browser for rendering. If you can do that much, you can already make the data flow everywhere you want to.
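A browser-side sketch of that round trip might look like this (the /add endpoint is made up; the web server behind it would forward the two numbers to the embedded device over its socket and write back the result):
var xhr = new XMLHttpRequest();
xhr.open('POST', '/add', true); // asynchronous
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    console.log('result from the device: ' + xhr.responseText);
  }
};
xhr.send('a=2&b=3');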
How to parse the data depends on how you design the structure of the data, which might include a container that wraps a payload.
"... whatever HTTP is coming to the server into usable bits of information, and generate the proper HTTP response"
...but that's not any different than how you handle the HTTP request on the server using your server-side language.
...how to implement a backend process in C/C++, instead of installing a package like PHP
If the embedded device is programmed in C/C++, you are required to know how to do socket programming in C/C++. On your web server, you also have to know how to do socket programming, except that it will be in your server-side language.
Hope this helps.
Of course I am aware of Ajax, but the problem with Ajax is that the browser has to poll the server frequently to find out whether there is new data. This increases server load.
Is there any better method (even using Ajax) other than polling the server frequently?
Yes, what you're looking for is COMET http://en.wikipedia.org/wiki/Comet_(programming). Other good Google terms to search for are AJAX-push and reverse-ajax.
Yes, it's called Reverse Ajax or Comet. Comet is basically an umbrella term for different ways of opening long-lived HTTP requests in order to push data in real-time to a web browser. I'd recommend StreamHub Push Server; they have some cool demos and it's much easier to get started with than any of the other servers. Check out the Getting Started with Comet and StreamHub Tutorial for a quick intro. You can use the Community Edition, which is available to download for free but is limited to 20 concurrent users. The commercial version is well worth it for the support alone, plus you get SSL and Desktop .NET & Java client adapters. Help is available via the Google Group, there's a good bunch of tutorials on the net, and there's a GWT Comet adapter too.
Nowadays you should use WebSockets.
This is a 2011 standard that allows you to initiate connections over HTTP and then upgrade them to bidirectional, message-based client-server communication.
You can easily initiate the connection from javascript:
var ws = new WebSocket("ws://your.domain.com/somePathIfYouNeed?args=any");
ws.onmessage = function (evt) {
    var message = evt.data;
    // decode the message (with JSON or something) and do whatever is needed
};
The server-side handling depends on your technology stack.
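For instance, if the stack happens to be Node.js, a minimal server-side sketch using the 'ws' package could look like this (just one of many possible stacks):
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    console.log('received: %s', data);
  });
  // Push data whenever it becomes available; no client polling needed.
  socket.send(JSON.stringify({ hello: 'client' }));
});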
Look into Comet (a spoof on the fact that Ajax is a cleaning agent and so is Comet), which is basically "reverse Ajax." Be aware that this requires a long-lived server connection for each user receiving notifications, so think about the performance implications when writing your app.
http://en.wikipedia.org/wiki/Comet_(programming)
Comet is definitely what you want. Depending on your language/framework requirements, there are different server libraries available. For example, WebSync is an IIS-integrated comet server for ASP.NET/C#/IIS developers, and there are a bunch of other standalone servers as well if you need tighter integration with other languages.
I would strongly suggest investing some time in Comet, but I don't know of an actual implementation or library you could use.
For a sort of "call center control panel" web app that involved updating agent and call-queue status for a live call center, we developed an in-house solution that works, but it is far from being a library you could just use.
What we did was implement a small service on the server that talks to the phone system, waits for new events, and maintains a photograph of the current situation. This service provides a small web server.
Our web clients connect over HTTP to this web server and ask for the latest photo (encoded in XML), display it, and then ask again for a new photo. The web server at this point can:
Return the new photo, if there is one
Block the client for some seconds (30 in our setup), waiting for some event to occur and change the photograph. If no event was generated in that time, it returns the same photo, just to let the connection stay alive and not time out the client.
This way, when a client polls, it gets a response within 0 to 30 seconds at most. If a new event was already generated, it gets it immediately; otherwise the request blocks until a new event is generated.
It's basically polling, but somewhat smart polling that doesn't overheat the web server. If Comet is not your answer, I'm sure this could be implemented using the same idea, but making more extensive use of AJAX or encoding in JSON for better results. This was designed in the pre-AJAX era, so there is a lot of room for improvement.
If someone can provide an actual lightweight implementation of this, great!
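For what it's worth, the client side of the loop described above can be sketched in a few lines of plain JavaScript (the /photo endpoint and the render() function are placeholders; the server holds each request open for up to ~30 seconds):
function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/photo', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      if (xhr.status === 200) {
        render(xhr.responseText); // update the panel with the latest snapshot
      }
      poll(); // ask again, whether the request returned new data or just timed out
    }
  };
  xhr.send();
}
poll();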
An interesting alternative to Comet is to use sockets in Flash.
Yet another, standard, way is SSE (Server-Sent Events, also known as EventSource, after the JavaScript object).
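The browser side of SSE is about as small as it gets (a sketch; the /events URL is made up, and the server has to respond with the text/event-stream content type):
var source = new EventSource('/events');
source.onmessage = function (event) {
  // Each "data: ..." line the server pushes arrives here, no polling required.
  console.log('pushed from server: ' + event.data);
};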
Comet was actually coined by Alex Russell from the Dojo Toolkit ( http://www.dojotoolkit.org ). Here is a link to more information: http://cometdproject.dojotoolkit.org/
There are other methods. Not sure if they are "better" in your situation. You could have a Java applet that connects to the server on page load and waits for stuff to be sent by the server. It would be quite a bit slower on start-up, but would allow the browser to receive data from the server on an infrequent basis, without polling.
You can use a Flash/Flex application on the client with BlazeDS or LiveCycle on the server side. Data can be pushed to the client using an RTMP connection. Be aware that RTMP uses a non-standard port, but you can easily fall back to polling if the port is blocked.
It's possible to achieve what you're aiming at through the use of persistent HTTP connections.
Check out the Comet article over at Wikipedia; that's a good place to start.
You're not providing much info, but if you're looking at building some kind of event-driven site (à la Digg Spy) or something along those lines, you'll probably be looking at implementing a hidden IFRAME that connects to a URL where the connection never closes, and then pushing script tags from the server to the client in order to perform the updates.
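A rough sketch of that hidden-IFRAME ("forever frame") technique, with made-up endpoint and handler names: the server keeps /stream open and dribbles out script tags that call back into the parent page.
// Handler that the server-emitted script tags will call.
window.handleUpdate = function (data) {
  console.log('pushed via iframe: ' + data);
};

var frame = document.createElement('iframe');
frame.style.display = 'none';
// The server never closes this response; it periodically writes chunks like
// <script>parent.handleUpdate("...");</script>
frame.src = '/stream';
document.body.appendChild(frame);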
Might be worth checking out Meteor Server, which is a web server designed for Comet. It has a nice demo and it is also used by Twitterfall.
Once a connection to the server is opened, it can be kept open and the server can push content. A long while ago I did this using multipart/x-mixed-replace, but it didn't work in IE.
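A rough Node.js sketch of that multipart/x-mixed-replace approach (keeping in mind the caveat above that IE never supported it):
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'multipart/x-mixed-replace; boundary=frame',
  });
  // Keep the response open and replace its body with a new part every second.
  const timer = setInterval(() => {
    res.write('--frame\r\n');
    res.write('Content-Type: text/plain\r\n\r\n');
    res.write('server time: ' + new Date().toISOString() + '\r\n');
  }, 1000);
  req.on('close', () => clearInterval(timer));
}).listen(8080);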
I think you can do clever stuff with polling that makes it work more like push, for example by not responding immediately when the content is unchanged and instead leaving the connection open, but I've never done this.
You could try out our Comet Component - though it's extremely experimental...!
Please check out this library, https://github.com/SignalR/SignalR, to learn how to push data to clients dynamically as it becomes available.
You can also look into Java Pushlets if you are using JSP pages.
Might want to look at ReverseHTTP also.