This is somewhat of a theoretical question; however, I need to add file-sharing capabilities to my WebSocket-powered chat application. I could use a service like Amazon S3 and share a file by posting a link to it, but that means uploading a file that may already be accessible over the local network (sharing a file between co-workers, for example).
So I had the idea that it might be possible to tunnel the upload/download/transfer through the already existing WebSocket connection. However, I don't know enough about HTTP file transfer to know how to implement it. Is there a limitation of WebSockets that would prevent this from being possible?
I'm using Ruby and EventMachine for my current WebSocket implementation. If you could provide a high-level overview to get me started, that would be very much appreciated.
Here's an example of a project that uses only WebSockets and the JavaScript File API for transferring files: http://www.github.com/thirtysixthspan/waterunderice
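The basic flow in that approach is: read the file in the browser with the File API, slice it into chunks, and send each chunk as a binary frame over the WebSocket you already have, reassembling on the other end. As a rough sketch of the receiving half, here it is in Python with the `websockets` library rather than the asker's Ruby/EventMachine stack; the "EOF:<name>" end-of-file marker is an invented convention for the example, not part of any protocol:

```python
# Sketch: reassemble a file sent as binary WebSocket frames (Python stand-in,
# not the asker's Ruby/EventMachine server). A text frame "EOF:<name>" is an
# invented convention that marks the end of the file.
import asyncio
import os
import websockets

async def handle(ws):  # websockets >= 10.1; older versions also pass a `path` argument
    chunks = []
    async for message in ws:
        if isinstance(message, bytes):
            chunks.append(message)                     # each binary frame is one file chunk
        elif message.startswith("EOF:"):
            filename = os.path.basename(message.split(":", 1)[1])
            with open(filename, "wb") as f:
                f.write(b"".join(chunks))
            await ws.send(f"stored {filename} ({sum(len(c) for c in chunks)} bytes)")
            chunks = []

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()                         # run until cancelled

asyncio.run(main())
```

Nothing about WebSockets prevents this; binary frames are exactly what they are for, although you may want to cap chunk sizes so a large transfer doesn't monopolize the connection your chat messages share.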
To share files without uploading them to the server at all (e.g. between co-workers), you can now use the WebRTC DataChannel API to create a peer-to-peer connection.
Related
I have a Concox GT06 device from which I want to send tracking data to my AWS server.
The coding protocol manual that comes with it only explains the data structure and protocol.
How does my server receive the GPS data collected by my tracker?
Verify that your server allows you to open sockets; most low-cost solutions do NOT allow this for security reasons (I recommend using an Amazon EC2 virtual machine as your platform).
Choose a port on which your application will listen for incoming data, verify that it is open (if not, open it), and code your application (I use C++) to listen on that port.
Compile and run your application on the server (and make sure that it stays alive).
Configure your tracker (usually by sending an SMS to it) to send data to your server's IP and to the port on which your application is listening.
If you are, as I suspect, just beginning, expect to invest two to three weeks to develop this solution from scratch. You might also consider looking for a pre-developed tracking platform, which may or may not be acceptable in terms of data security.
You can find examples and tutorials online. I am usually very open with my coding and would gladly send a copy of the socket server, but in this case, for security reasons, I cannot do so.
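Purely as a generic illustration of the "listen on a port" part (this is not the answerer's C++ server; the port number is arbitrary, and decoding the GT06 frames per the protocol manual is left out), a minimal listener in Python could look like this:

```python
# Generic TCP listener sketch for a GPS tracker (not the answerer's C++ code).
# It only accepts connections and prints the raw bytes; parsing the GT06 frames
# (login, heartbeat, location packets) has to follow the device's protocol manual.
import socket

HOST, PORT = "0.0.0.0", 5023   # arbitrary port; open it in your firewall / EC2 security group

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"listening on {HOST}:{PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            print("tracker connected from", addr)
            while True:
                data = conn.recv(1024)
                if not data:
                    break
                print("received", data.hex())   # raw GT06 frame bytes arrive here
```

Run it under a process supervisor (systemd, supervisor, etc.) so it stays alive, and point the tracker's SMS configuration at this IP and port.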
Instead of parsing TCP or UDP packets directly, you can use a simpler solution and put a middleware backend specialized in data parsing, e.g. flespi, in between.
With this approach you can use an HTTP REST API to fetch each new portion of data that the trackers send to your dedicated IP:port (called a channel), or even send standardized commands to the connected devices over HTTP REST.
At the same time, it is possible to open an MQTT connection using standard libraries and receive the device messages, already converted into JSON, in real time, which is even better than REST because of its near-zero latency.
If you are using Python, take a look at the open-source flespi_receiver library. With this approach, about 10 lines of code on your EC2 instance will give you the Concox GT06 messages fully parsed into JSON.
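As a sketch of the MQTT side of that approach: the broker host, token, topic, and field names below are placeholders rather than the middleware's real values (take those from flespi's documentation or the flespi_receiver library mentioned above); the point is only that a standard client such as paho-mqtt is enough to receive the parsed JSON messages in real time:

```python
# Sketch: subscribe to parsed tracker messages over MQTT with paho-mqtt
# (1.x callback style). BROKER, TOKEN, TOPIC and the field names are
# placeholders -- take the real values from the middleware's documentation.
import json
import paho.mqtt.client as mqtt

BROKER = "mqtt.example.com"        # placeholder broker host
TOKEN = "YOUR_API_TOKEN"           # placeholder authentication token
TOPIC = "channels/+/messages"      # placeholder topic for parsed device messages

def on_connect(client, userdata, flags, rc):
    print("connected, result code", rc)
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    message = json.loads(msg.payload)      # payload assumed to be one JSON object
    print(message.get("latitude"), message.get("longitude"))

client = mqtt.Client()                     # paho-mqtt 2.x also wants a callback API version argument
client.username_pw_set(TOKEN)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```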
While trying to set up a streaming server on my Raspberry Pi, the instructions seem to consist of just installing an FTP server.
This made me wonder: what decides whether a file stored on the FTP server is downloaded or streamed?
In other words, is the choice of downloading or streaming dependent on the client side and not the server side?
If using FTP, streaming is implemented client-side using the REST command (restart position), as explained at How does a FTP server resume a download? and (in more detail) at http://cr.yp.to/ftp/retr.html .
Your server therefore needs to allow the REST verb (most do by default). Throttling (flow control) is also managed client-side.
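To see the REST command from the client side, Python's built-in ftplib exposes it through the rest argument of retrbinary; the host, credentials, and file path below are made up:

```python
# Sketch: start reading an FTP file at an arbitrary byte offset.
# ftplib sends "REST <offset>" before "RETR" when the rest argument is given.
# Host, credentials and path are placeholders.
from ftplib import FTP

ftp = FTP("ftp.example.com")
ftp.login("user", "password")

offset = 1_000_000   # seek to byte 1,000,000 instead of the start of the file
with open("clip.partial", "wb") as out:
    ftp.retrbinary("RETR videos/clip.mp4", out.write, blocksize=64 * 1024, rest=offset)

ftp.quit()
```

A player that "streams" over FTP is essentially doing this: requesting data from whatever offset it needs next and decoding the bytes as they arrive.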
Long story:
This mechanism is similar to the strategy used by HTTP. Streaming, however, is a broad subject, and there are other approaches to it. Some protocols provide extra verbs to signal other events, like changes of bandwidth/resolution, to account for unstable connections (as in videoconferencing / desktop-sharing protocols). Some are more suitable for live broadcasting and others for buffered/stored video.
Nowadays, most streaming players, like YouTube, are web based and built on top of the HTTP protocol. Streaming is achieved using the HTTP Range header and by dividing the media into chunks that can be retrieved separately, as explained in this magnificent video: https://www.youtube.com/watch?v=OqQk7kLuaK4 .
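The HTTP version of the same idea is easy to try with a Range request; here is a small sketch using the requests library against a placeholder URL (a server that supports ranges replies with 206 Partial Content):

```python
# Sketch: fetch one chunk of a media file over HTTP with the Range header.
# The URL is a placeholder; range-capable servers answer 206 Partial Content.
import requests

url = "https://media.example.com/video.mp4"
resp = requests.get(url, headers={"Range": "bytes=0-1048575"})   # first 1 MiB only

print(resp.status_code)                    # 206 if the server honoured the range
print(resp.headers.get("Content-Range"))   # e.g. "bytes 0-1048575/734003200"
with open("video.part", "wb") as f:
    f.write(resp.content)
```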
I need to be able to continuously receive calls while a Chrome webpage is open. How do I do that, even for users who are inside a strict enterprise network?
WebSockets? (But there's the problem of proxies that don't know what wss:// is.)
HTTP? (But will I have to poll?)
Other?
Since you included the "vLine" tag, I'll reply with some information on how our WebRTC platform will behave in an enterprise network. vline.js will use a secure WebSocket by default if the browser supports it and fall back to HTTPS long polling. As described here, the secure WebSocket may work depending on the exact proxy configuration. Feel free to test it out by using GitTogether or creating your own vLine service for testing.
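The general pattern (try a secure WebSocket first, drop back to HTTPS long polling if the proxy interferes) looks roughly like the sketch below; this is not vline.js itself, just an illustration in Python with placeholder endpoints:

```python
# Sketch of the "secure WebSocket first, HTTPS long polling as fallback" pattern.
# Not vline.js internals; both endpoints are placeholders.
import asyncio
import requests
import websockets

WS_URL = "wss://signaling.example.com/socket"
POLL_URL = "https://signaling.example.com/poll"

async def websocket_listen():
    # Preferred path: wss:// is TLS on port 443, which many proxies pass through.
    async with websockets.connect(WS_URL) as ws:
        async for message in ws:
            print("ws event:", message)

def long_poll():
    # Fallback path: each HTTPS request blocks until the server has an event to deliver.
    while True:
        resp = requests.get(POLL_URL, timeout=60)
        if resp.ok and resp.text:
            print("polled event:", resp.text)

try:
    asyncio.run(websocket_listen())
except Exception as exc:
    print(f"WebSocket failed ({exc}); falling back to HTTPS long polling")
    long_poll()
```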
I want to implement a P2P photo-sharing application. The scenario is like this:
A is online and he would like to share his photos with B. Through some server, B gets A's IP address and accesses A's photos directly.
Is it possible to implement this using WebRTC or WebSockets? Please give me some input.
Thanks
I implemented P2P file transfer over WebSockets with a very small Node.js server, but it only works well in Chrome, thanks to the "download" attribute. I also have to run a middleware server, so it's not STRICTLY P2P. My Node.js server never has the full file; it just relays the current file chunk from one WebSocket connection to the other.
I hope WebRTC will help you implement what you want more smoothly, without any middleware.
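To make the relay idea concrete, here is the same pattern sketched in Python with the `websockets` library (not the answerer's Node.js server; the "first message names the room" convention is invented for the example). Each binary chunk from one client is forwarded straight to its peer, so the relay never holds a complete file:

```python
# Sketch of a chunk-relay WebSocket server (Python stand-in for the Node.js relay
# described above). The first text frame names a room -- an invented convention --
# and every later binary frame is forwarded to the other client in that room.
import asyncio
import websockets

rooms = {}   # room name -> set of connected websockets

async def relay(ws):  # websockets >= 10.1; older versions also pass a `path` argument
    room = await ws.recv()              # first message: room name (text)
    peers = rooms.setdefault(room, set())
    peers.add(ws)
    try:
        async for chunk in ws:          # later messages: file chunks (binary)
            for peer in peers:
                if peer is not ws:
                    await peer.send(chunk)
    finally:
        peers.discard(ws)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 9000):
        await asyncio.Future()          # run until cancelled

asyncio.run(main())
```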
I'm trying to develop an extension that detects every connection made by the browser to figure out the URLs being accessed. I know that this is possible via writing an HTTP/SOCKS proxy and configuring the browser to flow traffic via that. However, that's kind of overkill for the application that I'm trying to develop and it's best done as a Firefox Add-on if that's possible. Any clues/pointers would be highly appreciated.
Use nsIHttpActivityDistributor; it exposes a lot of information about the HTTP transaction and socket transport through the observeActivity callback.
Read the official documentation https://developer.mozilla.org/en/Monitoring_HTTP_activity.