Fast File transfer - zeromq

What are the different options available for fast file transfer? I know "fast" is a relative term. Basically, one of our components (TIBCO) is used to transfer files (as large as 500 MB) from our local folder to a client folder (B2B zone). This component internally uses the SFTP protocol, and we found the implementation quite slow (it takes 3-4 hours).
What are the alternatives to it ?
1) Can ZeroMQ be used to transfer the files?
2) TIBCO has one more product, Managed File Transfer, which internally uses the RocketStream protocol (http://support.rocketstream.com/docs/RS12/server/index.html?the_rocketstream_protocols.html) and which they claim is super fast.
3) Can Apache Camel (its FTP component) be used in such cases?
Latency and security are the main concerns.

Have a look at the ZeroMQ FileMQ project; it may fit your needs (https://github.com/zeromq/filemq).
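For a sense of what this looks like at the socket level, here is a minimal sketch of chunked transfer over a ZeroMQ PUSH/PULL pipeline, assuming the Java binding (JeroMQ) and a made-up endpoint and chunk size. FileMQ itself adds credit-based flow control, resume and directory synchronization on top of this idea, so treat this only as an illustration of the transport, not as FileMQ's protocol.

```java
// Minimal sketch of chunked file transfer over ZeroMQ (JeroMQ), assuming a
// PUSH/PULL pipeline. Endpoint and chunk size are placeholders.
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

import java.io.FileInputStream;
import java.io.IOException;

public class ChunkSender {
    static final int CHUNK_SIZE = 256 * 1024;           // hypothetical chunk size

    public static void main(String[] args) throws IOException {
        try (ZContext ctx = new ZContext();
             FileInputStream in = new FileInputStream(args[0])) {
            ZMQ.Socket push = ctx.createSocket(SocketType.PUSH);
            push.connect("tcp://client-host:6000");      // placeholder endpoint

            byte[] buf = new byte[CHUNK_SIZE];
            int read;
            while ((read = in.read(buf)) > 0) {
                byte[] chunk = new byte[read];
                System.arraycopy(buf, 0, chunk, 0, read);
                push.send(chunk, 0);                     // one chunk per message
            }
            push.send(new byte[0], 0);                   // empty message = end of file
        }
    }
}
```

Note that a bare PUSH socket like this has no flow control: if the receiver is slower than the sender, messages pile up in memory, which is exactly the problem FileMQ's credit-based scheme is designed to solve.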

Related

Nifi processor batch insert - handle failure

I am currently in the process of writing an Elasticsearch NiFi processor. Individual inserts/writes to ES are not optimal; batching documents is preferred. What would be considered the optimal approach within a NiFi processor to track (batch) documents (FlowFiles) and, once a certain amount has accumulated, send them as a batch? The part I am most concerned about is when ES is unavailable (down, network partition, etc.) and the batch fails. The primary point of the question: given that NiFi has content storage for queuing/back-pressure, etc., is there a preferred method for using that to ensure no FlowFiles get lost if a destination is down? Maybe there is another processor I should look at for an example?
I have looked at the Mongo processor, Merge, etc. to try to get an idea of the preferred approach for batching inside of a processor, but can't seem to find anything specific. Any suggestions would be appreciated.
There is a good chance I am overlooking some basic functionality baked into NiFi. I am still fairly new to the platform.
Thanks!
Great question and a pretty common pattern. This is why we have the concept of a ProcessSession. It allows you to send zero or more things to an external endpoint and only commit once you know they have been ack'd by the recipient. In this sense it offers at-least-once semantics. If the protocol you're using supports two-phase-commit-style semantics, you can get pretty close to the ever elusive exactly-once semantics. Much of the detail of what you're asking about here will depend on the destination system's API and behavior.
There are some examples in the Apache codebase which show ways to do this. One way is to produce a merged collection of events prior to pushing to the destination system, depending on its API. I think PutMongo and PutSolr operate this way (though the experts on those would need to weigh in). An example that might be more like what you're looking for can be found in PutSQL, which operates on batches of FlowFiles sent in a single transaction (on the destination DB).
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutSQL.java
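As a rough illustration of that ProcessSession pattern (not the actual PutSQL code), a batching onTrigger() might look like the sketch below. The Elasticsearch bulk call is a placeholder, since the real client API is up to you; the point is that nothing is committed until the bulk request succeeds.

```java
// Rough sketch of batching inside onTrigger(); bulkIndex() is a hypothetical
// helper standing in for whatever ES client call you use.
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

import java.util.List;

public class PutElasticsearchBatch extends AbstractProcessor {
    static final Relationship REL_SUCCESS =
            new Relationship.Builder().name("success").build();

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session)
            throws ProcessException {
        // Pull up to 100 FlowFiles from the incoming queue as one batch.
        List<FlowFile> batch = session.get(100);
        if (batch.isEmpty()) {
            return;
        }
        try {
            // Placeholder: read each FlowFile's content and send one bulk request.
            bulkIndex(batch, session);
            session.transfer(batch, REL_SUCCESS);
            session.commit();            // only commit after the destination ack
        } catch (Exception e) {
            // Nothing is lost: roll back and the FlowFiles stay in the queue.
            session.rollback(true);      // true = penalize
            context.yield();
        }
    }

    private void bulkIndex(List<FlowFile> batch, ProcessSession session) {
        // Read content with session.read(...) and issue the ES bulk call (omitted).
    }
}
```

The key point is that session.get(...) plus rollback() means a failed bulk request leaves the FlowFiles exactly where they were in NiFi's repositories, so back-pressure and queuing keep working for you.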
Will keep an eye here, but you can get the eye of a larger NiFi group at users@nifi.apache.org
Thanks
Joe

Techniques/algorithms used in WAN optimization

What are the techniques/algorithms used in WAN optimization? I am looking for a reference that gives good theory supported with code examples. I have taken a look at the Steelhead manual from Riverbed, and I found the following main techniques used:
1) SDR (Scalable Data Referencing): breaks TCP data into unique data chunks, each with a reference number; when the same byte sequence occurs in a future transmission, only the reference number is sent across the WAN instead of the raw data chunk.
2) Connection pooling: the product creates pools of idle TCP connections (for HTTP, for example), so when a client tries to create a new connection to a previously visited target, it reuses one from the pool, which in turn avoids the TCP three-way handshake.
3) Protocol optimization: the product reduces the number of round trips over the WAN for common actions (opening/editing remote shared files/folders) and supports most of the relevant protocols: CIFS, MAPI, HTTP, etc.
4) Data compression.
Through my search I found three open source projects that aim to do WAN optimization:
TrafficSqueezer
WANProxy
OpenNOP
TrafficSqueezer seems to have more features, but the comments on its SourceForge page do not give a good sense of it. I tried to find a document within these projects with good info, but I couldn't.
The techniques that can reduce the traffic amount the most are of course compression and data deduplication. The WAN optimisers at both ends build up the same data store, based on an algorithm, in memory or on disk; as soon as the same traffic pattern appears again, the pattern is replaced with a pointer to the stored data plus a length. You can therefore save up to 99% when you transfer the same file twice, and even different files share a lot of common data where deduplication can optimise a lot.
(You will find a lot of sources on the web, e.g. http://www.computerweekly.com/feature/How-data-deduplication-works)
In your example this technique is called SDR.
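As a toy illustration of that dedup idea, the sender below hashes each chunk and sends only a reference when the peer is already known to have it. Real appliances use content-defined (variable-size) chunking and persistent dictionaries on both ends of the link, so this is only a sketch of the principle, with a made-up chunk size and wire format.

```java
// Toy dedup sender: send "REF <hash>" when the peer already has the chunk,
// otherwise "DATA <hash>" followed by the raw bytes (framing omitted).
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashSet;
import java.util.Set;

public class DedupSender {
    static final int CHUNK = 8 * 1024;                 // hypothetical fixed chunk size
    private final Set<String> known = new HashSet<>(); // hashes the peer already holds

    /** Returns the header to send; for "DATA ..." the caller also sends the raw chunk. */
    public String encodeChunk(byte[] chunk) throws NoSuchAlgorithmException {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        String hash = bytesToHex(sha.digest(chunk));
        if (known.contains(hash)) {
            return "REF " + hash;    // peer has it already: reference only
        }
        known.add(hash);
        return "DATA " + hash;       // first time: reference plus the raw bytes
    }

    private static String bytesToHex(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02x", x));
        return sb.toString();
    }
}
```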
Riverbed also has a lot of protocol support, which makes protocols such as CIFS, SMB and MAPI more tolerant of delay (e.g. many requests are buffered and sent at once, saving round trips).
F5 also offers FTP and HTTP optimisations to make those protocols perform better.
When there is a lot of delay on the WAN link, you can of course also save time with connection pooling, i.e. pre-established TCP sessions (you save the time that would otherwise be spent on the TCP three-way handshake).
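Connection pooling is simple to sketch: keep a few pre-established sockets to a frequently used target so a request can skip the handshake. Host, port and pool size below are made up for illustration.

```java
// Minimal connection pool: handshakes are paid up front, requests reuse sockets.
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ConnectionPool {
    private final BlockingQueue<Socket> idle = new ArrayBlockingQueue<>(8);
    private final String host;
    private final int port;

    public ConnectionPool(String host, int port) throws IOException {
        this.host = host;
        this.port = port;
        for (int i = 0; i < 4; i++) {
            idle.add(new Socket(host, port));   // three-way handshakes happen here, once
        }
    }

    /** Borrow a ready connection, or open a new one if the pool is empty. */
    public Socket acquire() throws IOException {
        Socket s = idle.poll();
        return (s != null && s.isConnected()) ? s : new Socket(host, port);
    }

    /** Return a still-usable connection to the pool, or close it if the pool is full. */
    public void release(Socket s) {
        if (!s.isClosed() && !idle.offer(s)) {
            try { s.close(); } catch (IOException ignored) { }
        }
    }
}
```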
So at a glance:
- data deduplication
- connection pooling
- compression
- protocol optimisation
I am sure you can find a lot in the F5 documentation (F5 WOM is the product); Blue Coat offers WAN optimisation as well, and of course Riverbed. Silver Peak might also be worth a try.
For the open source ones I only have experience with TrafficSqueezer, but so far its feature set is not comparable to the commercial products.

centralized / distributed sharing

I would like to make a system whereby users can upload and download files. The system will have a centralized topology but will rely heavily on peers to transfer relevant data through the central node to other peers. Instead of peers holding entire files, I would like them to hold a compressed and encrypted portion of the whole data set.
Some client uploads file to server anonymously
I would like for the client to be able to upload using some sort of NAT (random ip), realizing that the server would not be able to send confirmation packets back to the client. Is ensuring data integrity feasible with a header relaying the total content length, and disregarding the entire upload if there is a mismatch?
Server indexes, compresses and splits the data into chunks, adding identifying bytes to each chunk, encrypts them, and distributes the chunks over the network while mapping the location of each chunk.
The server will also update the file index for peers upon request. As more data is added to the system, I imagine that the compression can become more efficient. I would like to be able to push these new dictionary entries to peers so they can update both their chunks and the decompression system in the client software, without causing overt network strain. If encrypted, the chunks can be large without any client being aware of having part of x file.
Some client requests a file
The central node performs a lookup to determine the location of the chunks within the network and requests these chunks from peers. Once the chunks have been assembled, they are sent (still encrypted and compressed) to the client, who then translates the content into the decompressed file. It would be nice if an encrypted request could be made through a peer and relayed to a server, and onion routed through multiple paths with end-to-end encryption.
In the background, the server will be monitoring the stability and redundancy of the chunks, and if necessary will take on chunks that are near extinction, either holding them in its own bank or redistributing them over the network if there are willing clients. In this way, the central node can shrink and grow as appropriate.
The goal is to have a network within which any client can upload or download data with no single other peer knowing who has done either, but with free and open access to all.
The system must be able to handle a massive number of simultaneous connections while managing the peers and the data library without losing its head.
What would be your optimal implementation?
Edit : Bounty opened.
Over the weekend, I implemented a system that does basically the above, minus part 1. For the upload, I just implemented SSL instead of forging the IP address. The system is weak in several areas. Files are split into 1MB chunks and encrypted, and sent to registered peers at random. The recipient(s) for each chunk are stored in the database. I fear that this will quickly grow too large to be manageable, but I also want to avoid having to flood the network with chunk requests. When a file is requested, the central node informs peers possessing the chunks that they need to send the chunks to x client (in p2p mode) or to the server (in direct mode), which then transfers the file down. The system is just one big hack, and written in ruby, which I imagine is not really up to the task. For the rewrite, I am considering using C++ with Boost.Asio.
I am looking for general suggestions regarding architecture and system design. I am not at all attached to my current implementation.
Current Topology
Server handling client uploads, indexing, and content propagation
Server handling client requests
Client for uploading files and requesting files
Client-side server accepting chunks and requests
I would like the client not to have to run a persistent server, but I can't think of a good way around it.
I would post some of the code but it's embarrassing. Thanks. Please ask any questions; the basic idea is to have a decent anonymous file-sharing model combining the strengths of both the distributed and centralized models of content distribution. If you have a totally different idea, please feel free to post it.
I would like for the client to be able to upload using some sort of NAT (random ip), realizing that the server would not be able to send confirmation packets back to the client. Is ensuring data integrity feasible with a header relaying the total content length, and disregarding the entire upload if there is a mismatch?
No, that's not feasible. If your packets are 1500 bytes and you have 0.1% packet loss, the chance of a one-megabyte file being uploaded without any lost packets is 0.999 ^ (1048576 / 1500) = 0.497, or under 50%. Further, it's not clear how the client would even know whether the upload succeeded if the server has no way to send acknowledgements back to the client.
One way around the acknowledgement issue would be to use a rateless code, which allows the client to compute and send an effectively infinite number of unique blocks, such that any sufficiently large subset is enough to reconstruct the original file. This adds a large amount of complexity to both the client and server, however, and still requires some way to notify the client that the server has received the complete file.
It seems to me you're confusing several issues here. If your system has a centralized component to which your clients upload, why do you need to do NAT traversal at all?
For parts two and three of your question, you probably want to research Distributed Hash Tables and content-based addressing (but with major caveats explained here). Preventing the nodes from knowing the content of the files they store could be accomplished by, for example, encrypting the files with the first hash of their content, and storing them keyed by the second hash - this means that anyone who knows the hash of the file can retrieve it, but clients cannot decrypt the files they host.
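A minimal sketch of that scheme, assuming SHA-256 and AES-CBC: the encryption key is the hash of the plaintext and the storage key is the hash of that key, so hosts can store chunks they cannot read. The fixed IV and truncated key below are simplifications for illustration; a real design needs an authenticated mode and a proper IV convention.

```java
// Sketch of convergent encryption / content-based addressing, as described above.
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

public class ConvergentChunk {
    /** First hash: the decryption key, derived from the plaintext itself. */
    public static byte[] contentKey(byte[] plaintext) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(plaintext);
    }

    /** Second hash: identifies the chunk without revealing the decryption key. */
    public static byte[] storageKey(byte[] contentKey) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(contentKey);
    }

    public static byte[] encrypt(byte[] plaintext, byte[] contentKey) throws Exception {
        SecretKeySpec key = new SecretKeySpec(contentKey, 0, 16, "AES");   // AES-128 from first 16 bytes
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(new byte[16])); // fixed IV: illustration only
        return c.doFinal(plaintext);
    }
}
```

Retrieval is then a lookup by storageKey(contentKey(plaintext)): anyone who knows the content key can fetch and decrypt the chunk, but a host that only knows the storage key cannot read what it stores.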
In general, I would suggest starting by writing down a solid list of goals for the system you're designing, then looking for an architecture that suits those goals. In contrast, it sounds like you have some implicit goals, and have already picked a basic system architecture - which may not suit your full goals - based on that.
Sorry for arriving late to the generous 500-reputation party, but even if I am too late I would like to add a little of my research to your discussion.
Yes, such a system would be nice: like BitTorrent, but with encrypted files and hashes of the unencrypted data. In BitTorrent you can of course add encrypted files, but then the hashes would be of the encrypted data, and thus it would not be possible to identify retrieval sources without a centralized queryKey->hashCollection storage, i.e. a server that does all the work of identifying package sources for every client. A similar system was attempted by Freenet (http://freenetproject.org/), although it is more limited than what you attempt.
For the NAT consideration, let's first look at aClient -> yourServer (and aClient -> aClient later).
For the communication between a client and your server, the NATs (and the firewalls that shield the clients) are not an issue. Since the clients initiate the connection to your server (which has either a fixed IP address or a DNS entry (or DynDNS)), you don't even have to think about NATs; the server can respond without an issue because, even if multiple clients are behind a single corporate firewall, the firewall (its NAT) will look up which client the server wants to communicate with and forward accordingly (without you having to tell it to).
Now the "hard" part: client -> client communication through firewalls/NAT. The central technique you can use is hole punching (http://en.wikipedia.org/wiki/UDP_hole_punching). It works so well that it is what Skype uses (from one client behind a corporate firewall to another; if it does not succeed it falls back to a mirroring server). For this you need both clients to know the address of the other and then shoot some packets at each other. So how do they get each other's addresses? Your server gives the addresses to the clients (this requires that not only a requester but also every distributor open a connection to your server periodically).
Before I talk about your concern about data integrity, there is a general distinction between packages and packets that you could (and I guess should) make:
You can separate your (application-domain) packages (large) from the packets used for internet transmission (small, limited by the MTU among other things). It would be slow to have both be the same size; 576 bytes is commonly taken as the safe maximum size of an IP packet (minus overhead; take a look here: http://www.comsci.us/datacom/ippacket.html). You could do some experiments to find a good size for your packages; my best guess is that anything from 50 KB to 1 MB would be fine (but profiling would optimize that, since we don't know whether most of the files you want to distribute are large or small).
About data integrity: for your packages you definitely need a hash, and I would recommend using a cryptographic hash directly, since this prevents tampering (in addition to corruption). You don't need to record the size of the packages, since if the hash is wrong you have to re-transmit the package anyway. Bear in mind that this kind of package corruption is not very frequent if you use TCP/IP for packet transmission (yes, you can use TCP/IP even in your scenario); it automatically corrects (re-requests) transmission errors. The huge advantage is that all computers and routers between the source and destination know TCP/IP and check for corruption automatically at every step, so they can re-request a packet themselves, which makes it very fast. They would not know about a packet-integrity protocol you implement yourself, so with a custom protocol the packet has to arrive at the destination before the re-request can even start.
For the next thought, let's call the client which publishes a file the "publisher". I know this is kind of obvious; however, it is important to distinguish this from "uploader", since the client does not need to upload the file to your server (just some info about it, see below).
Implementing the central indexing server should be no problem. The real issue is that you plan to have it encrypt all the files itself instead of making the publisher do that heavy work (good encryption is very heavy lifting). The only problem with having the publisher (not the server) encrypt the data is that you have to trust the publisher to give you reasonable search keywords: theoretically it could attach a very attractive search keyword that every client desires to a reference to bogus data (encrypted data is hard to distinguish from random data). The solution to this problem is crowd-sourcing: make your server store a user rating so downloaders can vote on files.
The table you need could be implemented as a regular old hash table from individual search keywords to the IDs of clients (see below) who hold that package. The publisher is at first the only client that holds the data, but every client that has downloaded at least one of the packages should then be added to the hash table's entry, so if the publisher goes offline and every package has been downloaded by at least one client, everything continues working. Now, critically, the mapping client ID -> IP address is non-trivial because it changes often (e.g. every 24 hours for many clients); to compensate, you need another table on your server that holds this mapping, and the clients must contact the server periodically (e.g. every hour) to report their IP address. I would recommend using a crypto hash for client IDs so that it is impossible for one client to trash this table by feeding you fake IDs.
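As a sketch of those two tables (keyword -> client IDs, client ID -> last known address), something like the class below would do as a starting point; the names and the heartbeat idea are illustrative only.

```java
// Toy index server state: who holds which package, and where each client was last seen.
import java.net.InetSocketAddress;
import java.time.Instant;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IndexServer {
    static final class ClientRecord {
        InetSocketAddress address;
        Instant lastSeen;
    }

    // keyword -> client IDs (crypto-hash strings) that hold matching packages
    private final Map<String, Set<String>> keywordIndex = new ConcurrentHashMap<>();
    // client ID -> last reported address, refreshed periodically by the client
    private final Map<String, ClientRecord> clients = new ConcurrentHashMap<>();

    public void registerHolder(String keyword, String clientId) {
        keywordIndex.computeIfAbsent(keyword, k -> ConcurrentHashMap.newKeySet())
                    .add(clientId);
    }

    public void heartbeat(String clientId, InetSocketAddress addr) {
        ClientRecord rec = clients.computeIfAbsent(clientId, id -> new ClientRecord());
        rec.address = addr;
        rec.lastSeen = Instant.now();
    }

    public Set<String> holdersOf(String keyword) {
        return keywordIndex.getOrDefault(keyword, Set.of());
    }
}
```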
For any questions and criticism, please comment.
I am not sure having one central point of attack (the central server) is a very good idea. That of course depends on the kinds of problems you want to be able to handle. It also limits your scalability a lot.

What serial file transfer protocol to use?

I'm looking for some input on which file transfer protocol to use over a serial line. I want to be able to transfer files of up to 200 Mb over a serial line (RS232) in both directions, but only one of the machines needs to be able to initiate the get/put (think master-slave).
The protocol also needs to be:
Easy/simple to implement since I would need to write both client and server myself (limited, embedded hardware)
Fairly robust, fault checking/recovery etc
At least somewhat standardized, in case I need to get a third party to implement it on some other hardware
Kermit? TFTP? Simplest possible home brew? What do you think?
In the beginning was Xmodem, which was very simple to implement. Chuck Forsberg looked at Xmodem and decided it was inefficient, so he begat Ymodem, but its implementations were buggy, and both Xmodem and Ymodem were superseded by Zmodem.
Kermit followed later. Kermit would probably be the "standard" way to implement this. Do you have access to Kermit libraries that will run on your embedded platform? If not, I would probably consider one of the other options.
If ease of implementation is your primary concern then Xmodem wins hands down.
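To show why Xmodem is so easy to implement, here is a sketch of its classic 132-byte frame: SOH, block number, the block number's one's complement, 128 data bytes, and a one-byte checksum. The receiver answers ACK or NAK, and a corrupted block is simply resent.

```java
// Build one classic (checksum-mode) XMODEM block.
public class XmodemFrame {
    static final byte SOH = 0x01, EOT = 0x04, ACK = 0x06, NAK = 0x15;
    static final int DATA_SIZE = 128;

    static byte[] buildBlock(int blockNumber, byte[] data) {
        byte[] frame = new byte[3 + DATA_SIZE + 1];
        frame[0] = SOH;
        frame[1] = (byte) blockNumber;
        frame[2] = (byte) (255 - blockNumber);          // one's complement of the block number
        int checksum = 0;
        for (int i = 0; i < DATA_SIZE; i++) {
            byte b = i < data.length ? data[i] : 0x1A;  // pad short final blocks with SUB
            frame[3 + i] = b;
            checksum += b & 0xFF;
        }
        frame[3 + DATA_SIZE] = (byte) (checksum & 0xFF); // simple sum-mod-256 checksum
        return frame;
    }
}
```

The more robust variants (CRC-16 Xmodem, Ymodem, Zmodem) mostly change the check field and block size; the framing idea stays this simple.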

High speed UDP receiver in MATLAB

I'd like to implement the receiving end of my system in MATLAB - this requires Gigabit Ethernet with sustained speeds of over 200Mb/sec.
Using MATLAB's built-in UDP from the Instrument Control Toolbox does not appear to be sufficient. Are there any good alternatives?
If you know Java, you can write the networking part of your code in Java classes, load those into your MATLAB session with javaclasspath(), and invoke them from M-code. This transforms the problem from getting the data through MATLAB's udp() function into getting the data across the Java/MATLAB boundary.
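A minimal sketch of such a Java class is below; the class and method names are made up for illustration, and the main point is asking the OS for a large receive buffer so datagrams are not dropped at sustained 200 Mb/s rates.

```java
// Plain Java UDP receiver intended to be placed on the MATLAB javaclasspath.
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class FastUdpReceiver {
    private final DatagramSocket socket;

    public FastUdpReceiver(int port) throws Exception {
        socket = new DatagramSocket(port);
        socket.setReceiveBufferSize(8 * 1024 * 1024);   // request a large OS receive buffer
    }

    /** Blocks until one datagram arrives and returns its payload for M-code. */
    public byte[] receive(int maxLength) throws Exception {
        DatagramPacket packet = new DatagramPacket(new byte[maxLength], maxLength);
        socket.receive(packet);
        byte[] out = new byte[packet.getLength()];
        System.arraycopy(packet.getData(), 0, out, 0, out.length);
        return out;
    }

    public void close() {
        socket.close();
    }
}
```

From M-code you would then add the compiled class with javaclasspath, construct the object, and call receive() in a loop; the returned byte[] arrives in MATLAB as an int8 array.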
If the data can be put into batches: use an external program to download the data to your computer and save it to a file. Then MATLAB can read from that file whenever it needs more data. That way you partition the problem into two manageable pieces, and if you're using a decent OS the file will never leave RAM, so you won't have to worry about speed.
There's a very good example of a Java UDP implementation on the MathWorks site (link below):
http://www.mathworks.com/matlabcentral/fileexchange/24525-a-simple-udp-communications-application/content/judp.m

Resources