Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
I had some confusion regarding MPI, sockets and TCP/IP.
Are all three of these communication protocols that can make use of different interconnects like InfiniBand and Ethernet, or are they something else?
Sorry for the trouble if the question sounds naive, but I really get confused by these three terms.
TCP/IP is a family of networking protocols. IP is the lower-level protocol that's responsible for getting packets of data from place to place across the Internet. TCP sits on top of IP and adds virtual circuit/connection semantics. With IP alone you can only send and receive independent packets of data that are not organized into a stream or connection. It's possible to use virtually any physical transport mechanism to move IP packets around. For local networks it's usually Ethernet, but you can use anything. There's even an RFC specifying a way to send IP packets by carrier pigeon.
Sockets is a semi-standard API for accessing the networking features of the operating system. Your program can call various functions named socket, bind, listen, connect, etc., to send/receive data, connect to other computers, and listen for connections from other computers. You can theoretically use any family of networking protocols through the sockets API--the protocol family is a parameter that you pass in--but these days you pretty much always specify TCP/IP. (The other option that's in common use is local Unix sockets.)
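The socket/bind/listen/connect sequence described above can be sketched in Python, whose `socket` module is a thin wrapper over the same OS API. This is a minimal illustration, not production code: a one-shot echo server on loopback, with the client and server in the same process via a thread.

```python
import socket
import threading

def run_echo_server(state):
    """Accept one connection and echo one message back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP over IPv4
    srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
    srv.listen(1)                         # start accepting connections
    state["port"] = srv.getsockname()[1]
    state["ready"].set()
    conn, _addr = srv.accept()            # blocks until a client connects
    data = conn.recv(1024)
    conn.sendall(data)                    # echo it back over the connection
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
t = threading.Thread(target=run_echo_server, args=(state,))
t.start()
state["ready"].wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", state["port"]))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply)  # b'hello'
```

Note that the program never touches IP packets or TCP segments directly; it asks for a stream connection and the kernel's TCP/IP stack does the rest.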
MPI is an API for passing messages among processes running on a cluster of servers. MPI is higher level than both TCP/IP and sockets. It can theoretically use any family of networking protocols, and if it's using TCP/IP or some other family that's supported by the sockets API, then it probably uses the sockets API to communicate with the operating system.
If the purpose behind your question is to decide how you should write a parallel programming application, you should probably not be looking at TCP/IP or sockets as those things are going to be much lower level than you want. You'll probably want to look at something like MPI or any of the PGAS languages like UPC, Co-array Fortran, Global Arrays, Chapel, etc. They're going to be far easier to use than essentially writing your own networking layer.
When you use one of these higher level libraries, you get lots of nice abstractions like collective operations, remote memory access, and other features that make it easier to just write your parallel code instead of dealing with all of the OS stuff underneath. It also makes your code portable between different machines/architectures.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I'm developing a new product, and one of the design requirements is to implement an embedded web server on the microcontroller.
The web pages should be responsive and dynamic like single page application (SPA) web pages and there are 3 pages to be implemented with light images and graphics.
I plan to pick a microcontroller from the STM32 range, and my questions are related to the hardware design part:
What are the minimum microcontroller requirements, in terms of performance and memory, to implement an embedded web server?
What is the approximate memory footprint of the lwIP stack, the web server, and the client-side code?
Where should the web pages be stored: internal Flash, ROM, or external Flash?
And finally, what is the complexity of the implementation compared to handling a traditional HTTP request on a full OS?
Thanks,
Network connectivity and sufficient RAM+Flash to run your server. If using TLS (i.e. HTTPS), some processing power (preferably a crypto accelerator) will come in handy.
Depends on what you're planning to serve :) Let's assume a single concurrent client connection and a web server serving simple dynamic pages implemented in C. You'll want around 100-200 KiB of RAM for the network and HTTP server - maybe much more if doing anything non-trivial. Add around 50-100 KiB more for TLS. This will be enough to implement a few simple text-based config and status pages. As for the amount of Flash (code memory), it depends on how much code you write and how big your web assets are :) Note that TLS libraries are rather large, perhaps around 300-500 KiB. These estimates don't include any server-side scripting languages (JavaScript, Python, ...) - C only.
Unless you have specific requirements, your web assets should be few, small and fit (as text or binary blobs) into the same Flash as everything else.
It's more complex. Depends on what you compare it with. It's not like you're going to implement the HTTP protocol yourself - find a library for that. But almost nothing is free in a microcontroller environment. You manage your own memory, your own threads, your own everything.
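To make the "find a library" point concrete: the request/response cycle such a library implements on top of lwIP is small. Here is a rough sketch in desktop Python (not MCU code; the page content and port handling are illustrative only) of what serving one static status page amounts to at the socket level.

```python
import socket
import threading

# A small page of the kind that would live in the MCU's Flash as a blob.
PAGE = b"<html><body><h1>Device status: OK</h1></body></html>"

def serve_one(state):
    """Accept one connection, read the HTTP request, send one response."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()
    conn, _ = srv.accept()
    request = conn.recv(1024)       # e.g. b"GET / HTTP/1.1\r\n..."
    header = (b"HTTP/1.1 200 OK\r\n"
              b"Content-Type: text/html\r\n"
              b"Content-Length: " + str(len(PAGE)).encode() + b"\r\n\r\n")
    conn.sendall(header + PAGE)     # status line, headers, blank line, body
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
t = threading.Thread(target=serve_one, args=(state,))
t.start()
state["ready"].wait()

cli = socket.create_connection(("127.0.0.1", state["port"]))
cli.sendall(b"GET / HTTP/1.1\r\nHost: device\r\n\r\n")
chunks = []
while True:                          # read until the server closes the socket
    part = cli.recv(4096)
    if not part:
        break
    chunks.append(part)
raw = b"".join(chunks)
cli.close()
t.join()
```

On the microcontroller the library handles exactly this framing for you; the parts you still own are memory management, threading, and wiring the handler to your application state.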
Closed 6 years ago.
I am doing as much as I can to reduce the download time of the pages on my website, and one of the free tests that I tried complained about my website not being IPv6-ready or something like that. My hosting provider only supports IPv4.
Should I change my hosting provider?
Is not being IPv6-ready actually a problem?
Just in case, this is the page that I am trying to improve and any suggestions will be highly appreciated:
http://www.hypnosisgate.com/past-life-regressions.html
Will lack of IPv6 be a short-term problem for you? Probably not.
But the internet is moving towards IPv6. Take a look at what Google sees. At the time of writing, the graph has just passed 12.5%. That means that 1/8 of the internet has IPv6 connectivity. Because IPv4 addresses have run out [1], many of those users will be behind some central NAT device. Analysis from Facebook has shown that customers who can load their news feed over IPv6 see a 20% to 40% shorter load time than those forced to use IPv4.
So for a certain group of your users making your website accessible over IPv6 will give them a significant speed boost. And that group of users will grow very quickly. Even if you don't want to bother with IPv6 right now, it will become important soon.
Because of all of this, when choosing a vendor (whether it is for equipment, services or hosting) it might not be very wise to pick one that doesn't support IPv6. At least make sure they are working on it and can give you a timeline in the contract. Otherwise you might need to migrate later to a different one that does properly support IPv6.
[1] You can buy some on the market, and some regions have a few addresses reserved for newcomers, but for all practical purposes the supply has run out.
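The scale difference behind the exhaustion argument is easy to check with Python's standard `ipaddress` module: the entire IPv4 space holds about 4.3 billion addresses, while IPv6 is astronomically larger.

```python
import ipaddress

# The whole IPv4 address space is a single /0 network.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32
# Same for IPv6.
ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2**128

print(ipv4_total)                 # 4294967296
print(ipv6_total // ipv4_total)   # 2**96 more addresses than IPv4
```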
IPv4 vs IPv6 should not have any significant impact on the delivery speed of your content. However a quick page speed analysis shows you have a few simple fixes that would speed up the site.
https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.hypnosisgate.com%2Fpast-life-regressions.html
Closed 7 years ago.
On my Windows server I have a port open listening to incoming TCP connections from multiple clients. Is there a limit to the number of unique clients that can concurrently establish a socket connection on that opened port on my Windows server? One of the threads What is the theoretical maximum number of open TCP connections that a modern Linux box can have talks about number of socket connections being limited by the allowed file descriptors on Unix platforms. Is there such a limitation on the latest available Windows servers? If so how to go about changing that limit?
Based on an answer by an MSFT employee:
It depends on the edition, Web and Foundation editions have connection limits while Standard, Enterprise, and Datacenter do not.
Though as Harry mentions in another answer, there is a setting for open TCP connections that has a limit of a bit over 16M.
But while technically you could have a large number of connections, there are practical issues that limit it:
each connection is identified by server address and port as well as client address and port. Theoretically, even the number of connections two machines can have between them is very large, but usually the server uses a single port, which limits the number. Also, a single client would rarely open thousands of connections, though in NAT cases it may appear that way
the server can only handle a certain amount of data and packets per second, so high-speed data transfer or lots of small packets may reduce the number
the network hardware might not be able to handle all the traffic coming in
the server has to have memory allocated for each connection, which again limits the number
what the server actually does also matters: is it a realtime game server, or a system delivering chess moves between people who think about their moves for 15 minutes at a time?
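The 4-tuple identification in the first point above gives a quick back-of-the-envelope bound. With the server fixed at one IP and one listening port, all the variability is on the client side, which is easy to work out:

```python
# A TCP connection is identified by the 4-tuple:
#   (client IP, client port, server IP, server port)
# Fix the server side (one address, one listening port) and count the rest.

client_ports = 2**16          # 65,536 possible ports per client IP
ipv4_clients = 2**32          # every possible IPv4 client address

# One client machine (one IP) talking to one server port:
per_client = client_ports
print(per_client)             # 65536

# Theoretical ceiling over all IPv4 clients - absurdly large, which is
# why memory, CPU and bandwidth, not the tuple space, are the real limits.
theoretical_total = ipv4_clients * client_ports
print(theoretical_total)      # 281474976710656
```

In practice a client cannot even use all 65,536 ports (some are reserved, and the OS restricts the ephemeral range), which reinforces the point that the practical limits bite long before the theoretical ones.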
In addition to the licensing and practical limits outlined in Sami's answer, there is in fact a configurable limit to the number of simultaneous open connections, determined by the TcpNumConnections setting. The default value is also the maximum, which is just shy of 16M.
(The linked documentation is for Windows 2003. The corresponding documentation for later versions of Windows Server does not appear to exist. However, I can't find anything to suggest that the setting has been removed.)
In practice, however, you are likely to run into the practical issues outlined in Sami's answer long before you hit this. (Unless the system administrator has manually changed the setting, of course.)
I realise this is a somewhat open ended question...
In the context of low-latency applications, I've heard references to bypassing the TCP/IP stack.
What does this really mean, and assuming you have two processes on a network that need to exchange messages, what are the various options (and associated trade-offs) for doing so?
Typically the first steps are using a TCP offload engine (TOE) or a user-space TCP/IP stack such as OpenOnload.
Completely skipping TCP/IP usually means looking at InfiniBand and using RDMA verbs, or even implementing custom protocols on top of raw Ethernet.
Generally you incur latency from anything that goes through the kernel, so user-space mechanisms are ideal. On top of that, the TCP/IP stack is an overhead in itself: consider all of the layers and the complexity in which it can be arranged - IP families, subnetting, VLANs, IPsec, etc.
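Before reaching for full kernel bypass, a common and much cheaper first tweak (not mentioned above, so treat it as a supplementary note) is disabling Nagle's algorithm with `TCP_NODELAY`, so small messages are sent immediately instead of being coalesced:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: send small writes immediately rather than
# batching them while waiting for ACKs.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
print(bool(nodelay))  # True
```

This still goes through the kernel's TCP/IP stack, of course; it only removes one source of added latency, whereas TOE, OpenOnload and RDMA remove the kernel path itself.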
This is not a direct answer to your question, but I thought it might give you another view on this topic.
Before trying to bypass TCP-IP stack I would suggest researching proven real-time communication middleware.
One good solution for real-time communication is the Data Distribution Service (DDS) from the OMG (Object Management Group).
DDS offers a dozen or so quality-of-service attributes and has bindings for various languages.
It has LATENCY_BUDGET, TRANSPORT_PRIORITY and many other quality-of-service attributes that make data distribution very easy and fast.
Check out an implementation of the DDS standard by PrismTech, called OpenSplice, which works well at LAN scale.
Depends on the nature of your protocol really.
If by low-latency applications you mean electronic trading systems, then they normally use IP or UDP multicast for market data, such as Pragmatic General Multicast. Mostly because there is one sender and many receivers of the data, so using TCP would require sending a copy of the data to each recipient individually, requiring more bandwidth and increasing the latency.
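Setting up a multicast sender is just a UDP socket with a couple of options; this hedged sketch uses a hypothetical administratively-scoped group address and port, and leaves the actual send commented out since it assumes a configured LAN:

```python
import socket

# Hypothetical group and port; real feeds publish their own values.
MCAST_GROUP = ("239.1.1.1", 5000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: connectionless
# TTL 1 keeps the datagrams on the local network segment.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)

# One send reaches every subscribed receiver - no per-client copies:
# sock.sendto(b"quote: XYZ 101.25", MCAST_GROUP)
sock.close()
print(ttl)  # 1
```

This is the bandwidth argument in miniature: the sender transmits once regardless of how many receivers have joined the group, where TCP would need one connection and one copy per receiver.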
Trading connections traditionally use TCP with application-level heartbeats because the connection needs to be reliable and connection loss must be detected promptly.
Closed 2 years ago.
I understand that IPX and SPX both provide connection services similar to TCP/IP. Here, IPX is similar to IP and SPX is similar to TCP, and hence I am eager to know more about this.
How does the performance of IPX/SPX exceed that of TCP on a LAN?
If its performance is superior to that of TCP on a LAN, why is IPX/SPX not used on LANs alone?
I searched the internet and landed on a few links, but they did not seem to give clear reasons for this - http://en.wikipedia.org/wiki/IPX/SPX . Any ideas?
IPX was optimized for LANs. For one thing, IPX addresses are formed using an Ethernet MAC address and a 32-bit network ID. This design allowed for "zero configuration" of IPX nodes in most cases - just plug the computer in and it's on the network. IPv6 with stateless autoconf has the same properties, btw.
SPX (the analogue of TCP) was also highly optimized for LANs. For example, it had per-packet NAKs instead of TCP's per-octet ACKs, without any explicit window management. That allowed file servers to be very simple - just spew file contents onto the Ethernet at top speed. If a client misses a packet, you can re-read it from disk/cache and re-send it.
In contrast, with TCP you have to buffer all the unacknowledged data and re-send all of the data in the send buffer after a lost packet (in case you don't use selective acknowledgment feature).
However, IPX was not suitable for WANs at all. For example, it couldn't cope with different frame sizes: two networks with different frames (say, Ethernet and Ethernet with jumbo frames) couldn't interoperate without a proxy server or some form of encapsulation.
Additionally, packet reordering on WANs is ubiquitous but it plays hell with SPX (at least with Novell's implementation) causing a lot of spurious NAKs.
And of course, IPX addresses were not hierarchical so not very suited for routing. Network ID in theory could be used for this, but even large IPX/SPX deployments were not complex enough to develop rich routing infrastructure.
Right now, IPX is interesting only as a historical curiosity and in maintenance of a small number of VERY legacy systems.
You're missing a critical distinction between SPX/IPX and TCP/IP. TCP/IP is the basis of the Internet. SPX/IPX is not.
SPX/IPX was an interesting protocol, but is now of interest only within a given corporation.
It's often the case in the real world that something technically superior loses due to business reasons. Consider Betamax video tape format vs. VHS. Betamax was considered technically superior, yet you can't buy a Betamax recorder today except maybe on eBay. One may argue that Windows won over Macintosh, despite the fact that the MacOS user interface was much nicer, due entirely to business decisions (mainly the decision by Apple not to permit clones).
Similarly, issues far beyond the control of Xerox destroyed SPX/IPX as a viable protocol - HTTP runs over TCP/IP, not over SPX/IPX. HTTP rules the world, therefore TCP/IP rules the world.
SPX/IPX has been left as an exercise for the reader.
BTW, I've been talking about SPX/IPX as though they were a Xerox protocol - not quite. They are a Novell protocol, but based on the Xerox Network System protocols. Interestingly, I found nothing about this protocol on the web sites of either Xerox or Novell.
Also, see the Wikipedia article on IPX/SPX.
The disadvantage of the TCP/IP protocol stack is that it is slower than IPX/SPX. However, the TCP/IP stack is now also used in local networks, to simplify the negotiation between local and wide area network protocols. Currently, it is considered the main stack in the most common operating systems.
IPX/SPX can coexist on a LAN with TCP/IP. PCs that wish to be isolated from the web can still share files/printers by using IPX and not loading TCP. This is more secure than any firewall and second only to cutting wires.
IPX/SPX performed better than TCP/IP back in the day, on systems where you could compare the two. That is no longer true since TCP got all the developer effort from about 1993 onwards because of HTTP.
Essentially, IPX/SPX was obsoleted by TCP/IP, and so it is no longer relevant. Maintaining two sets of protocols is too much effort for network operators, so the less capable one dies out. Eventually this will happen to IPv4.