Microcontroller requirements to implement an embedded web server using WebSocket, JS, and HTML5 [closed]

I'm developing a new product, and one of the design requirements is to implement an embedded web server on the microcontroller.
The web pages should be responsive and dynamic, like single-page application (SPA) pages; there are three pages to implement, with lightweight images and graphics.
I plan to pick a microcontroller from the STM32 range, and my questions relate to the hardware design:
What are the minimum microcontroller requirements, in terms of performance and memory, to implement an embedded web server?
What is the approximate memory footprint of the lwIP stack, the web server, and the client-side code?
Where should the web pages be stored: internal Flash, ROM, or external Flash?
And finally, how complex is the implementation compared with serving traditional HTTP requests?
Thanks,

Network connectivity and sufficient RAM+Flash to run your server. If using TLS (i.e. HTTPS), some processing power (preferably a crypto accelerator) will come in handy.
Depends on what you're planning to serve :) Let's assume a single concurrent client connection and a web server serving simple dynamic pages implemented in C. You'll want around 100-200 KiB of RAM for the network and HTTP server - maybe much more if doing anything non-trivial. Add around 50-100 KiB more for TLS. This will be enough to implement a few simple text-based config and status pages. As for the amount of Flash (code memory), it depends on how much code you write and how big your web assets are :) Note that TLS libraries are rather large, perhaps around 300-500 KiB. These estimates don't include any server-side scripting languages (JavaScript, Python, ...) - C only.
Unless you have specific requirements, your web assets should be few, small and fit (as text or binary blobs) into the same Flash as everything else.
It's more complex, though it depends on what you compare it with. It's not like you're going to implement the HTTP protocol yourself - find a library for that. But almost nothing is free in a microcontroller environment. You manage your own memory, your own threads, your own everything.
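To make the numbers above a bit more concrete, here is a minimal sketch of how static pages are typically served on an STM32 with lwIP's bundled httpd application. It assumes a CubeMX-generated project (lwip.h and MX_LWIP_Init are the generated names) with the lwIP httpd enabled; the web assets are converted into a C array (fsdata.c) by lwIP's makefsdata tool and linked into internal Flash alongside the firmware, which is usually the answer to the "where to store the pages" question for a few small pages.

```c
/* Minimal sketch: serving static pages with lwIP's bundled httpd on an STM32.
 * Assumes a CubeMX-generated project (lwip.h / MX_LWIP_Init are the generated
 * names) with the httpd app enabled. The web assets are baked into internal
 * Flash as fsdata.c by lwIP's makefsdata tool. */
#include "lwip.h"             /* CubeMX-generated lwIP glue (assumption)   */
#include "lwip/apps/httpd.h"  /* lwIP's built-in HTTP server               */

void web_server_start(void)
{
    MX_LWIP_Init();  /* bring up the network interface (and DHCP)          */
    httpd_init();    /* start serving the pages compiled into fsdata.c     */
}
```

Note that the stock httpd does not provide WebSocket support out of the box; for the dynamic/SPA part you would either poll a small JSON endpoint via the httpd's CGI/SSI handlers or pull in a third-party WebSocket layer, both of which add to the RAM and Flash figures above.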

Related

HTTP proxy server tests [closed]

I have implemented an HTTP proxy client/server, and I now intend to test its performance. Can anybody suggest approaches for these tests?
If you are looking for tools, the following may be helpful:
RoboHydra is a web server designed precisely to help you write and test software that uses HTTP as a communication protocol. There are many ways to use RoboHydra, but the most common use cases are as follows:
RoboHydra allows you to combine a locally stored website front end with a back end sitting on a remote server, letting you test your local front-end installation against a fully functional back end without having to install the back end on your local machine.
If you write a program designed to talk to a server using HTTP, you can use RoboHydra to imitate that server and pass custom responses to the program. This can help you reproduce bugs and situations that might otherwise be hard, if not impossible, to test.
https://dev.opera.com/articles/robohydra-testing-client-server-interactions/
Webserver Stress Tool simulates large numbers of users accessing a website via HTTP/HTTPS. The software can simulate up to 10,000 users that independently click their way through a set of URLs. Simple URL patterns are supported as well as complex URL patterns (via a script file).
Webserver Stress Tool supports a number of different testing types, for example:
✓ Performance Tests—this test queries single URLs of a webserver or web application to identify elements that may be responsible for slower-than-expected performance. It provides an opportunity to optimize server settings or application configurations by testing various implementations of single web pages/scripts to identify the fastest code or settings.
✓ Load Tests—this tests your entire website at the normal (expected) load. For load testing you simply enter the URLs, the number of users, and the time between clicks of your website traffic. This is a “real world” test.
✓ Stress Tests—these are simulations of “brute force” attacks that apply excessive load to your webserver. This type of “brute force” situation can be caused by a massive spike in user activity (e.g., a new advertising campaign). This is a great test to find the traffic threshold for your webserver.
✓ Ramp Tests—this test uses escalating numbers of users over a given time frame to determine the maximum number of users the webserver can accommodate before producing error messages.
✓ Various other tests—working with Webserver Stress Tool simply gives you more insight into your website, e.g. to check that web pages can be requested simultaneously without problems like database deadlocks, semaphores, etc.
http://www.paessler.com/tools/webstress/features
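If you want to start smaller than a full tool, the core of any performance test is just timing many requests. The sketch below (plain C with POSIX sockets) measures the average response time of sequential GET requests; the host, port and path are placeholders, and real tools such as the ones above add concurrency, ramp-up and reporting on top of this idea.

```c
/* Minimal HTTP timing sketch: issue N sequential GET requests and report the
 * average response time. example.com and "/" are placeholders for the proxy
 * or server under test. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static double one_request(const char *host, const char *port, const char *path)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1.0;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0) { freeaddrinfo(res); return -1.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        close(fd); freeaddrinfo(res); return -1.0;
    }

    char req[512];
    snprintf(req, sizeof req,
             "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n",
             path, host);
    send(fd, req, strlen(req), 0);

    char buf[4096];
    while (recv(fd, buf, sizeof buf, 0) > 0)   /* drain the full response */
        ;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(fd);
    freeaddrinfo(res);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    const int n = 100;   /* number of sequential requests */
    int ok = 0;
    double total = 0.0;
    for (int i = 0; i < n; i++) {
        double t = one_request("example.com", "80", "/");  /* placeholders */
        if (t >= 0.0) { total += t; ok++; }
    }
    if (ok > 0)
        printf("%d/%d requests ok, average response time: %.3f s\n",
               ok, n, total / ok);
    return 0;
}
```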
To better understand what client-server and web-based testing are, and how to test such applications, you may read this post: http://www.softwaretestinghelp.com/what-is-client-server-and-web-based-testing-and-how-to-test-these-applications/

Design approach for hosting multiple microservices on the same host [closed]

I'm working on a web application that I decoupled into multiple containerized microservices. I now have around 20 services, but the whole system will definitely need more than 300. Most of the services now, and some in the future, will not need an entire machine, so I'll deploy multiple services on the same host. I'm wondering how others deal with interservice communication. My preferred way was to go with REST-based communication, but...
Isn't it too heavy to have multiple web servers running on the same machine? I'm developing in Ruby, but even a lightweight web server like Puma can consume a good amount of memory.
I started writing a custom communication channel using UNIX sockets: I'd start one web server, and my "router" app would communicate with the services currently running on that host through UNIX sockets. But I don't know if it's worth the effort, and on top of that, all services would have to be written and customized to use this kind of communication. I believe it would be hard to use a framework like Ruby on Rails or others, or even different languages, which is the whole appeal of a microservices architecture. I feel like I'm trying to reinvent the wheel.
So, can someone suggest a better approach or vote for one of my current ones?
I appreciate any help,
Thanks,
It looks like you may want to look into Docker Swarm; they're actively working on these use cases. I wouldn't recommend building your own communication channel - stick with HTTP, or maybe use SPDY if you're really concerned about performance. Anything you introduce will make using these upcoming solutions more difficult. Also keep in mind that you don't need a heavy-duty web server in most cases; you can always introduce a layer above one or more of your services using nginx or HAProxy, for example.

Use only for downloading: FTP vs HTTP in my server? [closed]

I want to get some files from my computer (text, video, images) and download them to a folder on my Android device. I have been looking for alternatives, and I think there are two ways of doing that, but I don't know whether there is a great difference between them.
Which protocol is better in my case, FTP or HTTP, and why? I don't need to upload anything, and the files are not too big (I guess the biggest is around 5 MB).
I think HTTP is easier and FTP is faster, but is that right? From a programming point of view, which is better?
In terms of speed, for file sizes larger than roughly 10 kB both are equivalent. The difference is that FTP sends pure, raw data on its data channel without any headers, so it has a slightly smaller overhead, whereas HTTP sends around a dozen lines of text as a header for each file before blasting raw data onto the channel. So for files of around 10 kB or less, yes, the HTTP overhead can be noticeable - around 1% to 2% of the total bandwidth. For large files, the dozen or so lines of HTTP header become negligible.
FTP does tie up one extra socket for its control channel, though, so for lots of users HTTP is roughly twice as scalable. Remember, your OS has a limited number of sockets it can open.
And finally, the most important consideration is that a lot of people access the internet through firewalls - corporate, school, dormitory, or apartment building - and many firewalls are configured to only allow HTTP access. You may sometimes find you can't get to your files because of this. There are ways around it, of course, but it's one additional hassle to think about.
Additional answer:
I saw you asking about access restriction and security. The slight downside with HTTP is that you may need to write your own web app to implement this, although web servers like Apache can be configured to do it with just a configuration file, using HTTP basic authentication.
Luckily, people have had this problem before and some of them have written software to do this. Google around for "HTTP file server" and you'll find many implementations. Here's one fairly good open source web app: http://pfn.sourceforge.net/
Also, if you really want security, you should set up SSL/TLS for your server regardless of whether you end up using FTP or HTTP.
I would recommend HTTP. It allows you to download a file over multiple connections, you can easily share URLs, and you can still download in a restricted environment where all ports except HTTP are blocked.
FTP is more suitable if you want to control access to files on a per-user basis and also need a good amount of uploading.
Addition:
You can also implement security in HTTP using .htaccess files. However, this is not very scalable and not suitable for many users with different access rights.
There are several other methods of protecting files over HTTP; you can find a lot of open-source utilities on http://sourceforge.net that will let you do that. Where speed is concerned, HTTP is best: it allows you to fetch an arbitrary part of a file (a range request, sketched below), which makes multi-threaded downloads possible.
You will notice that most file-sharing sites use HTTP, and that is for scalability reasons.
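To illustrate the range-request point from the answer above, here is a small C sketch (POSIX sockets) that asks an HTTP server for only the first kilobyte of a file via a Range header; the host and path are placeholders. A server that supports ranges answers with 206 Partial Content, and issuing several such requests in parallel for different byte ranges is exactly how multi-connection downloaders work.

```c
/* Sketch: fetch only the first 1 KiB of a file with an HTTP Range request.
 * example.com and /video.mp4 are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    const char *host = "example.com", *path = "/video.mp4";  /* placeholders */
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return 1;

    char req[512];
    snprintf(req, sizeof req,
             "GET %s HTTP/1.1\r\n"
             "Host: %s\r\n"
             "Range: bytes=0-1023\r\n"     /* ask for the first 1024 bytes */
             "Connection: close\r\n\r\n",
             path, host);
    send(fd, req, strlen(req), 0);

    /* A range-aware server replies "206 Partial Content" followed by the
     * requested slice; several such requests in parallel make a
     * multi-threaded download. */
    char buf[2048];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```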

Do you know a good tool to test API performance? [closed]

I need to simulate a 3G network and HTTP requests from different geographical locations.
API: HTTP restful - JSON
Geo Location: Europe, US
Client: Mobile device
It sounds like you're asking two separate questions:
What are the performance characteristics of your API server? Most usefully expressed as: how many concurrent users can it serve before response times exceed your acceptable level?
What is the performance experience on your client devices?
I'd encourage you to split those two concerns up, and test them independently. A device operating over 3G would struggle to generate enough load to stress a well-configured web server, and it's usually not cost efficient to commission thousands of load testing nodes around the world. Also, once the traffic arrives at your web server, it shouldn't really care whether it came from the same city, country or continent, or whether it originated from a mobile device or a PC or a load test server.
So, I would use any load testing tool you like to test the performance of your web API. Apache JMeter is free, but has a bit of a learning curve; however, it's available from several cloud providers, which allows you to take your tests and run them from different continents. Google "Jmeter cloud" for more details.
If performance is a key concern, you probably want to have a continuous testing regime, where you subject your code to performance tests every week or so, and optimize as you go - leaving optimization to the end of the project is usually rather risky...
The next question is "okay, so I know my API server can serve 1000 concurrent requests with an average response time < 1 second" (or whatever) - how does that translate to end user experience?
It only makes sense to attack this once you are really clear that your web server isn't the bottleneck - because most of the decisions you make to optimize performance beyond this point are pretty major.
Logically, if you know that your webserver is responding at a certain level, the end user performance is affected by network latency and throughput. Latency is usually measured through the crude statistic of ping times: how long does it take a network packet to travel between client and server? It's hard or impossible to improve on ping times without revisiting your entire hosting strategy; it depends on the speed of light, the connectivity between your hosting farm and the dozens of other network segments between your client and the server.
Throughput is typically measured in bytes per second. Often, this is something you can affect - for instance, by making your API less verbose, using compression, etc.
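As a small illustration of the compression point, the sketch below uses zlib (a common choice, not something prescribed above) to deflate a deliberately repetitive JSON payload and print the size reduction; in practice the web server or framework usually applies this transparently via Content-Encoding: gzip.

```c
/* Sketch: shrink a verbose JSON payload with zlib (compile with -lz).
 * The payload is made up for illustration. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    /* Build a repetitive JSON body, as API responses often are. */
    char json[4096] = "{\"users\":[";
    for (int i = 0; i < 50; i++) {
        char rec[64];
        snprintf(rec, sizeof rec, "%s{\"id\":%d,\"name\":\"user%d\"}",
                 i ? "," : "", i, i);
        strcat(json, rec);
    }
    strcat(json, "]}");

    uLong  src_len = (uLong)strlen(json);
    uLongf dst_len = compressBound(src_len);   /* worst-case output size */
    Bytef  dst[4096];

    if (compress(dst, &dst_len, (const Bytef *)json, src_len) != Z_OK) {
        fprintf(stderr, "compress failed\n");
        return 1;
    }
    printf("original: %lu bytes, deflated: %lu bytes\n",
           (unsigned long)src_len, (unsigned long)dst_len);
    return 0;
}
```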
3G devices typically have relatively poor network latency characteristics, but pretty decent throughput under normal circumstances. However, there are many circumstances which affect both latency and throughput which are entirely unpredictable - busy locations are the classic example: a football stadium full of 3G devices means individual users often have poor connectivity.
Testing this is hard. I'd split it up into testing for 3G devices, and testing geographical variation. To test the performance characteristics on a 3G device, I'd simulate the network conditions using bandwidth throttlers in front of a dedicated test suite (probably based on JMeter).
The final part of the jigsaw could be expensive - there are specialist companies who can test web performance from around the world. They have nodes around the world, where they can execute test scripts; they often write the scripts for you, and provide a web interface for you to run and measure the tests. I've used Keynote in the past, and found them to be very good. This kind of testing is expensive, so I only use it right at the end of the project, once I have excluded all other performance aspects from consideration.
For JSON load testing: json-simple.
For REST in general you can use loadUI - a "tool for Load Testing numerous protocols, such as Web Services, REST, AMF, JMS, JDBC as well as Web Sites. LoadUI uses a highly graphic interface".

Why is p2p web hosting not widely used? [closed]

We can see the growth of systems using peer to peer principles.
But there is an area where peer to peer is not (yet) widely used: web hosting.
Several projects have already been launched, but there is no major solution that lets users both use and contribute to peer-to-peer web hosting.
I don't mean closed projects (like Google Web Hosting, which uses Google's resources, not users'), but open projects, where each user contributes to the global web hosting by making their own resources (CPU, bandwidth) available.
I can think of several assets of such systems:
automatic load balancing
better locality
no storage limits
free
So why is such a system not yet widely used?
Edit
I think that the "97.2%, plz seed!!" problem occurs because all users do not seed all the files. But if a system where all users equally contribute to all the content is built, this problem does not occur any more. Peer to peer storage systems (like Wuala) are reliable, thanks to that.
The problem of proprietary code is pertinent, as well of the fact that a user might not know which content (possibly "bad") he is hosting.
I add another problem: the latency which may be higher than with a dedicated server.
Edit 2
The confidentiality of code and data can be achieved by encryption. For example, with Wuala, all files are encrypted, and I think there is no known security breach in this system (but I might be wrong).
It's true that seeders would not get much benefit, if any. But it would keep people from being dependent on web-hosting companies. And such a decentralized way of hosting websites is closer to the original idea of the internet, I think.
This is basically what Freenet is:
Freenet is free software which lets you publish and obtain information on the Internet without fear of censorship. To achieve this freedom, the network is entirely decentralized and publishers and consumers of information are anonymous. Without anonymity there can never be true freedom of speech, and without decentralization the network will be vulnerable to attack.
[...]
Users contribute to the network by giving bandwidth and a portion of their hard drive (called the "data store") for storing files. Unlike other peer-to-peer file sharing networks, Freenet does not let the user control what is stored in the data store. Instead, files are kept or deleted depending on how popular they are, with the least popular being discarded to make way for newer or more popular content. Files in the data store are encrypted to reduce the likelihood of prosecution by persons wishing to censor Freenet content.
The biggest problem is that it's slow - both in transfer speed and (mainly) latency. Even if you can get lots of people with decent upload throughput, it'll still never be as quick as a dedicated server or two. The speed is fine for what Freenet is (publishing data without fear of censorship), but not for hosting your website.
A bigger problem is that the content has to be static files, which rules out its use for the majority of high-traffic websites. To serve dynamic data, each peer would have to execute code (scary) and would probably have to retrieve data from a database (which would be another big delay, again because of the latency).
I think "cloud computing" is about as close to P2P web hosting as we'll see for the time being.
P2P website hosting is not yet widely used, because the companion technology allowing higher upstream rates for individual clients is not yet widely used, and this is something I want to look into*.
What is needed for this is called Wireless Mesh Networking, which should allow the average user to utilise the full upstream speed that their router is capable of, rather than just whatever some profiteering ISP rations out to them, while they relay information between other routers so that it eventually reaches its target.
In order to host a website P2P, a sort of combination of technology is required between wireless mesh communication, multiple-redundancy RAID storage, torrent sharing, and some kind of encryption key hierarchy that allows various users different abilities to change the data that is being transmitted, allowing something dynamic such as a forum to be hosted. The system would have to be self-updating to incorporate the latter, probably by time-stamping all distributed data packets.
There may be other possible catalysts that would cause the widespread use of p2p hosting, but I think anything that returns the physical architecture of hardware actually wiring up the internet back to its original theory of web communication is a good candidate.
Of course as always, the main reason this has not been implemented yet is because there is little or no money in it. The idea will be picked up much faster if either:
Someone finds a way to largely corrupt it towards consumerism
Router manufacturers realise there is a large demand for WiMesh-ready routers
There is a global paradigm shift away from the profit motive and towards creating things only to benefit all of humanity by creating abundance and striving for optimum efficiency
*see p2pint dot darkbb dot com if you're interested in developing this concept
For our business I can think of 2 reasons not to use peer hosting:
Responsiveness. Peer-hosted solutions are often reliable because of the massive number of shared resources, but they are also notoriously unstable, so the browsing experience will be intermittent.
Proprietary data/code. If I've written custom logic for my site I don't want everyone on the network having access. You also run into privacy issues with customer data.
If I were to donate some of my PCs CPU and bandwidth to some p2p web hosting service, how could I be sure that it wouldn't end up being used to serve child porn or other similarly disgusting content?
How many times have you seen "97.2%, please seed!!" for any random torrent?
Just imagine the havoc if even a small portion of the web became unavailable in this way.
It sounds like this idea would add a lot of cost to the individual seeder (bandwidth) without a lot of benefit.
