We are planning to introduce a real-time chat feature in our mobile apps. Of course, we would be going the XMPP way.
Can anybody shed some light on real-world stats for the maximum number of concurrent users Openfire has supported on EC2 instances (Windows Server) of different sizes?
We are looking at numbers ranging from 22,500 to 75,000 concurrent users, depending on the growth predicted for app downloads and user adoption of this new real-time chat feature, over the next 12 months.
From whatever googling I have done so far, it seems Openfire may not be the best bet when it comes to scaling out. So can these numbers be supported on a single EC2 instance over time, i.e., we start hosting on a smaller instance and keep increasing the instance size as load demands?
ejabberd seemed to be the best option when it comes to scaling out, but since we would need Erlang skills in order to extend it, ejabberd is a difficult choice for us. The other alternative is Tigase, which is Java, so we could extend it much more easily. But if Openfire can work for us for the next 12 months or so by scaling up rather than scaling out, we would be happy to use it for now and see how well this new chat feature is embraced, the number one reason being ease of management.
Lastly, if you could help with links on SaaS / PaaS providers for XMPP chat + push notifications to mobile devices when the user is offline, it would be awesome. We got in touch with quickblox.com but their enterprise offerings appear to be expensive for us at the moment. We want 100% ownership and portability of our data if we go the SaaS / PaaS way.
There are several references to Openfire handling those and larger numbers of concurrent users on a single server.
There is a document on scalability from 2007 that shows 50,000 users supported on version 3.2. The current release is 3.7.1. Don't forget that 2007 also means a much slower machine than anything you are likely to run on today.
You also have to take into account what features of XMPP you will be using, but simple messaging should be able to easily handle the numbers you are referring to.
The numbers you mention should be easily handled by ejabberd.
I am unsure as to how you want to "extend" ejabberd. Multi-user chat and messaging are handled fine by all servers and of course ejabberd. Additionally, if you are thinking of custom protocols, these can be written in your language of choice and connect to ejabberd as an XMPP component.
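As a concrete illustration of that component approach, here is a minimal sketch in Python using the SleekXMPP library; the component JID, shared secret, port and the echo behaviour are all placeholders for whatever custom logic you would actually attach to ejabberd.

```python
# Minimal XMPP external-component sketch (SleekXMPP). It connects to ejabberd
# over the component protocol (XEP-0114) and echoes every chat message back.
# The JID, secret, host and port below are placeholders for your own setup.
from sleekxmpp.componentxmpp import ComponentXMPP


class EchoComponent(ComponentXMPP):
    def __init__(self, jid, secret, server, port):
        super(EchoComponent, self).__init__(jid, secret, server, port)
        self.add_event_handler("message", self.on_message)

    def on_message(self, msg):
        # Reply to normal chat messages; ignore errors and other stanza types.
        if msg["type"] in ("chat", "normal"):
            msg.reply("You said: %s" % msg["body"]).send()


if __name__ == "__main__":
    # ejabberd must be configured to accept this component on the same port
    # with the same shared secret.
    xmpp = EchoComponent("bot.example.com", "component-secret", "127.0.0.1", 5275)
    xmpp.connect()
    xmpp.process(block=True)
```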
The only thing you might miss is a web interface (which ejabberd has but it's rather limited), but then again if you expect to manage things through a web UI for an application, you will need to think again ;)
If you want to go with ejabberd, you can always get support from ProcessOne.
This is another plus for ejabberd, as it can be commercially supported if you want to / can afford it.
Android Push Notification is a good solution.
With Android push services, you (Android developers) can send messages directly to the people who have installed your app. All you need to do is include a code snippet in your app, then post to a specific URL to reach your app users, even if your app is inactive on their phone.
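To illustrate the "post to a specific URL" part, here is a rough Python sketch; the endpoint, API key and field names are invented for illustration and are not the actual API of any particular push provider.

```python
# Hypothetical example of pushing a message to your app's users by POSTing to
# a push provider's HTTP endpoint. The URL, api_key and field names below are
# invented for illustration; check your provider's docs for the real API.
import json
import urllib.request

payload = json.dumps({
    "api_key": "YOUR-APP-KEY",          # issued by the push service (placeholder)
    "title": "Hello",
    "message": "A new feature just shipped!",
}).encode("utf-8")

req = urllib.request.Request(
    "https://push.example.com/api/send",  # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```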
Features:
Free
- Free, unless you need an extensive number of pushes for your app. Of course, you can pay for more pushes and quicker tech support.
Easy
- Extremely easy to integrate into your app
- Super simple to push to the app: just send a URL request
- No C2DM limits, and you don't need a Gmail account to use the push service
- Cloud service, no need to set up your own push server
Effective
- Low battery and network consumption on the phone
- Track user interaction, find out how users react to your pushes
I'm friends with the owner of a small creative business (with multiple departments). Until now they have been using a dedicated server (via a third party) for a lot of internal projects, and they've been known to iframe a few small dev projects (like photo galleries, one-page sites, etc.) off and on for some of their clients (some with high-traffic sites).
They're looking to switch from the dedicated server to a cloud environment. The owner is enamored with Amazon's cloud services, but still wanted some alternative options. They also want the new environment to mirror the current one as much as possible (Linux/CentOS, PHP 5.3, MySQL databases), but with the ability to scale when desired.
So the misconceptions I need cleared up and questions I have are:
1) I always assumed Amazon's cloud service was more suitable for high-end, high-traffic, complex web applications (Netflix, Pinterest, Instagram, etc.) rather than the typical server use listed above. Is this correct?
2) Is it possible to mirror their current setup on amazon?
3) If number 1 is not true, but they instead chose Rackspace, could they run heavy web apps like Netflix, Pinterest, or Instagram on a Rackspace cloud server if they ever decided to do something that advanced (is Rackspace scalable in the same way EC2 is)?
1) Amazon AWS is also suitable for this environment, or even smaller ones (they offer instances ranging from "Micro", which is far less capable than what you are describing, all the way up to GPU compute clusters).
2) Yes. That is a very common setup for an AWS-based solution. In fact, I recently migrated something similar from Rackspace to AWS.
3) #1 is true. However, you can certainly mix what runs on Rackspace and in the AWS cloud. Keep in mind latency and security issues if the two component solutions need to communicate with each other. Rackspace also has a cloud offering, but it is not as mature as Amazon's.
I have thought a lot recently about the different hosting types that are available out there. We can get pretty decent latency (on average) from an EC2 instance in Europe (we're situated in Sweden) and the cost is pretty good. Obviously, the possibility of scaling instances up and down is amazing for us, as we're in a really expansive phase right now.
From a logical perspective, I also believe that Amazon can probably provide better availability and stability than most hosting companies on the market. That probably also outweighs the lack of a phone number to dial whenever we wonder about anything, which forces us to google things ourselves :)
So, what should we be concerned about if we were about to run our web server on EC2? What are the pros and cons?
To clarify, we will run a pretty standard LAMP configuration, probably with memcached added.
Thanks
So, what should we be concerned about if we were about to run our web server on EC2? What are the pros and cons?
The pros and cons of EC2 are somewhat dependent on your business. Below is a list of issues that I believe affect large organizations:
Separation of duties: Your existing company probably has separate networking and server operations teams. With EC2 it may be difficult to separate these concerns, i.e., the guy defining your Security Groups (firewall) is probably the same person who can spin up servers.
Home access to your servers: Corporate environments are usually administered on-premise or through a Virtual Private Network (VPN) with two-factor authentication. Administrators with access to your EC2 control panel can likely make changes to your environment from home. Note further that your EC2 access keys/accounts may remain available to people who leave or get fired from your company, making home access an even bigger problem...
Difficulty in validating security: Some security controls may inadvertently become weak. Within your premises you can be 99% certain that all servers are behind a firewall that restricts any admin access from outside your premises. When you're in the cloud it's a lot more difficult to ensure such controls are in place for all your systems.
Appliances and specialized tools do not go in the cloud: This may impact your security posture. For example, you may have network intrusion detection appliances sitting in front of on-premise servers, and you will not be able to move these into the cloud.
Legislation and Regulations: I am not sure about regulations in your country, but you should be aware of cross-border issues. For example, running European systems on American EC2 soil may open you up to Patriot Act regulations. If you're dealing with credit card numbers or personally identifiable information then you may also have various issues to deal with if infrastructure is outside of your organization.
Organizational processes: Who has access to EC2 and what can they do? Can someone spin up an Extra Large machine and install their own software? (Side note: Our company http://LabSlice.com actually adds policies to stop this from happening.) How do you back up and restore data? Will you start replicating processes within your company simply because you've got a separate cloud infrastructure?
Auditing challenges: Any auditing activities that you normally undertake may be complicated if data is in the cloud. A good example is PCI -- can you actually always prove data is within your control if it's hosted outside of your environment somewhere in the ether?
Public/private connectivity is a challenge: Do you ever need to mix data between your public and private environments? It can become a challenge to send data between these two environments, and to do so securely.
Monitoring and logging: You will likely have central systems monitoring your internal environment and collecting logs from your servers. Will you be able to achieve the same monitoring and log collection if you run servers off-premise?
Penetration testing: Some companies run periodic penetration testing activities directly on public infrastructure. I may be mistaken, but I think that running pen testing against Amazon infrastructure is against their contract (which makes sense, as they would only see public hacking activity against infrastructure they own).
I believe that EC2 is definitely a good idea for small/medium businesses. They are rarely encumbered by the above issues, and usually Amazon can offer better services than an SMB could achieve themselves. For large organizations EC2 can obviously raise some concerns and issues that are not easily dealt with.
Simon # http://blog.LabSlice.com
The main negative is that you are fully responsible for ALL server administration, such as security patches, firewall, backup, server configuration, and optimization.
Amazon will not provide you with any OS or higher level support.
If you would be FULLY comfortable running your own hardware, then it can be a great cost savings.
I work at a company and we are hosting with Amazon EC2, running one High-CPU instance and two Small instances.
I won't say Amazon EC2 is good or bad, but will just give you a list of our experiences over time.
Reliability: bad. They have a lot of outages, mostly only affecting segments, but still...
Cost: expensive. It's cloud computing, not server hosting! A friend works at a company that does complex calculations which have to be finished at a certain time sharp every day, and the calculation time depends on the amount of data they get... They run some servers themselves and, if capacity gets scarce, they kick in a bunch of EC2 instances.
That's the perfect use case, but if you run a server 24/7 anyway, you are better off with a dedicated root server.
A dedicated root server will also give you better performance; e.g., disk reads will be faster as it has a local disk!
Traffic is expensive too.
Support: good, fast, and flexible; that's definitely very OK.
We had a big product launch with a lot of press going on, and there were problems with the reverse DNS for email sending. The Amazon guys got it set up all RIPE-conformant and nice in no time.
The Amazon S3 hosting service is nice too, if you need it.
In Europe I would suggest going for a German hosting provider; they have very good connectivity as well.
For example:
http://www.hetzner.de/de/hosting/produkte_rootserver/eq4/
http://www.ovh.de/produkte/superplan_mini.xml
http://www.server4you.de/root-server/server-details.php?products=0
http://www.hosteurope.de/produkt/Dedicated-Server-Linux-L
http://www.klein-edv.de/rootserver.php
I have hosted with all of them and had good experiences. The best was definitely Host Europe, but they are a bit more expensive.
I ran a CDN there with around 40 servers for two years and never experienced ANY outage on ANY of them.
Amazon had 3 outages in the last two months on our segments.
One minus that forced me to move away from Amazon EC2:
spamhaus.org lists the whole Amazon EC2 block on the Policy Block List (PBL).
This means that all mail servers using spamhaus.org will report "blocked using zen.dnsbl" in your /var/log/mail.info when you send email.
The server I run uses email to register and reset passwords for users; this does not work any more.
Read more about it at Spamhaus: http://www.spamhaus.org/pbl/query/PBL361340
Summary: Need to send email? Do not use Amazon EC2.
The other con no one has mentioned:
With a stock EC2 server, if an instance goes down, it "goes away." Any information on the local disk is gone, and gone forever. You have the added responsibility of ensuring that any information you want to survive a server restart is persisted off of the EC2 instance (into S3, RDS, EBS, or some other off-server service).
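For example, a periodic job along these lines (a sketch using the boto3 library; the bucket name and file paths are placeholders) is a common way to push anything you care about off the instance's local disk:

```python
# Sketch: copy data you care about off the instance's ephemeral disk into S3,
# so it survives the instance "going away". Bucket and paths are placeholders.
import datetime
import boto3

s3 = boto3.client("s3")  # credentials come from the instance role or env vars

timestamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%S")
s3.upload_file(
    "/var/backups/app-data.tar.gz",              # local file to persist
    "my-backup-bucket",                           # placeholder bucket name
    "backups/app-data-%s.tar.gz" % timestamp,     # versioned key in S3
)
```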
I haven't tried Amazon EC2 in production, but I understand the appeal of it. My main issue with EC2 is that while it does provide a great and affordable way to move all the blinking lights in your server room to the cloud, they don't provide you with a higher level architecture to scale your application as demand increases. That is all left to you to figure out on your own.
This is not an issue for more experienced shops that can maintain all the needed infrastructure by themselves, but I think smaller shops are better served by something more along the lines of Microsoft's Azure or Google's AppEngine: Platforms that enforce constraints on your architecture in return for one-click scalability when you need it.
And I think the importance of quality support cannot be overstated. Look at the BitBucket blog. It seems that for a while there every other post was about the downtime they had and the long hours it took for Amazon to get back to them with a resolution to their issues.
Compare that to Github, which uses the Rackspace cloud hosting service. I don't use Github, but I understand that they also have their share of downtime. Yet it doesn't seem that any of that downtime is attributed to Rackspace's slow customer support.
Two big pluses come to mind:
1) Cost - With Amazon EC2 you only pay for what you use and the prices are hard to beat. Being able to scale up quickly to meet demands and then later scale down and "return" the unneeded capacity is a huge win depending on your needs / use case.
2) Integration with other Amazon web services - this advantage is often overlooked. Having integration with Amazon SimpleDB or the Amazon Relational Database Service (RDS) means that your data can live separately from the computing power that EC2 provides. This is a huge win that sets EC2 apart from others.
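To make that concrete, an application on EC2 can treat RDS as an ordinary MySQL endpoint; below is a sketch using the PyMySQL library, with a placeholder endpoint, credentials and schema.

```python
# Sketch: an application on EC2 talking to a MySQL database hosted on RDS,
# so the data lives independently of any particular EC2 instance.
# Endpoint, credentials and schema below are placeholders.
import pymysql

conn = pymysql.connect(
    host="mydb.abc123xyz.eu-west-1.rds.amazonaws.com",  # placeholder RDS endpoint
    user="appuser",
    password="app-password",
    database="appdb",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM users")
        (count,) = cur.fetchone()
        print("users:", count)
finally:
    conn.close()
```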
Amazon's cloud monitoring service and support are charged extra. The first is quite useful and you should consider it; the second too, if your app is mission-critical.
I've been looking at systems such as txtlocal, esendex and clickatell. I need to send out a very large number of messages and ideally would like to go in at a lower level than using systems like these. Does anyone know how SMS gateways like the ones I've listed work in terms of actually sending out the messages? Do they have agreements with different carriers and send them out programmatically? I've tried contacting some UK carriers directly, but as of yet haven't had any success getting any information from them.
Aggregators typically work by talking directly with a mobile carrier's internal SMSC using IP/X.25/frame relay and using a protocol like SMPP/CIMD to request a message send.
They will have connections to multiple networks' SMSCs so they can do least-cost routing (i.e., sending a message to a user on their home network is cheaper).
Here are some contact details for Orange/Voda.
That said, MXTelecom as mentioned by Phill offer a good gateway service, as do mBlox - both of whom have already done all the hard (and expensive) work for you.
Working with an aggregator is definitely worth the effort. They handle the legal contracts with the providers as well as with the auditing services. You can go directly to a provider (e.g. AT&T, etc.) and broker the deal yourself but generally speaking you'll only need that if you have very specific program/campaign needs. Coke, for example, brokered their own deal to get the four-digit shortcode for COKE (2653).
Keep in mind, when working with an aggregator like MXTelecom you'll be signing a contractual agreement with them (usually for 6 to 12 months) and it'll take between 8 and 12 weeks (in the US) to get your shortcode provisioned and set up. It's not the most fun process, IMHO.
Oh, and don't forget, they will audit your system to make sure it does what it says it will do in your campaign document.
It is also possible to create your own system (at least in the US) and use a long code. One of our original prototype systems was built with Kannel using a mobile phone tethered to an Ubuntu box. With an unlimited plan it was quite nice. Usage is related to your carrier contract so be mindful.
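If you go the Kannel route, sending boils down to an HTTP call against its sendsms interface; here is a minimal sketch, assuming Kannel's default sendsms port and placeholder credentials that would have to match your kannel.conf.

```python
# Sketch: sending a message through a local Kannel gateway's HTTP sendsms
# interface. Host, port, username and password must match your kannel.conf;
# the values here are placeholders.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "username": "kannel-user",       # placeholder smsbox credentials
    "password": "kannel-pass",
    "to": "+447700900123",           # destination number (placeholder)
    "text": "Hello from our prototype",
})
url = "http://localhost:13013/cgi-bin/sendsms?" + params
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))  # Kannel replies with a short status line
```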
Per your question of how they work... they generally work via an API (HTTP or SMPP are most common). Depending on your in/out volume you may want to put a queue between your application and the aggregator's API.
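A rough sketch of that queueing idea, using only the standard library; the aggregator URL and its parameters are hypothetical:

```python
# Sketch: decouple the application from the aggregator's HTTP API with a
# simple in-process queue and a worker thread. The aggregator URL and its
# parameters are hypothetical; a real deployment would also want retries,
# persistence and rate limiting.
import queue
import threading
import urllib.parse
import urllib.request

outbox = queue.Queue()

def sender_worker():
    while True:
        to, text = outbox.get()
        data = urllib.parse.urlencode({"to": to, "text": text}).encode("utf-8")
        try:
            urllib.request.urlopen("https://aggregator.example.com/send", data=data)
        except Exception as exc:  # in real code: log, retry, dead-letter, etc.
            print("send failed:", exc)
        finally:
            outbox.task_done()

threading.Thread(target=sender_worker, daemon=True).start()

# The application just enqueues messages and carries on.
outbox.put(("+447700900123", "Your verification code is 123456"))
outbox.join()  # wait for the queue to drain before exiting (demo only)
```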
First, if you're going to do any bulk SMS messaging you should get a short code. An aggregator will have all the necessary APIs/SDKs and documentation for you.
Try MXTelecom (AKA OpenMarket)
I was brainstorming interesting usages of Twitter and came up with the following:
An application can use it as a call home mechanism
An application that has an invalid license could broadcast its location
A software company could use it as a remote-shell-like interface and issue commands to shut down, restart, and publish patches
An application can use it for heartbeat purposes
Has anyone else come up with other non-standard usages of Twitter?
I fail to see the advantage of using a proprietary, third-party chat site in place of an appropriate networking protocol.
Matthew nailed the point that all these "applications" just represent a communications protocol between a twitterer and a remote host, and there are lots of mature protocols you could use instead right out of the box, rather than rolling your own on Twitter.
But depending on your situation, of course there could be scenarios in which Twitter is the easy way. I have written similar hacks that use e-mail as a transport mechanism for automated tasks, simply because corporate red tape doesn't permit us other, more conventional means. They can reboot machines, restart processes, post public messages, etc.
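For what it's worth, the e-mail variant is only a few lines with the standard library. Here is a sketch; the mailbox host, credentials and the command convention are placeholders, and a real version would need to authenticate the sender rather than just match a subject line.

```python
# Sketch: poll a mailbox and treat specially formatted subjects as commands.
# Host, credentials and the "RESTART-APP <token>" convention are placeholders;
# a real version must authenticate senders properly (e.g. signed messages).
import email
import imaplib
import subprocess

TOKEN = "s3cr3t-token"  # shared token so random mail can't trigger commands

imap = imaplib.IMAP4_SSL("imap.example.com")
imap.login("automation@example.com", "app-password")
imap.select("INBOX")

_, data = imap.search(None, "UNSEEN")
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    subject = msg.get("Subject", "")
    if subject == "RESTART-APP %s" % TOKEN:
        subprocess.call(["systemctl", "restart", "myapp"])  # placeholder action
imap.logout()
```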
One of these is already available for Windows: "TweetMyPC v2.0 lets you shutdown/restart/LogOff and lots more on your Windows PC, remotely."
I'm not sure this counts as a very practical use (a bit of fun mainly), but it certainly attracted my interest:
Twitter image encoding challenge
The idea of this challenge is to try to encode a picture into a 140 (Unicode) character Tweet. It's quite astounding how much information some of the algorithms posted there can fit into a message.
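The common trick behind most of the entries is to treat each Unicode character as a container for many bits rather than a single byte. Here is a toy sketch of that packing idea (my own illustration, not one of the actual contest algorithms):

```python
# Toy sketch of the packing trick behind the challenge: store 15 bits of data
# per character by mapping each 15-bit chunk to a codepoint offset from U+1000
# (well below the surrogate range), so a 140-character tweet can carry roughly
# 140 * 15 / 8, about 260 bytes. Real entries add lossy image compression.
def pack(data: bytes) -> str:
    bits = int.from_bytes(data, "big")
    chunks = []
    for shift in range(0, len(data) * 8, 15):
        chunks.append(chr(0x1000 + ((bits >> shift) & 0x7FFF)))
    return "".join(chunks)

def unpack(text: str, length: int) -> bytes:
    bits = 0
    for i, ch in enumerate(text):
        bits |= (ord(ch) - 0x1000) << (i * 15)
    return bits.to_bytes(length, "big")

sample = b"tiny image payload"
tweet = pack(sample)
assert unpack(tweet, len(sample)) == sample
print(len(sample), "bytes ->", len(tweet), "characters")
```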
Scott Hanselman used Twitter to create an app for ordering a sandwich.
Check out his post
I think the main advantage of using Twitter in instances like this is its SMS capability (and the fact that it's free, whereas you would otherwise have to buy services that charge a monthly fee to forward received SMS messages to an HTTP page or something like that).
I'd considered using it to make a little budget app for myself where I could SMS things I'd bought to a private Twitter account. Similarly, for tracking petrol usage, I was planning on SMSing the odometer reading, cost, etc. in a certain format and capturing it at home to run statistics on it. There are limitations, though - like you can only hook up an SMS number to one Twitter account...
It's good to think outside the box, but don't be too focused on using just Twitter because it's cool.
If you were comfortable setting up sensors and such, you could get a microcontroller, hook it up to a Twitter feed, and then give it remote commands.
For instance, remote-controlled house lights. You could then just tweet "Home lights on GXSDFXV" (the garbage at the end is to prevent real tweets from turning your lights on and off).
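A sketch of what that polling loop might look like, assuming the tweepy library and placeholder credentials; the shared token is the only "security" involved, which is why this is strictly toy territory:

```python
# Sketch: poll a Twitter account for command tweets and react to them.
# Uses the tweepy library; all credentials and the account name are
# placeholders, and the shared token is the only safeguard.
import time
import tweepy

TOKEN = "GXSDFXV"  # the "garbage" suffix that marks a real command

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

seen = set()
while True:
    for status in api.user_timeline(screen_name="my_home_account", count=5):
        if status.id in seen:
            continue
        seen.add(status.id)
        if status.text.strip() == "Home lights on " + TOKEN:
            print("turning lights on")   # here you'd talk to the microcontroller
        elif status.text.strip() == "Home lights off " + TOKEN:
            print("turning lights off")
    time.sleep(60)  # stay well inside the API rate limits
```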
I wouldn't use Twitter in particular for transferring any private information (think about security if someone hacks the account and can shut down your corporate servers or transfer fake licenses). For that I would set up a private server which implements the open microblogging protocol (like identi.ca), unless - as others have already said - there is another, more suitable protocol.
For publishing PUBLIC information (heartbeat messages can be considered that, too) I like the idea quite a lot. We recently had a very successful (but unfortunately ineffective) e-petition in Germany where a Twitter account posted the number of signatures every couple of minutes.
Carsonified are using this to allow people to discover other people sitting in the same room at their conferences.
They label each chair with a tag and then you tweet that tag to an account they have, and it registers you on a floorplan of the venue. Users are coloured in on the plan by their interests.
Clever but a bit overcomplicated for my tastes...
http://hello.carsonified.com/Home/Faq
This is slightly off topic of programming but still has to do with my programming project. I'm writing an app that uses a custom proxy server. I would like to write the server in C# since it would be easier to write and maintain, but I am concerned about the licensing cost of Windows Server + CALs vs. a Linux server (obviously, no CALs). There could potentially be many client sites with their own server and 200-500 users at each site.
The proxy will work similarly to a content filter: take returning web pages, process them based on the content, and either return the web page or redirect to a page on another web server. There will not be any use of SQL Server, user authentication, etc.
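(The plan here is C#, but just to illustrate the filtering flow described above, here is a rough, non-production sketch in Python: single-threaded, GET-only, with a placeholder filter rule and redirect target.)

```python
# Rough sketch of the content-filter flow: fetch the requested page, inspect
# the body, and either relay it or redirect to a block page elsewhere.
# Single-threaded, GET-only, no HTTPS handling - illustration, not production.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCK_PAGE = "http://filter.example.com/blocked"  # placeholder redirect target
BANNED = b"forbidden-word"                        # placeholder filter rule


class FilterProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Browsers configured to use a proxy send the absolute URL in the path.
        with urllib.request.urlopen(self.path) as upstream:
            body = upstream.read()
            content_type = upstream.headers.get("Content-Type", "text/html")
        if BANNED in body:
            self.send_response(302)
            self.send_header("Location", BLOCK_PAGE)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", content_type)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FilterProxy).serve_forever()
```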
Will I need CALs for this? If so, about how much would it cost to set up a Windows Server with proper licensing (per server, in the USA)?
This really is an off-topic question. In any case, there is nothing easier than contacting your local MS distributor. As Stack Overflow is by nature an international site, asking a question like this, where the answer is most likely to vary by location (MS license prices really are highly variable and country-specific), is in my opinion not likely to receive a useful answer.
I realize this isn't exactly answering your question, but if you want to use Linux, maybe you want to look into using Mono: .NET on Linux.
If users will not be actually connecting to any MS server apps (such as Exchange, SQL Server, etc) and won't be using any OS features directly (i.e. connecting to UNC paths) then all that should be required is the server license for the machine to run the OS. You need Windows Server CALs when clients connect to shares, Exchange CALs for mail clients, and SQL Server CALs for apps that connect to your databases. If the clients of your server won't be connecting to anything but the ports offered by your service, you should be in the clear, and it shouldn't cost any more to build a server for 100 users than 10.
You may not need any CALs for users depending on how you use the server. Certain functionality requires the purchase of CALs but some doesn't. There's no real good way to answer this question since the requirements are too vague. Does it use domain services? Does it use SQL server? Clustering? There are many variables.
If you are looking at what the most you could possibly pay, go to CDW and look at the Open License/Open Business products to get an estimate.
As said above, if you are using your own connections and nothing else on the server, you won't need the CALs.
I would Google the ROI of Linux vs. Windows for a commercial server. I have no opinion generally on this, but I have seen that, long term, they level out; in the grand scheme of things the initial cost of the Windows license is actually minimal and insignificant.
Choose the best technology to solve the end user's problem, document why, and provide an evaluation report including maintenance costs, development costs, etc. When you do this the answer will be clear to you and your customer.
If your users are not connecting to any other Windows resources (Active Directory, SQL Server, file shares, etc.) then you shouldn't need CALs, but I believe there is something like an External Connector license. There's also a 'Web edition' which looks like it's in the range of ~$400.
Also, it looks like Microsoft will be removing the CAL restrictions on web servers completely in Windows Server 2008.
Microsoft should call their licensing division Enigma...