My boss asked me to research the CRM systems available on the market, because the one we are currently using is rather a mess.
As a .NET developer I would find it great to choose and implement Dynamics CRM, because of its extensibility and its tight integration with the .NET environment and well-known tools.
All the marketing sounds great, but I'd like to know about the common DISADVANTAGES and ISSUES with this system.
Most important is how it performs in a company with about 150 concurrent, very active users. I have heard that it's really slow compared to competitors' systems.
The Dynamics CRM Product team has published an excellent whitepaper with guidance and benchmarks for 500 concurrent users. You can learn a lot by studying this paper. The link is here:
Microsoft Dynamics CRM 4.0 Suggested Hardware for Deployments of up to 500 Concurrent Users
I can't answer regarding the number of users/activity, but I can refer you to the SDK article 'Performance Best Practices'. I'll speak to the side of you that would be writing plugins (attached to data access messages), custom pages accessing the CRM web services, and SSRS reports. A couple of points I can relate to:
Disable Plug-ins. This is an attractive and major integration point into CRM, so the fact that they list it as a performance issue is disheartening. We have seen OutOfMemory exceptions stemming from the plugin cache. We got around this issue by deploying plugins to disk rather than to the database. With database deployment, the assembly is reloaded and its signature verified every time a plugin is called, and we believe this was eating up the Large Object Heap. Probably not an issue for your normal CRM implementation.
Limit Data Retrieved. Definitely. Avoid retrieving lookups/picklists/bits you don't need, as each one causes an extra join. It's not going to be a huge deal on smaller entities, but it can be if you work with entities that have a large number of attributes. Probably not an issue for normal CRM customization; a good design should avoid the problem in other cases.
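To make that concrete, here is a minimal sketch (CRM 2011 SDK; the entity, attribute names and filter are just illustrative) of asking for an explicit ColumnSet rather than all columns:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Sketch: retrieve only the attributes the caller actually needs.
// "service" is an IOrganizationService obtained elsewhere; the entity and
// attribute names below are illustrative.
public static EntityCollection GetActiveAccounts(IOrganizationService service)
{
    var query = new QueryExpression("account")
    {
        // Avoid new ColumnSet(true) (all columns): every lookup, picklist
        // and bit pulled back can cost an extra join on the server.
        ColumnSet = new ColumnSet("name", "accountnumber", "telephone1")
    };
    query.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0);

    return service.RetrieveMultiple(query);
}
```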
I can't really offer any advice on how it compares to its leading competitors. I know the main things are that it's cheaper and very actively developed.
I can say a bit about performance though which might help.
We have about 400-600 concurrent users on the system. The system isn't particularly web-server intensive. We have two web servers for resilience - it would be a disaster if the system went offline, but these servers are never taxed. They have a couple of virtual cores and 4 GB of RAM.
Our database is 130 GB in size and is hosted on a 24-core database server with 48 GB of RAM. It is clustered, but because SQL Server can't handle two active nodes, only one server is ever active.
The database server really never gets maxed out. However, there is one very important change we needed to make, and one that I think Microsoft is now advising all users of large CRM installs to make. By default SQL Server uses a locking mode that blocks writers while a row is being read. In our system (and numerous others, apparently) that was causing huge issues.
We switched on a different mode (I think it's called "snapshot isolation", or something like that). To be fair though, even if you did have 200 concurrent users, it won't be an issue until the more central tables such as activitypointer and account get pretty large (millions of rows).
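For reference, the setting being described is most likely the READ_COMMITTED_SNAPSHOT option on the organization database. You would normally run the T-SQL directly in Management Studio; purely as a sketch, doing the same from code looks like this (the connection string and database name are placeholders):

```csharp
using System.Data.SqlClient;

// Sketch only: enable row-versioning-based read committed isolation so that
// readers no longer block writers. Normally run as T-SQL in SSMS against the
// CRM organization database; the names below are placeholders.
public static void EnableReadCommittedSnapshot(string masterConnectionString)
{
    using (var connection = new SqlConnection(masterConnectionString))
    {
        connection.Open();

        // WITH ROLLBACK IMMEDIATE kicks out open transactions so the
        // ALTER DATABASE can take the exclusive lock it needs.
        var sql = @"ALTER DATABASE [Contoso_MSCRM]
                    SET READ_COMMITTED_SNAPSHOT ON
                    WITH ROLLBACK IMMEDIATE;";

        using (var command = new SqlCommand(sql, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}
```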
So there is no doubt that CRM 2011 can handle that many users, as long as you have suitable hardware and someone who understands SQL Server.
HTH
S
AppHarbor looks very appealing for our .NET solution, but I have some questions I could not find answered on the internet.
Our major concern is the reliability of the dedicated SQL Server:
Is it clustered / mirrored / replicated?
What happens when they upgrade / patch / maintain the server or the hosting server, and when hardware fails?
Are upgrades scheduled?
Can we set time interval when they do upgrades?
Which version and edition of SQL Server is used?
Can I use full-text search?
Can I use Reporting Services?
Is communication with the SQL database reliable? For example, with Azure SQL it is recommended to build in retry logic: if a command does not succeed, retry it (see the sketch after these questions for the kind of thing I mean).
Is AppHarbor reliable? Every cloud provider occasionally has some outages (Amazon, MS Azure ...). Is AppHarbor any less reliable compared to them? I know AppHarbor runs on top of Amazon.
Are there a lot of hidden issues you run into? What are the most common?
Did anybody decide to leave AppHarbor for a good reason?
As far as I can see, Azure is a real cloud system with all the downsides and upsides: more scalable, but with modified infrastructure such as a customized SQL Server. AppHarbor, by contrast, mimics an on-premises solution. Is my understanding correct?
How is the documentation?
How is the support?
Thank you for your help.
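(For clarity, this is roughly the retry pattern I have in mind - a minimal sketch using plain ADO.NET; the retry count and delay are arbitrary, and real code should only retry on transient error numbers.)

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

// Minimal sketch of the retry logic referred to above: re-run a command a few
// times with a short pause when a SqlException occurs. The retry count and
// delay are arbitrary; production code should inspect the error numbers and
// only retry transient failures.
public static int ExecuteWithRetry(string connectionString, string sql, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                return command.ExecuteNonQuery();
            }
        }
        catch (SqlException)
        {
            if (attempt >= maxAttempts)
                throw;

            // Back off briefly before the next attempt.
            Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
        }
    }
}
```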
Yes, AppHarbor offers redundant/replicated dedicated SQL Server databases. These plans are available upon request.
This depends on the type of maintenance/update and your SQL Server database plan. If the database server is replicated, downtime can be minimized by failing over to the replica while performing maintenance. In the event of a server failure the database will be attached to a new instance and the application's configuration will be automatically updated. Should a hard drive fail leading to corrupted/lost data, AppHarbor makes daily backups that will be used to restore your database. It should be noted that hard drive failures are very rare.
We generally coordinate planned maintenance that requires downtime with customers whenever possible. Dedicated SQL Server customers can also select their own maintenance window.
Not really, but AppHarbor will reach out and coordinate with you when it is necessary.
Different SQL Server versions and editions are used depending on the plan. For single-instance dedicated SQL Servers we generally use SQL Server 2008 R2 Web Edition. Dedicated SQL Server 2012 instances are available upon request. Replicated setups require other and more expensive SQL Server editions. You may also want to consider our dedicated MySQL service if you'd like to reduce costs and don't rely on SQL Server specific features - since AppHarbor doesn't have to pay license costs these are less expensive, particularly for a replicated setup.
Yes.
Not by default, but we can work with you to support reporting services on your dedicated SQL Server instance.
Yes. In fact the primary reason customers upgrade from shared to dedicated SQL Server is for consistent, reliable performance.
I'd say so. The last major outage occurred on July 29th, 2012, due to an electrical storm that affected multiple availability zones in AWS's North Virginia region. As an example, our blog has been available 99.997% of the time since then. In the event of an application instance failure, applications are rapidly moved to healthy instances. We recommend running with at least two workers to ensure redundancy in those cases.
I'm admittedly not the best person to answer this question. The most common request/limitation we hear about is that you can't currently trigger a backup yourself. This will be available at a later time, but we do keep daily backups of your databases.
AppHarbor's cloud application platform is relatively similar to Azure in terms of scalability. We support rapid "elastic scaling" of application workers both vertically and horizontally. With regards to the dedicated SQL Server service your understanding is correct: It is very similar to an on-premise solution. While the scaling story is different compared to SQL Azure this allows for much greater flexibility. We can tailor a database plan and server that suits your requirements whether you need high CPU, RAM and/or I/O performance. Similarly we can offer database sizes that are 10x larger than SQL Azure's current 150GB database size limitation.
Most documentation is available in the knowledge base. We try and keep this as up-to-date and comprehensive as possible, but if you find yourself missing some information you're of course more than welcome to let us know and we'll add it. Third party add-on providers typically maintain their own AppHarbor-specific documentation.
This is another question where I may be a little biased, but I can tell you a little about our goals: our goal is to always answer non-critical support requests related to apps on both free and paid plans within the day. Critical support requests and support requests related to applications or databases on paid plans take priority. Support is included in the plans, but we're working on offering premium support options as well. We generally try to exceed your expectations and are always happy to help out and give advice on issues you experience - whether they're related to the AppHarbor platform or not.
Disclaimer: I'm a co-founder of AppHarbor.
The following questions were posed by a customer who is about to write a large-scale XPages application. While I think the questions are actually too broad for Stack Overflow's style, they are interesting, and the collective knowledge of the experts here could yield better results than one person answering them:
How many concurrent users can use XPages applications on one Lotus Domino server (there are several applications on the Lotus Domino server, not just one)?
How can we identify and analyze memory leaks on a Lotus Domino server when running XPages applications?
How can we write XPages the right way to achieve the best performance and avoid memory leaks?
What code methods and objects should not be used?
What are typical errors when a LotusScript developer begins to write code for XPages? What are the best practices?
How can we build a centralized, consolidated XPages application for 10,000-15,000 users? How many servers do we need? How should the XPages application be configured in that case?
How do we balance users across servers?
I will provide my insights; please share yours.
How long is a string? It depends on how the server is configured, and "application" could be a single form or hundreds of them. Only a test can tell. In general: build a high-performance server, preferably with 64-bit architecture and lots of RAM, and make that RAM available to the JVM. If the applications use attachments, use DAOS and put it on a separate disk - and of course make sure you have the latest version of Domino (8.5.3 FP1 at the time of this writing).
There is the XPages Toolbox that includes a memory and CPU profiler.
It depends on the type of application. Clever use of the scopes for caching, and Expression Language and beans instead of SSJS. You leak memory when you forget .recycle(). Hire an experienced lead developer and read the book (also the other one and two). Consider threading off longer-running code so users don't need to wait.
Depends on your needs. The general lessons of Domino development apply when it comes to db operations, so FTSearch over DBSearch, scope usage over @DbColumn for parameters, and EL over SSJS.
Typical errors include: all code in the XPages -> use script libraries. Too much @DbLookup / @DbColumn instead of scopes. Validation in buttons instead of validators. Violation of decomposition principles. Forgetting to use .recycle(). Designing applications "like old Notes screens" instead of single-page interaction. Too little use of partial refresh. No use of caching. Too little object orientation (creating function graveyards in script libraries).
This is a summary of questions 1-5; nothing new to answer.
When clustering Domino servers for XPages and putting a load balancer in front, the load balancer needs to be configured to keep a session on the same server, so partial refreshes and Ajax calls reach the server that has the component tree rendered for that user.
It depends on the server setup. For example, I have an XPages extranet with 12,000 registered users spanning approximately 20 XPages applications. That runs on one Windows 2003 server with 4 GB of RAM and a quad-core CPU. The data amounts to about 60 GB across these 20 applications. No DAOS, no beans, just SSJS, and performance is excellent. When I upgrade this installation to 64-bit and DAOS, the applications will scale even further. So 64-bit and lots of RAM are the key to supporting a lot of users.
I haven't done anything around this.
Make sure to recycle when you loop over documents. Use the openntf.org debug toolbar; it will save a lot of time until we have a debugger for XPages.
Always remember that whatever you are doing will be done by several users at once, so try to cut down the number of lookups and getElementByKey calls. Try to use a ViewNavigator when you can.
It all depends on how many users use the system concurrently. If you have 10,000-15,000 concurrent users, then you have to look at what the applications do and how many users will use the same application at the same time.
Those are my insights into the question.
I am very interested in using MonetDB as a data mart, holding some huge data tables for querying and reporting.
However, after some searching, I am unable to find any online posts / blogs regarding the use of MonetDB in any kind of production capacity.
Also, there seems to be little or next to no activity online regarding MonetDB.
Is this a bad sign for the future of MonetDB?
"I am very interested in using MonetDB as a data mart, holding some huge data tables for querying and reporting"
My boss is also interested in MonetDB and I had the same reaction as you. No one is writing about MonetDB... is no one using MonetDB?
Regardless, I have been running performance tests on datasets of 500,000 to 1,000,000 records comparing MonetDB (a column-oriented DBMS) vs. MySQL (a row-oriented DBMS), and MonetDB beats MySQL in all regards - even in bulk inserts, which hypothetically it should not be as good at.
I can't speculate as to what all this means for MonetDB's future, but while it's around you might want to check it out because it performs well.
(I run Windows 7 and am communicating with each database using PHP)
I'm reacting a bit late to this post, but I'd like to add my voice to those using MonetDB in a production environment. We use it as the back-end of Spinque, a framework for designing complex search solutions. I've been using MonetDB for about 10 years, but only in the past 3 years in a production environment. Clearly, it has pros and cons and bugs like all other products, but it is being developed and improved very actively (I don't understand the low-activity signs that you refer to). If you want a DB that allows you to be ahead of the market standards, it's a good choice. Otherwise, just go for MS SQL ;)
I've been evaluating it lately for a client so I've had some time with it. My impression at this point is that it is just finishing "growing up" from being an academic experimental playground. It clearly has yet to be really discovered, though it does have some rough edges which might hinder certain applications.
As I write, I'm in the process of trying to load over 100 million rows into an instance (at 27 million presently). So far, it performs startlingly well in some areas (aggregates), but is oddly sluggish in others (most joins I've tried so far); that said, I've not yet run the recommended sampling process, and I'm forcing it to live on just a single server with 32 GB of RAM.
I've found a few little glitches and one thing that caused a full service crash (obscure and reported), but I'm thinking that for many applications MonetDB could be just the ticket. Columnar storage (rather than NoSQL) seems to be the future IMO.
I'll update this if I find anything particularly interesting.
MonetDB is first and foremost a research system, but it has progressed far beyond the level of the average research prototype. It is the only open source relational column-store platform I know of that supports full SQL. I have used it myself at CWI in many research projects that are not core DB research, but do need advanced DB technology.
You can see on the users' mailing list that deployments happen in many different organisations. As Roberto Cornacchia stated in a different answer, it is the backend of all Spinque deployments and we are happy MonetDB users. MonetDB is also used by a variety of non-profit projects such as OpenStreetMap and OpenKvK.
More and more commercial parties deploy MonetDB for analytics. (They do not always like to advertise that their analyses depend on an open source system.) Recently, MonetDB Solutions has started to provide dedicated commercial support for these deployments.
We have been using MonetDB in our business. We analyse very large data sets with many millions of rows. Traditional methods of data warehousing on SQL databases became so slow. The problem we were facing was that the data was only going to get bigger! The only way forward was to go columnar.
The results have been amazing. When you have very few joins it is staggeringly quick. Even with joins on the data sets we are looking at it is still frightening how fast it comes back.
Having seen some of the commercial partnerships, I think MonetDB is going to boom over the next few years. I believe some of the major BI suppliers are using MonetDB under the hood to perform the large data work.
I have reasonable experience managing my own servers, so GoGrid-style management is not a problem. But Mosso seems somewhat cheaper, except for its hard-to-understand compute cycle terms. Anyone who could share their experience with this would be very welcome.
Well, even though at the moment the accepted answer recommends GoGrid, I think I need to share my experience with GoGrid.
It's been several weeks since we ended our commitment with them, and I think I'm calm enough now to write up the cons.
1) Images. We were trying to use Windows 2008 images, and those were pretty old. To get up to date you need to install 80+ updates, and that takes a while. But that's not the worst thing. The worst thing is that the default image HDD size is 20 GB, and that was not enough to complete Windows updating, at least automatically (never mind installing additional software). There's no way to increase the image size, so you need all kinds of workarounds (for example, disabling virtual memory while installing).
2) Support. It's not fanatical; I would call it robotic. Although live chat is available, we were unable to solve most of our problems through it, because the live chat support personnel would always forward the request to an upper level that is not accessible through live chat. Another thing is that, as I understood it, the engineers who have real knowledge of and access to the infrastructure don't work at night or on weekends (I was working from Europe, so I was in a completely different time zone).
3) Service Level Agreement. You need to be careful about the small print (for example, I missed that the rule compensating one hour of downtime at 100x applied only to one month's bill), and there are things that are not mentioned at all - for example, I was told that the SLA terms do not apply to cloud storage, although I don't think you will find that mentioned in the SLA.
4) Reaction time. Although the SLA says they will address any issue within two hours, we couldn't get a solution in 10 days. The problem was clear: network speed between GoGrid server instances, and also between an instance and cloud storage, was 10-15 kbps (measured using several tools, such as netio, and tested across several instances). It wasn't that they forgot or anything - we were checking the status at various levels every day. My management talked with their VP of technology (or someone similar), who promised the problem would be solved shortly; several days passed and no solution was proposed. Some of the emails about how they were investigating the problem made me laugh.
5) Internet speeds. Sometimes they were really good (I measured a 550 Mbps download speed), but sometimes they were terrible (uploads as low as 0.05 Mbps).
If someone thinks this is some kind of competitor posting, I have chat and email logs about the mentioned issues, as well as screenshots of the internet speed tests, and can provide them on request.
OK, one good thing about their service: you can use several IP addresses on one instance (something our current hosting provider, Amazon EC2, cannot do).
Stay away from GoGrid!
I don't have any experience with Mosso, but I do have (unfortunately) VERY bad experience with GoGrid.
As other people mentioned, their support is horrible. Most of the time you will get a live chat person who really is no help at all - they don't really know their system or how it works, so they can't help with any problem beyond restarting your server.
Another issue is their performance, which is at best unreliable and at worst just not there: I/O that can drop below 1 mb/s (measured by a few tools), network connections that are very slow, and load balancers that do not spread the load (two servers on round robin get a 70/30 split).
Not to mention a very buggy portal: a new server picks a "free" IP, which I am then told is in use - and not by me - even though I have the whole range "assigned" to me;
new cases which are saved without their text; buttons which say "upgrade to a new plan" but do nothing... etc... etc...
Their billing department is not responsive, and you have to argue about everything (why am I paying $0.50/GB for traffic when the site states $0.29?).
I have been using them for about a year now - and that's only because I don't have the time to move. Hopefully I will be able to get the hell out of there in a month.
As you can tell, I am very very frustrated with them. I know it's my fault I didn't run away sooner, but I really didn't expect such a low level of service and quality.
beware....
Yoav.
Mosso has way better service though, and the clients stay happy. The only issue I have ever experienced with them was installing DNN (which is a pain, period), and a single client machine that refused to allow FTP access to their site... but again, the Mosso techs did everything they could to get it going.
It's simple: Mosso is just like "reseller" hosting. They provide you everything white-label, from billing to the control panel, and you sell it on to your customers.
If you are a developer, I recommend you choose GoGrid. Firstly, Mosso doesn't provide SSH access. Secondly, if you are an RoR/Mongrel user, you are capped at a limited amount of RAM (unless you pay extra in addition to the $100). Moreover, GoGrid allows you to choose your server image (CentOS, Red Hat, Windows) with some out-of-the-box support for RoR and LAMP.
What's more, GoGrid gives you initial credit ($50, or $95 if you use MS-WEBFWRD) so you can try it out before actually paying for it.
Mosso does not give you Admin control over the "servers" anymore...
Disclosure: I am the Technology Evangelist for GoGrid.
I wanted to address some of the points above by @Giedrius and @Yoav. I'm sorry if your experience was lower than expected. We have made, and continue to make, dramatic improvements and upgrades to both our product features and our service. That being said, I want to answer a few points that you listed above, specifically:
1) Images - Do note that the HD size (persistent storage) is tied to the RAM allocation. Our base images for the lowest RAM allocation (512 MB) are now 30 GB. Also, because some users experienced performance issues with low allocations of RAM on Windows servers, we have set a minimum allocation of 1 GB or higher for most Windows instances. Also, all of our Windows 2008 instances now have SP2 on them: wiki.gogrid.com/wiki/index.php/Server_Images#Windows_2008_Server
2) Support - We are always working on making our support team and processes even better. Remember that there are several public clouds that charge for support, something we don't do. Yes, it is available 24/7/365, and you are correct that there are typically more support personnel available during business hours (that is the norm for many companies), but we are here to help 24x7. Also, every GoGrid account gets a dedicated service team which consists of a variety of personnel from our organization (acct mgmt, tech support, billing, etc.).
3) SLA - We offer one of the most robust SLAs in the marketplace. Also, Cloud Storage IS in fact covered in our SLA under Section VI here: www.gogrid.com/legal/sla.php .
4) Reaction time - I do not believe that we ever state in the SLA that any issue will be "resolved" within 2 hours. I doubt that ANY hosting provider can offer that, simply because of the nature of hosting and the complexity therein. We will acknowledge and respond to tickets (as stated within the SLA) within 2 hours or 30 minutes depending on the nature of the ticket. I'm sorry if that isn't clear so please let me know where it can be better explained.
5) Internet speeds - we have multiple bandwidth providers for our datacenter. It is not typical that there is latency, jitter or slow transfer speeds. If a situation is encountered where the speeds are not what you expect, I encourage you to open a support ticket so that we can investigate.
6) I/O - recently we have been benchmarked by an independent 3rd party, CloudHarmony.com, as having the best I/O of cloud providers: http://blog.cloudharmony.com/2010/06/disk-io-benchmarking-in-cloud.html
7) Network Connections - see #5 above
8) Load Balancers - if you are encountering balancing issues, we encourage you to report it. Details on our LB can be found on the wiki: wiki.gogrid.com/wiki/index.php/(F5)_Load_Balancer
9) Portal - We continue to make optimizations to the web portal including recently launching a "list view" for customers with larger environments. If the portal is "misbehaving", I recommend clearing your cache and using the latest browser version (I personally use Chrome and Firefox regularly on the portal w/o issue). Alternatively, you could use the API to manage your GoGrid infrastructure.
10) Transfer Plan - A few months ago, we released some new RAM and Transfer Plans. It seems that you are still on the old Transfer plan if you have $0.50/GB instead of $0.29. We don't automatically change customers' plans without their permission. So I recommend that you upgrade your plan to enjoy the new pricing.
Hope that helps answer the questions/concerns. I didn't mean for it to be a sales pitch (as I'm not a sales guy) but I wanted to be sure that other readers had "the other side of the story."
Please contact me should you have any questions: michael[at]gogrid.com
Thanks!
-Michael
SharePoint isn't the speediest of server applications, and I've read a few tips to speed it up. What steps do you think are necessary to increase performance so it can be used to host a high-traffic site?
At the end of the day SharePoint is just a complicated web site with all the standard components.
In order to optimize performance you need to analyze each component and determine which one is a problem, and then adjust it accordingly.
We're in the process of implementing a SharePoint website for 1,000 concurrent users, which may or may not count as large; however, some steps we are taking are:
Implementing a detailed caching strategy to cache web part content intelligently (a rough sketch of one approach follows at the end of this answer).
Using load-balanced servers to ensure all our hardware is utilised rather than lying idle.
Undertaking capacity planning against the existing solution, so we have a good idea which component is the bottleneck for us (the SQL Server) and can ensure that server copes with the expected load and future growth of the site.
Using hardware load balancers to ensure our network and the related servers operate as expected - again, something to investigate before you implement a SharePoint website.
Ensuring our web parts don't generate unnecessary HTML and don't return unnecessary data, as this slows down loading times.
Something I definitely think is a good idea is to have a goal to work towards, as you can spend a huge amount of money and time optimizing SharePoint, which may prove unnecessary.
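To illustrate the caching point above, here is a minimal sketch of caching an expensive web part's output in the ASP.NET cache. The cache key, the duration and the LoadExpensiveContent() helper are made up for the example, and this simple form only suits content that is not user-specific:

```csharp
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// Sketch: cache an expensive web part's rendered data in the ASP.NET cache so
// it isn't recomputed on every request. Only do this for content that is the
// same for all users; the key, duration and helper below are illustrative.
public class CachedContentWebPart : WebPart
{
    private const string CacheKey = "CachedContentWebPart.Content";

    protected override void RenderContents(HtmlTextWriter writer)
    {
        var content = HttpRuntime.Cache[CacheKey] as string;
        if (content == null)
        {
            content = LoadExpensiveContent();   // e.g. a cross-list query
            HttpRuntime.Cache.Insert(
                CacheKey,
                content,
                null,                            // no cache dependency
                DateTime.UtcNow.AddMinutes(5),   // absolute expiration
                System.Web.Caching.Cache.NoSlidingExpiration);
        }
        writer.Write(content);
    }

    private string LoadExpensiveContent()
    {
        // Placeholder for the expensive query/aggregation being cached.
        return "<p>expensive content</p>";
    }
}
```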
My additional best bets are:
use x64 to allow more RAM on your server
Make the best use of your application pool recycling http://blogs.msdn.com/joelo/archive/2007/10/29/sharepoint-app-pool-settings.aspx
Make sure all custom code properly disposes SPWeb and SPSite objects using this http://blogs.msdn.com/rogerla/archive/2008/02/12/sharepoint-2007-and-wss-3-0-dispose-patterns-by-example.aspx (a short sketch of the pattern appears at the end of this answer)
utilize MS Capacity Planning Tool http://technet.microsoft.com/en-us/library/bb961988.aspx
Plan your site collection and database sizes. Keeping your databases and site collections under control will be key
GOVERNANCE GOVERNANCE GOVERNANCE - Plan for site size limits and expiration strategy. Old data should be deleted or archived for better performance. http://technet.microsoft.com/en-us/office/sharepointserver/bb507202.aspx
I cannot emphasize enough that proper early planning is essential for a successful SharePoint implementation.
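As an illustration of the dispose point above, here is a minimal sketch of the using-block pattern described in that article (the site URL and list name are placeholders):

```csharp
using Microsoft.SharePoint;

// Sketch of the standard dispose pattern: SPSite and SPWeb hold unmanaged
// resources, so wrap the objects you create in using blocks. The site URL
// and list name are placeholders.
public static int CountAnnouncements()
{
    using (SPSite site = new SPSite("http://intranet.contoso.com"))
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["Announcements"];
        return list.ItemCount;
    }
    // Note: never dispose objects you did not create, such as
    // SPContext.Current.Web - SharePoint owns those.
}
```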
In addition to caching and hardware, try to make sure that your master pages and page layouts are not customized (unghosted) into the content database, which forces a database call to retrieve them.
Do this by ensuring the files are deployed to the 12 hive as part of your solution.
Don't forget careful selection of the built-in cache settings (choose the right one for your situation).
Use the BLOBCache.
Use IIS Compression/caching (the defaults are not enough BTW).
Ensure your SQL box can keep up, especially during indexing/crawling. Splitting the Application roles (indexing vs search query and dedicated WFE for indexing/crawling) helps.
BTW, if you're running VMware VMs for your WFEs, Windows NLB breaks (though not consistently), so use hardware NLBs or DNS round-robin, etc.
If you don't need more than 2 GB of RAM for the IIS application pool on a WFE, don't bother with 64-bit on the WFE.
Just my 2c.