We want to install Dynamics CRM 2013 for 10 users. We are thinking about 2 approaches:
Install only one instance of CRM and SQL Server, on two separate server machines. The CRM server machine will have the front-end server role and the SQL Server machine will have the back-end server role. All 10 users will browse and work on the same instance of CRM.
Install SQL Server on a separate machine and install CRM on the machines of all 10 users. All 10 CRM instances will point to the same organization created on the SQL Server. Each user will use the CRM installed on their own system, but their customizations will be published to one organization since all CRM instances point to the same organization.
Could anyone let me know which approach will be better in terms of performance?
Update after the replies from Draiden and Kye:
All 10 machines will be used only for development and IFD or NLB will never be required.
In one of our previous projects, we used the approach of 1 SQL-SSRS server and 1 CRM (Full Server). During peak development periods, when around 8 users were connected to CRM doing customization, memory usage on the CRM server would climb to around 85%-95%, at which point CRM would become unresponsive.
In order to avoid the high memory usage, we are thinking of approach 2, where CRM memory usage will be distributed among multiple machines. Also, if someone wants to debug a plugin, they will debug on their own CRM (and will not block others). Having one SQL Server in the back end will let developers share the same data, and their customization changes will be published to one central organization.
The second solution involves creating a front-end server for each user? I don't think that is a viable (or a particularly nice) way to install CRM. Also, if you ever find yourself needing to set up something else, like IFD, you will need to install and configure an NLB and teach everyone to change the URL.
The first approach you are suggesting is the better one, but usually you go with 2 servers: 1 SQL and 1 CRM Full Server installation. Performance-wise it shouldn't make much of a difference, since only 10 people will be using the system.
So I would say that solution 1 doesn't help you much, because you still keep the DB and the back end on the same machine, while solution 2 still has a bottleneck when you are doing SQL operations; plus CRM is quite demanding, and letting the server run on a user machine will choke it.
Go with a more traditional approach.
1 SQL-SSRS and 1 CRM, or, if you think you will have performance issues, go with 1 SQL-SSRS, 1 back-end server, an NLB, and as many front-end servers as you want/need.
Again, for 10 users, having multiple front-end servers doesn't make much sense.
Please refer to this TechNet article for supported configurations.
For best performance, you will want to use a multi-server architecture. Furthermore, in order for the data to be shared between the users, they would need to be using the same environment.
Could anyone let me know which approach will be better in terms of performance?
I don't think option 2 is viable, as it means installing the CRM web server on 10 machines:
Running IIS on client machines will start using up memory your end users should be using for desktop applications.
If you ever need to scale up the front-end machines, you'll need to do this 10 times.
Since your users may not be using CRM all day, IIS will eventually recycle, making the first time a user accesses the site seem slower than expected.
I would install the CRM web server and database on separate machines, following the minimum recommended hardware requirements.
https://technet.microsoft.com/en-us/library/hh699840(v=crm.6).aspx
Update - If your requirement is around a development environment, I would use two servers for Production and two servers for Test (to mimic Production).
For the development environment - I'd ask developers to install CRM and SQL locally so that they can debug their own code, and then push their finished code to a central repository such as GitHub or TFS. It would then be someone's (or something's) role to pull down updated code, prepare a CRM solution and deploy it to the next environment.
I have an MS Access database that I need to share with multiple users across the entire state. Right now I have split the database and placed the back end on a shared network drive and distributed the front end, but the issue I'm having is that offices further away can't enter a record in a timely manner (one office took over 2 hours).
We do have SharePoint, but it's on a 2010 server and our MS Access is 2013, and I'm told that because of this Access won't link up to SharePoint, so that is not an option.
Someone in my office mentioned something about replicating a database... is this something that will work? If not, are there any suggestions?
Replication in Access was killed in Access 2007.
SharePoint is not an option unless you start from scratch, and the shared lists and/or the various web apps you can create are seriously limited compared to your present desktop solution.
Basically, you have three options:
Upgrade your WAN to a 100 Mbit/s low-latency quality fibre connection
Create a Terminal Server hosting your application. Remote users will access this via standard Remote Desktop Connection
Upgrade your backend to SQL Server Express (free) and set up an in-house or outsourced server hosting this
The first two options require zero coding, while the last takes a little but not much, and that is well documented (just bing/google it); a rough sketch of the linked-table setup follows.
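As a rough illustration (the server and database names here are invented), once the tables live in SQL Server Express the Access front end simply relinks to them over ODBC with a connection string along these lines, and the distributed front ends keep working largely unchanged:

    ODBC;DRIVER={SQL Server};SERVER=HQSERVER\SQLEXPRESS;DATABASE=StateRecords;Trusted_Connection=Yes

You can relink the existing tables through the Linked Table Manager (or a short bit of VBA); only the back end moves.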
Apart from technology support, what are the business benefits of Oracle WebLogic Server? For example, in areas such as security, support, etc.
What are the new features supported by WebLogic?
TL;DR:
Support is great when you open a ticket with Oracle Support (strictly for WebLogic).
Great admin/read-only user implementation. We authenticate against Windows Active Directory. Developers get read-only accounts, which reduces the churn of waiting for ops to transfer logs and validate settings.
The Dashboard is useful out of the box for real-time monitoring without additional tools or installs, and is easily accessed by anyone who is authorized to log in. We could give it to our CIO, if he wanted it, in about 3 minutes by adding him to the right authorized group in AD.
Easier to clone environments.
I haven't worked with OC4J, but I believe Oracle's roadmap picks WebLogic as its preferred Java application server. You can see it is the base technology for some of their other products, such as Oracle Service Bus, Oracle Enterprise Manager (OEM), and Oracle Line Planning.
I have opened 3 Oracle tickets in the past month and was surprised at how fast they answered. For a Severity 3 ticket (medium), they have usually responded in 2-3 days. I can't say the same for their other services (over 2 weeks for a ticket on OEM).
Security is a pretty broad scope... so you'd have to be a little more specific on some of the topics of security.
One thing that is pretty awesome is the Dashboard: http://docs.oracle.com/cd/E14571_01/web.1111/e13714/dashboard.htm You can obviously add read-only monitor accounts so other users can get insight into performance. We add developers to this so that they can validate settings or see performance whenever there is a production issue.
We use Microsoft Active Directory authentication in our WebLogic domains. People do not use the default weblogic administrator user, so configuration changes are audited. When someone's account is disabled on leaving the company, their access to WebLogic is disabled as well; you don't have to change the password.
Another useful setting I like is the ability to automatically archive configuration changes. Each time someone makes a config change, a backup is automatically created. This lets me go and fix things when developers break their environment, without having to heavily reverse-engineer what they did.
I also like the fact that you can pack and unpack domains. I've used it to move entire domains from staging to production with some minor changes, i.e. changing all stg variables to prod. This should likewise make it easier to 'clone' environments when you want to build out a new one.
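For anyone who hasn't used them, the pack/unpack scripts live under the WebLogic common/bin directory; a minimal sketch (the domain paths and template names here are just placeholders) looks roughly like this:

    # on the source machine
    $WL_HOME/common/bin/pack.sh -domain=/u01/domains/stg_domain -template=/tmp/stg_domain.jar -template_name="stg_domain" -managed=false

    # on the target machine
    $WL_HOME/common/bin/unpack.sh -domain=/u01/domains/prod_domain -template=/tmp/stg_domain.jar

After unpacking you would still edit the stg-to-prod values (listen addresses, data source URLs, etc.) before starting the servers.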
Although not directly related, I should mention Oracle Enterprise Manager. We are an Oracle shop because they seem to have given us a good deal on licensing, so we get to run Oracle Enterprise Manager, a tool that is slowly becoming more and more useful. The agent also reports how our RedHat Linux hosts are behaving: network input/output, CPU utilization, memory utilization, Java heap stacks. We are going to move to defining groups within it that hold all the targets related to an application stack. This will give our operations team the insight to see where the bottleneck might be: the Oracle WebLogic web layer, the network, Oracle Service Bus, or Oracle Database performance.
Supposedly, you can add JBoss and other JMX monitoring to OEM as well. It's on our to-do list for non-WebLogic instances. We're slowly rolling OEM out.
It seems to me a wise idea to test-run my workflow on a local server before deploying it at the customer's. To be entirely sure, I'd like to copy all the data from their DB to my test organization (I have full access rights). The problem is that I can't see any straightforward way to export the whole shebang to an XML spreadsheet.
What's the best way to export/import everything from/to a DB? The source and the target servers are not the same.
Of course I've got the option of backing up the client's DB and restoring it should the brown stuff hit the fan, but it'll be far more professional if I don't have to.
The client's DB is in the cloud, which makes me suspect that perhaps I won't be able to access it at all, and as far as I can see there's no way to back up the data there. Am I missing something, or is it that bad?
I fully agree that would be sensible. We usually have a number of development and test servers for all our work; generally, however, we do not exactly mirror the data in the client database.
We create a representative sample of data on our dev servers and then just move the CRM solution across for deployment.
As far as I know there is no straightforward way to get all the data; if you really want to do this I would suggest taking a backup of their database and importing it into yours.
(As a side note, not all clients are happy for copies of their database - especially if it's a live system - to be taken off site. Personally, if it is a live database I wouldn't take that risk on yourself; if the data gets lost or leaked you might suffer the consequences.)
James raises good points about the business aspects of your request; however, to get hold of the record-level data there are a few options. The easiest by far is a wholesale export and import of the underlying SQL database. (For the record, the alternative is to do a data migration from live into a different DB, but this is no small task so I won't entertain it any further here.)
You mention that the client is using CRM Online ("...client's DB is in the cloud..."). You can raise a (free) support request with CRM Online Support who will provide you with a copy of the YourOrg_MSCRM database which can then be reimported into an on-premise deployment.
If you wish to simply have a test instance that has a copy of the Microsoft CRM Online organization, Microsoft does provide a means to do that. Depending on how many professional user licenses the customer has, this may be free or may be an extra cost, and both instances would count against the storage limit for Microsoft CRM Online. You can see full details here - https://community.dynamics.com/crm/b/crmteamblog/archive/2014/03/20/introducing-sandbox-instances-in-crm-online.aspx - and the steps for setting up a sandbox instance here - https://technet.microsoft.com/en-us/library/dn467371.aspx ("Add an instance to your subscription"). This is something I have used with one of our Microsoft CRM customers, as it was a very good way to help validate the Scribe Online migration and customization changes we were making before moving them into production. The nice thing about doing it this way is that everything is still contained in the same Office 365 tenant and you can limit which users have access to the sandbox organization, which is important for customers in knowing that their data is safe and not on some unknown server or machine.
I have a project and I'm planning to start the web app as an Azure Web Site and then migrate it to an Azure Cloud Service (also called Hosted Service) if it is needed as a scale strategy.
The reason is that I have read Azure Web Sites are simpler and faster to develop for, with almost no Azure-specific configuration or code. So starting fast and simple seems a good starting point for the project.
But, is that a good starting point for you?
Is migrating an Azure Web Site to an Azure Cloud Service the same as migrating a normal ASP.NET website to an Azure Cloud Service?
Would you start with an Azure Cloud Service right from the beginning? If yes, why?
Thanks for your time.
There are benefits to both deployment models; it will eventually come down to what you are trying to achieve and, ultimately, the success of your application.
Below I've outlined the pros and cons of each model to help ensure that you're making the right choice for your application's goals.
Windows Azure Web Sites
You have correctly identified that Windows Azure Web Sites is a great starting point for an application; consider, too, that Web Sites offers enough scalability for many solutions.
Pros
10 Free sites during preview [Free for 12 months]
Easy Deployment (use Git, TFS, Web Deploy or FTP)
Quick Scalability (You can move to your own dedicated cluster [aka reserved standard])
Simple Development (Supports Classic ASP, ASP.NET, Node.js, Python & PHP)
Persistent Environment (most people are used to this)
Cons
No SSL Support on Custom Domains
Still in preview (currently no SLA)
Windows Azure Cloud Services
Cloud Services (formerly known as Hosted Services) is definitely the vision for the future of web applications. It is built with resiliency in mind, keeping the cost of applications affordable by scaling to meet demand and dialing back capacity when your traffic slows.
Pros
Increased control over the cost of your application (if architected correctly)
Flexibility (You have full control over the environment)
SSL Support
Language Agnostic
Web Server Agnostic (although IIS is available by default)
Auto Management of Servers
Cons
Architecture should be carefully considered
Deployment time is slower (Slows development cycle)
Things to consider for Portability
The items above should give you enough to plan the immediate future of the application, and it is very likely that you will want to consider Cloud Services down the road (it fits a number of application scenarios better in the long run).
Here is a list of things to help with portability from Web Sites to Cloud Services:
Start thinking Stateless
Windows Azure Web Sites is nice in that it is a persistent environment, which means you are able to store things like session state and assets on disk.
Although this is a good feature, it's best to start planning towards a stateless application if your end goal is to be in Cloud Services. Here are a few things you can do to start thinking stateless:
Don't rely on Session State
If you need it, come up with a strategy to make it scale (Caching Service, SQL, or Storage)
Use the Storage Service
Assets such as static HTML, CSS, JavaScript and images are better placed in Storage
Avoids additional bandwidth on your Web Site (you can potentially stay on shared instances longer, for lower cost)
Can be CDN-enabled, which provides a better experience for international markets
Easier to update web assets once the application is migrated to Cloud Services (a quick example follows)
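To make that concrete (the storage account, container and CDN endpoint names below are invented), the change is usually just pointing asset URLs at blob storage, or at the CDN endpoint sitting in front of it:

    <!-- served from the Web Site itself -->
    <img src="/images/logo.png" />

    <!-- served from a Storage container, optionally fronted by the CDN -->
    <img src="https://mystorageacct.blob.core.windows.net/assets/images/logo.png" />
    <img src="https://az123456.vo.msecnd.net/assets/images/logo.png" />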
Storing User content
If your application already stores user content to the Storage Service, that is one less code modification needed when moving to Cloud Services in the future.
Make it easy to discover patterns in your Data
The benefit of Cloud Services is that it enables you to reduce cost by scaling only what needs to be scaled. Start the process of identifying your scale units now, i.e. how you partition your database or your tables in Storage.
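As a small illustration (the entity names are made up): every Azure Table Storage entity carries a PartitionKey and a RowKey, so choosing something like PartitionKey = CustomerId and RowKey = OrderId from day one means each customer's data forms its own scale unit, and hot partitions can be split later without reshuffling the whole table.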
I read all the posts and all of them are very helpful.
In addition to them, I found some info on MSDN: Windows Azure Websites, Cloud Services, and VMs: When to use which?
With Windows Azure Websites you can:
Build highly scalable web sites on Windows Azure.
Quickly and easily deploy sites to a highly scalable cloud environment that allows you to start small and scale as needed.
Use the languages and open source applications of your choice then deploy with FTP, Git or TFS, and easily integrate Windows Azure services like SQL Database, Caching, CDN and Storage.
With Cloud Services you can:
Build or extend your enterprise applications on Windows Azure.
Create highly-available, scalable applications and services using a rich PaaS environment. Support advanced multi-tier scenarios, automated deployments and elastic scale. Deliver great SaaS solutions to customers anywhere around the world.
MSDN also summarizes the options and compares some features of Web Sites and Cloud Services.
Azure is a great place to have your app, but there are some considerations you need to know about before you start migrating it.
Azure Websites and Hosted Services are really trivial to deploy. With Visual Studio you generate the package and simply upload it. Then you have a development environment to check it; if it's OK for you, swap the IPs, and if it's not, upgrade again.
Your instances have some properties that can be annoying. For example, you cannot be sure of your IP, so if your app works with a provider that uses IP restrictions, you will need to figure out how to proceed.
More considerations: your "server" could be reimaged at any moment, so if you store something on the local disk, that file could go away at any time.
Azure works very nicely if you have at least 2 instances for each website. Maybe your app is not prepared for that; the first step will be managing sessions with AppFabric. It is really easy, just a change in your web.config. Be careful, because this session state doesn't work exactly like the "old one": you cannot store non-serializable objects (should be easy to adapt) or very large objects (more than 8 MB).
If you are going to develop something from zero, I suggest you start on Azure from the beginning. The reason is simple: it's really cheap to start, and you will not pay serious money until the app gets lots of visits. It's also very cheap to set up SQL Azure and a storage account. Once you have everything in place, it's easy to add more instances or scale up.
Example:
Imagine you have an idea and you wish to show it to some possible investors.
You start by setting up a little SQL Azure database (1 GB), $9.99 monthly.
Then you build a site and you put up 2 extra small instances, $18.72 monthly.
Let's say you need 100 GB of space (images, backups, ...), $12.50 monthly.
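(That adds up to roughly $9.99 + $18.72 + $12.50 ≈ $41 a month at the prices quoted above.)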
At this point, you have everything in place to start your business paying less than $50 monthly.
If your site takes off and the visits start to come, you change your instances to small instances (it's really dangerous to run a production environment on extra small instances, because they have no CPU reservation). Then the extra small cost ($18.71) goes up to $57.60. Maybe you need more space for that SQL Azure database? Etc.
Prices calculated from here: http://www.windowsazure.com/en-us/pricing/calculator/?scenario=web .
Those are a few tips; there is a lot more. My advice is to start a trial account and play with it.
Final advice: it's very easy to solve everything just by purchasing more resources, but sometimes you need to refactor and optimize your code. If you simply add more resources each time you have a problem, you could end up with a huge bill and very messy code.
Hope it helps!
Another advantage of Windows Azure Cloud Services over Web Sites is that a cloud service can be added to an Azure Virtual Network. This can give it access to on-premises resources such as databases. So if your requirements are such that you need the scalability offered by Azure but need to keep your data on-premises due to security restrictions, Cloud Services is a better choice.
Azure Web Sites cannot be part of an Azure Virtual Network; to access on-premises resources, mechanisms such as Azure Service Bus Relay must be configured.
We had our web site running on PHP on some hosting and at some point decided to move it to Azure (where the main part of our service sits). We started with Azure Web Sites, which was great from a development point of view (mainly the integration with Git). But after about a week of testing (when we had decided to actually move the production web site) we found that currently:
No SSL for custom domains
Custom domains are available only for reserved instances (no shared infrastructure)
No SLA
So we moved to a Hosted Service. The main problem for us was the lack of simple deployment (you need to build and upload the whole package of the web site); the solution we found was to use Dropbox. As a startup task for the role, we install the Dropbox service on the machine, which pulls the whole web site down from Dropbox, which in turn holds an SVN checked-out folder, so site updates became very easy.
I have implemented Continuous Integration using TFS Version Control and TFS Build 2010. The compiled website project gets dropped in a shared folder with a version number.
Now I have a very basic, maybe even stupid, question. When we normally deploy a website project from VS 2010 to a web server, it uploads an App_Offline.htm file to the website folder so no requests are served to users. After the publish is completed, that App_Offline.htm file is removed. During that period users see an outage.
If we use CI on a live website, how can we eliminate that outage that users see? I believe the whole point of CI is that users get to see newer features and the site is never down.
How is this accomplished? If we deploy the website project to the root folder, existing users will be affected, and that is certainly not advisable.
I wanted to know what the recommended practice is with VS 2010, TFS 2010 Build & Version Control.
There's no real foolproof method for this; service up-time is never 100%, which is why people usually measure it in 'nines'.
But if you had multiple web servers (backup, fail-over, mirror, etc.), you could roll out the update across them, so that as you update some servers, others will still be online (albeit with the old version) to serve users.
In general, only some of the largest websites have to worry so meticulously about being down for a few short minutes, so make sure you're focusing your energy in the right place ; )
Regarding taking down the site for the shortest time possible, the only way I've seen this done successfully is using multiple sites - either load balancing, or 2 sites on the same machine plus swapping host headers after the release/warm-up. But in most cases it's not worth the effort; releases shouldn't take down the site for more than a few seconds, during which there should be relatively few requests. You're better off trying a few things that help your users live through a site release.
Move session out of proc.
If the user's session lives in the app pool, it will be lost when a new version is released; change the config to move it into a session state server or the database.
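A minimal sketch of what that looks like in web.config, assuming either a separate ASP.NET state service box or an existing SQL Server (the host names here are just placeholders):

    <!-- ASP.NET state service -->
    <sessionState mode="StateServer" stateConnectionString="tcpip=sessionbox:42424" timeout="20" />

    <!-- or SQL Server (run aspnet_regsql.exe -ssadd against it first) -->
    <sessionState mode="SQLServer" sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI" timeout="20" />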
Specify a machine key for the website
ViewState (and cookies?) is encrypted using a key that is generated when the site starts; if the site restarts due to a release, any users filling out a form will receive an invalid ViewState exception on postback. (Note: this may have other security implications.)
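A sketch of the relevant web.config entry, with the key values obviously replaced by ones you generate yourself (IIS or various online generators can produce them):

    <machineKey validationKey="...128-hex-character value..." decryptionKey="...64-hex-character value..." validation="SHA1" decryption="AES" />

With a fixed machineKey in place across releases (and across all servers in a farm), ViewState produced before the deployment remains valid afterwards.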