Apart from technology support, what are the business benefits of Oracle WebLogic Server, for example in areas such as security and support?
What new features does WebLogic support?
TL;DR:
Support is great when you open a ticket with Oracle Support (strictly for WebLogic).
Great admin/read-only user implementation. We authenticate against Windows Active Directory. Developers get read-only accounts, which reduces the churn of waiting for ops to transfer logs and validate settings.
The dashboard is useful out of the box for real-time monitoring without additional tools or installs, and is easily accessed by anyone who is authorized to log in. We could give it to our CIO, if he wanted it, in about 3 minutes by adding him to the right authorized group in AD.
Easier to clone environments.
I haven't worked with OC4J, but I believe Oracle's roadmap picks WebLogic as their preferred Java application server. You can see it is the base technology for some of their other products, such as Oracle Service Bus, Oracle Enterprise Manager (OEM), and Oracle Line Planning.
I have opened 3 Oracle tickets in the past month and was surprised at how fast they answered. For a Severity 3 (medium) ticket, they have usually responded within 2-3 days. I can't say the same for their other services (over 2 weeks for a ticket on OEM).
Security is a pretty broad scope, so you'd have to be a little more specific about which aspects of security you mean.
One thing that is pretty awesome is the Dashboard: http://docs.oracle.com/cd/E14571_01/web.1111/e13714/dashboard.htm You can add read-only monitor accounts so other users can get insight into performance. We add developers to this so that they can validate settings or check performance whenever there is a production issue.
We use Microsoft Active Directory authentication in our WebLogic domains. People are not using the default weblogic administrator user, so configuration changes are audited per person. When someone's account is disabled because they leave the company, their access to WebLogic is disabled as well; you don't have to change a shared password.
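For reference, wiring WebLogic to AD means adding an Active Directory authentication provider to the security realm. Below is a minimal WLST (Jython) sketch of the idea; the admin URL, service account, and base DNs are made-up placeholders, and the same thing can be done through the admin console instead.

    # WLST (Jython) sketch -- add an Active Directory authenticator to the default realm.
    # All connection details and DNs below are placeholder assumptions.
    connect('weblogic_admin', 'admin_password', 't3://adminhost:7001')
    edit()
    startEdit()
    realm = cmo.getSecurityConfiguration().getDefaultRealm()
    ad = realm.createAuthenticationProvider('ADProvider',
            'weblogic.security.providers.authentication.ActiveDirectoryAuthenticator')
    ad.setHost('ad.example.com')
    ad.setPort(389)
    ad.setPrincipal('CN=svc_weblogic,OU=Service Accounts,DC=example,DC=com')
    ad.setCredential('service_account_password')
    ad.setUserBaseDN('OU=Users,DC=example,DC=com')
    ad.setGroupBaseDN('OU=Groups,DC=example,DC=com')
    ad.setControlFlag('SUFFICIENT')   # still allow the embedded LDAP users to log in
    save()
    activate()
    disconnect()
    # Note: security provider changes require a restart of the admin server.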
Another setting I like is the ability to automatically archive configuration changes. Each time someone makes a config change, a backup of the previous configuration is created automatically. This lets me fix things when developers break their environment without having to seriously reverse-engineer what they did. A sketch of enabling this with WLST follows.
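A minimal WLST (Jython) sketch of turning this on, assuming placeholder admin credentials (the same attributes can also be set from the admin console):

    # WLST (Jython) sketch -- enable automatic archiving of configuration changes.
    # Admin URL and credentials are placeholders.
    connect('weblogic_admin', 'admin_password', 't3://adminhost:7001')
    edit()
    startEdit()
    cmo.setConfigBackupEnabled(true)       # back up config.xml before changes are activated
    cmo.setArchiveConfigurationCount(10)   # keep the last 10 archives under <domain>/configArchive
    save()
    activate()
    disconnect()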
I also like the fact that you can pack and unpack domains. I've used it to move entire domains from staging to production with some minor changes, i.e. changing all the stg variables to prod. This likewise makes it easier to 'clone' environments when you want to build out a new one (see the sketch below).
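Roughly, that workflow looks like the following. This is a sketch only: the install path, domain paths, and template name are assumptions, and you would still edit the unpacked domain's stg-to-prod values afterwards.

    # Sketch: cloning a domain with the pack/unpack tools (paths are illustrative).
    import subprocess

    WL_BIN = "/u01/oracle/wlserver/common/bin"   # assumed WebLogic install location

    # On the source host: package the whole domain into a template jar.
    subprocess.run([WL_BIN + "/pack.sh",
                    "-domain=/u01/domains/stg_domain",
                    "-template=/tmp/stg_domain.jar",
                    "-template_name=stg_domain",
                    "-managed=false"], check=True)

    # On the target host: recreate the domain from that template.
    subprocess.run([WL_BIN + "/unpack.sh",
                    "-domain=/u01/domains/prod_domain",
                    "-template=/tmp/stg_domain.jar"], check=True)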
Although not strictly related, I should mention Oracle Enterprise Manager. We are an Oracle shop because they seem to have given us a good deal on licensing, so we get to run Oracle Enterprise Manager, a tool that is slowly becoming more and more useful. The agent also reports how our Red Hat Linux hosts are behaving: network I/O, CPU utilization, memory utilization, Java heap usage. We are going to move to defining groups within it that contain all the targets related to an application stack. This will give our operations team the insight to see where the bottleneck might be: the Oracle WebLogic web layer, the network, Oracle Service Bus, or Oracle Database performance.
Supposedly you can also add JBoss and other JMX monitoring to OEM. It's on our to-do list for non-WebLogic instances; we're slowly rolling OEM out.
We want to install Dynamics CRM 2013 for 10 users. We are thinking about 2 approaches:
Install only one instance of CRM and SQL Server on two separate server machines. The CRM machine will have the front-end server role and the SQL Server machine will have the back-end server role. All 10 users will browse and work on the same instance of CRM.
Install SQL Server on a separate machine and install CRM on each of the 10 users' machines. All 10 CRM instances will point to the same organization created on SQL Server. Each user will use the CRM installed on their own system, but their customizations will be published to one organization since all CRM instances point to the same organization.
Could anyone let me know which approach will be better in terms of performance?
Update after the replies from Draiden and Kye:
All 10 machines will be used only for development and IFD or NLB will never be required.
In one of our previous projects, we used the approach of 1 SQL-SSRS server and 1 CRM (full) server. During peak development periods, when around 8 users were connected to CRM doing customization, the CRM server's memory usage would climb to around 85%-95%, at which point CRM would become unresponsive.
To avoid the high memory usage, we are considering approach 2, where CRM memory usage will be distributed among multiple machines. Also, if someone wants to debug a plugin, they will debug on their own CRM instance (and will not block others). Having one SQL Server in the back end will let developers share the same data, and their customization changes will be published to one central organization.
The second solution involves creating a front-end server for each user? I don't think that is a viable (or particularly nice) way to install CRM. Also, if you later need to set up something else, like IFD, you will need to install and configure an NLB and teach everyone to change the URL.
The first approach you are suggesting is the better one, but usually you go with 2 servers: 1 SQL and 1 CRM full installation. Performance-wise it shouldn't make much of a difference since only 10 people will be using the system.
So I would say that solution 1 doesn't help you much, because you still keep the DB and the back end on the same machine, while solution 2 still has a bottleneck when you are doing SQL operations; plus CRM is quite demanding, and running the server on a user's machine will choke it.
Go with a more traditional approach.
1 SQL-SSRS and 1 CRM; or, if you think you will have performance issues, go with 1 SQL-SSRS, 1 back-end server, an NLB, and as many front-end servers as you want/need.
Again, for 10 users, having multiple front-end servers doesn't make much sense.
Please refer to this TechNet article for supported configurations.
For best performance, you will want to use a multi-server architecture. Furthermore, in order for the data to be shared between the users, they need to be using the same environment.
Could anyone let me know which approach will be better in terms of performance.
I don't think option 2 is viable, as it means installing the CRM web server on 10 machines:
Running IIS on client machines will start using up memory your end users should be using for desktop applications.
If you ever need to scale up the front-end machines, you'll need to do this 10 times.
Since your users may not be using CRM all day, IIS will eventually recycle, making the first time a user accesses the site seem slower than expected.
I would install the CRM web server and database on separate machines, following the minimum recommended hardware requirements.
https://technet.microsoft.com/en-us/library/hh699840(v=crm.6).aspx
Update - If your requirement is for a development environment, I would use two servers for Production and two servers for Test (to mimic Production).
For the development environment, I'd ask developers to install CRM and SQL locally so that they can debug their own code, and then push their finished code to a central repository such as GitHub or TFS. It would then be someone's (or something's) role to pull down updated code, prepare a CRM solution, and deploy it to the next environment.
It seems to me a wise idea to test-run my workflow on a local server before deploying it at the customer's. To be entirely sure, I'd like to copy all the data from their DB to my test organization (I have full access rights). The problem is that I can't see any straightforward way to export the whole shebang to an XML spreadsheet.
What's the best way to export/import everything from/to a DB? The source and the target servers are not the same.
Of course I have the option of backing up the client's DB and restoring it should the brown stuff hit the fan, but it'll be far more professional if I don't have to.
The client's DB is in the cloud, which makes me suspect that perhaps I won't be able to access it at all, and as far as I can see, there's no way to back up the data there. Am I missing something, or is it really that bad?
I fully agree that would be sensible. We usually have a number of development and test servers for all our work; however, we generally do not exactly mirror the data in the client database.
We create a representative sample of data on our dev servers and then just move the CRM solution across for deployment.
As far as I know there is no straightforward way to get all the data; if you really want to do this, I would suggest taking a backup of their database and importing it into yours.
(As a side note, not all clients are happy for copies of their database - especially if it's a live system - to be taken off site. Personally, if it is a live database I wouldn't take that risk on yourself; if the data gets lost or leaked you might suffer the consequences.)
James raises good points about the business aspects of your request; however, to get hold of the record-level data there are a few options. The easiest by far is a wholesale export and import of the underlying SQL database. (For the record, the alternative is to do a data migration from live into a different DB, but this is no small task, so I won't entertain it any further here.)
You mention that the client is using CRM Online ("...client's DB is in the cloud..."). You can raise a (free) support request with CRM Online Support who will provide you with a copy of the YourOrg_MSCRM database which can then be reimported into an on-premise deployment.
If you simply want a test instance that is a copy of the Microsoft CRM Online organization, Microsoft does provide a means to do that. Depending on how many professional user licenses the customer has, this may be free, but it could be an extra cost, and both instances count against the storage limit for Microsoft CRM Online. You can see full details here - https://community.dynamics.com/crm/b/crmteamblog/archive/2014/03/20/introducing-sandbox-instances-in-crm-online.aspx - and steps on how to set up a sandbox instance here - https://technet.microsoft.com/en-us/library/dn467371.aspx ("Add an instance to your subscription").
This is something I have used with one of our Microsoft CRM customers, as it was a very good way to validate the Scribe Online migration and customization changes we were making before moving them into production. The nice thing about doing it this way is that everything is still contained in the same Office 365 tenant, and you can limit which users have access to the sandbox organization, which is important for customers in knowing that their data is safe and not on some unknown server or machine.
I have a project and I'm planning to start the web app as an Azure Web Site and then migrate it to an Azure Cloud Service (also called Hosted Service) if it is needed as a scale strategy.
The reasoning is that I have read that Azure Web Sites are simpler and faster to develop for, with almost no Azure-specific configuration or code, so starting fast and simple seems a good starting point for the project.
But, is that a good starting point for you?
Is migrating an Azure Web Site to an Azure Cloud Service the same as migrating a normal ASP.NET website to an Azure Cloud Service?
Would you start with an Azure Cloud Service right from the beginning? If yes, why?
Thanks for your time.
There are benefits to both deployment models; it eventually comes down to what you are trying to achieve and, ultimately, the success of your application.
Below I've outlined the pros and cons of each of the models to help ensure that you're making the right choice for your application's goals.
Windows Azure Web Sites
You have correctly identified that Windows Azure Web Sites is a great starting point for an application; you should also consider that Web Sites offers enough scalability for many solutions.
Pros
10 Free sites during preview [Free for 12 months]
Easy Deployment (use Git, TFS, Web Deploy or FTP)
Quick Scalability (You can move to your own dedicated cluster [aka reserved standard])
Simple Development (Supports Classic ASP, ASP.NET, Node.js, Python & PHP)
Persistent Environment (most people are used to this)
Cons
No SSL Support on Custom Domains
In Preview (currently no SLA)
Windows Azure Cloud Services
Cloud Services (formerly known as Hosted Services) is definitely the vision for the future of web applications. It is built with resiliency in mind, keeping the cost of applications affordable by scaling to meet demand and dialing back capacity when your traffic slows.
Pros
Increased control over the cost of your application (if architected correctly)
Flexibility (You have full control over the environment)
SSL Support
Language Agnostic
Web Server Agnostic (although IIS is available by default)
Auto Management of Servers
Cons
Architecture should be carefully considered
Deployment time is slower (Slows development cycle)
Things to consider for Portability
The items above should give you enough to plan the immediate future of the application, and it is very likely that you will want to consider Cloud Services in the future (it fits a number of application scenarios better in the long run).
Here is a list of things to help with portability from Web Sites to Cloud Services:
Start thinking Stateless
Windows Azure Web Sites is nice in that it is a persistent environment, which means you are able to store things like session state and assets on disk.
Although this is a good feature, it's best to start planning for a stateless application if your end goal is to be on Cloud Services. Here are a few things you can do to start thinking stateless:
Don't rely on Session State
If you need it, come up with a strategy to make it scale (Caching Service, SQL, or Storage); see the sketch after this item.
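As an illustration of what "make it scale" means in practice: keep session data in a shared store that every instance can reach, instead of in a single web server's memory or disk. The sketch below uses Redis purely as a stand-in for a shared cache (the answer names the Caching Service, SQL, or Storage); the host name and key format are made up.

    # Sketch: externalized session state keyed by session id (Redis as a stand-in cache).
    import json
    import redis

    cache = redis.Redis(host="mycache.example.com", port=6379)

    def save_session(session_id, data, ttl_seconds=1200):
        # Serialize and store with an expiry instead of writing to the local instance.
        cache.setex("session:" + session_id, ttl_seconds, json.dumps(data))

    def load_session(session_id):
        raw = cache.get("session:" + session_id)
        return json.loads(raw) if raw else {}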
Use the Storage Service
Assets such as static HTML, CSS, JavaScript, and images are better placed in Storage (see the upload sketch after this list)
Avoids additional bandwidth on your Web Site (potentially letting you stay on shared instances longer, for lower cost)
Can be CDN-enabled, which provides a better experience for international markets
Makes it easier to update web assets when the application is migrated to Cloud Services
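Here is the upload sketch referenced above. It uses the current azure-storage-blob Python SDK purely as an illustration (not the API that existed when this was written); the container name, file paths, and connection-string environment variable are assumptions.

    # Sketch: push static assets to Blob Storage so the web site no longer serves them.
    import os
    from azure.storage.blob import BlobServiceClient, ContentSettings

    service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
    container = service.get_container_client("static-assets")

    def upload_asset(local_path, blob_name, content_type):
        with open(local_path, "rb") as f:
            container.upload_blob(name=blob_name, data=f, overwrite=True,
                                  content_settings=ContentSettings(content_type=content_type))

    upload_asset("site/css/main.css", "css/main.css", "text/css")
    upload_asset("site/js/app.js", "js/app.js", "application/javascript")
    # Blobs are then served from the storage account URL (optionally fronted by the CDN).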
Storing User content
If your application already stores user content in the Storage Service, that is one less code modification to make when moving to Cloud Services.
Make it easy to discover patterns in your Data
The benefit of Cloud Services is that it lets you reduce cost by scaling only what needs to be scaled. Start the process of identifying your scale units, i.e. how you partition your database or your tables in Storage; a small sketch follows.
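A small sketch of what "identify your scale units" can look like with tables in Storage, using the azure-data-tables Python SDK as an illustration; the table name, entity shape, and the tenant-per-partition choice are assumptions.

    # Sketch: choose the partition key so related rows stay together and each partition
    # becomes a unit you can later shard, cache, or move independently.
    import os
    from azure.data.tables import TableServiceClient

    tables = TableServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
    orders = tables.get_table_client("Orders")

    def add_order(tenant_id, order_id, amount):
        orders.create_entity({
            "PartitionKey": tenant_id,   # all of one tenant's orders share a partition
            "RowKey": order_id,
            "Amount": amount,
        })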
I read all the posts and all of them are very helpful.
In addition to those posts, I found some info on MSDN: Windows Azure Websites, Cloud Services, and VMs: When to use which?
With Windows Azure Websites you can:
Build highly scalable web sites on Windows Azure.
Quickly and easily deploy sites to a highly scalable cloud environment that allows you to start small and scale as needed.
Use the languages and open source applications of your choice then deploy with FTP, Git or TFS, and easily integrate Windows Azure services like SQL Database, Caching, CDN and Storage.
With Cloud Services you can:
Build or extend your enterprise applications on Windows Azure.
Create highly-available, scalable applications and services using a rich PaaS environment. Support advanced multi-tier scenarios, automated deployments and elastic scale. Deliver great SaaS solutions to customers anywhere around the world.
MSDN also summarizes the options and compares some features of Web Sites and Cloud Services.
Azure is a great place to have your app, but there are some considerations you need to know about before you start migrating it.
Azure Web Sites and Hosted Services are really trivial to deploy to. With Visual Studio you generate the package and simply upload it. You then have a staging environment to check it; if it's OK for you, swap the VIPs into production, and if it's not, upgrade again.
Your instances have some properties that can be annoying. For example, you cannot be sure of your IP address, so if your app works with a provider that uses IP restrictions, you will need to figure out how to handle that.
More considerations: your "server" could be reimaged at any moment, so if you store something on the local disk, that file could go away at any moment.
Azure works very nicely if you have at least 2 instances for each website. Maybe your app is not prepared for that; the first step will be managing sessions with AppFabric caching. It's really easy, just a change in your web.config. Be careful, because this session state doesn't work exactly like the "old one": you cannot store non-serializable objects (should be easy to adapt) or very large objects (more than 8 MB).
If you are going to develop something from scratch, I suggest you start on Azure from the beginning. The reason is simple: it's really cheap to start and you will not pay serious money until the app has lots of visits. It's also very cheap to set up SQL Azure and a storage account. Once you have everything in place, it's easy to add more instances or scale up.
Example:
Imagine you have an idea and you wish to show up to some possible investors.
You start by setting up a little SQL Azure database (1 GB), $9.99 monthly.
Then you build a site and put up 2 extra small instances, $18.72 monthly.
Let's say you need 100 GB of space (images, backups, ...), $12.50 monthly.
At this point, you have everything in place to start your business, paying less than $50 monthly (about $41 in total).
If your site is successful and the visits start to come, you swap your instances for small instances (it's really dangerous to run a production environment on extra small instances, because they have no CPU reservation). Then the extra small cost ($18.71) goes up to $57.60. Maybe you need more space for that SQL Azure database? Etc.
Prices calculated from here: http://www.windowsazure.com/en-us/pricing/calculator/?scenario=web
Those are a few tips; there is a lot more. My advice is to start a trial account and play with it.
Final advice: it's very easy to solve everything by just purchasing more resources, but sometimes you need to refactor and optimize your code. If you simply add more resources each time you have a problem, you could end up with a huge bill and very messy code.
Hope it helps!
Another advantage of Windows Azure Cloud Services over Web Sites is that a cloud service can be added to an Azure Virtual Network. This can give it access to on-premises resources such as databases. So if your requirements are such that you need the scalability offered by Azure but must keep your data on-premises due to security restrictions, Cloud Services are the better choice.
Azure Web Sites cannot be part of an Azure Virtual Network; to access on-premises resources, mechanisms such as Azure Service Bus Relay must be configured.
We had our web site running on PHP on some hosting and at some point decided to move it to Azure (where the main part of our service sits). We started with Azure Web Sites, which was great from a development point of view (mainly the Git integration). But after about a week of testing (when we decided to actually move the production web site) we found that currently:
No SSL for custom domains
Custom domains are available only for reserved instances (no shared infrastructure)
No SLA
So we moved to a Hosted Service. The main problem for us was the lack of simple deployment (you need to build and upload the whole package of the web site). The solution we found was to use Dropbox: as a startup task for the role, we install the Dropbox service on the machine, which pulls the whole web site from Dropbox, which in turn holds an SVN checked-out folder, so site updates became very easy.
In medium to large organizations, what team or group typically supports middle-tier components like Oracle Application Server?
(The Unix team, the DBA team, or the application development/support team?)
In a client-server application design the delineation of ownership between the server and the client is very clear: the Unix administrators manage the servers, the development support team manages and supports the clients, and the DBAs support/manage the database.
Recently at our shop the lines have become blurred with the introduction of an Oracle Application Server (OAS). OAS seems to require a unique set of skills but also shows some similarity to the client-server skills (part Unix admin, part DBA, part application developer/client support).
What have others done when confronted with this kind of challenge?
Does a completely new team form that exclusively supports the middle tier?
To give a sense of the size of the teams, our IT group has 3 Unix admins, 3 application support staff, and 3 DBAs.
There are a couple of different options, to my mind:
1) Roll it into the application development/support team, since it is part of an application and isn't an area where only admins are useful. There should still be some separation between development and support, as different tools may be used and some people have a stronger skill set for one than the other; someone who prefers investigating things, for instance, may be a better fit for support.
2) A platform management team: a separate group, reflecting a separation of the layers involved in the applications the company produces. I used to work for a company where the middle tier and back end were managed by one team separate from the applications group. That seems appropriate if the plan is for the middle and back-end tiers to become a platform the company can pitch to other companies, who then build their own applications on top of that API.
I can see the logic in either approach, depending on how one views what the IT arm offers.
For large organizations, you generally eventually get to a point where there are dedicated teams to manage the middle tier web servers and application servers.
The problem for smaller organizations is that when you first deploy the app servers, there may not be enough admin work to justify a separate person in that role, at which point you have to cobble together time from other teams. It's not particularly unusual for DBAs to manage the app server (particularly Oracle DBAs managing Oracle Application Server), nor for the Unix admins to manage it. Either way, some of the work will inevitably benefit from input from the other team.
IMHO there should be a single "Oracle" team comprising DBAs, Unix admins, application admins, and even a network person for big installations. There is really only one system, although it has multiple tiers and technologies. You do not want four teams all passing the buck around when a system fault occurs. Ask me how I know ;)
We are looking at a standard way of configuring the various "endpoints" of our application. Our application is a distributed system with Windows Desktop applications, Windows Server "services" and databases.
We currently configure each piece using XML files. This is getting a little out of hand as we work with larger customers, who can have dozens of servers running our application and hundreds of desktop clients.
Can anyone recommend a Microsoft technology or a third-party product that would allow us to centralize all that configuration information and manage it in one place for all our applications? Any changes would be "pushed" to the endpoint(s) that are interested.
For example, if we were to change the login for one of our databases, we would make that change on the database and then reflect it in our centralized system. Following that last step, any service that needs to connect to the database would be notified of the change (and potentially receive the new data). How and what each endpoint does with that information is outside the scope of the system.
Our primary business is not "Centralized Configuration Services". We are a GIS company that provides solutions for various utilities worldwide.
I've done a couple of things to give myself this functionality over the years. I build enterprise applications that may be distributed across many servers, and I don't want to bury config settings in each service's config file or each web server's web.config file. For application-specific stuff I usually create an application settings table in the app's database. The table only has two fields: SettingName and SettingValue. I then write a web or WCF service whose sole function is to retrieve these settings. It exposes a function called GetSetting where you pass SettingName and it returns SettingValue, or an empty string if your setting is not found. This way I can store all application settings for all components of the application in one spot. Maintenance and troubleshooting are really easy; I'm not hunting through scads of config files spread across a dozen web and app servers.
For larger-scale apps I might create a separate AppSettings database where I add a new field, ApplicationName, to the table mentioned above. My web or WCF service for this approach has the same method call (GetSetting), only at this scope I pass both ApplicationName and SettingName and it returns SettingValue or an empty string.
Doing either of these things lets you centralize all app settings for any size of application or IT shop. It has worked really well for us. A minimal sketch of the idea follows.
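To make the shape of this concrete, here is a minimal sketch of the idea; it is not the author's WCF implementation. Flask and SQLite stand in for whatever web stack and database you actually use, and the route and parameter names are made up.

    # Sketch: one settings table, one GetSetting endpoint, empty string when not found.
    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)
    DB = "appsettings.db"

    def init_db():
        with sqlite3.connect(DB) as conn:
            conn.execute("""CREATE TABLE IF NOT EXISTS AppSettings (
                                ApplicationName TEXT,
                                SettingName     TEXT,
                                SettingValue    TEXT,
                                PRIMARY KEY (ApplicationName, SettingName))""")

    @app.route("/GetSetting")
    def get_setting():
        app_name = request.args.get("application", "")
        setting = request.args.get("setting", "")
        with sqlite3.connect(DB) as conn:
            row = conn.execute(
                "SELECT SettingValue FROM AppSettings"
                " WHERE ApplicationName = ? AND SettingName = ?",
                (app_name, setting)).fetchone()
        return row[0] if row else ""   # empty string when the setting is not found

    if __name__ == "__main__":
        init_db()
        app.run()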
You could use RSS together with BitTorrent to distribute changes; see Wikipedia. It is not MS-specific, but it should provide the flexibility you need: a configuration server holding the configuration and providing the feeds needed to configure the clients and possibly the servers.
Any VCS through a secure channel?
For example, Git over SSH (both available in Cygwin).
I think the first step is to have the secure channel (if you want push ability; pulling might be different).
As for managing the "versions" in different "branches", what's better than a version control system?
As for the Microsoft requirement, the Microsoft software that exists in that area would be a poor fit in your case (as in, not the best tool for the job).