Sharing an MS Access Database with Multiple Users - ms-access-2013

I have a MS Access Database that I need to share with multiple users in the entire state. Right now I split the database and placed the backend on a shared network drive and distributed the front end, but the issue I'm having is that offices further away can't enter a record in a timely manner (one office took over 2 hours).
We do have SharePoint, but it's on a 2010 server and our MS Access is 2013; I'm told that because of this, Access won't link to SharePoint, so that is not an option.
Someone in my office mentioned something about replicating a database... Is this something that will work? If not, are there any suggestions?

Replication in Access was killed in Access 2007.
SharePoint is not an option unless you start from scratch, and the shared lists and/or the various web apps you can create are seriously limited compared to your present desktop solution.
Basically, you have three options:
Upgrade your WAN to a 100 Mbit/s low-latency fibre connection.
Create a Terminal Server hosting your application; remote users will access it via a standard Remote Desktop Connection.
Upgrade your backend to SQL Server Express (free) and set up an in-house or outsourced server to host it.
The first two options require zero coding, while the last takes a little, but not much, and it is well documented (just Bing/Google it).

ASP.NET shared hosting ping - server not available

On my dev environment I've noticed SQL connection timeout errors when using a connection string to a remote DB.
I've developed a small tool to ping the domain and DB server, based on these answers: test if a website is alive from a C# application
and test SQL Server connection programmatically.
When I noticed failed pings, I looked into the site management console and saw that SQL Server was unavailable; the site was down for about 5 minutes.
Since I started monitoring, the issue has repeated 3 times over the last couple of days. This means that my DB server within a shared hosting plan is not reliable 24/7. I opened a ticket and got a reply from support:
As this is a shared server, the activities on the server always varies from time to time. We apologize if there is a slight issue earlier
Is this a common situation for shared ASP.NET hosting, or is it just bad luck and I need to look for another host?
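For reference, a minimal sketch of such a checking tool could look like the following; the host name and connection string are hypothetical placeholders, and it simply pings the host, makes an HTTP request, and opens a SQL connection:

    using System;
    using System.Data.SqlClient;
    using System.Net;
    using System.Net.NetworkInformation;

    class AvailabilityCheck
    {
        static void Main()
        {
            // Hypothetical host and connection string - substitute your own values.
            const string host = "www.example.com";
            const string connectionString =
                "Server=db.example.com;Database=MyDb;User Id=user;Password=pass;Connection Timeout=15";

            // 1. ICMP ping of the web host.
            using (var ping = new Ping())
            {
                PingReply reply = ping.Send(host, 5000);
                Console.WriteLine("Ping {0}: {1}", host, reply.Status);
            }

            // 2. HTTP check: does the site answer with a success status code?
            var request = (HttpWebRequest)WebRequest.Create("http://" + host);
            request.Timeout = 15000;
            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                    Console.WriteLine("HTTP: {0}", response.StatusCode);
            }
            catch (WebException ex)
            {
                Console.WriteLine("HTTP check failed: {0}", ex.Status);
            }

            // 3. SQL Server check: can we actually open a connection?
            try
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    Console.WriteLine("SQL Server: connected");
                }
            }
            catch (SqlException ex)
            {
                Console.WriteLine("SQL Server check failed: {0}", ex.Message);
            }
        }
    }

Run on a schedule (e.g. every minute), logging the results gives you the kind of availability record described above.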
Sometimes when the hosting provider updates a service or some software it can be down for a few minutes, but this should not happen very often. You could continue to monitor the services, and if the results are not good, try another hosting provider.
You may experience a little slowness or I/O lag on shared database servers while a backup script runs in the background or other maintenance is carried out by the web host, but in most cases this doesn't affect server availability.
In fact, shared database servers are usually high-end servers (mostly SSD-based) meant to host thousands of databases without a single hiccup, and they must be able to handle millions of queries at any point in time. If you face this problem often, it's a clear indication that your web host is over-utilizing the database server's resources, or that the server can no longer handle the load at peak hours.

MS Dynamics CRM 2013 installation approach

We want to install Dynamics CRM 2013 for 10 users. We are thinking about 2 approaches:
Install only one instance of CRM and SQL Server on two separate server machines. The CRM machine will have the front-end server role and the SQL Server machine will have the back-end server role. All 10 users will browse and work on the same instance of CRM.
Install SQL Server on a separate machine and install CRM on each of the 10 users' machines. All 10 CRM instances will point to the same organization created on SQL Server. Each user will use the CRM installed on their own system, but their customizations will be published to one organization since all CRM instances point to the same organization.
Could anyone let me know which approach will be better in terms of performance?
Update after the replies from Draiden and Kye:
All 10 machines will be used only for development and IFD or NLB will never be required.
In one of our previous projects, we used the approach of 1 SQL-SSRS server and 1 CRM (full) server. During peak development periods, when around 8 users were connected to CRM doing customization, memory usage on the CRM server would climb to around 85%-95%. At that point, CRM would become unresponsive.
To avoid the high memory usage, we are thinking of approach 2, where CRM memory usage will be distributed among multiple machines. Also, if someone wants to debug a plugin, they will debug on their own CRM (and will not block others). Having one SQL Server in the back end will let developers share the same data, and their customization changes will be published to one central organization.
The second solution involves creating a front-end server for each user? I don't think that is a viable (or really nice) way to install CRM. Also, if you ever need to set up something else, like IFD, you will need to install and configure an NLB and teach everyone to change the URL.
The first approach you are suggesting is the better one, but usually you go with 2 servers: 1 SQL and 1 full CRM installation. Performance-wise it shouldn't make much of a difference, since only 10 people will be using the system.
So I would say that solution 1 doesn't help you much, because you still keep the DB and the back end on the same machine, while solution 2 still has a bottleneck when you are doing SQL operations; plus, CRM is quite demanding, and running the server on a user's machine will choke it.
Go with a more traditional approach.
1 SQL-SSRS server and 1 CRM server, or, if you think you will have performance issues, go with 1 SQL-SSRS server, 1 back-end server, an NLB, and as many front-end servers as you want/need.
Again, for 10 users, having multiple front-end servers doesn't make much sense.
Please refer to this TechNet article for supported configurations.
For best performance, you will want to use a multi-server architecture. Furthermore, in order for the data to be shared between the users, they would need to be using the same environment.
Could anyone let me know which approach will be better in terms of performance?
I don't think option 2 is viable, as it means installing the CRM web server on 10 machines:
Running IIS on client machines will start using up memory your end users should be using for desktop applications.
If you ever need to scale up the front-end machines, you'll need to do this 10 times.
Since your users may not be using CRM all day, IIS will eventually recycle, making the first time a user accesses the site seem slower than expected.
I would install the CRM web server and database on separate machines, following the minimum recommended hardware requirements.
https://technet.microsoft.com/en-us/library/hh699840(v=crm.6).aspx
Update - If your requirement is around a development environment, I would use two servers for Production and two servers for Test (to mimic Production).
For the development environment, I'd ask developers to install CRM and SQL locally so that they can debug their own code, and then push their finished code to a central repository such as GitHub or TFS. It would then be someone's (or something's) role to pull down updated code, prepare a CRM solution, and deploy it to the next environment.
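Whichever topology is chosen, developers' tools and plugin test harnesses all reach the organization through its SDK endpoint. As an illustration only, a minimal connection check against the shared organization (hypothetical URL, assuming the CRM 2013 SDK assemblies Microsoft.Xrm.Sdk and Microsoft.Crm.Sdk are referenced) might look like this:

    using System;
    using System.ServiceModel.Description;
    using Microsoft.Crm.Sdk.Messages;
    using Microsoft.Xrm.Sdk.Client;

    class ConnectionCheck
    {
        static void Main()
        {
            // Hypothetical organization URL - every developer points at the same endpoint.
            var orgUri = new Uri("http://crmserver/DevOrg/XRMServices/2011/Organization.svc");

            var credentials = new ClientCredentials();
            credentials.Windows.ClientCredential =
                System.Net.CredentialCache.DefaultNetworkCredentials;

            using (var service = new OrganizationServiceProxy(orgUri, null, credentials, null))
            {
                // WhoAmI confirms the connection and returns the calling user's id.
                var response = (WhoAmIResponse)service.Execute(new WhoAmIRequest());
                Console.WriteLine("Connected to the shared organization as {0}", response.UserId);
            }
        }
    }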

What is the business benefit for Oracle Weblogic Server over OC4J?

Apart from technology support, what are the business benefits of Oracle WebLogic Server? For example, in the areas of security, support, etc.
What are the new features supported by WebLogic?
TL;DR:
Support is great when you open a ticket with Oracle Support (strictly for WebLogic).
Great admin/read-only user implementation. We authenticate against Windows Active Directory. Developers get read-only accounts, which reduces the churn of waiting for ops to transfer logs and validate settings.
The Dashboard is useful out of the box for real-time monitoring, with no additional tools or installs, and is easily accessed by anyone who is authenticated to log in. We could give it to our CIO, if he wanted it, in about 3 minutes by adding him to the right authorized group in AD.
Easier to clone environments.
I haven't worked with OC4J, but I believe Oracle's roadmap picks WebLogic as their preferred Java application server. You can see it is the base technology for some of their other products, such as Oracle Service Bus, Oracle Enterprise Manager (OEM), and Oracle Line Planning.
I have opened 3 Oracle tickets in the past month. I was surprised at how fast they answered. For a Severity 3 ticket (medium), they usually have responded in 2-3 days. I can't say the same for their other services (over 2 weeks for a ticket on OEM).
Security is a pretty broad scope... so you'd have to be a little more specific on some of the topics of security.
One thing that is pretty awesome is the Dashboard (http://docs.oracle.com/cd/E14571_01/web.1111/e13714/dashboard.htm). You can add read-only monitor accounts so other users can get insight into performance. We add developers to this so that they can validate settings or see performance whenever there is a production issue.
We use Microsoft Active Directory authentication in our WebLogic domains. People are not using the default weblogic administrator user, so configuration changes are audited. When someone's account is disabled on leaving the company, their access to WebLogic is disabled as well; you don't have to change a password.
Another setting I like is the ability to automatically archive configuration changes. Each time someone makes a config change, a backup is created automatically. This lets me fix things when developers break their environment without having to do major reverse engineering of what they did.
I also like the fact that you can pack and unpack the domains. I've used it to move entire domains from staging to production with some minor changes... i.e. change all stg to prod variables. This should likewise make it easier to 'clone' environments when you want to build out a new one.
Although not directly related, I should mention Oracle Enterprise Manager. We are an Oracle shop because they seem to have given us a good deal on licencing, so we get to run Oracle Enterprise Manager, a tool that is slowly becoming more and more useful. The agent also reports how our RedHat Linux hosts are behaving: network input/output, CPU utilization, memory utilization, Java heap stacks. We are going to move to defining groups within it that contain all the targets related to an application stack. This will give our operations team the insight to see where a bottleneck might be: the Oracle WebLogic web layer, the network, Oracle Service Bus, or Oracle Database performance.
Supposedly, you can add JBoss and other JMX monitoring to OEM as well. It's on our to-do list for non-WebLogic instances. We're slowly rolling OEM out.

What Exactly is AppFabric in Windows Azure?

I am trying to understand what exactly AppFabric is in Windows Azure. What is the difference from a Worker Role and a Web Role, and how do I create an AppFabric project in Visual Studio 2010? I mean, which kind of project?
Thanks.
Adding a bit to vtortola's answer:
There are three core areas of the Windows Azure platform:
Windows Azure (which provides virtual machines and massively-scalable storage through Blobs, Tables, and Queues)
SQL Azure (which is a large subset of SQL Server), offering a full relational database up to 50GB
Windows Azure AppFabric (a set of services that you can opt into, currently comprising access control, connectivity, and caching)
When you construct your Windows Azure application, you can really pick and choose the pieces of the platform you're interested in. For instance, Windows Azure provides Web and Worker roles (both essentially identical virtual machines running Windows Server 2008 or R2, but Web roles have IIS enabled). If you need a relational database, you can very easily set one up. And then there's AppFabric:
If you need to connect to a set of web services on premises, for instance, you can use the AppFabric Service Bus (a secure way to connect without having to open up a firewall)
If you need to actually connect to an entire computer on-premise, use Azure Connect (a software VPN).
If you want to cache data (such as asp.net session state) between instances of your virtual machines, enable and use the AppFabric Cache (currently a Community Technology Preview, so no pricing yet).
If you need to add access control to your application, use AppFabric's Access Control Service, which essentially lets you outsource your identity management.
There are quite detailed examples in the Platform Training Kit that vtortola referenced. Additionally, there's a complete Identity Management training kit.
Azure AppFabric is a suite of middleware services and technologies to help you develop and manage services/applications that use Windows Azure. Middleware is typically defined as software that helps connect other pieces of software, and this definition is pretty accurate for the services AppFabric provides.
You don't create an AppFabric per se. AppFabric services are used by your other applications as needed, so setup typically means configuring certain items in the Azure Portal, then adding the libraries and config entries to your web/worker roles that leverage those resources.
Essentially AppFabric provides certain resources that you need when composing complex applications as services, vs. you having to implement and maintain these resources yourself.
The basic offerings are:
Service Bus: a message relay that can be consumed by .NET technologies (and others). SB helps you connect different cloud services as well as "hybrid" services. The hybrid part is a big deal, as SB helps you easily connect on-premise web services with services you run in the cloud, without having to mess around with VPNs, protocols, server setups, certificates, etc.
Access Control: An authentication and authorization service, helping you manage user-level access without having to extend/implement Active Directory, LDAP, and custom user authentication modules throughout Azure.
Caching: an in-memory distributed caching layer for your applications, similar to memcached or the Windows Server version of AppFabric Caching (see the sketch after this list).
Integration: a PaaS offering of EDI/transport technology like BizTalk Server.
Composite App: allows the composition of complex applications using a composition language rather than just putting a bunch of code together. You basically define your application using a designer, as you would an EF.NET data model or a Windows Workflow.
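To make the Caching offering concrete, here is a minimal sketch of using the cache client from a web/worker role. It assumes the AppFabric caching client assemblies (Microsoft.ApplicationServer.Caching) are referenced and that the cache endpoint and authentication settings already live in the role's configuration file:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class CacheSample
    {
        static void Main()
        {
            // Reads cache client settings (endpoints, authentication token) from app.config/web.config.
            var factory = new DataCacheFactory();
            DataCache cache = factory.GetDefaultCache();

            // Store a value that is then visible to every web/worker role instance.
            cache.Put("greeting", "Hello from the AppFabric cache");

            // Read it back (possibly from a different instance).
            var value = (string)cache.Get("greeting");
            Console.WriteLine(value);
        }
    }

The same pattern is what the ASP.NET session state provider for the cache uses under the covers, which is why session data can survive a request landing on a different instance.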
So basically AppFabric provides you with a lot of services that you likely need, but the typical cloud developer may not want to "mess with" at least at first. This way you have these great building blocks to help you focus on your core logic/needs during development cycles while not limiting what your application can ultimately do. This "focus" is one of the core benefits to cloud computing, especially Platform as a Service, and is one area where Azure really shines compared to other offerings.
Some of these technologies are still in beta. The AppFabric site makes this very clear, but it's important to be aware of this.
A great place to start is the Azure AppFabric site itself, which breaks a lot of this down, gives you great examples of how to use it, and some sample code to get your feet wet:
http://www.microsoft.com/windowsazure/AppFabric/Overview/default.aspx#top
Basically:
WebRole: similar to a web application.
WorkerRole: similar to a Windows service.
AppFabric: a group of services that let you interconnect applications inside and outside Azure.
Download and read/do the Azure training kit; it will answer those questions and show you how to create that project in Visual Studio step by step.
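To make the WorkerRole comparison above concrete, a minimal sketch of a worker role entry point (assuming the Windows Azure SDK's Microsoft.WindowsAzure.ServiceRuntime assembly is referenced) looks much like the skeleton of a Windows service:

    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // One-time initialization (e.g. configure diagnostics, open queue clients).
            return base.OnStart();
        }

        public override void Run()
        {
            // Long-running processing loop, analogous to a Windows service's main loop.
            while (true)
            {
                // Poll a queue, process messages, etc.
                Thread.Sleep(10000);
            }
        }
    }

A web role project, by contrast, is essentially an ASP.NET application plus the same RoleEntryPoint hooks, with IIS enabled on the instance.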

Enough bandwidth to support

I have a client that is paying $1500 per month for hosting of 1 website (1 domain name; email is hosted elsewhere). The website is pretty low traffic, like 100 unique visitors a week. The only catch (and why it is so expensive) is that their database is 15 GB and is replicated from the hosting company to inside my small company's office.
Inside the office, there is a desktop application that hits the internal database quite a bit. From the website, some data is entered into THAT version of the database. Replication keeps both databases in sync on a schedule of every 5 minutes.
My client has a T1 that runs into their office. I want to cut out the hosting provider altogether, host their website on a server they already have (more than capable of handling this website), and dump the replication. This would save them $1500 per month, and for a company of 5, it would really make a difference to them.
Assuming I already have a backup strategy in place (way to move a copy of the DB offsite every day), what are the problems with this?
Support? They can reboot their server as easily as the hosting provider can.
What if server goes down for good? There is a duplicate that I can bring up in a couple of hours, and that is all the level of service they really require.
What am I missing here? I want to save them money, but I don't want to screw them over...
EDIT: Some of the answers and comments make it clear that I myself wasn't clear. My client (company A, not a hosting provider) is paying company B to host their website. The website has a database (MS SQL Server 2000) that is 15 GB. That SQL Server DB is being replicated back to a server at company A.
Company B is charging Company A $1500 per month for this service.
Company A already has a T1 for connectivity to the internet. They are located inside of a run of the mill business park.
I am proposing doing away with any outside hosting, getting a DNS provider to point the website to Company A's static IP and hosting the website on a server inside Company A. Then there would be no need for any replication at all, and they wouldn't be paying company B $1500 per month.
I hope that explains it. I'm going to re-read and comment on all the current answers.
Really, any advice is very appreciated.
Sounds to me like your only risk in moving the server in-house is if your T1 goes down. If you have a backup strategy in place for that, go for it.
The other option is to co-locate your own server with your own SQL Server licence on it. Hosting companies charge a lot for hosting SQL Server databases because they have to pay per-CPU licencing for it. So they build up a powerful server to serve lots of clients' databases, but SQL Server offers no way to do usage accounting, so the only way they can bill/screw you is on database size.
It sounds like the traffic on your site is low enough that you can get a dual-core server and a 1-CPU licence of SQL Server for a one-off cost of a few thousand dollars, and then you're only paying the monthly co-lo price.
A hosting provider can monitor the server 24x7. What if the server crashes at 8 pm? I assume the people at the small company are not working around the clock?
It depends on the service this DB is providing. What are the requirements for its uptime?
Database replication isn't that expensive in bandwidth terms - well, assuming you're not doing a hot copy of the entire DB files across the link, that is.
Check out log shipping, or any of the supported replication options that will replicate the DB using minimal bandwidth. (You never said what the DB was, so I can't comment further there.)
I would move to the new server and keep replication. At the very least, if you're really worried about data loss, get another server in the same facility and copy across to that one - even if you copy 15 GB every 5 minutes, it'll be using non-chargeable bandwidth without even going outside the switch they're connected to.
