ASP.NET shared hosting ping - server not available

On my dev environment I've noticed SQL connection timeout errors when using a connection string to the remote DB.
I've developed a small tool to ping the domain and the DB server, based on the answers to "test if a website is alive from a C# application"
and "test SQL Server connection programmatically".
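The checks themselves boil down to something like the sketch below, based on those two answers (the URL, connection string, and timeout values are placeholders for my real ones):

    using System;
    using System.Data.SqlClient;
    using System.Net;

    class HostMonitor
    {
        // Placeholders: substitute your own site URL and connection string.
        const string SiteUrl = "http://www.example.com/";
        const string ConnStr = "Server=db.example.com;Database=MyDb;User Id=user;Password=pass;Connect Timeout=5;";

        static void Main()
        {
            Console.WriteLine("Site alive:   " + IsSiteAlive(SiteUrl));
            Console.WriteLine("DB reachable: " + IsDbReachable(ConnStr));
        }

        // A HEAD request succeeds if the web server answers at all.
        static bool IsSiteAlive(string url)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "HEAD";
                request.Timeout = 5000; // milliseconds
                using (var response = (HttpWebResponse)request.GetResponse())
                    return response.StatusCode == HttpStatusCode.OK;
            }
            catch (WebException)
            {
                return false;
            }
        }

        // Just opening a connection is enough to detect a down or overloaded SQL Server.
        static bool IsDbReachable(string connectionString)
        {
            try
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    return true;
                }
            }
            catch (SqlException)
            {
                return false;
            }
        }
    }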
When I noticed failed pings I looked into the site management console and saw that SQL Server was unavailable; the site was down for about 5 minutes.
Since I started monitoring, the issue has repeated 3 times over the last couple of days. That means my DB server within a shared hosting plan is not reliable 24/7. I opened a ticket and got this reply from support:
As this is a shared server, the activities on the server always varies from time to time. We apologize if there is a slight issue earlier
Is this a common situation for any shared ASP.NET hosting, or is it bad luck and I need to search for another host?

Sometimes when a hosting provider updates a service or some software it can be down for a few minutes, but this should not happen very often. You could continue monitoring the services, and if the results are not good, try another hosting provider.

You may experience some slowness or I/O lag on shared database servers while a database backup script runs in the background or other maintenance is carried out by the web host. In most cases, though, this doesn't affect server availability.
In fact, shared database servers are usually high-end machines (mostly SSD-based) meant to host thousands of databases without a single hiccup; they must be able to handle millions of queries at any point in time. If you face this problem often, it's a straight indication that your web host is over-utilizing the database server, or that the server can no longer handle the load during peak hours.

Related

Sharing MS Database with Multiple Users

I have an MS Access database that I need to share with multiple users across the entire state. Right now I have split the database, placed the backend on a shared network drive, and distributed the front end. The issue I'm having is that offices further away can't enter a record in a timely manner (one office took over 2 hours).
We do have SharePoint, but it's on a 2010 server and our MS Access is 2013; I'm told that because of this, Access won't link up to SharePoint, so that is not an option.
Someone in my office mentioned something about replicating a database... is this something that will work? If not, are there any suggestions?
Replication in Access was killed in Access 2007.
SharePoint is not an option unless you start from scratch, and the shared lists and/or the various web apps you can create are seriously limited compared to your present desktop solution.
Basically, you have three options:
Upgrade your WAN to a 100 Mbit/s low-latency, quality fibre connection
Create a Terminal Server hosting your application; remote users will access it via a standard Remote Desktop Connection
Upgrade your backend to SQL Server Express (free) and set up an in-house or outsourced server hosting this
The first two options require zero coding, while the last takes a little, but not much, and it is well documented (just bing/google it).

IBM MQ performance over Secure Client

I work on a large C++ application and often get the opportunity to continue this work while at home. The IBM MQ configuration uses some kind of domain group for authentication, so the application won't run unless I'm connected to the office VPN via Secure Client.
Why does the application run so much slower when connected to the VPN than in the office?
As background info, I should state that the application also needs a database (Oracle) etc., but all of this is hosted locally, so it shouldn't be affected by the change in location.
I'm using a local MQ server as well, in case that wasn't clear. Essentially, beyond the MQ domain authentication (which happens at the start of the process as far as I can tell), application performance is dramatically reduced: a process that takes 30 minutes in the office takes more than 2 hours at home. I have noticed the filesystem is generally slower (although this laptop has an SSD drive). Could ClearCase / Sophos be conflicting?
Is there a 'good way' to monitor Windows to see what exactly, if anything, is slowing the machine down out of the office?
If I get to May with no useful responses I think I'll nuke this message. FYI, I tried Server Fault as well but to no avail (they complained and said the question should live on Stack Overflow instead!)
Well, if your internet speed is not very fast, that would explain the issue.
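On the monitoring part of the question: a rough starting point is to sample the standard Windows performance counters while the slow job runs (perfmon shows the same counters interactively; the counter names below are standard Windows ones, the one-second interval is arbitrary):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class SlownessMonitor
    {
        static void Main()
        {
            // Standard Windows counter names; "_Total" aggregates all CPUs/disks.
            var cpu  = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var disk = new PerformanceCounter("PhysicalDisk", "Avg. Disk Queue Length", "_Total");

            // The first NextValue() of a sampled counter returns 0, so poll in a loop.
            while (true)
            {
                Thread.Sleep(1000); // sampling interval (arbitrary)
                Console.WriteLine("{0:T}  CPU {1,5:F1}%  disk queue {2,5:F2}",
                    DateTime.Now, cpu.NextValue(), disk.NextValue());
            }
        }
    }

A sustained disk queue length out of the office but not in it would point at the filesystem (and therefore at ClearCase/Sophos filter drivers) rather than at MQ itself.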

Services extremely slow when deployed on Azure

I have a rather strange scenario. We have a range of Web APIs hosted in the cloud, which we consume from our Windows 8 application. The problem is that when the services run locally a request takes less than 400 ms, but when they are hosted on Windows Azure some requests take up to 20 seconds. I have checked the indexes on our database tables and they are fine. I have no clue what to profile or how to improve the performance.
Thanks!
Thanks a lot, everyone!
In the end I found a way to use dotTrace (an excellent profiling tool) on the Azure deployment. Here is the link:
http://blog.maartenballiauw.be/post/2013/03/13/Remote-profiling-Windows-Azure-Cloud-Services-with-dotTrace.aspx
You can also use Windows Azure Diagnostics and the Stopwatch class to log all timings to the WAD tables.
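As a sketch of that Stopwatch approach (the method names here are hypothetical; with diagnostics configured, Trace output ends up in the WADLogsTable storage table):

    using System.Diagnostics;

    public class CustomerService
    {
        // Hypothetical method names; the timing pattern is the point.
        public string GetCustomer(int id)
        {
            var sw = Stopwatch.StartNew();
            string result = LoadCustomerFromDb(id); // the call being timed
            sw.Stop();

            // With Windows Azure Diagnostics configured, this line is
            // shipped to the WADLogsTable storage table.
            Trace.TraceInformation("GetCustomer({0}) took {1} ms", id, sw.ElapsedMilliseconds);
            return result;
        }

        private string LoadCustomerFromDb(int id)
        {
            return "customer-" + id; // placeholder for the real data access
        }
    }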
I also found out, in another thread, that the first request to the Azure service is always slow. I have copied it below:
Serkan, you would first need to make clear in your post whether you have published a Cloud Service or a Website to Windows Azure; the answer to your question differs between a Cloud Service (a Web Role) and a Website. As you want to learn more, I will explain what goes on behind the scenes.
As you say your first connection is slow, I can see that happening with Windows Azure Websites. They run in a shared pool of resources and use the concept of hot (active) and cold (inactive) sites: if a website has no active connection for x amount of time, it goes into the cold state, meaning the host IIS process exits. When a new connection is then made to the website, it takes a few seconds to get it ready and working again. Depending on your first page's code, the time to load the site for the first time varies. A similar discussion is logged here: Very slow opening MySQL connection using MySQL Connector for .net
With a Windows Azure Cloud Service the overall application model is different: your web role has its own IIS server fully dedicated to your application, so the Website limitation above does not apply; there could, however, be other reasons for slow page loads. If you are using a Web Role, run a page-load profiler first and RDP to your Azure instance to collect the page-load data and see what else you could do to boost performance.
You'll obviously need to profile your app to find the real cause. Check out these two articles which should get you started:
http://msdn.microsoft.com/en-us/library/windowsazure/hh369930.aspx
http://www.windowsazure.com/en-us/develop/net/common-tasks/profiling-in-visual-studio/

windows azure website load time

Sometimes when I access my Windows Azure website, the initial response time is very slow; after the first page load the website is fast. Some background: the website is not visited that often at the moment. Further, I am using a keep-alive controller to keep the website running, and the website runs in shared mode. I am wondering: are websites that are not very active removed from memory in Windows Azure? Or is it that background tasks at the operational level of Windows Azure sometimes interfere? What happens is not transparent to me, so is there an SLA or something for Windows Azure websites?
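(For reference, by keep-alive controller I mean essentially just a trivial endpoint for a pinger to hit; a minimal sketch, with the controller name assumed:)

    using System.Web.Mvc;

    // A keep-alive controller is just a trivial endpoint for a pinger to hit,
    // so the site never sits idle long enough to be unloaded.
    public class KeepAliveController : Controller
    {
        public ActionResult Index()
        {
            return Content("OK");
        }
    }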
There is now a new feature available for Windows Azure Websites in 'Reserved' mode that will keep your website warm: you can turn on "Always On" under the "Configure" tab of your Azure Website. As explained in this blog post:
When the new “Always On” feature is enabled on a site, “Windows Azure will automatically ping your website regularly to ensure that the website is always active and in a warm/running state,” Guthrie writes. “This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).”
The easiest way to keep a website warm is to call it regularly using the Scheduler feature in Windows Azure Mobile Services.
You simply write a script in the Scheduler that pings your website every x minutes.
Here's a post covering how to do that: http://fabriccontroller.net/blog/posts/job-scheduling-in-windows-azure/
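Note that the Scheduler scripts themselves are written in server-side JavaScript; if you would rather run the ping from a small worker or console app of your own, the C# equivalent is roughly this (URL and interval are placeholders):

    using System;
    using System.Net;
    using System.Threading;

    class SitePinger
    {
        static void Main()
        {
            const string url = "http://yoursite.azurewebsites.net/"; // placeholder
            while (true)
            {
                try
                {
                    using (var client = new WebClient())
                        client.DownloadString(url); // any successful request keeps the site warm
                    Console.WriteLine("{0:T} ping OK", DateTime.Now);
                }
                catch (WebException ex)
                {
                    Console.WriteLine("{0:T} ping failed: {1}", DateTime.Now, ex.Message);
                }
                Thread.Sleep(TimeSpan.FromMinutes(5)); // stay under the idle timeout
            }
        }
    }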
Windows Azure Web Sites are still in preview, so there is currently no SLA for that service.
Web Sites do idle out when in Free or Shared mode, which is likely what you are seeing. When a site idles out it really is removed from memory, and the IIS process hosting the site is shut down. This is how they achieve the density of hosting 100 sites on the same VM.
You can find a lot of info on the Channel9 site about why this is the case, or, as a shameless plug, here is an article that talks about how the process is handled.
Now, you mentioned that you are using a keep-alive controller, but what exactly do you mean by that? I use pingdom.com to constantly request data from one of my websites, and that seems to work pretty well. It is still possible that a request doesn't come in, the idle timeout is reached, and the site is cycled. It is also possible that, even if you always keep the site running, the VM the site sits on needs its underlying OS updated, in which case Azure will move the site process to another VM, and that can also cause a slow start-up on the next request.
I'd start logging your application start-ups and then look through the logs to see how often that is happening.
If you only need to warm the site up once (vs. keeping it warm) and are mostly trying to prevent your customers from experiencing page cold starts, I believe the correct tool is IIS Application Initialization. You can configure it with a list of URLs to hit before IIS deems the app ready for action.
My site suffers from page cold starts, and that is severely magnified in Azure Websites (even on an S3), but it is absolutely speedy after it has served that first request, thanks to several layers of caching (our inefficient use of Umbraco's dynamic nodes query language creates a lot of database churn, which we're cleaning up opportunistically).
From what I've read, and from my own web.config attempts, this is still not available in Azure Websites. I've asked Microsoft for it here: MS IDEA: Application Initialization to warm up specific pages when app pool starts. Please consider voting for it.
For each service/site, go to "Configure", then switch "Always On" to ON. Also make sure you click Save; it took about 2 minutes before my website noticed the change.
Why this is not the default is kind of mind-boggling, because my setup on HostGator ran much faster than Azure. I guess Microsoft figures that if nobody is accessing your site, it's okay for it to have a long load time.

Enough bandwidth to support

I have a client that is paying $1500 per month for hosting of one website (one domain name; email is hosted elsewhere). The website is pretty low-traffic: about 100 unique visitors a week. The only catch (and the reason it is so expensive) is that their database is 15 GB and is replicated from the hosting company to a server inside my small company's office.
Inside the office, there is a desktop application that hits the internal database quite a bit. From the website, some data is entered into THAT version of the database. Replication keeps both databases in sync on a schedule of every 5 minutes.
My client has a T1 running into their office. I want to knock out the hosting provider altogether, host the website on a server they already have (more than capable of handling it), and dump the replication entirely. This would save them $1500 per month, and for a company of 5 that really makes a difference.
Assuming I already have a backup strategy in place (way to move a copy of the DB offsite every day), what are the problems with this?
Support? They can reboot their server as easily as the hosting provider can.
What if the server goes down for good? There is a duplicate that I can bring up in a couple of hours, and that is the level of service they really require.
What am I missing here? I want to save them money, but I don't want to screw them over...
EDIT: Some of the answers and comments make it clear that I wasn't clear myself. My client (company A, not a hosting provider) is paying company B to host their website. The website has a database (MS SQL Server 2000) that is 15 GB. That SQL Server DB is replicated back to a server at company A.
Company B is charging Company A $1500 per month for this service.
Company A already has a T1 for connectivity to the internet. They are located in a run-of-the-mill business park.
I am proposing doing away with any outside hosting, getting a DNS provider to point the website to Company A's static IP and hosting the website on a server inside Company A. Then there would be no need for any replication at all, and they wouldn't be paying company B $1500 per month.
I hope that explains it. I'm going to re-read and comment on all the current answers.
Really, any advice is very appreciated.
It sounds to me like your only risk in moving the server in-house is your T1 going down. If you have a backup strategy in place for that, go for it.
The other option is to co-locate your own server with your own SQL Server licence on it. Hosting companies charge a lot for hosting SQL Server databases because they have to pay per-CPU licensing for it. So they build up a powerful server to serve lots of clients' databases, but SQL Server offers no way to do usage accounting, so the only way they can bill/screw you is on database size.
It sounds like the traffic on your site is low enough that you could get a dual-core server and a 1-CPU SQL Server licence for a one-off cost of a few thousand dollars, and then you're only paying the monthly co-lo price.
A hosting provider can monitor the server 24x7. What if the server crashes at 8 pm? I assume the people at the small company are not working around the clock?
That depends on the service this DB is providing. What are the requirements for its uptime?
Database replication isn't that expensive in terms of bandwidth; well, assuming you're not doing a hot copy of the entire DB files across the link, that is.
Check out log shipping, or any of the supported replication options that will replicate the DB using minimal bandwidth. (You never said what the DB was, so I can't comment further there.)
I would move to the new server and keep replication. At the very least, if you're really worried about data loss, get another server in the same facility and copy across to that one; even if you copy 15 GB every 5 minutes, it'll use non-chargeable bandwidth without even leaving the switch they're connected to.
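To make the log-shipping suggestion concrete: its building block is a periodic transaction-log backup that gets copied to a standby server and restored there. A minimal sketch of the backup half in C# (database name, file path, and connection string are placeholders; SQL Server's built-in log shipping or a SQL Agent job would normally drive this):

    using System;
    using System.Data.SqlClient;

    class LogBackup
    {
        // The primary half of a minimal log-shipping setup: back up the
        // transaction log periodically, then copy the file to the standby
        // and restore it there. Names and paths below are placeholders.
        static void Main()
        {
            const string connStr = "Server=.;Database=master;Integrated Security=true;";
            string path = string.Format(@"D:\LogShip\MyDb_{0:yyyyMMdd_HHmmss}.trn", DateTime.Now);

            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                var cmd = new SqlCommand("BACKUP LOG [MyDb] TO DISK = @path", conn);
                cmd.Parameters.AddWithValue("@path", path);
                cmd.CommandTimeout = 600; // log backups can take a while
                cmd.ExecuteNonQuery();
            }
            Console.WriteLine("Log backed up to " + path);
        }
    }

Only the changes since the last log backup cross the wire, which is why this uses far less bandwidth than re-copying a 15 GB database.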
