numsessions limit hits on Parallels VPS

I hope someone can help me figure out this issue.
I have a Windows-based VPS with 6 GB of RAM and enough disk space.
I have only 3 websites hosted, and none of them is advertised publicly, so no one else should be accessing them.
The issue is that the server responds slowly whenever we try to load the sites in a browser, connect over RDP, or use the Parallels Plesk panel. Everything is slow to respond.
Every one to three minutes the resource monitor jumps from the green zone to the red zone with a lot of numsessions limit hits.
I have browsed SO, read the Parallels documentation, and even browsed their forum, and no one mentions a real solution. They say numsessions is hit when many RDP or Parallels Plesk panel sessions are left open. In my case no one has access to the server and no one is logging in to it either. I have rebooted the server many times with only one session open, the one controlling the server via Virtuozzo (Parallels Power Panel), and numsessions is hit again within 3 minutes of the reboot.
I have talked to the idiots at 1and1 (where we bought the VPS XXL) and they have no clue, saying the problem is not theirs but ours, or Microsoft Windows'! I have not installed any third-party or even proprietary software on the server that could cause the issue. The server is brand new, and I have only created the new sites via the Parallels Plesk panel. Email is not working either.
Windows Event Viewer doesn't show much information either.
The last resort is to re-image the server, which may solve the issue, but I doubt it, since the problem seems to have been present from the moment we bought the server.
Could anyone shed some light on this, please?
Thanks

Just noticed my resource log is full of these as well. I think the issue is that a session is counted as soon as an RDP connection is made, so bots trying common admin passwords count towards the limit.
The real issue is that, since I can find no way to filter these out of the resource alerts, you basically can't find the real problem you have: the logs are just full of numsessions hits.

Related

Programmatically sync time on Windows via NTP

A Windows system in a domain synchronizes its time from a domain controller.
Is it possible to synchronize it from an external server via NTP?
Thank you
Yes, you don't have to sync the time via the domain controller. But if you are going to split things up that way, you need to make sure the DC itself keeps correct time, because clock skew has implications for things like Kerberos, and you may find you start to get odd authentication errors or failures.
This question and my answer should give you enough detail to get it working on Windows 7 and 8 workstations. I've not tested Windows 10 yet, but it should be very similar. Server versions may have slightly different settings, and certainly in Server 2012 or higher there is a Group Policy section that allows NTP-specific settings to be configured and managed, and that can be applied to Windows 7 and above.
There are some details and screenshots here which show the server Group Policy side of things (2012).
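For reference, the usual way to point a machine at an external time source by hand is the built-in w32tm tool, run from an elevated prompt. The pool hostname below is just an example; substitute whatever NTP server you want:

    w32tm /config /manualpeerlist:"0.pool.ntp.org" /syncfromflags:manual /update
    w32tm /resync
    w32tm /query /status

These are the same settings the Group Policy entries manage under Computer Configuration > Administrative Templates > System > Windows Time Service.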

IBM MQ performance over Secure Client

I work on a large C++ application and often get the opportunity to continue this work while at home. The IBM MQ configuration uses some kind of domain group for authentication, so the application won't run unless I'm connected to the office VPN via Secure Client.
Why does the application run so much slower when connected to the VPN than in the office?
As background, the application also needs a database (Oracle), etc., but all of this is hosted locally, so it shouldn't be affected by the change in location.
I'm using a local MQ server as well, in case that wasn't clear. Essentially, beyond the MQ domain authentication (which happens at the start of the process as far as I can tell), application performance is dramatically reduced. A process which takes 30 minutes in the office takes more than 2 hours at home. I have noticed the filesystem is generally slower (although this is an SSD laptop). Could ClearCase / Sophos be conflicting?
Is there a 'good way' I can monitor Windows to see what exactly, if anything, is slowing the machine down out of the office?
If I get to May with no useful responses I think I'll nuke this message. FYI, I tried Server Fault as well, but to no avail (they complained and said the question should live on Stack Overflow instead!)
Well, if your internet connection is slow, that alone could explain the issue: every MQ and authentication exchange that was a LAN round trip in the office now has to cross the VPN, and those round trips add up.
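One hedged way to test that theory is to time a bare TCP connect to the relevant server from both locations and compare. A minimal C# sketch; the host and port are placeholders, assuming the MQ listener (or the domain controller) is reachable over TCP:

    using System;
    using System.Diagnostics;
    using System.Net.Sockets;

    class ConnectLatency
    {
        static void Main()
        {
            // Placeholder host/port: substitute your MQ server and its listener port.
            const string host = "mq.example.corp";
            const int port = 1414;

            for (int i = 0; i < 5; i++)
            {
                var sw = Stopwatch.StartNew();
                using (var client = new TcpClient())
                {
                    client.Connect(host, port);   // time only the TCP handshake
                }
                sw.Stop();
                Console.WriteLine("Connect {0}: {1} ms", i + 1, sw.ElapsedMilliseconds);
            }
        }
    }

If the at-home numbers are an order of magnitude worse, latency rather than raw bandwidth is the likely culprit.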

Windows Azure website load time

Sometimes when I access my Windows Azure website, the initial response time is very slow. After the first page load the website is fast. Some background: the website is not visited that often at the moment. Further, I am using a keepalivecontroller to keep the website running, and the website is running in Shared mode. I am wondering: are websites that are not that active removed from memory in Windows Azure? Or is it just that background tasks at the operational level of Windows Azure are interfering sometimes? It is not transparent to me what is happening, so is there some SLA or something for Windows Azure websites?
There is now a new feature available for Windows Azure Websites in 'Reserved' mode that will keep your website warm. You can now turn on "Always On" under the "Configure" tab on your Azure Website, as explained in this blog post:
When the new “Always On” feature is enabled on a site, “Windows Azure will automatically ping your website regularly to ensure that the website is always active and in a warm/running state,” Guthrie writes. “This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).”
The easiest way to keep a website warm is to call it regularly using the Scheduler feature in Windows Azure Mobile Services.
You simply write a script in the Scheduler that pings your website every x minutes.
Here's a post covering how to do that: http://fabriccontroller.net/blog/posts/job-scheduling-in-windows-azure/
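The Scheduler scripts themselves are server-side JavaScript (see the post above), but the idea is just a timed GET. If you would rather host the ping yourself, say from a worker role or a scheduled task, here is a minimal C# sketch of the same idea; the URL is a placeholder:

    using System;
    using System.Net;
    using System.Threading;

    class KeepWarm
    {
        static void Main()
        {
            // Placeholder URL: substitute your site's address.
            const string url = "http://yoursite.azurewebsites.net/";

            while (true)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        client.DownloadString(url);   // a plain GET is enough to keep the worker process alive
                    }
                    Console.WriteLine("Pinged {0} at {1:T}", url, DateTime.UtcNow);
                }
                catch (WebException ex)
                {
                    Console.WriteLine("Ping failed: {0}", ex.Message);
                }
                Thread.Sleep(TimeSpan.FromMinutes(5));   // "every x minutes"; 5 is an arbitrary choice
            }
        }
    }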
The Windows Azure Web Sites are still in preview, so there is currently no SLA with that service.
The Web Sites do idle out when in Free or Shared mode, which is likely what you are seeing. When the site idles out it actually is removed from memory, and indeed the IIS process host running the site is shut down. This is how they can get the density of hosting 100 sites on the same VM.
You can find a lot of info on the Channel9 site about why this is the case, or, as a shameless plug, here is an article that talks about how the process is handled.
Now, you mentioned that you were using a keepalivecontroller, but what exactly do you mean by that? I use pingdom.com to constantly request data for one of my websites, and that seems to do pretty well. It is still possible that a request doesn't come in, the idle timeout is met, and the site is cycled. It is also possible that, even if you always have the site running, the VM the site sits on needs its underlying OS updated, in which case Azure would move the site process to another VM, which could also cause the slow start-up on the next request.
I'd start logging your application start-ups and then look through your logs to see how often that is happening.
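If the site is ASP.NET (the keepalivecontroller suggests MVC), one simple way to log start-ups is a trace line in Application_Start; a minimal sketch, with the class and message purely illustrative:

    using System;
    using System.Diagnostics;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_Start()
        {
            // Runs once per app-domain start, so every line written here marks a cold start.
            Trace.TraceInformation("Application_Start at {0:o}", DateTime.UtcNow);
        }
    }

With application logging enabled on the site, counting those lines in the logs tells you how often the process is being cycled.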
If you only need to warm it up once (vs. keeping it warm) and are mostly trying to prevent your customers from experiencing page cold starts, I believe the correct tool is IIS Application Initialization. You can configure it with a list of URLs to hit before it deems the app ready for action.
My site is suffering from page cold starts, and that is severely magnified in Azure Websites (even on an S3), but it is absolutely speedy after it has served that first request, thanks to several layers of caching (our inefficient use of Umbraco's dynamic node query language creates a lot of database churn, which we're cleaning up opportunistically).
From what I've read and from my own web.config attempts, this is still not available in Azure Websites. I've asked Microsoft for it here: MS IDEA: Application Initialization to warm up specific pages when app pool starts. Please consider voting for it.
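For reference, on IIS 8 installations where the module is available, the configuration is a web.config section along these lines; the page list is illustrative:

    <system.webServer>
      <applicationInitialization doAppInitAfterRestart="true">
        <!-- pages to request before the app is considered ready -->
        <add initializationPage="/" />
        <add initializationPage="/products" />
      </applicationInitialization>
    </system.webServer>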
For each service/site you need to go to "Configure", then switch "Always On" to ON. Also make sure you click Save; it took about 2 minutes before my website noticed the change.
Why this is not the default is kind of mind-boggling, because my setup on HostGator was running much faster than Azure. I guess Microsoft figures that if nobody is accessing your site, it's okay if it has a long load time.

Joomla 1.5 site keeps getting blocked by host

I have a Joomla 1.5.26 site which I have had since August 2012. It has been stable since then, with no changes to components, etc. It is firewalled with RSFirewall and all the other security precautions have been taken.
During the past few weeks the site has started to be blocked by the hosting company that holds it, who claim that there are too many active connections. I have hunted through the site, disabled various components, etc., and am still getting the same problems.
Has anyone experienced similar issues? I am thinking of moving the site to a more reputable host for Joomla sites to see if it is more robust elsewhere. I just can't understand why this keeps happening. The host blocks the IP address of any machine we use to administer the site if the connections get too many, and then we are locked out for about fifteen minutes. As I said previously, nothing has changed on the site, and I cannot find any evidence of the files or database being hacked.
Any ideas?
Much appreciated
James
If your host is telling you that there have been too many connections, then it most likely means one of the following three things:
Your site has reached its maximum monthly bandwidth allowance
Your site has too much traffic for your current host and might need to be moved to a VPS or something more powerful
Your host is plain crap
Check whether you have reached your maximum monthly bandwidth allowance (if you have one); otherwise I would probably recommend transferring to a different host, assuming your site isn't an extremely popular one generating thousands of users a day.

Windows Azure Caching (Preview) ErrorCode<ERRCA0017>:SubStatus<ES0006>:

I'm using the role-based caching feature for a Windows Azure web role, configured as co-located. I've followed the steps given in the Windows Azure docs for Caching (Preview). I get the following error:
ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.). Additional Information : The client was trying to communicate with the server: net.tcp://127.255.0.4:20010/.
I'm running everything on localhost, using the local development storage; my cache client is in the same role as the server. I've changed many configuration attributes, but I always get that exception or something similar, like "cannot connect to tcp....".
I'd appreciate some help. Thanks.
There are a couple of things that could be going wrong in your application.
The very first thing is to make sure you have SDK 1.7 on your machine along with the Windows Azure Caching Services bits, and then verify that your references are set from Windows Azure Caching (not from the Windows Server AppFabric SDK). I have seen that misconfiguration lead to exactly these errors in the past.
Next, have you changed your dataCacheClient identifier to your role name, as described in the documentation linked here? If you follow the documentation as described you should not hit any errors, so for the sake of checking what could be wrong, you can create the exact same application described in that link and see whether it works.
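For the co-located preview, that setting lives in web.config and looks roughly like this; "WebRole1" is a placeholder for whatever your cache-hosting role is called:

    <dataCacheClients>
      <dataCacheClient name="default">
        <!-- identifier must match the name of the role hosting the cache -->
        <autoDiscover isEnabled="true" identifier="WebRole1" />
      </dataCacheClient>
    </dataCacheClients>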
To get a more detailed error, be sure to increase the DataCacheFactoryConfiguration.ChannelOpenTimeout value to something longer, e.g. 2 minutes instead of the default 20 seconds, as described here. This step will get you details about the inner exception, which may lead to the actual root cause of your problem.
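In code, that looks something like the sketch below; the cache name is a placeholder:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class CacheDiagnostics
    {
        static DataCache OpenCache()
        {
            var config = new DataCacheFactoryConfiguration();

            // Lengthen the channel-open timeout so failures surface a full
            // inner exception instead of the generic retry message.
            config.ChannelOpenTimeout = TimeSpan.FromMinutes(2);

            var factory = new DataCacheFactory(config);
            return factory.GetCache("default");   // "default" is a placeholder cache name
        }
    }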
We use Azure co-located caching (not in preview anymore) as our session backer and have fairly regular outages, about once a month.
We tried using the Enterprise Library Transient Fault Handling block, but our instances still hang when caching experiences problems. I think the transient-fault code would work for data caching, but for session backing there is some activity closer to the metal that we can't seem to code against.
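For the data-caching case, a retry wrapper along these lines is roughly what we tried. The detection strategy here is hand-rolled and treats any DataCacheException as transient, which is an assumption on my part rather than anything the block ships with:

    using System;
    using Microsoft.ApplicationServer.Caching;
    using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

    // Assumption: any DataCacheException is transient and worth retrying.
    class CacheErrorDetection : ITransientErrorDetectionStrategy
    {
        public bool IsTransient(Exception ex)
        {
            return ex is DataCacheException;
        }
    }

    class RetryingCacheReader
    {
        static object GetWithRetry(DataCache cache, string key)
        {
            // Three attempts, one second apart.
            var policy = new RetryPolicy<CacheErrorDetection>(
                new FixedInterval(3, TimeSpan.FromSeconds(1)));

            return policy.ExecuteAction(() => cache.Get(key));
        }
    }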
The error codes have become more informative over the last year and go something like...
ErrorCode:SubStatus:The request timed out. Additional Information : The client was trying to communicate with the server: net.tcp://10.xx.xxx.xx:xxxxx/.
Our best guess so far, from experimenting and from MS support, is that each co-located cache role instance (or at least one of them) needs to know about all the other instances' IPs; since Azure can destroy and re-provision instances whenever it wants, this update sometimes fails to reach the dependent instances. This is secret sauce for Azure, but it is not a secret when our site goes down. I'm looking for any more information on this and to see how others are working around the issue.
One possible workaround: one of our talented platform administrators found that resetting IIS on the instances and scaling up by two more instances seems to help. This makes sense to me, because it gives caching another chance to gather all the required info about the other instances. This is NOT CONFIRMED to solve the problem, but if it works again during the next outage it could be a valid workaround.
