I could not find this one on the site. It may be trivial, but the error message is pretty misleading.
When you try out things with the Azure SDK and the local emulators (storage and compute emulators) while debugging, you may get the following error during initialization of those emulators:
The process cannot access the file because it is being used by another process.
Moreover, if you want to test things from code and access the blob storage emulator, you may get a 400 Bad Request as a result.
You can configure this. Open the "DSServiceLDB.exe.config" file under "C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\devstore".
Inside the file, tweak the values in this section:
<services>
  <service name="Blob" url="http://127.0.0.1:10000/"/>
  <service name="Queue" url="http://127.0.0.1:10001/"/>
  <service name="Table" url="http://127.0.0.1:10002/"/>
</services>
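If you are hitting the 400 Bad Request from code, a quick way to sanity-check the emulator endpoints is a smoke test against development storage. A minimal sketch, assuming the classic Microsoft.WindowsAzure.StorageClient library; the container name "test" is just an example:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class EmulatorSmokeTest
{
    static void Main()
    {
        // Shortcut account that targets 127.0.0.1:10000-10002 with the
        // well-known devstoreaccount1 credentials.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("test");
        container.CreateIfNotExist(); // a 400 here usually points at misconfigured emulator endpoints
        Console.WriteLine(container.Uri);
    }
}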
Application Web Service Control Manager detected AWEBSVC is not responding to HTTP requests. The http status code and text is 500, Internal Server Error.
I am getting this critical error under Component Status. I tried the link below to check the issue:
http://localhost/CMApplicationCatalogSvc/applicationofferService.svc
This page gave me the following info:
Memory gates checking failed because the free memory (270270464 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment config element.
When I checked, the server has enough space on all drives. How can I resolve this issue?
The problem here is not drive space; it's memory.
In the .NET Framework 4.5.1, an exception is thrown if the available memory on the web server is less than the percentage specified by the minFreeMemoryPercentageToActivateService configuration setting. (In the .NET Framework 4.5, this setting was ignored.)
To revert to the previous behavior where the minFreeMemoryPercentageToActivateService setting was ignored, modify the web.config file as follows:
<serviceHostingEnvironment multipleSiteBindingsEnabled="true"
minFreeMemoryPercentageToActivateService="0">
<serviceActivations>
...
</serviceActivations>
</serviceHostingEnvironment>
To apply the change machine-wide instead, edit machine.config, which is located at:
32-bit:
%windir%\Microsoft.NET\Framework\[version]\config\machine.config
64-bit:
%windir%\Microsoft.NET\Framework64\[version]\config\machine.config
Locate the file, search (Ctrl+F) for the serviceHostingEnvironment element, and edit the attribute value as shown above.
Review here for reference.
I went to deploy over an existing Cloud Service (in staging) and received the following message:
"Error: No deployments were found. Http Status Code: NotFound"
Does anyone know what this means?
I am looking at the Cloud Service, and it surely exists.
UPDATE:
I've been using the same deploy method as in prior (successful) efforts. I simply right-click the cloud service in Visual Studio 2013, and in the Windows Azure Publish Summary I set the correct cloud service name, staging, release ... and press Publish. Nothing special really... which is why I am perplexed.
You may have exceeded the maximum number of cores allowed on your Azure subscription. Either remove unneeded deployments or ask Microsoft to increase the maximum allowed cores on your Azure subscription.
Since I had this problem and none of the answers above were the cause... I had to dig a little bit more. The RoleName specified in the Role tag must of course match the one in the EndpointAcl tag.
<Role name="TheRoleName">
  <Instances count="1" />
</Role>
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="ac-name-1">
      <Rule action="deny" description="TheWorld" order="100" remoteSubnet="0.0.0.0/32" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="TheRoleName" endPoint="HTTP" accessControl="ac-name-1" />
    <EndpointAcl role="TheRoleName" endPoint="HTTPS" accessControl="ac-name-1" />
  </EndpointAcls>
</NetworkConfiguration>
UPDATE
It seems that the previous situation is not the only one causing this error.
I ran into it again now due to a related but still different mismatch.
In the file ServiceDefinition.csdef, the <WebRole name="TheRoleName" vmsize="Standard_D1"> tag must have a vmsize that exists (of course!), but according to Microsoft (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-sizes-specs/) the value Standard_D1_v2 should also be accepted.
At the moment it was causing this same error... once I removed the _v2 it worked fine.
Conclusion: every time something is wrong in the Azure configs, this error message might come along... it is then necessary to find out where it came from.
Just to add some info.
The same occurred to me; my VM size was set to a size that was "wrong".
I have multiple subscriptions. I was pointing at one of them and using a "D2" machine; I don't know what happened, but the information was refreshed and this machine size disappeared as an option. I then selected "Large" (old), and it worked well.
Lost 6 hours trying to upload this #$%#$% package.
I think the problem can be related to any VM size issue.
I hit this problem after resizing my role from small to extra-small. I still had the Local Storage set to the default of 20GB, which an extra-small instance can't hold. I ended up reducing it to 100MB and the deployment worked (the role I'm deploying is in maintenance mode only for a couple of months, so I don't care much about getting diagnostics from it).
A quick tip: I was getting nowhere debugging this with Visual Studio's error message. On a whim, I switched to the Azure portal and manually uploaded the package. That finally gave me a useful error: the VM size was too small for the resources I had requested.
I encountered this error during the initial deployment of a Cloud Service that required a specific SSL Certificate... that was missing from Azure.
Corrected the certificate - deploy succeeded.
(After the first deployment Visual Studio provides a meaningful error in this case.)
I've followed the instructions found in PowerShell.org's DSC book to set up an HTTP pull server (Windows Server 2012) to use with DSC. I set up the pull server, then crafted a configuration to be pulled, then set up my node's LCM to pull and run the configuration.
I can see a Scheduled task on the node under Task Scheduler/Microsoft/Windows/Desired State Configuration, so I know at least something worked. However, my configuration is not being run. When I look at the Event Logs under Apps&Svcs/Microsoft/Windows/Desired State Configuration/Operational Log, I see the following event:
Job {E0B6977A-E34F-4EDD-8455-E555063CD3DD} :
This event indicates that failure happens when LCM is trying to get the configuration from pull server using download manager WebDownloadManager. ErrorId is 0x1. ErrorDetail is The attempt to get the action from server http://pullserver.local:8080/PSDSCPullServer/Action(ConfigurationId='adaba4f6-b2b6-420d-a1dd-3714106451d6')/GetAction returned unexpected response code InternalServerError.
When I manually hit that URL, after enabling CustomErrors, here is the error:
Exception Details: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Isam.Esent.Interop, Version=6.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
I tried googling for this error (no luck) and I can't find helpful information on this DLL. It looks like it's supposed to come with some parts of Windows, but I don't see it on my system. I'm reluctant to download it from one of those "DLL Downloader" sites.
Any ideas why the DSC Pull Server seems to require this DLL and I don't have it?
It seems that the PSDSCPullServer resource from xPSDesiredStateConfiguration defaults to using ESENT as a database provider, which only works with Windows 8.1 (not Server 2012). I found some documentation here with some code I could copy. I just had to edit the web.config for my pull server and replace this:
<add key="dbprovider" value="ESENT" />
<add key="dbconnectionstr" value="C:\Program Files\WindowsPowerShell\DscService\Devices.edb" />
with this:
<add key="dbprovider" value="System.Data.OleDb" />
<add key="dbconnectionstr" value="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Program Files\WindowsPowerShell\DscService\Devices.mdb;"/>
The fact that the original configuration tried to point to a Devices.edb (which did not exist on my system, .mdb did) is further evidence that something funky was going on.
What method have you used? The xPSDesiredStateConfiguration module from the resource kit, or manual steps? I have not gone through the DSC book myself, so I wouldn't know what they are recommending.
The Microsoft.Isam.Esent.Interop assembly is the ESE database provider. However, you need to use this provider only for Blue OS (Windows 8.1). Which OS are you using for the Pull Server? For all supported OS other than the Blue OS, you are supposed to use the Jet provider for the devices.mdb.
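If you want to verify whether that interop assembly is actually available on your pull server, a minimal C# sketch like the following can tell you (the assembly name is copied verbatim from the error above):

using System;
using System.IO;
using System.Reflection;

class EsentCheck
{
    static void Main()
    {
        try
        {
            // Attempt to load the exact assembly the pull server is asking for.
            Assembly.Load("Microsoft.Isam.Esent.Interop, Version=6.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35");
            Console.WriteLine("ESENT interop assembly found.");
        }
        catch (FileNotFoundException)
        {
            // On OSes without the ESENT provider, switch to the Jet provider as described above.
            Console.WriteLine("ESENT interop assembly missing; use the Jet provider instead.");
        }
    }
}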
We have a proxy that takes messages from a JMS queue and sends them to an FTP folder. We have now discovered that sending to the FTP is very slow when the target directory on the FTP already contains a lot of files (i.e. with around 2000 files in a directory, it already takes several seconds).
Here is the code of our proxy (it gets plain-text messages from a JMS queue and writes them to FTP):
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="myProxy" statistics="disable" trace="disable" transports="jms">
  <parameter name="transport.jms.Destination">myQueue</parameter>
  <parameter name="transport.jms.ConnectionFactory">myQueueConnectionFactory</parameter>
  <parameter name="transport.jms.DestinationType">queue</parameter>
  <parameter name="transport.jms.ContentType">
    <rules>
      <jmsProperty>contentType</jmsProperty>
      <default>text/plain</default>
    </rules>
  </parameter>
  <target faultSequence="rollbackSequence">
    <inSequence>
      <log level="custom">
        <property name="STATUS" value="myProxy called"/>
      </log>
      <property name="ClientApiNonBlocking" scope="axis2" action="remove"/>
      <property name="OUT_ONLY" value="true"/>
      <property name="transport.vfs.ReplyFileName" expression="fn:concat(get-property('SYSTEM_DATE','yyyyMMddHHmmss_SSS'), '_result.txt')" scope="transport"/>
      <send>
        <endpoint key="myFTPendpoint"/>
      </send>
    </inSequence>
  </target>
</proxy>
And the FTP endpoint looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<endpoint xmlns="http://ws.apache.org/ns/synapse" name="myFTPendpoint">
  <address uri="vfs:ftp://USER:PASSWORD#SERVER.com/path/toSomewhere?vfs.passive=true"/>
</endpoint>
My analysis for now:
It is only slow when using FTP with VFS; when using the local file system, it is fast.
The files are tiny, so it's not the upload time.
The network is fast.
Speed depends on the number of files already in the directory on the FTP!
Possible solutions?
Fix the speed problem. Disable the directory listing?
Workaround: create new folders at the output (so that no single folder gets filled too much).
Has someone else run into the same issue? And how can the FTP speed to big directories be improved?
Thanks for any help
I believe that regardless of whether you do an explicit directory listing, there will always be an implied directory listing to determine whether the file write operation will be an overwrite or a create.
This leaves you with the other workaround.
You should create new folders at the output. Implement a hashing scheme to aid in the folder naming so that you know the folders will not get filled too much. For example, instead of file1234.ext consider file/1/2/3/4.ext; a sketch follows below.
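A minimal sketch of such a fan-out scheme; the two-level depth and the use of MD5 are just example choices, not anything mandated by VFS:

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

class FanOut
{
    // Derive a nested path such as "3f/a1/file1234.ext" from the file name,
    // so that no single FTP directory accumulates thousands of entries.
    static string NestedPath(string fileName)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(fileName));
            string hex = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
            return Path.Combine(hex.Substring(0, 2), hex.Substring(2, 2), fileName);
        }
    }

    static void Main()
    {
        Console.WriteLine(NestedPath("file1234.ext")); // e.g. "81/a2/file1234.ext"
    }
}

Hashing the name keeps the distribution even and the target folder deterministic, so the sender can compute the same path for a given file without listing anything.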
Generally, if you have performance issues you should benchmark.
Try performing the same action from a command line FTP client and see where the slow point is. Running each of the commands one by one will allow you to see which exact step(s) perform differently when putting to a folder with many files vs an empty folder.
You should also consider that the performance issue may not be with FTP. Just because that's the channel you're seeing the issue on doesn't mean (purely as an example) that the OS isn't just slow when handling large folders (like NT used to be). FTP is the way you're seeing this error; that doesn't mean it's related to the cause.
To test this, I would access the server directly and access the folder that contains the files, for example with a quick listing benchmark like the sketch below.
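A minimal sketch of such a benchmark, assuming you can run a small .NET program on the server; the path is a placeholder for the physical FTP target directory:

using System;
using System.Diagnostics;
using System.IO;

class ListTiming
{
    static void Main(string[] args)
    {
        // Pass the physical path of the FTP target directory as the first argument.
        string path = args.Length > 0 ? args[0] : @"C:\ftp\path\toSomewhere";
        Stopwatch sw = Stopwatch.StartNew();
        int count = Directory.GetFiles(path).Length; // raw file-system listing, no FTP involved
        sw.Stop();
        Console.WriteLine("{0} files listed in {1} ms", count, sw.ElapsedMilliseconds);
    }
}

If the raw listing is already slow, the bottleneck is the file system, not the FTP layer.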
Finally, if none of those give you any clues, I'd probably try doing the same thing on a different end-point to see if there's a persistent problem.
You will always have issues on FTP with that number of files; this is a common problem, and it is not related to JMS. To confirm that, use an FTP client like FileZilla and try to list the directory where the 2000 files exist...
I am trying to create an installer for my application via WiX 3.0. The exact code is:
<File Id="ServiceComponentMain" Name="$(var.myProgramService.TargetFileName)" Source="$(var.myProgramService.TargetPath)" DiskId="1" Vital="yes"/>
<!-- service will need to be installed under Local Service -->
<ServiceInstall
Id="MyProgramServiceInstaller"
Type="ownProcess"
Vital="yes"
Name="MyProgramAddon"
DisplayName="[removed]"
Description="[removed]"
Start="auto"
Account="LocalService"
ErrorControl="ignore"
Interactive="no"/>
<ServiceControl Id="StartDDService" Name="MyProgramServiceInstaller" Start="install" Wait="no" />
<ServiceControl Id="StopDDService" Name="MyProgramServiceInstaller" Stop="both" Wait="yes" Remove="uninstall" />
Thing is, for some reason LocalService fails on the "Installing services" step, and if I change it to "LocalSystem" then the installer times out while trying to start the service.
The service starts fine manually and at system startup, and for all intents and purposes works great. I've heard there are issues getting services to work right under LocalService, but Google isn't really helping, as everyone's responses have been "got it to work kthx".
Just looking to get this service set up and started during installation, that's all. Any help? Thanks!
Make sure the services.msc window is closed when you install.
Have you tried ...
NT AUTHORITY\LocalService
Per this doc ...
"... but the name of the account must be NT AUTHORITY\LocalService when you call CreateService, regardless of the locale, or unexpected results can occur."
reference: ServiceControl Table
The MSI documentation for the ServiceControl Table states that 'Name' is the string name of the service. In your code snippet, your ServiceControl 'Name' is set to the 'Id' of the ServiceInstall and not its 'Name'. So your ServiceControl elements should read:
<ServiceControl Id="StartDDService" Name="MyProgramAddon" Start="install" Wait="no" />
<ServiceControl Id="StopDDService" Name="MyProgramAddon" Stop="both" Wait="yes" Remove="uninstall" />
Here is another case where a LocalSystem service can fail to install with error 1923: if you have another service already installed with the same DisplayName (but a different internal service name, path, etc.). I just had this happen to me.
I spent a while looking into this and discovered it was because I had the KeyPath attribute set on the component, not on the file. My WiX file now looks like:
<Component Id="comp_WF_HOST_18" DiskId="1" KeyPath="no" Guid="3343967A-7DF8-4464-90CA-7126C555A254">
<File Id="file_WF_HOST_18" Checksum="yes" Source="C:\Projects\GouldTechnology\Infrastructure\WorkflowHost\WorkflowHost\bin\Release\WorkflowHost.exe" KeyPath="yes"/>
<ServiceInstall
Id="workflowHostInstaller"
Type="ownProcess"
Vital="yes"
Name="WorkflowHost"
DisplayName="Workflow Host"
Start="demand"
Account="[WORKFLOW_HOST_USER_ACCOUNT]"
Password="[WORKFLOW_HOST_USER_PASSWORD]"
ErrorControl="critical"
Interactive="no"/>
<ServiceControl Id="StartWFService" Name="workflowHostInstaller" Start="install" Stop="both" Remove="both" Wait="no" />
</Component>
Now I just need to work out how to give it the correct permissions...
I had the same problem. It turns out that I had a typo in <ServiceControl Id="StartService" Name="MyServiceName" ...>, where my Name did not match the service name I specified in the service project when I created it.
This was also the problem with my service not uninstalling.
Had the same problem but with specified accounts. I got bored of it and created a CA (custom action) to start the service after the install was completed instead. Just don't bother trying to start it with MSI; leave it to a CA, unless you get some quality info from somewhere.
BTW using LocalSystem and a manually started service works fine. Never got any other variations to work.
I'll just echo aristippus303's advice: Don't try to start a service with Windows Installer, and don't set any account, just accept the default of LocalSystem during installation. Trying to do anything else is too problematic. Windows Installer waits for the service to indicate it has started, and there are too many things that can go wrong, what with permissions and rights and firewall settings and missing files and so on, so then Windows Installer ends up frozen or terminating with an error and your install has failed.
What you want to do is specify in your documentation that the user should manually change the service's account (if necessary) and manually start the service after the install is done, and troubleshoot any problems that turn up at that point. Or just tell the user to reboot so the auto-start option will start the service, if you're fairly sure that there won't be problems.
Note that the documentation for the ServiceInstall element says about the Account attribute: "The account under which to start the service. Valid only when ServiceType is ownProcess." In your example you did not specify the ownProcess service type, which may be the problem.
We had the same problem, occurring only on Windows XP machines, where the service could not be installed. In the end we found that on XP the name setting from the WiX file is ignored and the service name set in the C# code is used instead. We happened to have a name in the code that contained whitespace, i.e. "Blah Blah Service"; when this was set to the same name as the one used in the WiX file, it worked well.
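For reference, a minimal sketch of that C# side, assuming a classic .NET Windows service with an installer class (all names here are examples): the ServiceName must match the Name attribute of the WiX ServiceInstall and ServiceControl elements exactly, with no stray whitespace.

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        // Account under which the service process runs.
        ServiceProcessInstaller processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalSystem
        };
        // The internal service name; it must match the WiX Name attribute exactly.
        ServiceInstaller serviceInstaller = new ServiceInstaller
        {
            ServiceName = "MyProgramAddon",
            StartType = ServiceStartMode.Automatic
        };
        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}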