Elmah errorMail not working on production - asp.net-mvc-3

I am using ELMAH in my MVC3 application and logging the errors to a SQL Database and sending emails.
Locally everything is working fine and emails are being delivered (using Cassini), but on the production server (IIS7) no emails are being delivered. I am, however, able to send emails through code!
<elmah>
<errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="Elmah.Sql" applicationName="qatar" />
<errorMail from="errors@gmail.com"
           to="someone@gmail.com"
           subject="Error"
           async="true"
           smtpPort="587"
           smtpServer="smtp.gmail.com"
           enablessl="true"
           userName="myuser@gmail.com"
           password="mypassword" />
</elmah>
Any clues or direction?

I found the reason, but still can't understand why it works locally:
enablessl should be changed to useSsl
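For reference, the corrected element would look like this (same placeholder values as above):

<errorMail from="errors@gmail.com"
           to="someone@gmail.com"
           subject="Error"
           async="true"
           smtpPort="587"
           smtpServer="smtp.gmail.com"
           useSsl="true"
           userName="myuser@gmail.com"
           password="mypassword" />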

Related

.Net Core 2.1 Error 413 on multipart/form-data POST only on IIS not IIS Express

I'm hitting a problem with multipart/form-data POST uploads on IIS. My client is an Angular SPA and my backend is on .Net Core 2.1 (I know it's old).
The backend project is published as Self-Contained win-x64. I'm not sure exactly how it's configured on IIS / Kestrel, but the IIS app runs under a specific application pool (No Managed Code / Integrated). My web.config looks like this:
<configuration>
<location path="." inheritInChildApplications="false">
<system.web>
<httpRuntime maxRequestLength="1048576" />
</system.web>
<system.webServer>
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="1073741824" />
</requestFiltering>
</security>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath=".\my.app.exe" stdoutLogEnabled="false" stdoutLogFile=".\log\path" />
</system.webServer>
</location>
</configuration>
In my development environment, by contrast, I'm using IIS Express.
Now I added a multipart/form-data upload, sending form data together with a blob/image. This worked out of the box in the development setup. However, when I publish to the staging environment with real IIS and the above web.config, I always get 413 Request Entity Too Large.
My controller looks like this:
[HttpPost]
[DisableRequestSizeLimit]
[RequestFormLimits(MultipartBodyLengthLimit = int.MaxValue, ValueLengthLimit = int.MaxValue)]
[RequestSizeLimit(int.MaxValue)]
[Route("my/route")]
public ActionResult MyHandler()
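The body is omitted above; presumably it reads the multipart payload roughly like this (a hypothetical sketch, not the actual implementation):

public ActionResult MyHandler()
{
    // Plain fields and the blob/image both arrive via Request.Form in ASP.NET Core 2.1.
    var file = Request.Form.Files.FirstOrDefault();
    if (file == null)
        return BadRequest("no file in the multipart payload");

    using (var stream = file.OpenReadStream())
    {
        // consume the image stream here
    }
    return Ok();
}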
I also added limits for Kestrel in Program.cs:
.UseKestrel(options =>
{
options.Limits.MaxRequestBodySize = 104857600; // 100MB
}
)
And to make the weirdness complete, the 413 in the staging environment only happens in Firefox. I have no idea what else I can do. I also cleared the cache in Firefox.
After a longer search I finally found the necessary setting in IIS to make this work in Firefox. It has indeed been mentioned in a few sources as the 'last option'; for me it was necessary in this case.
In 'IIS Manager' I selected the backend application and opened 'Configuration Editor'
system.webServer/serverRuntime -> uploadReadAheadSize=2147483647
That made it work.
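For anyone who prefers config over the IIS Manager UI, the same setting can (as far as I know) be expressed as a web.config fragment, though serverRuntime is often locked at the server level and may have to be set in applicationHost.config or unlocked first:

<system.webServer>
  <serverRuntime uploadReadAheadSize="2147483647" />
</system.webServer>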

Visual Studio debugging with windows auth not working

I have an ASP.NET webforms application that uses windows authentication when developing locally. The first time that I debug the webapp, IIS Express starts up and the pages work as expected. If I stop debugging and then start it again, I get in this endless cycle that no matter what page I go to, it always forwards to the login.aspx page and then to my Default.aspx page.
I can keep clicking on different pages, but it still keeps going to Login and then Default. I should be authenticated at this point and not be forwarded to login.aspx. I believe it is because IIS Express thinks that I am not authenticated. However, when I look at my cookies, I see that there is an ASP.NET_SessionId cookie, so I don't think this should be happening.
If this helps, I have this in my page_load for login.aspx
if (authSection.Mode == AuthenticationMode.Windows)
{
//stuff happens here
Response.Redirect("Default.aspx");
return;
}
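(authSection isn't declared in the snippet; presumably it is obtained along these lines, assuming the standard system.web/authentication section and the System.Web.Configuration namespace:)

// Hypothetical: how authSection might be obtained earlier in Page_Load.
AuthenticationSection authSection =
    (AuthenticationSection)WebConfigurationManager.GetSection("system.web/authentication");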
To fix the problem, I have to kill IIS Express and start debugging again. I'm not really sure why it thinks that I am not authenticated. Even though this isn't the same question, I tried the answer provided here: https://stackoverflow.com/a/19515891/888617 and it did not help.
Edit: This actually doesn't appear to solve my issue.
It turns out the issue was specific to Chrome. I was getting this issue when debugging and had to kill the IIS Express process for it to resolve itself. However, I found a more permanent solution by doing the following.
In %userprofile%\documents\IISExpress\config\applicationhost.config, ensure that overrideModeDefault is set to Allow for windowsAuthentication and anonymousAuthentication, like this:
<section name="anonymousAuthentication" overrideModeDefault="Allow" />
<section name="windowsAuthentication" overrideModeDefault="Allow" />
Your authentication section should also look like this. Note the only provider is NTLM.
<authentication>
<anonymousAuthentication enabled="false" userName="" />
<basicAuthentication enabled="false" />
<clientCertificateMappingAuthentication enabled="false" />
<digestAuthentication enabled="false" />
<iisClientCertificateMappingAuthentication enabled="false" />
<windowsAuthentication enabled="true">
<providers>
<add value="NTLM" />
</providers>
</windowsAuthentication>
</authentication>
This section should also be in the web.config.
<authentication>
<windowsAuthentication enabled="true">
<providers>
<clear />
<add value="NTLM" />
</providers>
</windowsAuthentication>
</authentication>

HttpWebRequest "keep-alive" header gets dropped

I am in NTLM hell here; hope you can help identify what I am missing.
I am ultimately trying to deliver SSRS reports to a frame in a browser, and only the images within the reports are giving me much grief. They don't appear unless the user has Firefox and enters their credentials twice: first for the report, then a second time for the images in the report.
I am using HttpWebRequest to obtain the SSRS reports.
I am sending the web server (IIS 7.5) a credential cache with "NTLM" and valid credentials to try to obtain the images from SSRS after I receive the HTML stream, so that I can store them locally and refer to those, which would spare the users from having to re-enter credentials again and again.
I see in Fiddler that the Type 1, 2 and 3 challenges are properly met during the NTLM handshake; however, the final response is 500 Internal Server Error. The response text also indicates rsStreamNotFound, but I find next to no info on what that means, and I think it's misleading as to what the real problem may be.
When I use Firefox, Firefox prompts me for my network credentials for the report and then again for the images, and it gets through bringing the images back. My HttpWebRequest fails with 500 Internal Server Error, and rsStreamNotFound.
The only difference I can see in the request headers between Firefox requests and my requests is that the "keep-alive" property gets dropped from my programmatic request, and the Firefox requests have it in there.
Why does my "keep-alive" header get dropped?
At this point, that is the only difference between my request and the request from Firefox, so I would like to eliminate that difference before jumping to any other conclusion.
I tried variations of:
req.KeepAlive = true;
req.PreAuthenticate = true;
and this gem:
var sp = req.ServicePoint;
var prop = sp.GetType().GetProperty( "HttpBehaviour", BindingFlags.Instance | BindingFlags.NonPublic );
prop.SetValue( sp, (byte)0, null );
Here is the CredentialCache:
CredentialCache credentialCache = new CredentialCache();
credentialCache.Add( new Uri( path ), "NTLM", NetCredentials );
... and "keep-alive" is not present in the request for my HttpWebRequest, and Firefox has it - why does mine get dropped?
Update:
I tried:
using (WebClient client = new WebClient()) {
client.DownloadFile(url, filePath);
}
...and I got 401 Unauthorized, so I tried with credentials and got 500 Internal Server Error.
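The with-credentials variant looked something like this (a sketch; credentialCache is the same NTLM cache shown earlier):

using (WebClient client = new WebClient()) {
    client.Credentials = credentialCache; // same CredentialCache with "NTLM"
    client.DownloadFile(url, filePath);
}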
It is not clear from your question where you are running the code. Is it running on the desktop as an executable? Or is it running inside Firefox as some sort of ActiveX control or something similar?
Anyway, I suggest using the .NET trace log facility to get a log of your transaction, and looking at the logfile.
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
<system.diagnostics>
<trace autoflush="true" />
<sources>
<source name="System.Net">
<listeners>
<add name="System.Net"/>
</listeners>
</source>
<source name="System.Net.Sockets">
<listeners>
<add name="System.Net"/>
</listeners>
</source>
<source name="System.Net.Cache">
<listeners>
<add name="System.Net"/>
</listeners>
</source>
</sources>
<sharedListeners>
<add
name="System.Net"
type="System.Diagnostics.TextWriterTraceListener"
initializeData="System.Net.trace.log"
/>
</sharedListeners>
<switches>
<add name="System.Net" value="Verbose" />
<add name="System.Net.Sockets" value="Verbose" />
<add name="System.Net.Cache" value="Verbose" />
</switches>
</system.diagnostics>
</configuration>
If your app name is app.exe, create a file called app.exe.config in the same directory as the exe, and put the above contents into it. Then run the app, and a logfile should be created.
This link should have more information on getting logfiles, in case you have a problem.
creating a system.net trace log
You can put the logfile on Pastebin after deleting personal information like host names, IP addresses, etc.
Also, give us a snippet of code that reproduces the problem. Then it will be easier to help.

How to set EnableSsl=True while sending emails using ActionMailer.Net?

I am using ActionMailer.Net in my MVC website to send email. I want to send from Gmail, but Gmail needs EnableSsl=true, and I don't know where in ActionMailer.Net I can configure this.
You have to edit your web.config file to something like this:
<system.net>
<mailSettings>
<!-- Method#1: Configure smtp server credentials -->
<smtp from="some-email@gmail.com">
<network enableSsl="true" host="smtp.gmail.com" port="587" userName="some-email@gmail.com" password="valid-password" />
</smtp>
</mailSettings>
</system.net>
Reference:
http://www.hanselman.com/blog/NuGetPackageOfTheWeek2MvcMailerSendsMailsWithASPNETMVCRazorViewsAndScaffolding.aspx
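For comparison, the enableSsl attribute maps to SmtpClient.EnableSsl, so a plain System.Net.Mail sketch with the same placeholder values would be (assuming .NET 4, where SmtpClient is IDisposable; the recipient address is a placeholder):

using (SmtpClient client = new SmtpClient("smtp.gmail.com", 587))
{
    client.EnableSsl = true; // the programmatic equivalent of enableSsl="true"
    client.Credentials = new NetworkCredential("some-email@gmail.com", "valid-password");
    client.Send("some-email@gmail.com", "recipient@example.com", "Test subject", "Test body");
}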

WCF Service response "HTTP/1.1 400 Bad Request" on shared hosting <aka Blank Page, XML Parsing Error, Invalid Address, Webpage cannot be found>

This is both information to those experiencing the issue and a question.
Edit: The question is why dropping "www." from the URL causes this error when a website running at the same address can be referenced without "www.".
I recently reproduced this problem using a trivial WCF service (the one from endpoint.tv) after resolving the usual config issues one faces moving a service from local IIS to shared hosting.
The problem was the following response (from Fiddler) upon checking the URL in a browser. In searching the web for posts on the topic, I found a number of unresolved issues pointing to the same problem, in addition to the posts where the usual shared-hosting config issues fix them up.
HTTP/1.1 400 Bad Request
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Tue, 17 Aug 2010 00:27:52 GMT
Content-Length: 0
In Safari/Chrome this manifests as a blank page.
In IE you get "The webpage cannot be found".
In FF you get "XML Parsing Error: no element found Location: http://................ Line Number 1, Column 1:" (which I saw in numerous unresolved posts on the web - feel free to backlink a possible solution)
In Opera you get "Invalid Address"
I was scratching my head over this for a while; then I thought to try putting in the "www.", which I was previously omitting from my URL for no particular reason.
Problem solved.
I can now see the normal output in the browser and interact with the service via WCF Test Client.
So the question is:
Why does this make a difference to the hosted WCF service when I know it does not make a difference for browsing to the website hosted at the same address? With or without the "www." I can browse to the website at the same domain, hosted on the same account.
So far I've tested this repro on a GoDaddy service. I may try some others later.
Also, if you happen to know, I'd be interested in what features are likely to make my WCF services need full trust rather than medium trust, and any thoughts you have on whether it is a good idea to utilise such features (in the context of a least-privilege ideology).
For reference this is the web.config, including an additional endpoint suggested by Mike to try and resolve this.
<?xml version="1.0"?>
<configuration>
<system.web>
<customErrors mode="Off"/>
<compilation><!--debug="true"-->
<buildProviders>
<remove extension=".svc"/>
<add extension=".svc" type="System.ServiceModel.Activation.ServiceBuildProvider, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
</buildProviders>
</compilation>
</system.web>
<!-- When deploying the service library project, the content of the config file must be added to the host's
app.config file. System.Configuration does not support config files for libraries. -->
<system.serviceModel>
<services>
<service behaviorConfiguration="blah"
name="WCFServ.EvalService">
<endpoint address="http://www.abcdomain.com/WCFServ/WCFServ.EvalService.svc"
binding="basicHttpBinding"
contract="WCFServ.IEvalService" />
<endpoint address="http://abcdomain.com/WCFServ/WCFServ.EvalService.svc"
binding="basicHttpBinding"
contract="WCFServ.IEvalService" />
<!--<endpoint address=""
binding="mexHttpBinding"
contract="IMetadataExchange" />-->
<!--<host>
<baseAddresses>
<add baseAddress="http://abcdomain.com/WCFServ/" />
</baseAddresses>
</host>-->
</service>
</services>
<behaviors>
<serviceBehaviors>
<behavior name="blah">
<serviceMetadata httpGetEnabled="true" />
<serviceDebug includeExceptionDetailInFaults="true" />
</behavior>
</serviceBehaviors>
</behaviors>
<serviceHostingEnvironment>
<baseAddressPrefixFilters>
<add prefix="http://www.abcdomain.com/WCFServ/"/>
</baseAddressPrefixFilters>
</serviceHostingEnvironment>
</system.serviceModel>
<!--http://localhost/WCFServ/WCFServ.EvalService.svc-->
<startup><supportedRuntime version="v2.0.50727"/></startup></configuration>
Because you're using absolute URLs as your endpoint addresses, WCF needs to see a specific host header in HTTP requests in order to bind to those addresses.
Web servers are no different; if they're configured for a specific host, the request headers must carry that host name or the server won't serve up content. Multiple host names can be bound to a web site, however, so sometimes a site may be tied to both www.example.com and example.com. Also, some web browsers, if you go to example.com and get a 404 or the DNS lookup fails, will automatically retry the request at www.example.com.
I think the easiest thing for you to do to resolve your issue is to modify your endpoint(s) so they are host neutral. For example:
<services>
<service behaviorConfiguration="blah" name="WCFServ.EvalService">
<endpoint address="/WCFServ/WCFServ.EvalService.svc"
binding="basicHttpBinding"
contract="WCFServ.IEvalService"/>
</service>
</services>
<!-- Just leave this out
<serviceHostingEnvironment>
<baseAddressPrefixFilters>
<add prefix="http://www.abcdomain.com/WCFServ/"/>
</baseAddressPrefixFilters>
</serviceHostingEnvironment>
-->
Make sure that you have endpoints defined without the www in your web.config.
This page has some good explanations about WCF addressing:
WCF Addressing In Depth.
Is your problem solved by adding the following attribute on your service class?
[ServiceBehavior(AddressFilterMode=AddressFilterMode.Any)]
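Applied to the service class from the config above, that would look like this (AddressFilterMode.Any makes the dispatcher accept messages regardless of the To address):

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class EvalService : IEvalService
{
    // implementation unchanged
}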
