Custom ASP.NET SqlMembershipProvider - handling connection string - asp.net-membership

I am creating a custom SqlMembershipProvider class to add some enhanced functionality to the base class. I'm getting caught up on handling the connection string, though. How can I read the Connection String Name from the configuration and make that available to the rest of the methods?
Right now I have:
public override void Initialize(string name, NameValueCollection config)
{
    base.Initialize(name, config);
    _ConnectionStringName = config["connectionStringName"];
}
But in other methods, the _ConnectionStringName variable is null:
SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings[_ConnectionStringName].ConnectionString)
What is the proper way to store the Connection String Name so it is available globally in my custom membership provider?
Thanks!

ProviderBase will throw a ConfigurationException if there are any entries left in the config collection by the time it gets it, so each provider removes its configuration entries before calling base.Initialize.
The issue, as you have found, is that you must read your own values from config before calling base.Initialize, because they have been stripped out by the time that call returns.
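A minimal sketch of that ordering, assuming _ConnectionStringName is a private field on your provider:
public override void Initialize(string name, NameValueCollection config)
{
    // Read the value first; the base Initialize consumes (removes) the entries it recognizes.
    _ConnectionStringName = config["connectionStringName"];

    base.Initialize(name, config);
}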
Sorry, I missed that at first glance.
The rest of this post is historical and while technically correct misses the salient issue here as enumerated above.
First - try WebConfigurationManager.ConnectionStrings.
WebConfigurationManager applies the web.config hierarchy, all the way from windows\microsoft.net\framework\2.0xxxx\web.config up to your application's own config.
This behaviour is not present in ConfigurationManager, which typically deals with machine.config to app.config.
If this does not solve your problem you must be overwriting the value elsewhere in your code, if indeed _ConnectionStringName is being properly assigned in Initialize.
First, set a breakpoint and ensure that _ConnectionStringName is being set as expected.
Then locate all references to the field and ensure that you do not have a bug.
This is assuming, of course, that _ConnectionStringName is a private field. If it is not, make it so and look for your compile error.

Not sure if this helps, but I was having a similar issue in needing to override the connection string in a subclass of SqlMembershipProvider.
This idea is not my own - I found it in the comments section of this forum posting:
http://forums.asp.net/p/997608/2209437.aspx
public override void Initialize(string name, NameValueCollection config)
{
    base.Initialize(name, config);

    string connectionString = ...; // ...what you want your connection string to be,
                                   // e.g. resolved from config["connectionStringName"]...

    // Set the private field of the base membership provider via reflection.
    System.Reflection.FieldInfo connectionStringField =
        GetType().BaseType.GetField("_sqlConnectionString",
            System.Reflection.BindingFlags.Instance
            | System.Reflection.BindingFlags.NonPublic);
    connectionStringField.SetValue(this, connectionString);
}
My apologies - I've never posted here before, so the formatting may be sub-par!

This is probably 6 months too late to help, but I was able to get the connection string this way:
using System.Web.Configuration;
Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
MembershipSection section = config.SectionGroups["system.web"].Sections["membership"] as MembershipSection;
string defaultProvider = section.DefaultProvider;
string connstringName = section.Providers[defaultProvider].ElementInformation.Properties["connectionStringName"].Value.ToString();
string val = config.ConnectionStrings.ConnectionStrings[connstringName].ConnectionString;
This makes the assumption that the default provider has a connection string property - but if you're subclassing SqlMembershipProvider there should always be one somewhere up the web.config chain (it's defined in machine.config, I believe).
All of this, to add one method to change the username.

Related

ASP.NET Identity: how to log in using a legacy password?

I have a site I'm updating to Web API with ASP.NET Identity 2.0.
It's a legacy site for which we need to allow users to keep using their old passwords, at least during a reasonable migration period.
Following this article, I've derived a new UserManager from the base UserManager and set up the PasswordHasher to hash with the old SHA1 algorithm.
My PasswordHasher looks like this:
public class SQLPasswordHasher : PasswordHasher
{
    public override string HashPassword(string password)
    {
        string cipherText = EncryptPassword(password);
        return cipherText;
    }

    public override PasswordVerificationResult VerifyHashedPassword(string hashedPassword, string providedPassword)
    {
        string cipherText = EncryptPassword(providedPassword);
        if (cipherText == hashedPassword)
        {
            return PasswordVerificationResult.SuccessRehashNeeded;
        }
        else
        {
            return PasswordVerificationResult.Failed;
        }
    }

    private string EncryptPassword(string plainText)
    {
        return System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(plainText, "sha1");
    }
}
When I register users with this code, I can see the passwords are being hashed and persisted in the database correctly... for the password 'foobar', the hashed value is fixed and recognizable, since this algorithm does not use a salt.
However, I cannot log in as these users. If I set a breakpoint in the new hasher, it never gets hit. Nor can I seem to hit a breakpoint anywhere in the account controller when trying to log in.
Thanks in advance.
I'm answering my own question, in the hopes that someone else may benefit.
The problem was that I couldn't work out what in the Web API service was being called when logging in. I finally realized that something called /Token was being set up as the URL to call in the app.js JavaScript.
Searching through the project's server-side sources and googling led me to this article, which pointed me to the ApplicationOAuthProvider.cs file in the 'Providers' folder of the template application.
The specific line of interest is where the method GrantResourceOwnerCredentials instantiates its own user manager, thus:
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
var userManager = context.OwinContext.GetUserManager<ApplicationUserManager>();
From there, all I had to do was add this line:
userManager.PasswordHasher = new SQLPasswordHasher();
and I could finally log in.
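Putting it together, the relevant part of GrantResourceOwnerCredentials ends up looking roughly like this (a sketch based on the template method shown above; the rest of the template's body is unchanged):
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
    var userManager = context.OwinContext.GetUserManager<ApplicationUserManager>();

    // Swap in the legacy SHA1 hasher so old password hashes verify correctly.
    userManager.PasswordHasher = new SQLPasswordHasher();

    // ...the template's existing FindAsync / claims-identity code follows here...
}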

Access to pre-interpolated bean validation message template in SpringMVC?

I'm using Spring Validation within a Spring MVC application that delegates validation to Hibernate Validator 5. I'm successfully able to have beans validated and have the messages interpolated by the validator. However, it's important that I also be able to have access to the message template itself, pre-interpolation.
For example, in some bean I have the validation @Size(min=5, max=15, message="{my.custom.message}"). In a messages.properties file I have the entry my.custom.message=test min {min} and max {max}. In my BindingResult, I see the ObjectError with error message "test min 5 and max 15", but I need to look a value up at this point based on the non-interpolated my.custom.message raw value.
Can this be done? If it can't out of the box, can someone point me in the right direction for how I might customize spring's LocalValidatorFactoryBean to preserve this?
Update
I'm looking at extending org.springframework.validation.beanvalidation.SpringValidatorAdapter, and wrapping the getArgumentsForConstraint to automatically append the pre-interpolated message to the returned list of arguments. The notion of exactly what these 'arguments' are and how they're used is unclear to me, but if it's purely used for message interpolation, it seems relatively safe for me to append at the end. Any reason this might not work? Problems it might cause? Better ideas?
Solution
Didn't find any great solutions other than my 'update' above, so I ended up subclassing LocalValidatorFactoryBean with this:
@Override
protected Object[] getArgumentsForConstraint(String objectName, String field, ConstraintDescriptor<?> descriptor) {
    if (null == descriptor) return super.getArgumentsForConstraint(objectName, field, descriptor);

    Object[] orig = super.getArgumentsForConstraint(objectName, field, descriptor);
    if (null == orig || orig.length < 1) return new Object[] { descriptor };

    Object[] retval = new Object[orig.length + 1];
    System.arraycopy(orig, 0, retval, 0, orig.length);
    retval[retval.length - 1] = descriptor;
    return retval;
}
In subsequent code, I look at the last object in this array and test to see if it's an instance of ConstraintDescriptor. Good enough I suppose.
However, it's important that I also be able to have access to the message template itself, pre-interpolation.
In which context do you need to access the template? If it is after validation, then getMessageTemplate() on ConstraintViolation gives you this. If it is within a constraint validator implementation, then you could use getDefaultConstraintMessageTemplate() on ConstraintValidatorContext.

MongoDB - override default Serializer for a C# primitive type

I'd like to change the representation of C# Doubles to rounded Int64 with a four decimal place shift in the serialization C# Driver's stack for MongoDB. In other words, store (Double)29.99 as (Int64)299900
I'd like this to be transparent to my app. I've had a look at custom serializers but I don't want to override everything and then switch on the Type with fallback to the default, as that's a bit messy.
I can see that RegisterSerializer() won't let me add one for an existing type, and that BsonDefaultSerializationProvider has a static list of primitive serializers and it's marked as internal with private members so I can't easily subclass.
I can also see that it's possible to RepresentAs Int64 for Doubles, but this is a cast not a conversion. I need essentially a cast AND a conversion in both serialization directions.
I wish I could just give the default serializer a custom serializer to override one of its own, but that would mean a dirty hack.
Am I missing a really easy way?
You can definitely do this, you just have to get the timing right. When the driver starts up there are no serializers registered. When it needs a serializer, it looks it up in the dictionary where it keeps track of the serializers it knows about (i.e. the ones that have been registered). Only if it can't find one in the dictionary does it start figuring out where to get one (including calling the serialization providers), and if it finds one it registers it.
The limitation in RegisterSerializer is there so that you can't replace an existing serializer that has already been used. But that doesn't mean you can't register your own if you do it early enough.
However, keep in mind that registering a serializer is a global operation, so if you register a custom serializer for double it will be used for all doubles, which could lead to unexpected results!
Anyway, you could write the custom serializer something like this:
public class CustomDoubleSerializer : BsonBaseSerializer
{
    public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        // Stored as a scaled Int64; shift four decimal places back (299900 -> 29.99).
        var rep = bsonReader.ReadInt64();
        return rep / 10000.0;
    }

    public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        // Apply the four-decimal-place shift from the question (29.99 -> 299900), rounded.
        var rep = (long)Math.Round((double)value * 10000);
        bsonWriter.WriteInt64(rep);
    }
}
And register it like this:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
You could test it using the following class:
public class C
{
    public int Id;
    public double X;
}
and this code:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
var c = new C { Id = 1, X = 29.99 };
var json = c.ToJson();
Console.WriteLine(json);
var r = BsonSerializer.Deserialize<C>(json);
Console.WriteLine(r.X);
You can also use your own serialization provider to tell Mongo which serializer to use for certain types, which I ended up doing to mitigate some of the timing issues mentioned when trying to override existing serializers. Here's an example of a serialization provider that overrides how decimals are serialized:
public class CustomSerializationProvider : IBsonSerializationProvider
{
    public IBsonSerializer GetSerializer(Type type)
    {
        if (type == typeof(decimal)) return new DecimalSerializer(BsonType.Decimal128);

        return null; // falls back to Mongo defaults
    }
}
If you return null from your custom serialization provider, it will fall back to using Mongo's default serialization provider.
Once you've written your provider, you just need to register it:
BsonSerializer.RegisterSerializationProvider(new CustomSerializationProvider());
I looked through the latest iteration of the driver's code and checked if there's some sort of backdoor to set custom serializers. I am afraid there's none; you should open an issue in the project's bug tracker if you think this needs to be looked at for future iterations of the driver (https://jira.mongodb.org/).
Personally, I'd open a ticket -- and if a quick workaround is necessary or required, I'd subclass DoubleSerializer, implement the new behavior, and then use Reflection to inject it into either MongoDB.Bson.Serialization.Serializers.DoubleSerializer.__instance or MongoDB.Bson.Serialization.BsonDefaultSerializationProvider.__serializers.
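A rough sketch of that reflection route, purely as an illustration - it assumes a hypothetical RoundedDoubleSerializer subclass of DoubleSerializer, and that the private static field really is named __instance and is writable:
// Untested sketch: replace the driver's built-in DoubleSerializer singleton via reflection.
var instanceField = typeof(MongoDB.Bson.Serialization.Serializers.DoubleSerializer)
    .GetField("__instance", System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic);
instanceField.SetValue(null, new RoundedDoubleSerializer()); // RoundedDoubleSerializer : DoubleSerializer (hypothetical)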

Entlib Cache.Contains NULL problem

I have a combined authorization and menu-structure system on our backend.
For performance reasons EntLib caching is used in the frontend client (MVC rel 1.0 website, IIS 5.1 local, IIS 6.0 server, no cluster).
Sometimes Cache.Contains will return true, but the cached value is NULL. I know for certain that I filled it correctly, so what can be the problem here?
EDIT: when I set the cache expiration to 1 minute and add the cacheKey 'A_Key', I see the key coming back when inspecting CurrentCacheState. When I view CurrentCacheState after 2 minutes, the key is still there. When I execute Contains, true is returned. When I execute Contains again, the cacheKey is gone!
Synchronization problem??
Regards,
Michel
Excerpt:
if (IntranetCaching.Cache.Contains(cacheKey))
{
    menuItems = (List<BoMenuItem>)IntranetCaching.Cache[cacheKey];
}
else
{
    using (AuthorizationServiceProxyHelper authorizationServiceProxyHelper = new AuthorizationServiceProxyHelper())
    {
        menuItems = authorizationServiceProxyHelper.Proxy.SelectMenuByUserAndApplication(APPNAME, userName, AuthorizationType.ENUM_LOGIN);
        IntranetCaching.Add(cacheKey, menuItems);
    }
}
And the cache helper:
public static class IntranetCaching
{
    public static ICacheManager Cache { get; private set; }

    static IntranetCaching()
    {
        Cache = CacheFactory.GetCacheManager();
    }

    public static void Add(string key, object value)
    {
        Cache.Add(
            key,
            value,
            CacheItemPriority.Normal,
            null,
            new Microsoft.Practices.EnterpriseLibrary.Caching.Expirations.AbsoluteTime(TimeSpan.FromMinutes(10)));
    }
}
Thanks, Michel, for following up on your own issue. I have been stuck with this all day, having identified that if I try to retrieve an item from the cache that is due to expire (plus 0 to 25 seconds), I get a NULL value. Codeplex has posted a workaround (of sorts) in their FAQ:
a. How do I avoid getting a null value from the CacheManager when the item is being refreshed? - Intermittently, you may encounter this situation. To work around this, create your own implementation of an ICacheItemExpiration. In the HasExpired() method, implement logic that will check whether the item has already expired and will update the item's value if it has expired. This method should always return false for the item not to be tagged as expired. As a result of returning false in the HasExpired() method, the item will not be refreshed and will contain the updated value as implemented in the method.
REF: link
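A minimal sketch of that workaround, assuming EntLib Caching's ICacheItemExpiration interface (HasExpired / Initialize / Notify); how the value actually gets refreshed is left as a comment, since that part depends on your application:
public class RefreshInsteadOfExpireExpiration : Microsoft.Practices.EnterpriseLibrary.Caching.ICacheItemExpiration
{
    private readonly TimeSpan lifetime;
    private DateTime lastRefreshUtc = DateTime.UtcNow;

    public RefreshInsteadOfExpireExpiration(TimeSpan lifetime)
    {
        this.lifetime = lifetime;
    }

    public void Initialize(Microsoft.Practices.EnterpriseLibrary.Caching.CacheItem owningCacheItem)
    {
        // No-op; the owning item is not needed for this simple sketch.
    }

    public bool HasExpired()
    {
        if (DateTime.UtcNow - lastRefreshUtc > lifetime)
        {
            // The item is logically stale: refresh its value here (e.g. re-query the
            // backend and re-add it through your cache helper), then reset the clock.
            lastRefreshUtc = DateTime.UtcNow;
        }

        // Always report "not expired" so the scavenger never removes the item
        // and Contains/GetData keep returning a non-null value.
        return false;
    }

    public void Notify()
    {
        // No-op.
    }
}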
I got the following response from Avanade (creators of Entlib):
Most likely, the BackgroundScheduler hasn't performed its sweeping yet. If you're going to examine the source code, the Contains method only checks if the specific key is present in the in-memory cache hashtable, while in the GetData method the code first checks if the item has expired and, if it has, removes the item from the cache.

Sarah Urmeneta
Global Technology & Solutions
Avanade, Inc.
entlib.support@avanade.com
That solution is working for me.
Still, the question remains why you can use 'Contains' at all when its outcome cannot be relied on in a sensible way.
Regards,
M.
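Given Avanade's explanation (Contains only checks the in-memory hashtable, while GetData also checks expiration), here is a sketch of the original excerpt rewritten around a single GetData call, treating null as "not cached"; it reuses the identifiers from the question:
// Avoids the Contains-then-index race: one GetData call, null means "not cached".
menuItems = IntranetCaching.Cache.GetData(cacheKey) as List<BoMenuItem>;
if (menuItems == null)
{
    using (AuthorizationServiceProxyHelper authorizationServiceProxyHelper = new AuthorizationServiceProxyHelper())
    {
        menuItems = authorizationServiceProxyHelper.Proxy.SelectMenuByUserAndApplication(APPNAME, userName, AuthorizationType.ENUM_LOGIN);
        IntranetCaching.Add(cacheKey, menuItems);
    }
}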

Enterprise Library Validation Block - Should validation be placed on class or interface?

I am not sure where the best place to put validation (using the Enterprise Library Validation Block) is? Should it be on the class or on the interface?
Things that may affect it:
Validation rules would not be changed in classes that implement the interface.
Validation rules would not be changed in classes that inherit from the class.
Inheritance will occur from the class in most cases - I suspect a few fringe cases will inherit from the interface (but I would try to avoid that).
The interface's main use is for DI, which will be done with the Unity block.
Given the way you are trying to use the Validation Block with DI, I don't think it's a problem if you set the attributes at the interface level. Also, I don't think it should create problems in the inheritance chain. However, I have mostly seen this block used at class level, with the intent of keeping interfaces from over-specifying things. IMO I don't see a big threat in doing this.
Be very careful here, your test is too simple.
This will not work as you expect for SelfValidation Validators or Class Validators, only for the simple property validators like you have there.
Also, if you are using the PropertyProxyValidator in an ASP.NET page, I don't believe it will work either, because it only looks at field validators, not inherited/implemented validators...
Yes, big holes in the VAB if you ask me.
For the sake of completeness I decided to write a small test to make sure it would work as expected and it does, I'm just posting it here in case anyone else wants it in future.
using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            ISpike spike = new Spike();
            spike.Name = "A really long name that will fail.";

            ValidationResults r = Validation.Validate<ISpike>(spike);
            if (!r.IsValid)
            {
                throw new InvalidOperationException("Validation error found.");
            }
        }
    }

    public class Spike : ConsoleApplication1.ISpike
    {
        public string Name { get; set; }
    }

    interface ISpike
    {
        [StringLengthValidator(2, 5)]
        string Name { get; set; }
    }
}
What version of Enterprise Library are you using for your code example? I tried it using Enterprise Library 5.0, but it didn't work.
I tracked it down to the following section of code w/in the EL5.0 source code:
// In namespace Microsoft.Practices.EnterpriseLibrary.Validation, public static class Validation:
public static ValidationResults Validate<T>(T target, ValidationSpecificationSource source)
{
    Type targetType = target != null ? target.GetType() : typeof(T);
    Validator validator = ValidationFactory.CreateValidator(targetType, source);
    return validator.Validate(target);
}
If the target object is defined, then target.GetType() will return the most specific class definition, NOT the interface definition.
My workaround is to replace your line:
ValidationResults r = Validation.Validate<ISpike>(spike);
With:
ValidationResults r = ValidationFactory.CreateValidator<ISpike>().Validate(spike);
This got it working for me.
