Is there a way to create multiple instances of CacheManager in Microsoft Enterprise Library programmatically, without depending on a configuration file?

We are trying to migrate to the Microsoft Enterprise Library Caching block. However, cache manager initialization seems to be tightly tied to the config file entries, and our application creates in-memory "containers" on the fly. Is there any way an instance of CacheManager can be instantiated on the fly using a pre-configured set of values (in-memory only)?

Enterprise Library 5 has a fluent configuration which makes it easy to programmatically configure the blocks. For example:
var builder = new ConfigurationSourceBuilder();

builder.ConfigureCaching()
       .ForCacheManagerNamed("MyCache")
       .WithOptions
           .UseAsDefaultCache()
           .StoreInIsolatedStorage("MyStore")
           .EncryptUsing.SymmetricEncryptionProviderNamed("MySymmetric");

var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.Current
    = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
Unfortunately, it looks like you need to configure the entire block at once so you wouldn't be able to add CacheManagers on the fly. (When I call ConfigureCaching() twice on the same builder an exception is thrown.) You can create a new ConfigurationSource but then you lose your previous configuration. Perhaps there is a way to retrieve the existing configuration, modify it (e.g. add a new CacheManager) and then replace it? I haven't been able to find a way.
Another approach is to use the Caching classes directly.
The following example uses the Caching classes to instantiate two CacheManager instances and stores them in a static Dictionary. No configuration required since it's not using the container. I'm not sure it's a great idea -- it feels a bit wrong to me. It's pretty rudimentary but hopefully helps.
public static Dictionary<string, CacheManager> caches = new Dictionary<string, CacheManager>();

static void Main(string[] args)
{
    IBackingStore backingStore = new NullBackingStore();
    ICachingInstrumentationProvider instrProv = new CachingInstrumentationProvider("myInstance", false, false,
        new NoPrefixNameFormatter());
    Cache cache = new Cache(backingStore, instrProv);
    BackgroundScheduler bgScheduler = new BackgroundScheduler(
        new ExpirationTask(null, instrProv),
        new ScavengerTask(0, int.MaxValue, new NullCacheOperation(), instrProv),
        instrProv);

    CacheManager cacheManager = new CacheManager(cache, bgScheduler, new ExpirationPollTimer(int.MaxValue));
    cacheManager.Add("test1", "value1");
    caches.Add("cache1", cacheManager);

    cacheManager = new CacheManager(new Cache(backingStore, instrProv), bgScheduler, new ExpirationPollTimer(int.MaxValue));
    cacheManager.Add("test2", "value2");
    caches.Add("cache2", cacheManager);

    Console.WriteLine(caches["cache1"].GetData("test1"));
    Console.WriteLine(caches["cache2"].GetData("test2"));
}

public class NullCacheOperation : ICacheOperations
{
    public int Count { get { return 0; } }
    public Hashtable CurrentCacheState { get { return new System.Collections.Hashtable(); } }
    public void RemoveItemFromCache(string key, CacheItemRemovedReason removalReason) { }
}
If the expiration and scavenging policies are the same, it might be better to create one CacheManager and then use intelligent key names to represent the different "containers", as sketched below. E.g. the key name could be in the format "{container name}:{item key}" (assuming that a colon will not appear in a container or key name).
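To make that idea concrete, here is a minimal sketch of such a wrapper; the CacheContainer class and the colon-delimited key format are purely illustrative and not part of Enterprise Library:

public class CacheContainer
{
    private readonly CacheManager cacheManager;
    private readonly string containerName;

    public CacheContainer(CacheManager cacheManager, string containerName)
    {
        this.cacheManager = cacheManager;
        this.containerName = containerName;
    }

    // Composite key, e.g. "orders:42" (assumes neither name contains a colon)
    private string MakeKey(string key)
    {
        return containerName + ":" + key;
    }

    public void Add(string key, object value)
    {
        cacheManager.Add(MakeKey(key), value);
    }

    public object GetData(string key)
    {
        return cacheManager.GetData(MakeKey(key));
    }

    public void Remove(string key)
    {
        cacheManager.Remove(MakeKey(key));
    }
}

Each logical "container" is then just a thin view over the single configured CacheManager, e.g. new CacheContainer(cacheManager, "orders").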

You can use a UnityContainer:
IUnityContainer unityContainer = new UnityContainer();
IContainerConfigurator configurator = new UnityContainerConfigurator(unityContainer);
configurator.ConfigureCache("MyCache1");
IContainerConfigurator configurator2 = new UnityContainerConfigurator(unityContainer);
configurator2.ConfigureCache("MyCache2");
// here you can access both MyCache1 and MyCache2:
var cache1 = unityContainer.Resolve<ICacheManager>("MyCache1");
var cache2 = unityContainer.Resolve<ICacheManager>("MyCache2");
And this is an extension class for IContainerConfigurator:
// Enclosing static class is required for the extension method (the class name here is arbitrary).
public static class ContainerConfiguratorExtensions
{
    public static void ConfigureCache(this IContainerConfigurator configurator, string configKey)
    {
        ConfigurationSourceBuilder builder = new ConfigurationSourceBuilder();
        DictionaryConfigurationSource configSource = new DictionaryConfigurationSource();

        // simple in-memory cache configuration
        builder.ConfigureCaching().ForCacheManagerNamed(configKey).WithOptions.StoreInMemory();
        builder.UpdateConfigurationWithReplace(configSource);

        EnterpriseLibraryContainer.ConfigureContainer(configurator, configSource);
    }
}
Using this approach you can maintain a static IUnityContainer object and add new caches, as well as reconfigure existing cache settings, anywhere you want.
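For example, the container could live in a small static holder so caches can be added and resolved from anywhere; the CacheFactory class below is just an illustration built on the ConfigureCache extension above:

public static class CacheFactory
{
    private static readonly IUnityContainer Container = new UnityContainer();

    // Registers a new in-memory cache manager under the given name (uses the ConfigureCache extension above).
    public static void AddCache(string cacheName)
    {
        var configurator = new UnityContainerConfigurator(Container);
        configurator.ConfigureCache(cacheName);
    }

    public static ICacheManager Get(string cacheName)
    {
        return Container.Resolve<ICacheManager>(cacheName);
    }
}

// e.g. CacheFactory.AddCache("MyCache3");
//      CacheFactory.Get("MyCache3").Add("key", "value");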

Related

DryIoC register configuration

I am working on a Xamarin project with Prism and DryIoC.
Currently I am setting up some custom environment-specific configuration; however, I am struggling with the IoC syntax for this.
I have the following code as part of my App.xaml.cs:
private void SetConfiguration(IContainerRegistry containerRegistry)
{
    // Get and deserialize config.json file from Configuration folder.
    var embeddedResourceStream = Assembly.GetAssembly(typeof(IConfiguration))
        .GetManifestResourceStream("MyVismaMobile.Configurations.Configuration.config.json");
    if (embeddedResourceStream == null)
        return;

    using (var streamReader = new StreamReader(embeddedResourceStream))
    {
        var jsonString = streamReader.ReadToEnd();
        var configuration = JsonConvert.DeserializeObject<Configuration.Configuration>(jsonString);

        // What to do with configuration, in order to DI it?
    }
}
What should I do with the configuration variable to inject it?
I have tried the following:
containerRegistry.RegisterSingleton<IConfiguration, Configuration>(c => configuration);
containerRegistry.Register<IConfiguration, Configuration>(c => configuration));
But the syntax is wrong with DryIoC.
RegisterSingleton and Register are meant for registering types where the container will then create the instances. You have your instance already, so you use
containerRegistry.RegisterInstance<IConfiguration>( configuration );
Instances are always singletons, obviously, so there's no separate RegisterInstanceSingleton...
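Putting that together, the SetConfiguration method from the question would look roughly like this (a sketch only; the resource name and Configuration types are copied from the question as-is):

private void SetConfiguration(IContainerRegistry containerRegistry)
{
    // Get and deserialize config.json from the embedded Configuration folder.
    var embeddedResourceStream = Assembly.GetAssembly(typeof(IConfiguration))
        .GetManifestResourceStream("MyVismaMobile.Configurations.Configuration.config.json");
    if (embeddedResourceStream == null)
        return;

    using (var streamReader = new StreamReader(embeddedResourceStream))
    {
        var jsonString = streamReader.ReadToEnd();
        var configuration = JsonConvert.DeserializeObject<Configuration.Configuration>(jsonString);

        // Register the already-created instance; the container hands out this same object
        // everywhere IConfiguration is requested.
        containerRegistry.RegisterInstance<IConfiguration>(configuration);
    }
}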

Protecting webapi with IdentityServer and Autofac - can't get claims

I'm trying to protect my Web API with IdentityServer and OpenID Connect, using Autofac and OWIN. But for some reason I can't get the user's claims. It seems that AccessTokenValidation is not triggered at all, which makes me think there is something wrong with the order of the declarations in my startup. Here is my startup:
public class Startup
{
    public void Configuration(IAppBuilder appBuilder)
    {
        // Add authentication
        this.AddAuthentication(appBuilder);

        HttpConfiguration config = new HttpConfiguration();
        var container = CreateAutofacContainer();
        var resolver = new AutofacWebApiDependencyResolver(container);
        config.DependencyResolver = resolver;

        WebApiConfig.Register(config);
        config.EnsureInitialized();

        // Register config - you can't add anything to the pipeline after this
        appBuilder.UseAutofacMiddleware(container);
        appBuilder.UseAutofacWebApi(config);
        appBuilder.UseWebApi(config);
    }

    private static IContainer CreateAutofacContainer()
    {
        var autofacBuilder = new ContainerBuilder();
        var assembly = Assembly.GetExecutingAssembly();

        // Register your Web API controllers.
        autofacBuilder.RegisterApiControllers(assembly);

        // For general logging implementation
        autofacBuilder.RegisterType<ConsoleLogger>().As<ILogger>();

        // Create empty usage context to be filled in OWIN pipeline
        IUsageContext usageContext = new RuntimeUsageContext();
        autofacBuilder.RegisterInstance(usageContext).As<IUsageContext>().SingleInstance();

        // We need to get the usage context built
        autofacBuilder.RegisterType<OIDCUsageContextProvider>().InstancePerRequest();

        var container = autofacBuilder.Build();
        return container;
    }

    private void AddAuthentication(IAppBuilder app)
    {
        var options = new IdentityServerBearerTokenAuthenticationOptions();
        options.Authority = "MYAUTHORITY";
        options.RequiredScopes = new[] { "openid", "profile", "email", "api" };
        options.ValidationMode = ValidationMode.ValidationEndpoint;
        app.UseIdentityServerBearerTokenAuthentication(options);

        // Add local claims if needed
        app.UseClaimsTransformation(incoming =>
        {
            // either add claims to incoming, or create new principal
            var appPrincipal = new ClaimsPrincipal(incoming);
            // incoming.Identities.First().AddClaim(new Claim("appSpecific", "some_value"));
            return Task.FromResult(appPrincipal);
        });
    }
}
I'm using the hybrid flow and the API is called from an SPA application. I've verified (by calling my identity server's endpoint directly) that the access token is valid and that claims are available. I also downloaded the IdentityServer.AccessTokenValidation project and attached it as a reference. When I set breakpoints in methods of that project, they never get called. That is why I think there is something wrong with my startup and the OWIN pipeline.
I've declared UsageContext in my startup. It is a class I'm using to collect claims and some configuration settings, to be injected into the actual controllers. I think it would be a nice way to handle this, so that in controllers there is always a valid UsageContext available.
I've read a lot of samples and examples but still haven't found exactly the same situation. I'll appreciate any attempts to point me in the right direction.
Regards,
Borre
Could it be your registration of UsageContext as a singleton? You mention this class contains claims, so this object should be resolved once per HTTP request, shouldn't it?
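For reference, a per-request registration in Autofac (instead of the RegisterInstance/SingleInstance call in CreateAutofacContainer above) would look roughly like this minimal sketch:

// Resolve a fresh usage context for each HTTP request instead of sharing one instance
autofacBuilder.RegisterType<RuntimeUsageContext>()
              .As<IUsageContext>()
              .InstancePerRequest();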
It turned out that there was a mysterious line in the AccessTokenValidation library that didn't work. I use that library to get claims. After changing that line everything seemed to work.
So basically my question is closed now and stuff works. But I'm still not totally convinced this is the right way to do this.
Thanks John for your comments!

Implementing a dynamic OAuthBearerServerOptions AccessTokenExpireTimeSpan value from data store

The context of this post involves ASP.NET Web API 2.2 + OWIN.
The environment is a single application with both the OWIN server and Web API.
Background:
In the Startup class, one must specify the OAuthBearerServerOptions, which are supplied to the OAuthBearerAuthenticationProvider. These options are created during startup of the OWIN server. On the OAuthBearerServerOptions, I must specify the AccessTokenExpireTimeSpan so that I can ensure expiry of tokens.
The Issue
I must be able to dynamically specify the expiration time span on a per-authentication-request basis. I am unsure if this can be done and was wondering:
Can it be done?
If yes, at which point could I perform this lookup and assignment of the expiration?
Content of start up config:
var config = new HttpConfiguration();
WebApiConfig.Register(config);

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

var OAuthServerOptions = new OAuthAuthorizationServerOptions()
{
    AllowInsecureHttp = true,
    TokenEndpointPath = new PathString("/OAuth"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(/* THIS NEEDS TO BE DYNAMIC */),
    Provider = new AuthorizationServerProvider()
};

//STOP!!!!!!!!
//DO NOT CHANGE THE ORDER OF THE BELOW app.Use statements!!!!!

//Token Generation
app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll); //this MUST come before oauth registration
app.UseOAuthAuthorizationServer(OAuthServerOptions);
app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions()
{
    Provider = new BearerProvider()
});

app.UseAutofacMiddleware(container); //this MUST come before UseAutofacWebApi
app.UseAutofacWebApi(config); //this MUST come before app.UseWebApi
app.UseWebApi(config);
I started messing with the BearerProvider class (see app.UseOAuthBearerAuthentication above for where I use this class), and specifically the ValidateIdentity method, but wasn't sure if that was the proper point in the auth workflow to set this value. It seemed appropriate, but I seek validation of my position.
public class BearerProvider : OAuthBearerAuthenticationProvider
{
    public override async Task RequestToken(OAuthRequestTokenContext context)
    {
        await base.RequestToken(context);

        // No token? Attempt to retrieve it from the query string
        if (String.IsNullOrEmpty(context.Token))
        {
            context.Token = context.Request.Query.Get("access_token");
        }
    }

    public override Task ValidateIdentity(OAuthValidateIdentityContext context)
    {
        //context.Ticket.Properties.ExpiresUtc = //SOME DB CALL TO FIND OUT EXPIRE VALUE..IS THIS PROPER?
        return base.ValidateIdentity(context);
    }
}
Thanks in advance!
Setting context.Options.AccessTokenExpireTimeSpan will actually change the global value and affect all requests, so that won't work for the original requirement.
The right place is the TokenEndpoint method.
public override Task TokenEndpoint(OAuthTokenEndpointContext context)
{
    // ...
    if (someCondition)
    {
        context.Properties.ExpiresUtc = GetExpirationDateFromDB();
    }
    // ...
}
So I was in the wrong spot entirely. What I ended up having to do was use my custom AuthorizationServerProvider (an OAuthAuthorizationServerProvider); in the overridden GrantResourceOwnerCredentials method of that custom class, I was able to set the timeout value by accessing the context.Options.AccessTokenExpireTimeSpan property.
public class AuthorizationServerProvider : OAuthAuthorizationServerProvider
{
    public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
    {
        //DO STUFF
        var expireValue = GetTimeOutFromSomeplace();
        context.Options.AccessTokenExpireTimeSpan = expireValue;
        //DO OTHER TOKEN STUFF
    }
}

How to create SubCommunities using the Social Business Toolkit Java API?

In the SDK Javadoc, the Community class does not have a setParentCommunity method, but the CommunityList class does have a getSubCommunities method, so there must be a programmatic way to set a parent Community's Uuid when creating a new Community. The REST API mentions a rel="http://www.ibm.com/xmlns/prod/sn/parentcommunity" element. While looking for clues I checked an existing subcommunity's XmlDataHandler nodes and found a link element. I tried getting the XmlDataHandler for a newly-created Community and adding a link node with href, rel and type nodes similar to those in the existing Community, but when trying to update or re-save the Community I got a bad request error. In fact, even when I called dataHandler.setData(n), where n was set as Node n = dataHandler.getData() without any changes, and then called updateCommunity or save, I got the same error, so it appears that manipulating the dataHandler XML is not valid.
What is the recommended way to specify a parent Community when creating a new Community so that it is created as a SubCommunity ?
The correct way to create a subcommunity programmatically is to modify the POST request body for community creation. Here is the link to the Connections 4.5 infocenter: http://www-10.lotus.com/ldd/appdevwiki.nsf/xpDocViewer.xsp?lookupName=IBM+Connections+4.5+API+Documentation#action=openDocument&res_title=Creating_subcommunities_programmatically_ic45&content=pdcontent
We do not have support in the SBT SDK to do this using the CommunityService APIs. We need to use the low-level Java APIs, via the Endpoint and ClientService classes, to call the REST APIs directly with the appropriate request body.
I'd go ahead and extend the CommunityService class and add a method based on its existing createCommunity (see line 605 of CommunityService.java):
https://github.com/OpenNTF/SocialSDK/blob/master/src/eclipse/plugins/com.ibm.sbt.core/src/com/ibm/sbt/services/client/connections/communities/CommunityService.java
public String createCommunity(Community community) throws CommunityServiceException {
    if (null == community) {
        throw new CommunityServiceException(null, Messages.NullCommunityObjectException);
    }
    try {
        Object communityPayload;
        try {
            communityPayload = community.constructCreateRequestBody();
        } catch (TransformerException e) {
            throw new CommunityServiceException(e, Messages.CreateCommunityPayloadException);
        }
        String communityPostUrl = resolveCommunityUrl(CommunityEntity.COMMUNITIES.getCommunityEntityType(), CommunityType.MY.getCommunityType());
        Response requestData = createData(communityPostUrl, null, communityPayload, ClientService.FORMAT_CONNECTIONS_OUTPUT);
        community.clearFieldsMap();
        return extractCommunityIdFromHeaders(requestData);
    } catch (ClientServicesException e) {
        throw new CommunityServiceException(e, Messages.CreateCommunityException);
    } catch (IOException e) {
        throw new CommunityServiceException(e, Messages.CreateCommunityException);
    }
}
You'll want to change your communityPostUrl to match...
https://greenhouse.lotus.com/communities/service/atom/community/subcommunities?communityUuid=2fba29fd-adfa-4d28-98cc-05cab12a7c43
where the communityUuid in the URL is the parent community's Uuid.
I followed @PaulBastide's recommendation and created a SubCommunityService class, currently only containing a method for creation. It wraps the CommunityService rather than subclassing it, since I found that preferable. Here's the code in case you want to reuse it:
public class SubCommunityService {

    private final CommunityService communityService;

    public SubCommunityService(CommunityService communityService) {
        this.communityService = communityService;
    }

    public Community createCommunity(Community community, String superCommunityId) throws ClientServicesException {
        Object constructCreateRequestBody = community.constructCreateRequestBody();
        ClientService clientService = communityService.getEndpoint().getClientService();
        String entityType = CommunityEntity.COMMUNITY.getCommunityEntityType();

        Map<String, String> params = new HashMap<>();
        params.put("communityUuid", superCommunityId);
        String postUrl = communityService.resolveCommunityUrl(entityType,
                CommunityType.SUBCOMMUNITIES.getCommunityType(), params);

        String newCommunityUrl = (String) clientService.post(postUrl, null, constructCreateRequestBody,
                ClientService.FORMAT_CONNECTIONS_OUTPUT);
        String communityId = newCommunityUrl.substring(newCommunityUrl.indexOf("communityUuid=")
                + "communityUuid=".length());
        community.setCommunityUuid(communityId);
        return community;
    }
}

Is it possible to protect Azure connection strings that are referenced with CloudConfigurationManager?

I've read the MSDN blog posts on protecting sensitive data in web.config by encrypting the contents and setting up a certificate on Azure so they can be read back.
However, there is top-secret data in my 'service configuration' .cscfg files in the Visual Studio Azure Deployment project. We store connection strings and other sensitive data here so that the test system, also on Azure, can be directed to equivalent test back-end services.
This data is accessed with CloudConfigurationManager (e.g. .GetSetting("AwsSecretKey")) rather than WebConfigurationManager as discussed in the blog post.
Is it possible to protect this data in a similar way? It's important that we have different AWS and SQL connection strings in test and production, and that the production keys are hidden from me and the rest of the dev staff.
YES, we do this with an X.509 cert uploaded in the deployment configuration. However, the settings are only as secure as your policies/procedures for protecting the private key! Here is the code we use in an Azure role to decrypt a value in the ServiceConfiguration:
/// <summary>Wrapper that will wrap all of our config based settings.</summary>
public static class GetSettings
{
    private static object _locker = new object();

    /// <summary>Locked dictionary that caches our settings as we look them up. Read access is ok but write access should be limited to only within a lock.</summary>
    private static Dictionary<string, string> _settingValues = new Dictionary<string, string>();

    /// <summary>Look up a given setting, first from the locally cached values, then from the environment settings, then from app settings. This handles caching those values in a static dictionary.</summary>
    /// <param name="settingsKey"></param>
    /// <returns></returns>
    public static string Lookup(string settingsKey, bool decrypt = false)
    {
        // have we loaded the setting value?
        if (!_settingValues.ContainsKey(settingsKey))
        {
            // lock our locker, no one else can get a lock on this now
            lock (_locker)
            {
                // now that we're alone, check again to see if someone else loaded the setting after we initially checked it
                // if no one has loaded it yet, still, we know we're the only one that's going to load it because we have a lock
                // and they will check again before they load the value
                if (!_settingValues.ContainsKey(settingsKey))
                {
                    var lookedUpValue = "";
                    // lookedUpValue = RoleEnvironment.IsAvailable ? RoleEnvironment.GetConfigurationSettingValue(settingsKey) : ConfigurationManager.AppSettings[settingsKey];
                    // CloudConfigurationManager.GetSetting added in 1.7 - if in Role, get from ServiceConfig else get from web config.
                    lookedUpValue = CloudConfigurationManager.GetSetting(settingsKey);
                    if (decrypt)
                        lookedUpValue = Decrypt(lookedUpValue);
                    _settingValues[settingsKey] = lookedUpValue;
                }
            }
        }
        return _settingValues[settingsKey];
    }

    private static string Decrypt(string setting)
    {
        var thumb = Lookup("DTSettings.CertificateThumbprint");
        X509Store store = null;
        try
        {
            store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadOnly);
            var cert = store.Certificates.Cast<X509Certificate2>().Single(xc => xc.Thumbprint == thumb);
            var rsaProvider = (RSACryptoServiceProvider)cert.PrivateKey;
            return Encoding.ASCII.GetString(rsaProvider.Decrypt(Convert.FromBase64String(setting), false));
        }
        finally
        {
            if (store != null)
                store.Close();
        }
    }
}
You then can leverage RoleEnvironment.IsAvailable to only decrypt values in the emulator or deployed environment, thereby running the web role in local IIS using an unencrypted App setting with key="MyConnectionString" for local debugging (without the emulator):
ContextConnectionString = GetSettings.Lookup("MyConnectionString", decrypt: RoleEnvironment.IsAvailable);
Then, to complete the example, we created a simple WinForms app with the following code to encrypt/decrypt the value with the given cert. Our production team maintains access to the production cert and encrypts the necessary values using the WinForms app. They then provide the DEV team with the encrypted value. You can find a full working copy of the solution here. Here's the main code for the WinForms app:
private void btnEncrypt_Click(object sender, EventArgs e)
{
    var thumb = tbThumbprint.Text.Trim();
    var valueToEncrypt = Encoding.ASCII.GetBytes(tbValue.Text.Trim());

    var store = new X509Store(StoreName.My, rbLocalmachine.Checked ? StoreLocation.LocalMachine : StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly);
    var cert = store.Certificates.Cast<X509Certificate2>().Single(xc => xc.Thumbprint == thumb);
    var rsaProvider = (RSACryptoServiceProvider)cert.PublicKey.Key;
    var cypher = rsaProvider.Encrypt(valueToEncrypt, false);
    tbEncryptedValue.Text = Convert.ToBase64String(cypher);
    store.Close();
    btnCopy.Enabled = true;
}

private void btnDecrypt_Click(object sender, EventArgs e)
{
    var thumb = tbThumbprint.Text.Trim();
    var valueToDecrypt = tbEncryptedValue.Text.Trim();

    var store = new X509Store(StoreName.My, rbLocalmachine.Checked ? StoreLocation.LocalMachine : StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly);
    var cert = store.Certificates.Cast<X509Certificate2>().Single(xc => xc.Thumbprint == thumb);
    var rsaProvider = (RSACryptoServiceProvider)cert.PrivateKey;
    tbDecryptedValue.Text = Encoding.ASCII.GetString(rsaProvider.Decrypt(Convert.FromBase64String(valueToDecrypt), false));
}

private void btnCopy_Click(object sender, EventArgs e)
{
    Clipboard.SetText(tbEncryptedValue.Text);
}
