I have built a custom MSBuild task that I use to convert 3D models to the format used by my engine. However, there are some optional behaviours that I would like to provide: for example, allowing the user to choose whether to compute the tangent array, whether to reverse the winding order of the indices, etc.
In the UI where you select the Build Action for each file, is it possible to define custom fields that would then be fed to the input parameters of the task, such as a "Compute Tangents" dropdown where you can choose True or False?
If that is possible, how? And are there any alternatives besides defining multiple tasks, e.g. ConvertModelTask, ConvertModelComputeTangentTask, ConvertModelReverseIndicesTask, etc.?
Everything in an MSBuild custom task has to have "settable properties" to drive behavior.
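For example, a minimal sketch (the task and property names are illustrative): each optional behaviour becomes a settable property on the task, and the project file supplies values as attributes on the task element.

    using Microsoft.Build.Utilities;

    public class ConvertModelTask : Task
    {
        // Optional behaviours exposed as settable properties. MSBuild sets them
        // from attributes on the task invocation in the project file, e.g.
        // <ConvertModelTask ComputeTangents="true" ReverseWindingOrder="false" />
        public bool ComputeTangents { get; set; }
        public bool ReverseWindingOrder { get; set; }

        public override bool Execute()
        {
            // ... conversion logic that consults the flags ...
            Log.LogMessage("ComputeTangents={0}, ReverseWindingOrder={1}",
                ComputeTangents, ReverseWindingOrder);
            return !Log.HasLoggedErrors;
        }
    }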
Option 1.
Define an enum-esque property to drive your behavior.
From memory, the MSBuild.ExtensionPack tasks do this type of thing; for example, MSBuild.ExtensionPack.Xml.XmlFile with TaskAction="ReadElementText".
The "TaskAction" is the enum-esque thing. I say "esque" because all you can do on the outside is set a string, and then in the code you convert the string to an internal enum.
See code here:
http://searchcode.com/codesearch/view/14325280
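A minimal sketch of that pattern (illustrative names, not the ExtensionPack's actual code):

    using Microsoft.Build.Framework;
    using Microsoft.Build.Utilities;

    public class ConvertModelTask : Task
    {
        private enum ModelAction { Convert, ComputeTangents, ReverseIndices }

        // From the outside this is just a string...
        [Required]
        public string TaskAction { get; set; }

        public override bool Execute()
        {
            // ...which is converted to an internal enum inside the task.
            ModelAction action;
            if (!System.Enum.TryParse(TaskAction, true, out action))
            {
                Log.LogError("Unknown TaskAction '{0}'.", TaskAction);
                return false;
            }

            // switch (action) { ... } to dispatch to the matching behavior.
            return !Log.HasLoggedErrors;
        }
    }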
Option 2: You can still use OO with tasks. Create an abstract base task for the shared logic, then subclass it; each subclass is the MSBuild task that you actually call.
SvnExport does this. SvnClient is the base class, and it has several subclasses.
See code here:
https://github.com/loresoft/msbuildtasks/blob/master/Source/MSBuild.Community.Tasks/Subversion/SvnExport.cs
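A minimal sketch of that shape, reusing the task names from the question (illustrative only):

    using Microsoft.Build.Utilities;

    // Shared conversion logic lives in the abstract base task.
    public abstract class ConvertModelTaskBase : Task
    {
        protected abstract bool ComputeTangents { get; }

        public override bool Execute()
        {
            // ... shared model conversion that consults ComputeTangents ...
            return !Log.HasLoggedErrors;
        }
    }

    // Each concrete MSBuild task is a thin subclass that fixes the behavior.
    public class ConvertModelTask : ConvertModelTaskBase
    {
        protected override bool ComputeTangents { get { return false; } }
    }

    public class ConvertModelComputeTangentTask : ConvertModelTaskBase
    {
        protected override bool ComputeTangents { get { return true; } }
    }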
You could probably dive deep with EnvDTE or UITypeEditor, but since you already have a custom task, why not keep it simple with a basic WinForm?
using Microsoft.Build.Utilities;

namespace ClassLibrary1
{
    public class Class1 : Task
    {
        // Set-only property: lets the task distinguish "not supplied" from "false".
        public bool ComputeTangents { set { _computeTangents = value; } }
        private bool? _computeTangents;

        public override bool Execute()
        {
            if (!_computeTangents.HasValue)
            {
                // The project file didn't say, so ask the user.
                // Form1 is a WinForms dialog defined elsewhere in this project,
                // with an accessible checkBox1.
                using (var form1 = new Form1())
                {
                    form1.ShowDialog();
                    _computeTangents = form1.checkBox1.Checked;
                }
            }

            Log.LogMessage("Compute Tangents: {0}", _computeTangents.Value);
            return !Log.HasLoggedErrors;
        }
    }
}
I'm trying to build a Gradle plugin that would allow the following:
myPluginConfig {
    something1 {
        // this is a closure
    }
    somethingElse {
        // this is another closure
    }
    // more closures here
}
To achieve this I'm fairly certain I need to use a NamedDomainObjectContainer to wrap a Closure collection, so I've set up the following plugin:
class SwitchDependenciesPlugin implements Plugin<Project> {
    void apply(Project project) {
        // add the extension
        project.getExtensions().myPluginConfig = project.container(Closure)

        // read the current configuration
        NamedDomainObjectContainer<Closure> config = project.myPluginConfig

        // test it out, always returns []
        System.out.println(config)
    }
}
What am I doing wrong? Do I need to use project.extensions.create instead? If so, how?
EDIT: my use case consists of adding dependencies according to some variables defined in the project hierarchy. For example, the following configuration would add the red project if the variable red is defined on project.ext, or gson otherwise:
myPluginConfig {
    redTrue {
        compile project(':red')
    }
    redFalse {
        compile 'com.google.code.gson:gson:2.4'
    }
    greenTrue {
        compile project(':green')
    }
}
For this use case I need to have dynamic names for myPluginConfig, and therefore either a Map or a NamedDomainObjectContainer.
Can you elaborate on what you are trying to model here? I think you have two options. One is to use NamedDomainObjectContainer. Here you need a class that represents "something". Have a look at the Gradle user guide chapter about maintaining multiple domain objects (see https://docs.gradle.org/current/userguide/custom_plugins.html#N175CF); in the user guide's sample, the "thing" is 'Book'. The built-in configuration syntax like the one you described above comes for free.
If you want a syntax like the above without the need for maintaining multiple domain objects, you can simply add a method that takes a Closure as a parameter to your extension class:

void somethingToConfigure(Closure closure) {
    // evaluate or store the closure here
}

In the build script this then supports myPluginConfig { somethingToConfigure { ... } }.
You cannot use Closure as the type for a NamedDomainObjectContainer, simply because the type you use must have a property called name and a public constructor with a single String parameter.
To overcome this, you can create a wrapper around Closure with a name field added.
Is there a way to pass data to dependencies registered with either Execution Context Scope or Lifetime Scope in Simple Injector?
One of my dependencies requires a piece of data in order to be constructed in the dependency chain. During HTTP and WCF requests, this data is easy to get to. For HTTP requests, the data is always present in either the query string or as a Request.Form parameter (and thus is available from HttpContext.Current). For WCF requests, the data is always present in the OperationContext.Current.RequestContext.RequestMessage XML, and can be parsed out. I have many command handler implementations that depend on an interface implementation that needs this piece of data, and they work great during HTTP and WCF scoped lifestyles.
Now I would like to be able to execute one or more of these commands using the Task Parallel Library so that it will execute in a separate thread. It is not feasible to move the piece of data out into a configuration file, class, or any other static artifact. It must initially be passed to the application either via HTTP or WCF.
I know how to create a hybrid lifestyle using Simple Injector, and already have one set up as hybrid HTTP / WCF / Execution Context Scope (command interfaces are async, and return Task instead of void). I also know how to create a command handler decorator that will start a new Execution Context Scope when needed. The problem is, I don't know how or where (or if I can) "save" this piece of data so that it is available when the dependency chain needs it to construct one of the dependencies.
Is it possible? If so, how?
Update
Currently, I have an interface called IProvideHostWebUri with two implementations: HttpHostWebUriProvider and WcfHostWebUriProvider. The interface and registration look like this:
public interface IProvideHostWebUri
{
    Uri HostWebUri { get; }
}

container.Register<IProvideHostWebUri>(() =>
{
    if (HttpContext.Current != null)
        return container.GetInstance<HttpHostWebUriProvider>();

    if (OperationContext.Current != null)
        return container.GetInstance<WcfHostWebUriProvider>();

    throw new NotSupportedException(
        "The IProvideHostWebUri service is currently only supported for HTTP and WCF requests.");
}, scopedLifestyle); // scopedLifestyle is the hybrid mentioned previously
So ultimately, unless I gut this approach, my goal would be to create a third implementation of this interface, which would then depend on some kind of context to obtain the Uri (which is just constructed from a string in the other two implementations).
@Steven's answer seems to be what I am looking for, but I am not sure how to make the ITenantContext implementation immutable and thread-safe. I don't think it will need to be made disposable, since it just contains a Uri value.
So what you are basically saying is that:
You have an initial request that contains some contextual information captured in the request 'header'.
During this request you want to kick off a background operation (on a different thread).
The contextual information from the initial request should stay available when running in the background thread.
The short answer is that Simple Injector does not contain anything that allows you to do so. The solution is in creating a piece of infrastructure that allows moving this contextual information along.
Say, for instance, you are processing command handlers (wild guess here ;-)); you can specify a decorator as follows:
public class BackgroundProcessingCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ITenantContext tenantContext;
    private readonly Container container;
    private readonly Func<ICommandHandler<T>> decorateeFactory;

    public BackgroundProcessingCommandHandlerDecorator(ITenantContext tenantContext,
        Container container, Func<ICommandHandler<T>> decorateeFactory) {
        this.tenantContext = tenantContext;
        this.container = container;
        this.decorateeFactory = decorateeFactory;
    }

    public void Handle(T command) {
        // Capture the contextual info in a local variable
        // NOTE: This object must be immutable and thread-safe.
        var tenant = this.tenantContext.CurrentTenant;

        // Kick off a new background operation
        Task.Factory.StartNew(() => {
            using (container.BeginExecutionContextScope()) {
                // Load a service that allows setting contextual information
                var context = this.container.GetInstance<ITenantContextApplier>();

                // Set the context for this thread, before resolving the handler
                context.SetCurrentTenant(tenant);

                // Resolve the handler
                var decoratee = this.decorateeFactory.Invoke();

                // And execute it.
                decoratee.Handle(command);
            }
        });
    }
}
Note that in the example I make use of an imaginary ITenantContext abstraction, assuming that you need to supply the commands with information about the current tenant, but any other sort of contextual information will obviously do as well.
The decorator is a small piece of infrastructure that allows you to process commands in the background and it is its responsibility to make sure all the required contextual information is moved to the background thread as well.
To be able to do this, the contextual information is captured in a closure and carried over to the background thread. I created an extra abstraction for this, namely ITenantContextApplier. Do note that the tenant context implementation can implement both the ITenantContext and the ITenantContextApplier interfaces. If, however, you define ITenantContextApplier in your composition root, it will be impossible for the application to change the context, since the application does not have a dependency on ITenantContextApplier.
Here's an example:
// Base library
public interface ITenantContext { }

// Business Layer
public class SomeCommandHandler : ICommandHandler<Some> {
    public SomeCommandHandler(ITenantContext context) { ... }
}

// Composition Root
public static class CompositionRoot {
    // Make the ITenantContextApplier private so nobody can see it.
    // Do note that this is optional; there's no harm in making it public.
    private interface ITenantContextApplier {
        void SetCurrentTenant(Tenant tenant);
    }

    private class AspNetTenantContext : ITenantContextApplier, ITenantContext {
        // Implement both interfaces
    }

    private class BackgroundProcessingCommandHandlerDecorator<T> { ... }

    public static void Bootstrap(Container container) {
        container.RegisterPerWebRequest<ITenantContext, AspNetTenantContext>();

        container.Register<ITenantContextApplier>(() =>
            container.GetInstance<ITenantContext>() as ITenantContextApplier);

        container.RegisterDecorator(typeof(ICommandHandler<>),
            typeof(BackgroundProcessingCommandHandlerDecorator<>));
    }
}
A different approach would be to just make the complete ITenantContext available to the background thread, but to be able to pull this off, you need to make sure that:
The implementation is immutable and thus thread-safe.
The implementation doesn't require disposing, because it will typically be disposed when the original request ends.
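For completeness, a minimal sketch of what such an immutable implementation might look like, assuming (as the decorator above does) that ITenantContext exposes a CurrentTenant property; the class name is illustrative:

    // All state is assigned once in the constructor, so the instance is
    // immutable and therefore safe to hand to a background thread.
    public class FixedTenantContext : ITenantContext
    {
        private readonly Tenant tenant;

        public FixedTenantContext(Tenant tenant) {
            this.tenant = tenant;
        }

        public Tenant CurrentTenant { get { return this.tenant; } }
    }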
We're using protobuf-net for sending log messages between services. When profiling a stress test under high concurrency, we see very high CPU usage, and TakeLock in RuntimeTypeModel is the culprit. The hot call stack looks something like:
*Our code...*
ProtoBuf.Serializer.SerializeWithLengthPrefix(class System.IO.Stream,!!0,valuetype ProtoBuf.PrefixStyle)
ProtoBuf.Serializer.SerializeWithLengthPrefix(class System.IO.Stream,!!0,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.Meta.TypeModel.SerializeWithLengthPrefix(class System.IO.Stream,object,class System.Type,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.Meta.TypeModel.SerializeWithLengthPrefix(class System.IO.Stream,object,class System.Type,valuetype ProtoBuf.PrefixStyle,int32,class ProtoBuf.SerializationContext)
ProtoBuf.ProtoWriter.WriteObject(object,int32,class ProtoBuf.ProtoWriter,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.BclHelpers.WriteNetObject(object,class ProtoBuf.ProtoWriter,int32,valuetype ProtoBuf.BclHelpers/NetObjectOptions)
ProtoBuf.Meta.TypeModel.GetKey(class System.Type&)
ProtoBuf.Meta.RuntimeTypeModel.GetKey(class System.Type,bool,bool)
ProtoBuf.Meta.RuntimeTypeModel.FindOrAddAuto(class System.Type,bool,bool,bool)
ProtoBuf.Meta.RuntimeTypeModel.TakeLock(int32&)
[clr.dll]
I see that we can use the new precompiler to get a speed boost, but I'm wondering whether that will get rid of the issue (it sounds like it doesn't use reflection); it would be a bit of work for me to integrate, so I haven't tested it yet. I also see the option to call Serializer.PrepareSerializer. My initial (small-scale) testing didn't make the prepare seem promising.
A little more info about the type we're serializing:
[ProtoContract]
public class SomeMessage
{
    [ProtoMember(1)]
    public SomeEnumType SomeEnum { get; set; }

    [ProtoMember(2)]
    public long SomeId { get; set; }

    [ProtoMember(3)]
    public string SomeString { get; set; }

    [ProtoMember(4)]
    public DateTime SomeDate { get; set; }

    [ProtoMember(5, DynamicType = true, OverwriteList = true)]
    public Collection<object> SomeArguments { get; set; }
}
Thanks for your help!
UPDATE 9/17
Thanks for your response! We're going to try the workaround you suggest and see if that fixes things.
This code lives in our logging system, so in the SomeMessage example, SomeString is really a format string (e.g. "Hello {0}") and the SomeArguments collection is a list of objects used to fill in the format string, just like String.Format. Before we serialize, we look at each argument and call DynamicSerializer.IsKnownType(argument.GetType()); if it isn't known, we convert it to a string first. I haven't looked at the ratios of the data, but I'm pretty sure we have a lot of different strings coming in as arguments.
Let me know if this helps. If needed, I'll try to get more details.
TakeLock is only used when the model is changing, for example because it is seeing a type for the first time. You shouldn't normally see TakeLock after the first time a particular type has been used. In most cases, calling Serializer.PrepareSerializer<SomeMessage>() should perform all the necessary initialization (and similarly for any other contracts you are using).
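For example, as part of application startup, before any concurrent serialization begins (one call per contract type):

    // Run once at startup so the model is fully initialized before
    // multiple threads start serializing concurrently.
    Serializer.PrepareSerializer<SomeMessage>();
    // ... and similarly for any other [ProtoContract] types you use.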
However! I wonder if perhaps this is also related to your use of DynamicType; what are the actual objects being used here? It might be that I need to tweak the logic here, so that it doesn't spend any time on that step. If you let me know the actual objects (so I can repro), I will try to run some tests.
As for whether the precompiler would change this: yes, it would. A fully compiled static model has a completely different implementation of the ProtoBuf.Meta.TypeModel.GetKey method, so it would never call TakeLock (you don't need to protect a model that can never change!). But you can actually do something very similar without needing to use the precompiler. Consider the following, run as part of your app's initialization:
static readonly TypeModel serializer;
...
var model = TypeModel.Create();
model.Add(typeof(SomeMessage), true);
// TODO add other contracts you use here
serializer = model.Compile();
This will create a fully static-compiled serializer assembly in memory (instead of a mutable model with individual operations compiled). If you now use serializer.Serialize(...) instead of Serializer.Serialize (i.e. the instance method on your stored TypeModel rather than the static method on Serializer), then it will essentially be doing something very similar to the precompiler, but without the need to actually precompile it (obviously this will only be available on "full" .NET). This will then never call TakeLock, as it is running a fixed model rather than a flexible model. It does, however, require you to know which contract types you use. You could use reflection to find these, by looking for all types with a given attribute:
static readonly TypeModel serializer;
...
var model = TypeModel.Create();
Type attributeType = typeof(ProtoContractAttribute);
foreach (var type in typeof(SomeMessage).Assembly.GetTypes()) {
    if (Attribute.IsDefined(type, attributeType)) {
        model.Add(type, true);
    }
}
serializer = model.Compile();
But emphasis: the above is a workaround; it sounds like there's a glitch, which I'll happily investigate if I can see an example where it actually happens. Most importantly: what are the objects in SomeArguments?
I'd like to change the representation of C# doubles to rounded Int64 values with a four-decimal-place shift in the MongoDB C# driver's serialization stack. In other words, store (Double)29.99 as (Int64)299900.
I'd like this to be transparent to my app. I've had a look at custom serializers, but I don't want to override everything and then switch on the type with a fallback to the default, as that's a bit messy.
I can see that RegisterSerializer() won't let me add one for an existing type, and that BsonDefaultSerializationProvider has a static list of primitive serializers and it's marked as internal with private members so I can't easily subclass.
I can also see that it's possible to RepresentAs Int64 for Doubles, but this is a cast not a conversion. I need essentially a cast AND a conversion in both serialization directions.
I wish I could just give the default serializer a custom serializer to override one of its own, but that would mean a dirty hack.
Am I missing a really easy way?
You can definitely do this, you just have to get the timing right. When the driver starts up there are no serializers registered. When it needs a serializer, it looks it up in the dictionary where it keeps track of the serializers it knows about (i.e. the ones that have been registered). Only if it can't find one in the dictionary does it start figuring out where to get one (including calling the serialization providers), and if it finds one it registers it.
The limitation in RegisterSerializer is there so that you can't replace an existing serializer that has already been used. But that doesn't mean you can't register your own if you do it early enough.
However, keep in mind that registering a serializer is a global operation, so if you register a custom serializer for double it will be used for all doubles, which could lead to unexpected results!
Anyway, you could write the custom serializer something like this:
public class CustomDoubleSerializer : BsonBaseSerializer
{
    public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        // Shift four decimal places back: 299900 -> 29.99
        var rep = bsonReader.ReadInt64();
        return rep / 10000.0;
    }

    public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        // Round and shift four decimal places: 29.99 -> 299900
        var rep = (long)Math.Round((double)value * 10000);
        bsonWriter.WriteInt64(rep);
    }
}
And register it like this:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
You could test it using the following class:
public class C
{
    public int Id;
    public double X;
}
and this code:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
var c = new C { Id = 1, X = 29.99 };
var json = c.ToJson();
Console.WriteLine(json);
var r = BsonSerializer.Deserialize<C>(json);
Console.WriteLine(r.X);
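Assuming the four-decimal-place shift above, the JSON written for c should show X as a 64-bit integer, e.g. { "_id" : 1, "X" : NumberLong(299900) }, and the round-tripped r.X should print 29.99 again.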
You can also use your own serialization provider to tell Mongo which serializer to use for certain types, which I ended up doing to mitigate some of the timing issues mentioned above when trying to override existing serializers. Here's an example of a serialization provider that overrides how decimals are serialized:
public class CustomSerializationProvider : IBsonSerializationProvider
{
    public IBsonSerializer GetSerializer(Type type)
    {
        if (type == typeof(decimal)) return new DecimalSerializer(BsonType.Decimal128);
        return null; // falls back to Mongo defaults
    }
}
If you return null from your custom serialization provider, it will fall back to using Mongo's default serialization provider.
Once you've written your provider, you just need to register it:
BsonSerializer.RegisterSerializationProvider(new CustomSerializationProvider());
I looked through the latest iteration of the driver's code and checked if there's some sort of backdoor to set custom serializers. I am afraid there's none; you should open an issue in the project's bug tracker if you think this needs to be looked at for future iterations of the driver (https://jira.mongodb.org/).
Personally, I'd open a ticket -- and if a quick workaround is necessary, I'd subclass DoubleSerializer, implement the new behavior, and then use reflection to inject it into either MongoDB.Bson.Serialization.Serializers.DoubleSerializer.__instance or MongoDB.Bson.Serialization.BsonDefaultSerializationProvider.__serializers.
I am not sure where the best place to put validation (using the Enterprise Library Validation Block) is: should it be on the class or on the interface?
Things that may affect it:
Validation rules would not be changed in classes which implement the interface.
Validation rules would not be changed in classes which inherit from the class.
Inheritance will occur from the class in most cases; I suspect some fringe cases will inherit from the interface (but I would try to avoid that).
The interface's main use is for DI, which will be done with the Unity block.
Given the way you are trying to use the Validation Block with DI, I don't think it's a problem if you set the attributes at the interface level. I also don't think it should create problems in the inheritance chain. However, I have mostly seen this block used at the class level, with the intent of keeping interfaces from over-specifying things. IMO, I don't see a big threat in doing this.
Be very careful here: your test is too simple.
This will not work as you expect for self-validation validators or class validators, only for simple property validators like the one you have there.
Also, if you are using the PropertyProxyValidator in an ASP.NET page, I don't believe it will work either, because it only looks at field validators, not inherited/implemented validators...
Yes, big holes in the VAB if you ask me.
For the sake of completeness, I decided to write a small test to make sure it would work as expected, and it does. I'm just posting it here in case anyone else wants it in the future.
using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            ISpike spike = new Spike();
            spike.Name = "A really long name that will fail.";

            ValidationResults r = Validation.Validate<ISpike>(spike);
            if (!r.IsValid)
            {
                throw new InvalidOperationException("Validation error found.");
            }
        }
    }

    public class Spike : ConsoleApplication1.ISpike
    {
        public string Name { get; set; }
    }

    interface ISpike
    {
        [StringLengthValidator(2, 5)]
        string Name { get; set; }
    }
}
What version of Enterprise Library are you using for your code example? I tried it using Enterprise Library 5.0, but it didn't work.
I tracked it down to the following section of code within the EL 5.0 source code:
namespace Microsoft.Practices.EnterpriseLibrary.Validation
{
    public static class Validation
    {
        public static ValidationResults Validate<T>(T target, ValidationSpecificationSource source)
        {
            Type targetType = target != null ? target.GetType() : typeof(T);

            Validator validator = ValidationFactory.CreateValidator(targetType, source);
            return validator.Validate(target);
        }
    }
}
If the target object is defined, then target.GetType() will return the most specific class definition, NOT the interface definition.
My workaround is to replace your line:
ValidationResults r = Validation.Validate<ISpike>(spike);
With:
ValidationResults r = ValidationFactory.CreateValidator<ISpike>().Validate(spike);
This got it working for me.