I'm working with LINQ expression trees (from the db4o/Mainsoft/Mono port) on the Compact Framework. Since System.Reflection.Emit doesn't exist there, I can't compile my LambdaExpressions into delegates, which I want to do for performance reasons.
I thought maybe I could transform my expression tree into IL and basically provide the missing Emit functionality that way, but then I realized that I'd have to either run a WinCE-based ILASM on it or write my own PE headers and assembly metadata.
I'd much rather have ILASM available. Is it?
Apparently, I can compile Mono.Cecil for use under the Compact Framework, which will allow me to emit and load assemblies.
If you want to use lambda expressions on CF you don't need ILASM or System.Reflection.Emit. The C# compiler for CF supports lambda expressions, but the CF base libraries do not include the Expression classes. If you add a reference to an assembly with correctly named (and correctly implemented) expression classes, you enable lambda expressions.
Thank goodness, such an assembly has already been implemented (http://evain.net/blog/articles/2008/09/22/linq-expression-trees-on-the-compact-framework) - I use it with db4o data access and with the LINQ IQueryableToolkit for SQL CE, and it works well.
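For illustration, here's a minimal sketch of what that enables, assuming the referenced assembly supplies the System.Linq.Expressions types (and, since there is no Reflection.Emit on CF, presumably an interpreter behind Compile()):

using System;
using System.Linq.Expressions;

class Demo
{
    static void Main()
    {
        // The CF C# compiler turns this lambda into an expression tree
        // using whatever Expression classes are in scope.
        Expression<Func<int, int>> square = x => x * x;

        // On CF, Compile() is assumed to be interpreter-backed rather than
        // IL-emitting, so expect it to be slower than on the desktop CLR.
        Func<int, int> f = square.Compile();
        Console.WriteLine(f(5)); // prints 25
    }
}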
When creating a new app in Xamarin Forms I see these two options:
Configure your Forms App
Shared Code:
Use .NET Standard
Use Shared Library
Can someone explain the difference? I looked at the help and I am still confused, so I'd appreciate any advice. Not sure if it helps, but this app is self-contained and no code in it will need to be shared with any other application.
In terms of what you can achieve with both, it is the same. So, in the end, it's mostly a matter of taste.
The biggest difference is that a shared project is compiled into the app itself. It is nothing more than it says on the tin: it's a shared folder that you can use in all platform projects. Using platform-specific code is done through compiler directives.
With a .NET Standard project, you will get a physical binary. It is a project of its own. You can reuse it in other .NET Standard projects, although you already mentioned you won't be using it for that. Executing platform-specific code requires a slightly different approach, using the DependencyService.
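To make the contrast concrete, here is a rough sketch of both approaches. IDeviceInfo and AndroidDeviceInfo are made-up names, not Xamarin APIs; only DependencyService and the platform symbols are standard:

// Shared project approach: platform code is chosen at compile time
// with directives (__ANDROID__ and __IOS__ are the usual Xamarin symbols).
public static class DeviceInfoShared
{
    public static string DeviceName()
    {
#if __ANDROID__
        return Android.OS.Build.Model;
#elif __IOS__
        return UIKit.UIDevice.CurrentDevice.Name;
#else
        return "Unknown";
#endif
    }
}

// .NET Standard approach: the shared binary only sees an interface...
public interface IDeviceInfo
{
    string DeviceName { get; }
}

// ...each platform project implements and registers it, e.g. on Android:
// [assembly: Xamarin.Forms.Dependency(typeof(AndroidDeviceInfo))]
// public class AndroidDeviceInfo : IDeviceInfo
// {
//     public string DeviceName => Android.OS.Build.Model;
// }

// ...and shared code resolves it at runtime:
// string name = Xamarin.Forms.DependencyService.Get<IDeviceInfo>().DeviceName;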
Seeing that they chose to replace the PCL with .NET Standard but keep the shared project suggests that the shared project is here to stay for a while. I tend to like the .NET Standard library more: it feels cleaner and forces you to write cleaner code. Also, .NET Standard isn't going anywhere soon, and if you decide down the road that code should be reused, you'll have the ability to do so.
A good overview, together with pros and cons can be found in the Microsoft Docs: https://learn.microsoft.com/en-us/xamarin/cross-platform/app-fundamentals/code-sharing
I've recently had to look for a C# port of the Protocol Buffers library originally developed by Google. And guess what, I found two projects, each owned by a very well-known person here: protobuf-csharp-port, written by Jon Skeet, and protobuf-net, written by Marc Gravell. My question is simple: which one should I choose?
I quite like Marc's solution as it seems to me closer to the C# philosophy (for instance, you can just add attributes to the properties of an existing class) and it looks like it can support .NET built-in types such as System.Guid.
I am sure both of them are really great projects, but what's your opinion?
I agree with Jon's points; if you are coding over multiple environments, then his version gives you a similar API to the other "core" implementations. protobuf-net is much more similar to how most of the .NET serializers are implemented, so is more familiar (IMO) to .NET devs. And as Jon notes - the raw binary output should be identical so you can re-implement with a different API if you need to later.
Some points re protobuf-net that are specific to this implementation:
works with existing types (not just generated types from .proto) - see the sketch after this list
works under things like WCF and memcached
can be used to implement ISerializable for existing types
supports inheritance* and serialization callback methods
supports common patterns such as ShouldSerialize[name]
works with existing decorated types (XmlType/XmlElement or DataContract/DataMember) - meaning (for example) that LINQ-to-SQL models serialize out-of-the-box (as long as serialization is enabled in the DBML)
in v2, works for POCO types without any attributes
in v2, works in .NET 1.1 (not sure this is a huge selling feature) and most other frameworks (including monotouch - yay!)
possibly (not yet implemented) v2 might support full-graph* serialization (not just tree serialization)
(*=these features use 100% valid protobuf binary, but which might be hard to consume from other languages)
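As a rough illustration of the "existing types" and ShouldSerialize[name] points above, a minimal sketch assuming the protobuf-net attribute model (Person is a made-up type):

using System.IO;
using ProtoBuf;

[ProtoContract]
public class Person
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
    [ProtoMember(3)] public string Nickname { get; set; }

    // Conditional serialization via the ShouldSerialize[name] pattern:
    // Nickname is only written when it has a value.
    public bool ShouldSerializeNickname()
    {
        return !string.IsNullOrEmpty(Nickname);
    }
}

class Demo
{
    static void Main()
    {
        var person = new Person { Id = 1, Name = "Ada" };
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, person);        // standard protobuf binary
            ms.Position = 0;
            var copy = Serializer.Deserialize<Person>(ms);
        }
    }
}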
Are you using other languages in your project as well? If so, my C# port will let you write similar code on all platforms. If not, Marc's port is probably more idiomatic C# to start with. (I've tried to make my code "feel" like normal C#, but the design is clearly based on the Java code to start with, deliberately so that it's familiar to those using Java as well.)
Of course one of the beauties of this is that you can change your mind later and be confident that all your data will still be valid via the other project - they should be absolutely binary compatible (in terms of serialized data), as far as I'm aware.
According to its GitHub project site, protobuf-csharp-port has now been folded into the main Google Protocol Buffers project, so it will be the official .NET implementation of protobuf 3. protobuf-net, however, had its last release in 2013, although there have been some recent commits on GitHub.
I just switched from protobuf-csharp-port to protobuf-net because:
protobuf-net is more ".NET like", i.e. it uses attribute decorations on members to drive serialization instead of code generation.
If you want to compile protobuf-csharp-port .proto files you have to use a two-step process: compile with protoc to a .protobin, then compile that with ProtoGen. protobuf-net does this in one step.
In my case I want to use protocol buffers to replace an XML-based communication model between a .NET client and a J2EE backend. Since I'm already using code generation, I'll go for Jon's implementation.
For projects not requiring java interop I'd choose Marc's implementation, especially since v2 allows working without annotations.
I have used T4 to generate partial classes from some input file (XML, etc) and then hand code additional partial bits onto those generated classes.
Is it possible to go the other way: to hand-craft partial classes and have T4 generate the boilerplate bits onto them?
Obviously I can't use reflection to find the classes, since the code isn't compiled yet, but I see Visual Studio inspecting uncompiled code for various utilities. Perhaps Visual Studio offers some feature to support this that I don't know about. Long shot, I guess.
Thanks
Also, you can use T4 with VS's CodeModel to read the code in your project without compiling and then generate from that metadata.
There's some pointers to examples here: http://blogs.msdn.com/b/garethj/archive/2009/09/25/dte-and-t4-better-together.aspx
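For example, a hostspecific template can reach the DTE and enumerate classes roughly like this. This is a sketch only: it handles top-level classes (real templates need to recurse into namespaces), and the generated boilerplate body is hypothetical:

<#@ template hostspecific="true" language="C#" #>
<#@ assembly name="EnvDTE" #>
<#@ import namespace="System" #>
<#@ import namespace="EnvDTE" #>
<#
    // Ask the templating host for the DTE, then walk the code model
    // of the project that contains this template.
    var dte = (DTE)((IServiceProvider)this.Host).GetService(typeof(DTE));
    var project = dte.Solution.FindProjectItem(Host.TemplateFile).ContainingProject;
    foreach (ProjectItem item in project.ProjectItems)
    {
        if (item.FileCodeModel == null) continue;
        foreach (CodeElement element in item.FileCodeModel.CodeElements)
        {
            // Top-level classes only; nested namespaces need recursion.
            if (element.Kind != vsCMElement.vsCMElementClass) continue;
#>
partial class <#= element.Name #>
{
    // generated boilerplate for <#= element.FullName #> would go here
}
<#
        }
    }
#>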
Actually, T4 is used this way frequently. Yes, it requires reflection, but partial classes compile even if bits of them aren't generated yet. I would look at the examples of generating strongly typed views described here for uses of reflection to generate new files.
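A sketch of that reflection flavour: load the assembly from a previous build and emit the boilerplate halves of the partials. The bin\Debug\MyApp.dll path is a placeholder, and this works precisely because the hand-written halves compile on their own:

<#@ template language="C#" #>
<#@ import namespace="System" #>
<#@ import namespace="System.Reflection" #>
<#
    // Reflect over the last successful build, not the current source.
    var asm = Assembly.LoadFrom(@"bin\Debug\MyApp.dll");
    foreach (var type in asm.GetExportedTypes())
    {
        if (!type.IsClass) continue;
#>
partial class <#= type.Name #>
{
    // generated boilerplate here
}
<#
    }
#>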
Consider a C++ API defined as a series of __declspec(dllexport/dllimport) classes.
Further, assume that the caller is never permitted to call the ordinary operator new(size_t) on these classes. Either a static factory method does the new-ing or there is a class-specific operator new. And ditto on the delete side as needed (frequently just a virtual destructor).
Now, if you compile and link a DLL and an implib with the tools from VS2010, can you hand that implib and DLL to a user of VS2005 and expect it to work?
MFC is not involved here at all.
I'd be particularly grateful for any reference to a relatively formal Microsoft statement on the subject.
So long as the name mangling on the C++ API is identical (it is), and the API does not take STL-specific parameter types such as basic_string or std::map, whose implementations may have changed between releases of the compiler (and they have), it should just work.
Of course, you'll want to make sure you either compiled your DLL using /MT mode (static linked runtimes), or include the redistributables for VS2010 runtimes with your supplied libraries and link targets.
EDIT: Expanding on "don't pass in types that have version-specific implementations": a partial list is most easily found by looking at the exports of MSVCP100.DLL.
cd %VS100COMNTOOLS%\..\..\VC\redist\x86\Microsoft.VC100.CRT
DUMPBIN /exports MSVCP100.DLL
The next issue will be header-only implementations of things like map or set which have changed under the hood between versions of the compiler.
This is why it's highly recommended that only scalar types be passed across boundaries between memory arenas. And thus, simple tests will pass, and be reliable.
You have not mentioned whether you used MFC to create the DLLs. If you have, regular MFC DLLs should work, but I don't think extension DLLs will, as the latter link against the MFC DLLs. I am including links for your reference.
http://www.codeguru.com/cpp/cpp/cpp_mfc/tutorials/article.php/c4017
http://www.experts-exchange.com/Programming/System/Windows__Programming/MFC/Q_20385543.html
http://msdn.microsoft.com/en-us/library/26h8x9sy%28v=VS.100%29.aspx
EDIT
If it's a normal DLL, there should not be any problem. It also depends on the linkage type.
I was wondering if any of you have successfully linked a VC6 static library into a VC9 project before?
If so, is there anything I need to pay attention to?
That idea is a non-starter.
The VC6 static library will need to link against the same CRT as the VC9 one in order to avoid multiply defined symbols, mismatching heap implementations and other nastiness. That won't be an easy task as the VC compilers make assumptions about the contents of the CRT.
The layout of structs and classes will differ between VC6 and VC9, even though the declarations may match exactly, the objects won't be compatible.
If you need to do this, your best bet would be to wrap the VC6 static library in a VC6 dynamic library that provides a C-style interface, and access that from VC9.
I'd say no.
Why not just build it in VC6?