I am attempting to use Hammock as a library to enable easy access to a REST API.
When I add the package using NuGet in Visual Studio 2010, it adds two references:
Hammock
Hammock.ClientProfile
However, when I attempt to use the Hammock classes and methods, it tells me there are duplicate implementations of certain classes. Using fully qualified namespaces does not seem to help either.
Is it the case that one should use either Hammock or Hammock.ClientProfile, but never both at the same time?
If so, why?
I have contacted the creator, and he stated that there is no difference between the libraries; he was at one time planning to implement a server-side library but never followed through on it.
Personally, I am using Hammock (and not Hammock.ClientProfile).
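For what it's worth, with only the Hammock reference in place the ambiguity errors go away and basic usage compiles. A minimal sketch (the endpoint and resource path are hypothetical):

    using System;
    using Hammock;

    class Demo
    {
        static void Main()
        {
            // With a single Hammock.dll referenced, these types resolve unambiguously.
            var client = new RestClient { Authority = "https://api.example.com" }; // hypothetical API
            var request = new RestRequest { Path = "users.json" };                 // hypothetical resource

            RestResponse response = client.Request(request);
            Console.WriteLine(response.Content);
        }
    }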
When creating a new app in Xamarin Forms I see these two options:
Configure your Forms App
Shared Code:
Use .NET Standard
Use Shared Library
Can someone explain the difference? I looked at the help and I am still confused, so I'd appreciate any advice. Not sure if it helps, but this app is self-contained and no code in it will need to be shared with any other application.
In terms of what you can achieve, both are the same, so in the end it's mostly a matter of taste.
The biggest difference is that a shared project is compiled into the app itself. It is exactly what it says on the tin: a shared folder that you can use in all platform projects. Platform-specific code is handled through compiler directives.
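For example, a minimal sketch of a shared-project file (__ANDROID__ and __IOS__ are the symbols the Xamarin platform heads define; the class is invented):

    // This file is compiled separately into each platform project,
    // so platform differences are resolved at compile time.
    public static class DeviceInfo
    {
        public static string PlatformName()
        {
    #if __ANDROID__
            return "Android";
    #elif __IOS__
            return "iOS";
    #else
            return "Unknown";
    #endif
        }
    }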
With a .NET Standard project, you get a physical binary. It is a project of its own, and you can reuse it in other .NET Standard projects, although you already mentioned you won't be doing that. Executing platform-specific code requires a slightly different approach, using the DependencyService.
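A minimal sketch of that pattern (the interface and names are invented; Dependency and DependencyService are the Xamarin.Forms types):

    // In the .NET Standard project: only an interface.
    public interface IPlatformInfo
    {
        string PlatformName();
    }

    // In each platform project: an implementation, registered so
    // Xamarin.Forms can locate it at runtime.
    [assembly: Xamarin.Forms.Dependency(typeof(MyApp.Droid.PlatformInfo))]
    namespace MyApp.Droid
    {
        public class PlatformInfo : IPlatformInfo
        {
            public string PlatformName() => "Android";
        }
    }

    // Back in the shared .NET Standard code:
    // string name = Xamarin.Forms.DependencyService.Get<IPlatformInfo>().PlatformName();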
The fact that they chose to replace the PCL with .NET Standard but kept the shared project suggests that shared projects are here to stay for a while. I tend to like the .NET Standard library more: it feels cleaner and forces you to write cleaner code. Also, .NET Standard isn't going anywhere soon, and if you decide down the road that code should be reused, you have the ability to do so.
A good overview, together with pros and cons can be found in the Microsoft Docs: https://learn.microsoft.com/en-us/xamarin/cross-platform/app-fundamentals/code-sharing
I've recently had to look for a C# port of the Protocol Buffers library originally developed by Google. And guess what, I found two projects, each owned by a very well-known person here: protobuf-csharp-port, written by Jon Skeet, and protobuf-net, written by Marc Gravell. My question is simple: which one should I choose?
I quite like Marc's solution as it seems to me closer to the C# philosophy (for instance, you can just add attributes to the properties of an existing class), and it looks like it can support .NET built-in types such as System.Guid.
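For instance, decorating an existing class with protobuf-net looks roughly like this (a minimal sketch based on its documented attribute usage; the Person type is just an example):

    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class Person              // an existing class, simply decorated
    {
        [ProtoMember(1)] public int Id { get; set; }
        [ProtoMember(2)] public string Name { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            var person = new Person { Id = 42, Name = "Fred" };

            using (var file = File.Create("person.bin"))
                Serializer.Serialize(file, person);      // writes protobuf binary

            using (var file = File.OpenRead("person.bin"))
            {
                Person copy = Serializer.Deserialize<Person>(file);
            }
        }
    }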
I am sure both of them are really great projects, but what's your opinion?
I agree with Jon's points; if you are coding over multiple environments, then his version gives you a similar API to the other "core" implementations. protobuf-net is much more similar to how most of the .NET serializers are implemented, so is more familiar (IMO) to .NET devs. And as Jon notes - the raw binary output should be identical so you can re-implement with a different API if you need to later.
Some points re protobuf-net that are specific to this implementation:
works with existing types (not just generated types from .proto)
works under things like WCF and memcached
can be used to implement ISerializable for existing types
supports inheritance* and serialization callback methods
supports common patterns such as ShouldSerialize[name] (see the sketch after this list)
works with existing decorated types (XmlType/XmlElement or DataContract/DataMember) - meaning (for example) that LINQ-to-SQL models serialize out-of-the-box (as long as serialization is enabled in the DBML)
in v2, works for POCO types without any attributes
in v2, works in .NET 1.1 (not sure this is a huge selling feature) and most other frameworks (including monotouch - yay!)
possibly (not yet implemented) v2 might support full-graph* serialization (not just tree serialization)
(*=these features use 100% valid protobuf binary, but which might be hard to consume from other languages)
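As a sketch of the ShouldSerialize[name] pattern mentioned above (the Order type and the rule are invented for illustration):

    using ProtoBuf;

    [ProtoContract]
    public class Order
    {
        [ProtoMember(1)] public int Id { get; set; }
        [ProtoMember(2)] public decimal Discount { get; set; }

        // Conventional conditional-serialization hook: the serializer
        // calls this to decide whether to write Discount at all.
        public bool ShouldSerializeDiscount()
        {
            return Discount != 0m;
        }
    }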
Are you using other languages in your project as well? If so, my C# port will let you write similar code on all platforms. If not, Marc's port is probably more idiomatic C# to start with. (I've tried to make my code "feel" like normal C#, but the design is clearly based on the Java code to start with, deliberately so that it's familiar to those using Java as well.)
Of course one of the beauties of this is that you can change your mind later and be confident that all your data will still be valid via the other project - they should be absolutely binary compatible (in terms of serialized data), as far as I'm aware.
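To give a feel for the deliberately Java-like shape, usage of my port looks roughly like this, assuming a Person message generated from a .proto file (the names are illustrative):

    using System.IO;

    class Demo
    {
        static void Main()
        {
            // Generated messages are immutable and built via builders,
            // mirroring the Java API.
            Person person = Person.CreateBuilder()
                .SetId(42)
                .SetName("Fred")
                .Build();

            using (var file = File.Create("person.bin"))
                person.WriteTo(file);              // same wire format as the other ports

            using (var file = File.OpenRead("person.bin"))
            {
                Person copy = Person.ParseFrom(file);
            }
        }
    }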
According to its GitHub project site, protobuf-csharp-port has now been folded into the main Google Protocol Buffers project, so it will be the official .NET implementation of protobuf 3. protobuf-net, however, was last updated in 2013, although there have been some recent commits on GitHub.
I just switched from protobuf-csharp-port to protobuf-net because:
protobuf-net is more ".NET-like", i.e. it uses attributes on members to control serialisation instead of code generation.
Compiling .proto files with protobuf-csharp-port is a two-step process: compile them with protoc to a .protobin, then compile that with ProtoGen. protobuf-net does this in one step.
In my case I want to use protocol buffers to replace an XML-based communication model between a .NET client and a J2EE backend. Since I'm already using code generation, I'll go for Jon's implementation.
For projects not requiring Java interop, I'd choose Marc's implementation, especially since v2 allows working without annotations.
I have been unable to find any information on how one would replace the Processing IDE with Visual Studio 2015 Community.
Is it even possible to replace it, and if so, how?
Processing is a couple of things:
A set of tools that convert "Processing code" into Java code, or JavaScript code with Processing.js.
An IDE that lets you write Processing code and use those tools.
A Java library (and JavaScript, for Processing.js) that is called by that converted code.
That third thing is what you care about. You can use Processing as a Java library the same way you can use any Java library. Here is a tutorial on using it from Eclipse.
The steps to use it with Visual Studio will be similar: find the Processing library jar (probably called core.jar), add it to your classpath, and then write Java code that uses the classes from that library jar.
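Since this is a Java library, a minimal Java sketch of that last step might look like this (class and file names are invented; PApplet.main() is how Processing 3 sketches are launched from plain Java):

    // MySketch.java - assumes core.jar from the Processing download
    // is on the classpath.
    import processing.core.PApplet;

    public class MySketch extends PApplet {

        @Override
        public void settings() {
            size(400, 400);           // in Processing 3, size() lives in settings()
        }

        @Override
        public void draw() {
            background(32);
            ellipse(mouseX, mouseY, 20, 20);
        }

        public static void main(String[] args) {
            PApplet.main("MySketch"); // opens the sketch window
        }
    }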
However, I will say that you should know what you're doing with both Java and Processing before trying this. Processing's IDE is designed to make things as simple as possible, so it hides a lot of behind-the-scenes stuff from you. You have to be comfortable with the idea of using an API, OOP, and setting up the classpath.
Also note that Processing 3 has changed a bunch of things, so certain aspects of that tutorial are out of date. Most notably, PApplet no longer extends Applet, so you can't treat it as a component anymore. You have to go through its Surface instead. If you have no idea what I'm talking about, it might be a better idea to stick with Processing's included IDE.
I want to build a language service for Visual Studio 2010. I first tried to follow the tutorial and documentation from MSDN.
The problem is that I can't make this work (I'll explain my problem later). So I dug into existing implementations and found Ook! and Lua. Neither of these projects uses the tutorial or documentation I found on MSDN; instead they use something based on MEF. Lua used this approach only with previous Visual Studio versions.
So I'm wondering whether I'm using an obsolete method to create a language service (even though the documentation targets Visual Studio 2010), or whether there are different ways to do this depending on one's needs.
In my case, I have a language that doesn't need to be compiled to the CLI, but I want an editor with colorization, syntax warnings and errors, IntelliSense, and so on.
The problem I mentioned is that when launching the experimental instance, there is no text editor associated with my file extension, and Visual Studio begins to lag badly. The language service is registered using three attributes: ProvideServiceAttribute, ProvideLanguageServiceAttribute and ProvideLanguageServiceExtension. It is also initialized in the package's Initialize method, as mentioned in Proffer the Language.... The package is loaded when I try to open a file with my extension, and the language service is initialized.
So I don't get why it does not work. Could you please help me understand how a language service works, and what the best way to implement one is?
Thanks
There's a good chance your IScanner implementation has an endless loop; that happened to me.
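For reference, here's a minimal sketch of the MPF IScanner contract (types from Microsoft.VisualStudio.Package; the tokenization itself is deliberately trivial). The usual bug is never returning false and never advancing the position, which makes the colorizer call the scanner forever:

    using Microsoft.VisualStudio.Package;

    public class MyScanner : IScanner
    {
        private string _source = "";
        private int _pos;

        public void SetSource(string source, int offset)
        {
            _source = source;
            _pos = offset;
        }

        public bool ScanTokenAndProvideInfoAboutIt(TokenInfo tokenInfo, ref int state)
        {
            if (_pos >= _source.Length)
                return false;                        // no more tokens on this line

            tokenInfo.StartIndex = _pos;
            tokenInfo.EndIndex = _source.Length - 1; // one token per line, for brevity
            tokenInfo.Type = TokenType.Text;
            tokenInfo.Color = TokenColor.Text;

            _pos = _source.Length;                   // advance, or this repeats endlessly
            return true;
        }
    }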
Custom Compiler Warnings and
C#: Create custom warning in Visual Studio if certain method is used in source code
haven't helped, as they deal with code that is under the author's control.
We are using a 3rd party suite of UI controls (DevExpress) in our software and I want to generate a warning when someone uses MessageBox.Show("blah"); instead of XtraMessageBox.Show("blah");
Is there a way to do that?
This sort of thing can be addressed relatively easily via a custom rule for FxCop/Visual Studio Code Analysis. If you are using Visual Studio Developer Edition, you will even see the rule failures displayed alongside your compilation warnings and errors in the IDE.
While there's no way to produce a true custom compile-time error in .NET, there are a number of third-party tools (both free and commercial) that can inject their validation logic into the build process, usually after compilation.
Here are three ways I know of to solve your problem:
ReSharper 5.0 ($) will support custom rules/warnings.
In PostSharp (free) you can define an OnMethodBoundary aspect, override its CompileTimeValidate method, and emit a [post-]compile-time error from it.
NDepend ($) can be integrated with your build process to enforce coding policies like that.
No, there is no direct way. If you think about it, you are looking for a compiler warning for code that you don't even compile.
If you really want this, you could use reflection on YOUR compiled assembly to check whether any methods/assemblies you don't want have been called. Cecil has a lot of the functionality you need; you could then make this part of your build process.
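A minimal sketch of that approach with Mono.Cecil (the assembly path is just an example; run it as a post-build step and turn the output into warnings or build failures as you prefer):

    using System;
    using Mono.Cecil;
    using Mono.Cecil.Cil;

    class FindMessageBoxCalls
    {
        static void Main()
        {
            // Scan the compiled assembly for calls to System.Windows.Forms.MessageBox.Show.
            var assembly = AssemblyDefinition.ReadAssembly("MyApp.exe");

            foreach (TypeDefinition type in assembly.MainModule.Types)
            foreach (MethodDefinition method in type.Methods)
            {
                if (!method.HasBody) continue;

                foreach (Instruction instruction in method.Body.Instructions)
                {
                    if (instruction.OpCode != OpCodes.Call) continue;

                    var target = instruction.Operand as MethodReference;
                    if (target != null
                        && target.DeclaringType.FullName == "System.Windows.Forms.MessageBox"
                        && target.Name == "Show")
                    {
                        Console.WriteLine("warning: {0} calls MessageBox.Show - use XtraMessageBox.Show",
                                          method.FullName);
                    }
                }
            }
        }
    }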