I am curious about various implementations of garbage collector libraries, and I wanted to take a look at the implementation done by the V8 developers. But browsing through the code, I cannot work out where it lives in the source tree. Could I get a list of the files that make up the GC implementation? Can I use them separately in my hobby projects?
In v8globals.h I found this definition:
enum GarbageCollector { SCAVENGER, MARK_COMPACTOR };
But it does not look like a class or function API.
Thank you.
Try the following files: heap-inl.h, heap.h, heap.cc, incremental-marking-inl.h, incremental-marking.h, incremental-marking.cc, objects-visiting-inl.h, objects-visiting.h, objects-visiting.cc, global-handles.h, global-handles.cc and so on.
The license is BSD 3-clause, so you can use the code separately in your own projects as long as you meet the license conditions: basically, retain the copyright headers and don't claim that Google endorses your project. IANAL.
VPP provides the I/S for developing custom plugins that can be hooked into a graph of nodes. I've only seen examples of such plugins written in C, and I was wondering whether other languages, Go for instance, can also be used to write them.
I have no idea what "VPP" is, but nonetheless the answer is "maybe"; here's why:
Go code is able to interface with C libraries via its facility known as cgo.
cgo is a multi-faceted thing: it allows you to "export" certain Go functions so that they can be called from the C side, and it allows Go code to call C functions. It also allows you to write bits of inline C code to provide glue for the C side when necessary.
For some time now, the Go build toolchain (at least its "reference" implementation) has been able to compile Go code into a static or dynamic library with a C-compatible API.
See this.
With these things in mind, in theory, it should be possible to do what you're after.
Note some possible obstacles:
Most of the time, if a "platform" allows you to write a "plugin" in C, it presupposes your plugin will make extensive use of the platform's own API.
This usually means your plugin is supposed to include certain header files provided by the platform.
The platform might also require your plugin to link against some platform-provided library (usually shared), or libraries.
cgo can do all of the above, but you will need to scrutinize the API provided by the platform and maybe write Go helpers to make its usage more natural for the Go code.
Building/linking issues (usually the locations of the header files and the libraries) may also need solving.
I am looking for an executable (or a library that I could embed in C#, or via Managed C++ into a C# project) to create binary diff files for two folders and their contents, plus a patch tool to apply those patch files, all targeting Windows.
This SO post refers to various tools such as bsdiff/bspatch, which are quite dated. The third-party executable available here does not work when I try it out, though. Another variant, which is not compatible with the original, is the following; unfortunately it relies on bzlib and certain Linux headers, and I wasn't able to set it up under Visual Studio.
Anyway, all of those tools and posts are about 8-10 years old, and I'd like to know which currently maintained tools and libraries I could take a look at.
I have been experimenting with Octodiff; I am impressed and will most likely be using it in production.
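For reference, here is a minimal sketch of the Octodiff signature/delta/patch round trip, assuming the API as shown in the Octodiff README (verify class names against the version you install). The file paths are made up, and since Octodiff works on individual files you would walk the two folder trees yourself and diff each file pair:

// Sketch of creating and applying a delta with Octodiff (NuGet package "Octodiff").
// Paths are hypothetical; the calls follow the Octodiff README.
using System.IO;
using Octodiff.Core;
using Octodiff.Diagnostics;

class OctodiffSketch
{
    static void Main()
    {
        var reporter = new ConsoleProgressReporter();

        // 1. Build a signature of the old ("basis") file.
        using (var basis = File.OpenRead(@"C:\data\old\app.dat"))
        using (var signature = File.Create(@"C:\data\app.dat.sig"))
        {
            new SignatureBuilder().Build(basis, new SignatureWriter(signature));
        }

        // 2. Build a delta from the signature and the new file.
        using (var newFile = File.OpenRead(@"C:\data\new\app.dat"))
        using (var signature = File.OpenRead(@"C:\data\app.dat.sig"))
        using (var delta = File.Create(@"C:\data\app.dat.delta"))
        {
            new DeltaBuilder().BuildDelta(
                newFile,
                new SignatureReader(signature, reporter),
                new AggregateCopyOperationsDecorator(new BinaryDeltaWriter(delta)));
        }

        // 3. Apply the delta to the old file to reproduce the new one.
        using (var basis = File.OpenRead(@"C:\data\old\app.dat"))
        using (var delta = File.OpenRead(@"C:\data\app.dat.delta"))
        using (var output = File.Create(@"C:\data\patched\app.dat"))
        {
            new DeltaApplier().Apply(basis, new BinaryDeltaReader(delta, reporter), output);
        }
    }
}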
I've recently had to look for a C# port of the Protocol Buffers library originally developed by Google. And guess what, I found two projects, each owned by a very well-known person here: protobuf-csharp-port, written by Jon Skeet, and protobuf-net, written by Marc Gravell. My question is simple: which one should I choose?
I quite like Marc's solution as it seems to me closer to the C# philosophy (for instance, you can just add attributes to the properties of an existing class), and it looks like it can support .NET built-in types such as System.Guid.
I am sure both of them are really great projects, but what's your opinion?
I agree with Jon's points; if you are coding over multiple environments, then his version gives you a similar API to the other "core" implementations. protobuf-net is much more similar to how most of the .NET serializers are implemented, so is more familiar (IMO) to .NET devs. And as Jon notes - the raw binary output should be identical so you can re-implement with a different API if you need to later.
Some points re protobuf-net that are specific to this implementation:
works with existing types (not just generated types from .proto)
works under things like WCF and memcached
can be used to implement ISerializable for existing types
supports inheritance* and serialization callback methods
supports common patterns such as ShouldSerialize[name]
works with existing decorated types (XmlType/XmlElement or DataContract/DataMember) - meaning (for example) that LINQ-to-SQL models serialize out-of-the-box (as long as serialization is enabled in the DBML)
in v2, works for POCO types without any attributes
in v2, works in .NET 1.1 (not sure this is a huge selling feature) and most other frameworks (including monotouch - yay!)
possibly (not yet implemented) v2 might support full-graph* serialization (not just tree serialization)
(*=these features use 100% valid protobuf binary, but which might be hard to consume from other languages)
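To make the "works with existing types" and ShouldSerialize[name] points above concrete, here is a minimal protobuf-net sketch; the Person type and field numbers are invented for illustration, not taken from any project:

// An ordinary class decorated with protobuf-net attributes (no .proto file
// or code generation needed). Type and field numbers are hypothetical.
using System.IO;
using ProtoBuf;

[ProtoContract]
public class Person
{
    [ProtoMember(1)]
    public int Id { get; set; }

    [ProtoMember(2)]
    public string Name { get; set; }

    // The ShouldSerialize[name] pattern: skip Name when it is empty.
    public bool ShouldSerializeName() => !string.IsNullOrEmpty(Name);
}

class Demo
{
    static void Main()
    {
        var person = new Person { Id = 42, Name = "Ada" };

        using (var stream = new MemoryStream())
        {
            // Writes standard protobuf wire format, so other implementations
            // (protobuf-csharp-port, Java, C++, ...) can read it back.
            Serializer.Serialize(stream, person);

            stream.Position = 0;
            Person copy = Serializer.Deserialize<Person>(stream);
        }
    }
}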
Are you using other languages in your project as well? If so, my C# port will let you write similar code on all platforms. If not, Marc's port is probably more idiomatic C# to start with. (I've tried to make my code "feel" like normal C#, but the design is clearly based on the Java code to start with, deliberately so that it's familiar to those using Java as well.)
Of course one of the beauties of this is that you can change your mind later and be confident that all your data will still be valid via the other project - they should be absolutely binary compatible (in terms of serialized data), as far as I'm aware.
According to its GitHub project site, protobuf-csharp-port has now been folded into the main Google Protocol Buffers project, so it will be the official .NET implementation of protobuf 3. protobuf-net, however, was last updated in 2013, although there have been some recent commits on GitHub.
I just switched from protobuf-csharp-port to protobuf-net because:
protobuf-net is more ".net like", i.e. descriptors to serialise members instead of code generation.
If you want to compile protobuf-csharp-port .proto files, you have to do a two-step process, i.e. compile with protoc to .protobin and then compile that with ProtoGen. protobuf-net does this in one step.
In my case I want to use protocol buffers to replace an xml based communication model between a .net client and a j2ee backend. Since I'm already using code generation I'll go for Jon's implementation.
For projects not requiring java interop I'd choose Marc's implementation, especially since v2 allows working without annotations.
I just wanted to understand the code-sharing strategies that exist for achieving code reuse in Xamarin.
Which one should I use?
The Shared Project way or the Portable Class Library way?
If you can explain with scenarios, it would be very helpful for me.
Thanks much.
Here is the Xamarin explanation.
The question is possibly duplicated but you ask specifically for scenarios.
If you have ever written C cross-platform projects, shared projects resemble that old-school way, allowing you to use #if __IOS__ statements to run platform-specific code in your shared/common code files. A separate assembly is created for each target (say iOS or Android). The Xamarin page gives the advantages and disadvantages of each approach.
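For illustration, a platform check in a shared-project file might look like the following sketch; the DeviceInfo class is hypothetical, while __IOS__ and __ANDROID__ are the symbols Xamarin defines for each platform build:

// A file in a Shared Project is compiled separately into each app that
// references it, so each platform gets only its own branch of the #if.
public static class DeviceInfo
{
    public static string PlatformName()
    {
#if __IOS__
        return "iOS " + UIKit.UIDevice.CurrentDevice.SystemVersion;
#elif __ANDROID__
        return "Android " + Android.OS.Build.VERSION.Release;
#else
        return "Unknown platform";
#endif
    }
}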
PCL generates one single assembly for the common code. A PCL supports only a limited subset of .NET features, as shown in this table. However, most of the important .NET goodies are there, as you can see.
Xamarin says that the shared-code method is easier, but with a PCL it is easier to compile a module and share or sell it to others.
When I make projects, I check what external plugins/components etc. I want to use and make a decision based on that. For example, you may want to use SQLite, and there are different options for it in shared and PCL projects.
I understand that for non-iOS targets, using shared libraries can lead to lower memory usage, and also that some companies distribute a library and headers (like Superpin), since a static library allows them not to distribute the source of their product. But outside of those, what are the reasons you'd want to use a static library? I use git for all of my projects, and I usually add external libraries (open-source ones) as submodules. This means they take up disk space locally, but they do not clutter up the repo. Also, since iOS doesn't support shared libraries, the benefits of building libraries to promote code reuse seem diminished.
Basically, is there any reason outside of selling closed source libraries that it makes sense to build/use static libraries for iOS?
Organization, reuse, and easy integration into other programs.
If you have code that is used by multiple apps or targets multiple platforms, you would otherwise have to maintain its build for each app. With a library, you let the library maintainer set up the build correctly, and then you just link against the result (if it's developed internally, you'll want to add it as a dependency too).
It's like DRY, but for projects.
Libraries become more useful as projects become more complex. You should try to identify which parts of your programs (functions, class hierarchies, etc.) are reusable outside of your app's context, and put them in a library for easy reuse - it's a bit like pattern recognition.
Once your codebase has hundreds or thousands of files, you will want to minimize what you use, and you will not want to maintain the dependencies manually for each project.
"Also, since iOS doesn't support shared libraries, the benefits of building libraries to promote code reuse seem diminished."
There's no reason you can't build your own static library to use across multiple projects.
Other than for that purpose and the ones you mentioned I don't think there's much else.
Static libraries allow you to have truly standalone executables. Since all library code is actually, physically present in the executable, you don't have to worry about the exec failing to run because there's a too-old version of some library, or a too-new one, or it's completely missing, etc. And you don't have to worry about your app suddenly breaking because some library got replaced. It cuts down on dependencies and lets your app be more encapsulated.