TBB equivalents in C++11

I have an old codebase from which I want to reuse some implementations in a new environment. The old code uses the TBB framework, which I am really unfamiliar with.
Are there equivalent implementations of these TBB types in C++11:
tbb::enumerable_thread_specific<...>
mutex_t
mutex_t::scoped_lock
If not: any tips on how I can convert them (links to good TBB summaries, tutorials, ...), or do I need to work my way through the whole TBB documentation?
(And no, inserting TBB into the project is not an option.)
EDIT: I forgot to mention tbb::this_tbb_thread::yield. Any suggestions for this one?

The TBB features in your code do have near-equivalents in C++11, or you can create them simply.
enumerable_thread_specific<T> is an implementation of thread-local storage. It can use the platform's local storage, or a tbb::concurrent_vector to hold instances. The default is to consume no platform thread-local storage keys. C++11 has the thread_local qualifier, so depending on how the enumerable_thread_specific is used you can replace it with a thread_local version of the same type. If you are using the structure to persist the data, or to access it outside a thread-local context, you may have your work cut out for you.
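For the simple case, a minimal sketch of the substitution, assuming each thread only ever touches its own copy (the counter name here is illustrative):

#include <thread>

thread_local int per_thread_counter = 0; // one independent instance per thread

void worker() {
    ++per_thread_counter; // no locking needed: each thread mutates its own copy
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
}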
mutex_t is a generic mutex type, and can be replaced with std::mutex, though the developer may have chosen a particular implementation (like spin_mutex) that will be affected by the replacement.
scoped_lock is an RAII object that locks the mutex on construction and unlocks it when leaving the scope (making it somewhat exception-friendly). You can use std::lock_guard<std::mutex>, which is available since C++11 (std::scoped_lock also works if you're at C++17), or you can roll your own.
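For illustration, a minimal sketch of the direct translation (the names m, shared_value, and update are placeholders):

#include <mutex>

std::mutex m;          // stands in for mutex_t
int shared_value = 0;

void update() {
    std::lock_guard<std::mutex> lock(m); // stands in for mutex_t::scoped_lock
    ++shared_value;                      // critical section
}                                        // m is unlocked here, even on exception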
It has been a while since I read the yield documentation. I believe the TBB implementation looks for other possible tasks before giving up the time slice. You can use std::this_thread::yield() to relinquish the time slice, but the behavior may differ if the code is using TBB constructs. The fact that you haven't mentioned any other TBB machinery implies to me there is none in the program, in which case tbb::this_tbb_thread::yield does the same thing as std::this_thread::yield().
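If the old code only uses yield as a polite spin-wait, the substitution is mechanical. A sketch, assuming no other TBB scheduling is in play:

#include <atomic>
#include <thread>

std::atomic<bool> ready{false};

void wait_for_ready() {
    while (!ready.load()) {
        std::this_thread::yield(); // offer the rest of the time slice to other threads
    }
}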

I would suggest making the old codebase work first, and only then changing things.
tbb::enumerable_thread_specific<...> does not have standard equivalents.
mutex_t and mutex_t::scoped_lock you can replace with std::mutex and std::unique_lock<std::mutex>, respectively.

Related

Will Go compilers ignore unused functions?

If there is a function from an external package that is not used at all in my project, will the compiler remove the function from the generated machine code?
This question could be targeted at any language's compiler in general, but I think the behaviour may vary from language to language, so I am interested in knowing what the Go compilers do.
I would appreciate any help on understanding this.
The language spec does not mention this anywhere, and from a correctness point of view this is irrelevant.
But know that the current version does remove certain constructs that the compiler can prove are not used and whose removal will not change the runtime behaviour of the app.
Quoting from The Go Blog: Smaller Go 1.7 binaries:
The second change is method pruning. Until 1.6, all methods on all used types were kept, even if some of the methods were never called. This is because they might be called through an interface, or called dynamically using the reflect package. Now the compiler discards any unexported methods that do not match an interface. Similarly the linker can discard other exported methods, those that are only accessible through reflection, if the corresponding reflection features are not used anywhere in the program. That change shrinks binaries by 5–20%.
Methods are a "harder" case than functions because methods can be listed and called via reflection (unlike functions), but the Go tools do what they can to remove unused methods too.
You can see examples and proof of removed / unlinked code in this answer:
How to remove unused code at compile time?
Also see other relevant questions:
Splitting client/server code
Call all functions with special prefix or suffix in Golang

How is is_standard_layout implemented?

In general one can implement typical type_traits using template techniques.
However, I cannot see how std::is_standard_layout could be implemented in these terms: http://en.cppreference.com/w/cpp/types/is_standard_layout
When I checked the gcc standard library, I found that it is implemented in terms of __is_standard_layout(T) which I could not find defined anywhere else. Is this a compiler magic function?
Would it be possible to implement std::is_standard_layout explicitly?
For example, one of the conditions is that it inherits from a single class. That seems impossible to determine at compile time with template techniques alone.
No, std::is_standard_layout is not something you can implement without compiler intrinsics. As you've correctly pointed out, it needs more information than the C++ type system can express.
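For reference, here is a sketch of how a library can wrap such an intrinsic, assuming a GCC/Clang-style __is_standard_layout (the trait name my_is_standard_layout is illustrative; the real work happens inside the compiler):

#include <type_traits>

template <typename T>
struct my_is_standard_layout
    : std::integral_constant<bool, __is_standard_layout(T)> {}; // compiler intrinsic

static_assert(my_is_standard_layout<int>::value, "int is standard-layout");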

Why c++11 introduces a new std::chrono namespace, why not put things under std directly? [duplicate]

Everything else I have seen so far in the C++ standard library lives directly in the std namespace. If I use things from std::chrono, I usually exceed my 80-character-per-line limit - that is not a problem, just inconvenient.
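For example, a single fully qualified declaration (this line is purely illustrative) already crowds the limit:

#include <chrono>

std::chrono::time_point<std::chrono::steady_clock> start = std::chrono::steady_clock::now();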
So here is my simple question: why does the chrono header have its own namespace?
I was lead author on the chrono proposal. A sub-namespace was not my first choice, just because of the verbosity. I find myself writing using namespace std::chrono almost every time I use the facility.
However, this was a very controversial proposal, and many people, including some of my co-authors, strongly felt that a sub-namespace was appropriate. I did not strongly object to the sub-namespace because we needed to compromise or become just as dead-locked as the US Congress. :-) The result of such a dead-lock would probably have been C11's timespec.
Boost has experimented with sub-namespaces much more aggressively than the std has, and one of the key authors on this paper is also the author of the Boost date-time library from which chrono evolved. So that obviously exerted a strong pull in the direction of using a sub-namespace.
Looking forward it is quite possible that the sub-namespace will become absolutely required. Imagine if we add calendrical services that include an abbreviation for December: dec. This would directly conflict with:
ios_base& dec(ios_base& str);
in <ios>. So, all in all, I was probably wrong not to insist on a sub-namespace from the beginning. :-) Going forward it will be interesting to watch where the committee does and does not create sub-namespaces.
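To make the hypothetical clash concrete (a chrono::dec never shipped; see the update below):

#include <iostream>
using namespace std;             // brings std::dec, the <ios> manipulator, into scope
// using namespace std::chrono;  // had a chrono::dec (December) existed, this line
                                 // would make the plain `dec` below ambiguous

int main() {
    cout << dec << 42 << '\n';   // fine today: only the manipulator is in scope
}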
Update (6 years later...)
The truth is always stranger than fiction...
So I did propose std::chrono::dec as an abbreviation for December, thinking that would be safe because of the nested chrono namespace. But no, the committee decided to rename std::chrono::dec to std::chrono::December during the standardization process because of potential conflicts.
So are nested namespaces worth it?
I don't know. This update is a datapoint, not an opinion.
There are other namespaces too, like std::placeholders. Ultimately, in C++03 the committee did not go for sub-namespaces, but it is now painfully obvious that the std namespace is becoming massively overloaded. As such, I expect that many library proposals for C++14 will use a sub-namespace for larger organizations of components.

GCC __attribute__((always_inline)) and lambdas, is this syntax correct?

I am using GCC 4.6 as part of the LPCXpresso IDE for a Cortex embedded processor. I have very limited code size, especially when compiling in debug mode. Using __attribute__((always_inline)) has so far proven to be a good tool to inline trivial functions, and this saves a lot of code bloat in debug mode while still maintaining readability. I expect it to be somewhat mainstream and supported in the future because it is mentioned here: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0348c/CIAJGAIH.html
Now to my question: is this the correct syntax for declaring a lambda always inline?
#define ALWAYS_INLINE __attribute__((always_inline))
[](volatile int &i) ALWAYS_INLINE { i++; }
It does work; my question is whether it will continue to work in the future, and what I can do to ensure that it does. If I ever switch to another major compiler that supports C++11, will I find a similar keyword with which I can replace __attribute__((always_inline))?
If I were to meet my fairy godmother I would wish for a compiler directive which causes all lambdas which are constructed as temporaries with empty constructors and bound by reference to be automatically inlined even in debug mode. Any ideas?
Will it continue to work in the future?
Likely, but always_inline is compiler-specific, and since there is no standard specifying its exact behavior with lambdas, there is no guarantee that this will continue to work in the future.
What can I do to ensure it works?
That depends on the compiler, not on you. If a future version drops support for always_inline on lambdas, you will have to stick with a version that works, or write your own preprocessor that inlines lambdas marked with an always_inline-like keyword.
If I ever switch to another major compiler that supports C++11, will I find a similar keyword?
Likely, but again, there is no guarantee. The only real standard here is the C++ inline keyword, and it is not applicable to lambdas. For non-lambda functions it only suggests inlining and tells the compiler that the function may be defined in multiple translation units.
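A common defensive pattern is to hide the attribute behind a macro so that a port to another compiler degrades gracefully rather than failing to build. A sketch, assuming GCC/Clang spellings (note that MSVC's __forceinline, for instance, cannot be attached to a lambda this way):

#if defined(__GNUC__) || defined(__clang__)
#define ALWAYS_INLINE __attribute__((always_inline))
#else
#define ALWAYS_INLINE // unknown compiler: fall back to a plain, un-annotated lambda
#endif

auto increment = [](volatile int &i) ALWAYS_INLINE { ++i; };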

Is there any downside to redundant qualifiers? Any benefit?

For example, referencing something as System.Data.Datagrid as opposed to just Datagrid. Please provide examples and explanation. Thanks.
The benefit is that you don't need to add an import for everything you use, especially if it's the only thing you use from a particular namespace; it also prevents collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
You're being very explicit about the type you're referencing, and that is a benefit. But in the very same process you're giving up code clarity, which in my case is clearly a downside, as I want code to be readable and understandable. I go for the short version unless I have a conflict between namespaces that can only be solved by referencing classes explicitly, or unless I make an alias for it with the using keyword:
using Datagrid = System.Data.Datagrid;
Actually the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid having to use additional using statements, especially if the introduction of another using will cause problems with type resolution. More fully qualified identifiers exist so that you can be explicit when you need to be explicit, but if the class's namespace is clear, then the DataGrid version is clearer to many.
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention on lifecycle and collection), eg
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
The reason why its use is prevalent in the .NET world is because of auto-generated code, such as designer markup. In that case it would be better to fully-qualify names like class names because of possible conflicts with other classes you may have in your code.
If you have a tool like ReSharper, it will actually tell you which of your fully qualified references are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut and paste code across your various code bases, fully qualifying names would be a must. (Then again, why would you want to cut and paste all the time? It's a bad form of code reuse!)
I don't think there is really a downside, just readability versus actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use: if you have one method that uses reflection and you are alright with typing System.Reflection ten times, it's not a big deal, but if you plan on using a namespace a lot, then I would recommend a using directive.
Depending on your situation, extra qualifiers will generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC for example.
struct A {
    int A::b; // warning: extra qualification on member
};
