BOOST_LOG_TRIVIAL strange warning in VS2008 Express - boost

I use Visual Studio 9 (2008).
When I compile this simple program:
#include <boost/log/trivial.hpp>

int main(int /*argc*/, char** /*argv*/)
{
    BOOST_LOG_TRIVIAL(info) << "padaka";
}
I get a warning:
..\..\deps\boost_1_55_0\boost/parameter/aux_/tagged_argument.hpp(123) : warning C4100: 'x' : unreferenced formal parameter
..\..\deps\boost_1_55_0\boost/log/sources/severity_feature.hpp(252) : see reference to function template instantiation 'const boost::log::v2s_mt_nt5::trivial::severity_level &boost::parameter::aux::tagged_argument<Keyword,Arg>::operator []<boost::log::v2s_mt_nt5::trivial::severity_level>(const boost::parameter::aux::default_<Keyword,Value> &) const' being compiled
with
[
Keyword=boost::log::v2s_mt_nt5::keywords::tag::severity,
Arg=const boost::log::v2s_mt_nt5::trivial::severity_level,
Value=boost::log::v2s_mt_nt5::trivial::severity_level
]
..\..\deps\boost_1_55_0\boost/log/sources/basic_logger.hpp(459) : see reference to function template instantiation 'boost::log::v2s_mt_nt5::record boost::log::v2s_mt_nt5::sources::basic_severity_logger<BaseT,LevelT>::open_record_unlocked<ArgsT>(const ArgsT &)' being compiled
with
[
BaseT=boost::log::v2s_mt_nt5::sources::basic_logger<char,boost::log::v2s_mt_nt5::sources::severity_logger_mt<boost::log::v2s_mt_nt5::trivial::severity_level>,boost::log::v2s_mt_nt5::sources::multi_thread_model<boost::log::v2s_mt_nt5::aux::light_rw_mutex>>,
LevelT=boost::log::v2s_mt_nt5::trivial::severity_level,
ArgsT=boost::parameter::aux::tagged_argument<boost::log::v2s_mt_nt5::keywords::tag::severity,const boost::log::v2s_mt_nt5::trivial::severity_level>
]
..\..\src\MRCPClient\main.cpp(5) : see reference to function template instantiation 'boost::log::v2s_mt_nt5::record boost::log::v2s_mt_nt5::sources::basic_composite_logger<CharT,FinalT,ThreadingModelT,FeaturesT>::open_record<boost::parameter::aux::tagged_argument<Keyword,Arg>>(const ArgsT &)' being compiled
with
[
CharT=char,
FinalT=boost::log::v2s_mt_nt5::sources::severity_logger_mt<boost::log::v2s_mt_nt5::trivial::severity_level>,
ThreadingModelT=boost::log::v2s_mt_nt5::sources::multi_thread_model<boost::log::v2s_mt_nt5::aux::light_rw_mutex>,
FeaturesT=boost::log::v2s_mt_nt5::sources::features<boost::log::v2s_mt_nt5::sources::severity<boost::log::v2s_mt_nt5::trivial::severity_level>>,
Keyword=boost::log::v2s_mt_nt5::keywords::tag::severity,
Arg=const boost::log::v2s_mt_nt5::trivial::severity_level,
ArgsT=boost::parameter::aux::tagged_argument<boost::log::v2s_mt_nt5::keywords::tag::severity,const boost::log::v2s_mt_nt5::trivial::severity_level>
]
The program works correctly, but the warning is really annoying when you log a lot.
Any ideas on how to get rid of it?

The warning just means that a formal parameter is not being used.
Sadly, since the warning is generated from within the Boost sources, and more notably from inside class templates, my experience is that the only way to silence it is to disable the warning itself (e.g. with a pragma, or the /wd4100 compiler option):
#pragma warning(push)
#pragma warning(disable: 4100)
#include <boost/log/trivial.hpp>
#pragma warning(pop)
My experience is that you cannot reliably restore the warning with a matching pop (likely because the warning is emitted at the templates' points of instantiation, which can occur long after the #include), so you may end up having to leave it disabled:
#pragma warning(disable: 4100)
#include <boost/log/trivial.hpp>
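If you go that route, one way to keep the suppression in a single place is a small wrapper header that the rest of the project includes instead of <boost/log/trivial.hpp>. This is only a sketch; the header name is made up, and it assumes all logging goes through this one include:
// boost_log_trivial_wrapper.hpp - hypothetical wrapper header
#pragma once
// C4100 is emitted at the templates' points of instantiation inside the
// Boost headers, so it is left disabled here rather than pushed/popped.
#pragma warning(disable: 4100) // 'x': unreferenced formal parameter
#include <boost/log/trivial.hpp>
Everything else then includes the wrapper, so the pragma does not have to be repeated in every translation unit.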
See also:
Is using #pragma warning push/pop the right way to temporarily alter warning level? and (many) others

Related

why does g++ handle namespaces for definitions with 'using' differently than typedefs [duplicate]

Background
Everybody agrees that
using <typedef-name> = <type>;
is equivalent to
typedef <type> <typedef-name>;
and that the former is to be preferred to the latter for various reasons (see Scott Meyers, Effective Modern C++ and various related questions on stackoverflow).
This is backed by [dcl.typedef]:
A typedef-name can also be introduced by an alias-declaration. The identifier following the using keyword
becomes a typedef-name and the optional attribute-specifier-seq following the identifier appertains to that
typedef-name. Such a typedef-name has the same semantics as if it were introduced by the typedef specifier.
However, consider a declaration such as
typedef struct {
    int val;
} A;
For this case, [dcl.typedef] specifies:
If the typedef declaration defines an unnamed class (or enum), the first typedef-name declared by the
declaration to be that class type (or enum type) is used to denote the class type (or enum type) for linkage
purposes only (3.5).
The referenced section 3.5 [basic.link] says
A name having namespace scope that has not
been given internal linkage above has the same linkage as the enclosing namespace if it is the name of
[...]
an unnamed class defined in a typedef declaration in which the class has
the typedef name for linkage purposes [...]
Assuming the typedef declaration above is done in the global namespace, the struct A would have external linkage, since the global namespace has external linkage.
Question
The question is now whether the same is true if the typedef declaration is replaced by an alias declaration, in line with the common notion that they are equivalent:
using A = struct {
    int val;
};
In particular, does the type A declared via the alias declaration ("using") have the same linkage as the one declared via the typedef declaration?
Note that [dcl.typedef] does not say that an alias declaration is a typedef declaration (it only says that both introduce a typedef-name) and that [dcl.typedef] speaks only of a typedef declaration (not an alias declaration) as having the property of introducing a typedef name for linkage purposes.
If the alias declaration is not capable of introducing a typedef name for linkage purposes, A would just be an alias for an anonymous type and have no linkage at all.
IMO, that's at least one possible, albeit strict, interpretation of the standard. Of course, I may be overlooking something.
This raises the subsequent questions:
If there is indeed this subtle difference, is it by intention or is it an oversight in the standard?
What is the expected behavior of compilers/linkers?
Research
The following minimal program consisting of three files (we need at least two separate compilation units) is used to investigate the issue.
a.hpp
#ifndef A_HPP
#define A_HPP

#include <iosfwd>

#if USING_VS_TYPEDEF
using A = struct {
    int val;
};
#else
typedef struct {
    int val;
} A;
#endif

void print(std::ostream& os, A const& a);

#endif // A_HPP
a.cpp
#include "a.hpp"
#include <iostream>
void print(std::ostream& os, A const& a)
{
os << a.val << "\n";
}
main.cpp
#include "a.hpp"
#include <iostream>
int main()
{
A a;
a.val = 42;
print(std::cout, a);
}
GCC
Compiling the "typedef" variant with gcc 7.2 works cleanly and produces the expected output:
> g++ -Wall -Wextra -pedantic-errors -DUSING_VS_TYPEDEF=0 a.cpp main.cpp
> ./a.out
42
The compilation with the "using" variant produces a compile error:
> g++ -Wall -Wextra -pedantic-errors -DUSING_VS_TYPEDEF=1 a.cpp main.cpp
a.cpp:4:6: warning: ‘void print(std::ostream&, const A&)’ defined but not used [-Wunused-function]
void print(std::ostream& os, A const& a)
^~~~~
In file included from main.cpp:1:0:
a.hpp:16:6: error: ‘void print(std::ostream&, const A&)’, declared using unnamed type, is used but never defined [-fpermissive]
void print(std::ostream& os, A const& a);
^~~~~
a.hpp:9:2: note: ‘using A = struct<unnamed>’ does not refer to the unqualified type, so it is not used for linkage
};
^
a.hpp:16:6: error: ‘void print(std::ostream&, const A&)’ used but never defined
void print(std::ostream& os, A const& a);
^~~~~
It looks like GCC follows the strict interpretation of the standard given above and makes a distinction between the typedef and the alias declaration where linkage is concerned.
Clang
Using clang 6, both variants compile and run cleanly without any warnings:
> clang++ -Wall -Wextra -pedantic-errors -DUSING_VS_TYPEDEF=0 a.cpp main.cpp
> ./a.out
42
> clang++ -Wall -Wextra -pedantic-errors -DUSING_VS_TYPEDEF=1 a.cpp main.cpp
> ./a.out
42
One could therefore also ask
Which compiler is correct?
This looks to me like a bug in GCC.
Note that [dcl.typedef] does not say that an alias declaration is a typedef declaration
You're right, [dcl.dcl]p9 gives a definition of the term typedef declaration which excludes alias-declarations. However, [dcl.typedef] does explicitly say, as you quoted in your question:
2 A typedef-name can also be introduced by an alias-declaration. The identifier following the using keyword becomes a typedef-name and the optional attribute-specifier-seq following the identifier appertains to that typedef-name. It has the same semantics as if it were introduced by the typedef specifier. [...]
"The same semantics" doesn't leave any doubt. Under GCC's interpretation, typedef and using have different semantics, therefore the only reasonable conclusion is that GCC's interpretation is wrong. Any rules applying to typedef declarations must be interpreted as applying to alias-declarations as well.
It looks like the standard is unclear on this.
On one hand,
[dcl.typedef] A typedef-name can also be introduced by an alias-declaration. [...] Such a typedef-name has the same semantics as if it were introduced by the typedef specifier.
On the other hand, the standard clearly separates the notions of typedef declaration and alias-declaration (the latter term is a grammar production name, so it is italicised and hyphenated; the former is not). In some contexts it talks about "a typedef declaration or alias-declaration", making them equivalent in these contexts; and sometimes it talks solely about "a typedef declaration". In particular, whenever the standard talks about linkage and typedef declarations, it only talks about typedef declarations and does not mention alias-declaration. This includes the key passage
[dcl.typedef] If the typedef declaration defines an unnamed class (or enum), the first typedef-name declared by the declaration to be that class type (or enum type) is used to denote the class type (or enum type) for linkage
purposes only.
Note the standard insists on the first typedef-name being used for linkage. This means that in
typedef struct { int x; } A, B;
only A is used for linkage, and B is not. Nothing in the standard indicates that a name introduced by alias-declaration should behave like A and not like B.
It is my opinion that the standard is insufficiently clear in this area. If the intent is that only a typedef declaration works for linkage, it would be appropriate to state explicitly in [dcl.typedef] that an alias-declaration does not. If the intent is that an alias-declaration works for linkage too, this should be stated explicitly as well, as is done in other contexts.

CUDA 8.0: Compile Error with Template Friend in Namespace

I noticed that the following code compiles with g++/clang++-3.8 but not with nvcc:
#include <tuple> // not used, just to make sure that we have c++11
#include <stdio.h>
namespace a {
    template<class T>
    class X {
        friend T;
    };
}
I get the following compile error:
/usr/local/cuda-8.0/bin/nvcc -std=c++11 minimum_cuda_test.cu
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
minimum_cuda_test.cu:7:10: error: ‘T’ in namespace ‘::’ does not name a type
friend T;
Interestingly, this works with nvcc:
#include <tuple> // not used, just to make sure that we have c++11
#include <stdio.h>
template<class T>
class X {
    friend T;
};
Is this a bug in the compiler? I thought nvcc would internally use g++ or clang as a compiler so I am confused why this would work with my local compiler but not with nvcc.
In both cases, the code is being compiled by g++. However, when you pass a .cu file to nvcc, it puts the code through the CUDA C++ front end before passing it to the host compiler. Looking at CUDA 8 with gcc 4.8, I see that the code has been transformed from
namespace a {
    template<class T>
    class X {
        friend T;
    };
}
to
namespace a {
    template< class T>
    class X {
        friend ::T;
    };
}
You can see that the front end has rewritten the friend declaration into what it considers an equivalent form, but with a prepended global-scope qualifier (::T), and that is what breaks the compilation. I'm not a C++ language lawyer, but this would appear to me to be a bug in the CUDA front end. I would suggest reporting it to NVIDIA.
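As a possible workaround until that is fixed, here is one sketch (untested against the CUDA front end; the name X_impl is made up): define the template at global scope, where nvcc accepted friend T;, and expose it inside the namespace through an alias template. Note that an alias template is not a distinct class, so this only helps if you don't need to forward-declare or specialize a::X itself.
// Hypothetical workaround: keep the befriending template at global scope.
template<class T>
class X_impl {
    friend T;              // accepted by nvcc when not inside a namespace
};

namespace a {
    template<class T>
    using X = X_impl<T>;   // a::X<T> now names X_impl<T>
}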

Boost Spirit accept rule dynamically when a keyword is used [duplicate]

Going by the opening paragraph of the boost::spirit::qi::symbols documentation, I assumed that it wouldn't be too hard to add symbols to a qi::symbols from a semantic action. Unfortunately, it appears not to be as straightforward as I had assumed.
The following bit of test code exhibits the problem:
#define BOOST_SPIRIT_USE_PHOENIX_V3
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix.hpp>
#include <string>
namespace qi = boost::spirit::qi;
typedef qi::symbols<char, unsigned int> constants_dictionary;
template <typename Iter> struct parser : public qi::grammar<Iter, qi::space_type> {
    parser(constants_dictionary &dict) : parser::base_type(start) {
        start = qi::lit("#") >> ((+qi::char_) >> qi::uint_)[dict.add(qi::_1, qi::_2)];
    }
    qi::rule<Iter> start;
};

int main() {
    constants_dictionary dict;
    parser<std::string::const_iterator> prsr(dict);
    std::string test = "#foo 3";
    parse(test.begin(), test.end(), prsr, qi::space);
}
Gives type errors related to qi::_2 from VS2010:
C:\Users\k\Coding\dashCompiler\spirit_test.cpp(12) : error C2664: 'const boost::spirit::qi::symbols<Char,T>::adder &boost::spirit::qi::symbols<Char,T>::adder::operator ()<boost::spirit::_1_type>(const Str &,const T &) const' : cannot convert parameter 2 from 'const boost::spirit::_2_type' to 'const unsigned int &'
with
[
    Char=char,
    T=unsigned int,
    Str=boost::spirit::_1_type
]
Reason: cannot convert from 'const boost::spirit::_2_type' to 'const unsigned int'
No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
C:\Users\k\Coding\dashCompiler\spirit_test.cpp(10) : while compiling class template member function 'parser<Iter>::parser(constants_dictionary &)'
with
[
    Iter=std::_String_const_iterator<char,std::char_traits<char>,std::allocator<char>>
]
C:\Users\k\Coding\dashCompiler\spirit_test.cpp(21) : see reference to class template instantiation 'parser<Iter>' being compiled
with
[
    Iter=std::_String_const_iterator<char,std::char_traits<char>,std::allocator<char>>
]
(Apologies for the nasty VS2010 error-style)
What syntax am I supposed to be using to add (and later on, remove) symbols from this table?
This question has been answered before. However, there is quite a range of problems with your posted code, so I'll fix them up one by one to spare you unnecessary staring at pages of error messages.
The working code (plus verification of output) is here on liveworkspace.org.
Notes:
The semantic action must be a Phoenix actor, i.e. you need one of
boost::bind, phoenix::bind, std::bind,
phoenix::lambda<> or phoenix::function<>,
a function pointer, or a polymorphic callable object (as per the documentation).
I'd recommend phoenix::bind in this particular case, which I show below.
There was a mismatch between the parser's skipper and the start rule.
qi::char_ eats all characters. Combined with the skipper, this resulted in parse failure, because (obviously) the digits in the value were also being eaten by +qi::char_. I show one of many solutions, based on qi::lexeme[+qi::graph].
Use qi::lexeme to 'bypass' the skipper (i.e. to prevent +qi::graph from cutting across whitespace just because the skipper, well, skipped it).
qi::parse doesn't take a skipper; use qi::phrase_parse for that (the reason it appeared to work is that any trailing 'variadic' arguments are bound to the exposed attributes of the parser, which in this case are unspecified, and therefore qi::unused_type).
If you want to pass test.begin() and test.end() directly to qi::phrase_parse, you need to make it clear that you want const iterators. The more typical solution is to introduce explicitly typed variables (first and last, e.g.); see the sketch after the listing below.
#define BOOST_SPIRIT_USE_PHOENIX_V3
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix.hpp>
#include <iostream> // for std::cout
#include <string>

namespace qi  = boost::spirit::qi;
namespace phx = boost::phoenix;

typedef qi::symbols<char, unsigned int> constants_dictionary;

template <typename Iter> struct parser : qi::grammar<Iter, qi::space_type>
{
    parser(constants_dictionary &dict) : parser::base_type(start)
    {
        start = qi::lit("#") >> (qi::lexeme [+qi::graph] >> qi::uint_)
                    [ phx::bind(dict.add, qi::_1, qi::_2) ]
                ;
    }
    qi::rule<Iter, qi::space_type> start;
};

int main() {
    constants_dictionary dict;
    parser<std::string::const_iterator> prsr(dict);
    const std::string test = "#foo 3";

    if (qi::phrase_parse(test.begin(), test.end(), prsr, qi::space))
    {
        std::cout << "check: " << dict.at("foo") << "\n";
    }
}
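For completeness, the explicit-iterator variant mentioned in the notes would look roughly like this. This is only a sketch of main(), reusing the parser, headers, and dictionary from the listing above; and since the question also asks about removing symbols later on, qi::symbols has a remove member that is used analogously to add:
int main() {
    constants_dictionary dict;
    parser<std::string::const_iterator> prsr(dict);
    std::string test = "#foo 3";

    // Explicitly typed const iterators instead of passing begin()/end() directly.
    std::string::const_iterator first = test.begin(), last = test.end();
    if (qi::phrase_parse(first, last, prsr, qi::space))
    {
        std::cout << "check: " << dict.at("foo") << "\n";
    }

    // Removing a symbol again from plain code; inside a semantic action it can
    // be bound the same way as add, e.g. phx::bind(dict.remove, qi::_1).
    dict.remove("foo");
}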

What do I need to do to have both C++/CLI and unmanaged C++11 enum classes?

The new enum class type in C++11 shares its name with the C++/CLI enum class, but the two are quite different, and this is causing me problems.
I have a library written in C++11 containing several structures like (really simplified here):
// File.h
enum class MyEnum : unsigned int
{
    Val1,
    Val2
};

struct MyStruct
{
    MyEnum value;
    MyStruct(MyEnum v) : value(v) {}
};
I am trying to reach this code from a C++/CLI class library to expose it to .NET. I include the file like this:
#pragma unmanaged
#include "File.h"
#pragma managed
The problem is that the struct constructor taking the enum yields a compile error like:
error C3821: 'v': managed type or function cannot be used in an unmanaged function
suggesting that the compiler still interprets the enum class as a C++/CLI enum class, even though I am inside the unmanaged section where it really should be interpreted as a C++11 enum class. Is there anything I can do about this?
EDIT: I am using VS2012. Please let me know if VS2013 fixes this.
I have the same problem in VS2013. In my case, I'm including an unmanaged enum class, but the compiler generates errors because it thinks it's a managed enum class. With help from the comments on the question, I was able to fix it by removing all forward declarations of the enum. It looks like this is a compiler bug, where a forward declaration ignores the #pragma unmanaged directive.
I've created an example of how to reproduce the bug here:
#pragma managed(push, off)

enum class TestEnum
{
    One,
    Two,
    Three,
};

#include <vector>

enum class TestEnum; // Forward declaration after the actual declaration causes the bug

#pragma managed(pop)

int main(array<System::String ^> ^args)
{
    // error C3699: '&&' : cannot use this indirection on type 'TestEnum'
    std::vector<TestEnum> enums;
    return 0;
}
I've reported the issue to Microsoft here:
https://connect.microsoft.com/VisualStudio/feedback/details/1218131
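For reference, here is roughly what the repro looks like once the offending forward declaration is removed. This is only a sketch; in a real project every forward declaration of the enum has to go, not just this one:
#pragma managed(push, off)

enum class TestEnum
{
    One,
    Two,
    Three,
};

#include <vector>

// No forward declaration of TestEnum here (or anywhere else in the project).

#pragma managed(pop)

int main(array<System::String ^> ^args)
{
    std::vector<TestEnum> enums; // now compiles: TestEnum stays a native C++11 enum class
    enums.push_back(TestEnum::One);
    return 0;
}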

Resources