Converting std::string to upper case without modifying the source - C++11

I'm familiar with std::transform(data.begin(), data.end(), data.begin(), ::toupper), which converts the string in data to all uppercase in place. I'm wondering, however, whether there is a clean way to get an all-uppercase version of a string without modifying the source. The obvious workaround of making a copy of the source, calling std::transform on the copy, and then returning the copy feels like a kludge, and I'm hoping there's a more efficient and elegant solution.
I am looking for a pure C++11 solution, without depending on any external libraries, even widely available ones such as Boost.

Per Igor's comment above, the solution is to use a std::back_inserter on the destination: std::transform(src.begin(), src.end(), std::back_inserter(dest), ::toupper);
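A minimal sketch of how this can be wrapped up (to_upper_copy is a hypothetical helper name; the unsigned char cast guards against the undefined behavior plain ::toupper has for negative char values):

#include <algorithm>
#include <cctype>
#include <iterator>
#include <string>

// Returns an upper-cased copy, leaving src untouched.
std::string to_upper_copy(const std::string& src) {
    std::string dest;
    dest.reserve(src.size());  // avoid reallocations while appending
    std::transform(src.begin(), src.end(), std::back_inserter(dest),
                   [](unsigned char c) { return std::toupper(c); });
    return dest;  // a local returned by value is moved in C++11
}

Since the result is a local returned by value, callers pay for at most a move, never a second copy.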

Related

Boost.Test: Specify function name instead of main

I am working on a project that uses the CMake build system. By default, CMake provides a nice framework for generating a single executable from a set of C/C++ files; the CMake command is called create_test_sourcelist. What it does is generate a C/C++ dispatcher with a single main entry point that calls the other C/C++ code.
So I have a bunch of C/C++ files with function signatures such as int TestFunctionality1(int argc, char *argv[]), which I'd like to keep as-is, unless of course that means even more work.
How can I keep this system in place and start using BOOST_CHECK? I could not figure out how to tell Boost that the actual entry point is not named int main(int argc, char *argv[]).
My goal is to have a framework for integration with Jenkins; since the project already uses Boost, I believe this should be doable without rewriting the existing CMake test suite and turning every test into an independent main function.
Unfortunately, it seems there is no straightforward and clean way to do that.
On one side, the only useful thing create_test_sourcelist does is generate a test driver: a pretty simple, naive C/C++ translation unit with little room for hacking or extension, based on ${cmake-prefix}/share/cmake/Templates/TestDriver.cxx.in (and there is no way to select a different template).
On the other side, Boost UTF offers its own test runner (the equivalent of a test driver in CMake's terminology), but every variant of it (static, dynamic, single-header) contains a definition of main() one way or another (even in the case of the external test runner)…
…so you end up with two main() functions and no way to choose just one.
Digging a little into the sources of create_test_sourcelist, I really wonder why it was implemented as a built-in command and not as an ordinary (external) CMake module: it does nothing special that couldn't be implemented in the CMake language. The command is really simplistic: it doesn't check that the required functions actually exist (you'll get compile errors if something is wrong), and there is no way to flexibly customize the output source file at all. All it does is strip paths and extensions from a list of source files and substitute them into the mentioned template using an ordinary configure_file()…
So personally, I see no reason to use it at all. That's why I've done the same (but better ;) job in the module mentioned in the comment above.
If you still want to use that command: the generated test driver is completely useless if you want to use Boost UTF. You need to provide your own initialization function anyway (and it is not main()), in which you can manually register your test cases into the master test suite (or organize your tests into some more complex tree). In that case there is absolutely no reason to use create_test_sourcelist! All you can get from it is the list of sources to hand to add_executable()… but that is much easier to do with set()… The command can't even give you the list of test functions to call (actually a list of filenames without extensions); it is used internally and not exported. Do you still want to use that command?
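For reference, a minimal sketch of what manual registration looks like with Boost UTF (assuming the dynamic-library variant; RunTestFunctionality1 is a hypothetical wrapper adapting one of the existing int(int, char**) tests to the void() signature Boost expects):

#define BOOST_TEST_DYN_LINK
#include <boost/test/unit_test.hpp>

// Stand-in for one of the existing tests, normally defined in its own file.
int TestFunctionality1(int, char*[]) { return 0; }

// Wrapper adapting the CMake-driver signature to a Boost test case.
static void RunTestFunctionality1() {
    BOOST_CHECK_EQUAL(TestFunctionality1(0, nullptr), 0);
}

// Custom initialization function (not main()): registers the test cases
// into the master test suite.
static bool init_unit_test() {
    boost::unit_test::framework::master_test_suite().add(
        BOOST_TEST_CASE(&RunTestFunctionality1));
    return true;
}

int main(int argc, char* argv[]) {
    return boost::unit_test::unit_test_main(&init_unit_test, argc, argv);
}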

Scala dynamic class management

I would like to know if the following is possible in Scala (but I think the question also applies to Java):
Create a Scala file dynamically (ok, no problem here)
Compile it (I don't think this would be a real problem)
Load/Unload the new class dynamically
Aside from knowing whether dynamic code loading/reloading is possible (it's possible in Java, so I assume it's feasible in Scala too), I would also like to know the implications in terms of performance degradation (I could have many, many classes; no name clashes, but really a lot of them!).
TIA!
P.S.: I know other questions about class loading in Scala exist, but I haven't been able to find an answer about performance!
Yes, everything you want to do is certainly possible. You might like to take a look at ScalaMock, which is an example of creating Scala source code dynamically. And at SBT which is an example of calling the compiler from code. And then there are many different systems that load classes dynamically - look at the documentation for loadLibrary as a starting point.
But, depending on what you want to achieve, you might like to look at Scala Macros instead. They provide the same kind of flexibility as you would get by generating source code and then compiling it, but without many of the downsides of that approach. The original version of ScalaMock used to work by generating source code, but I'm in the process of moving to using macros instead.
It's all possible in Scala, as is clearly demonstrated by the REPL. It's even going to be relatively easy with Scala 2.10.

Pantheios wide characters?

I'm trying to integrate logging into my Windows C++ application, and I wanted to use Pantheios, as it generally gets very favorable comments. That said, all the included examples use macros like PANTHEIOS_LITERAL_STRING etc. for wrapping string literals, and require typedefs like:
typedef std::basic_string<PAN_CHAR_T> string_t;
to compile correctly. I think this is ugly and would prefer not to use these typedefs.
Here's an example: http://www.pantheios.org/doc/html/cpp_2misc_2example_8cpp_8misc_8strings_2example_8cpp_8misc_8strings_8cpp-example.html
I tried compiling Pantheios with PANTHEIOS_USE_WIDE_STRINGS disabled, but I get lots of build errors. Any ideas?
As you've observed, the file backend assumes multibyte output in a multibyte build and wide output in a wide build by default, but IIRC there are initialisation options (for be.file) that allow you to force it one way or the other, regardless of how you're building.
FWIW, I would think the examples have to take all permutations into account, and that's why the "ugliness" you report is there. If you're only building for one character encoding or the other, you don't have to do that. It's much like examples of Windows coding that use TCHAR and all the _tcsXXX() functions: you don't have to do that unless you want your code to work with both. A sketch of that analogy follows.
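For comparison, a minimal sketch of that TCHAR idiom (Windows-specific; the same source builds as narrow or wide depending on whether UNICODE/_UNICODE are defined, just as PAN_CHAR_T tracks the Pantheios build configuration):

#include <tchar.h>

int _tmain(int argc, TCHAR* argv[]) {
    // _T() expands to "..." or L"..." at compile time, the same trick
    // PANTHEIOS_LITERAL_STRING plays for Pantheios literals.
    const TCHAR* msg = _T("hello");
    size_t n = _tcslen(msg);  // strlen or wcslen, depending on the build
    return (int)n;
}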
HTH

Where is the source code for isnan?

Because of the layers of standards, the include files for C++ are a rat's nest. I was trying to figure out what __isnan actually calls, and couldn't find an actual definition anywhere.
So I just compiled with -S to see the assembly, and if I write:
#include <ieee754.h>
void f(double x) {
    if (__isinf(x)) ...
    if (__isnan(x)) ...
}
Both of these routines are called. I would like to see the actual definition, and possibly refactor things like this to be inline, since it should be just a bit comparison, albeit one that is hard to achieve when the value is in a floating point register.
Anyway, whether or not it's a good idea, the question stands: WHERE is the source code for __isnan(x)?
Glibc has versions of the code in the sysdeps folder for each of the systems it supports. The one you’re looking for is in sysdeps/ieee754/dbl-64/s_isnan.c. I found this with git grep __isnan.
(While C++ headers include code for templates, functions from the C library will not; you have to look inside glibc or whichever C library you're using.)
Here, for the master head of glibc, for instance.
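For reference, a minimal sketch of the bit-comparison approach glibc's version boils down to (written portably with memcpy instead of glibc's EXTRACT_WORDS macro; in IEEE 754 binary64, a value is NaN when all exponent bits are set and the mantissa is nonzero):

#include <cstdint>
#include <cstring>

bool my_isnan(double x) {  // hypothetical stand-in for __isnan
    std::uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);                // well-defined type pun
    std::uint64_t exponent = (bits >> 52) & 0x7FF;      // 11 exponent bits
    std::uint64_t mantissa = bits & 0xFFFFFFFFFFFFFull; // 52 mantissa bits
    return exponent == 0x7FF && mantissa != 0;
}

Note that the question's caveat still applies: forcing the value through memory is exactly the store/reload the author hoped to avoid while the value is sitting in a floating-point register.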

Is there any downside to redundant qualifiers? Any benefit?

For example, referencing something as System.Data.Datagrid as opposed to just Datagrid. Please provide examples and explanation. Thanks.
The benefit is that you don't need to add an import for everything you use, especially if it's the only thing you use from a particular namespace; it also prevents name collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
You're being very explicit about the type you're referencing, and that is a benefit. At the same time, though, you're giving up code clarity, which for me is clearly a downside, as I want code to be readable and understandable. I go for the short version unless I have a conflict between namespaces that can only be resolved by referencing the classes explicitly, and even then I'd rather make an alias with the using keyword:
using Datagrid = System.Data.Datagrid;
Actually, the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid having to add more using statements, especially if introducing another using would cause problems with type resolution. Fully qualified identifiers exist so that you can be explicit when you need to be explicit, but if the class's namespace is clear, then the plain DataGrid version is clearer to many.
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention for lifecycle and collection), e.g.
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
The reason its use is prevalent in the .NET world is auto-generated code, such as designer markup. There it is better to fully qualify class names because of possible conflicts with other classes you may have in your own code.
If you have a tool like ReSharper, it will actually tell you which fully qualified references are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut and paste code across your various code bases, fully qualifying names is a must (then again, why would you want to cut and paste all the time? It's a bad form of code reuse!).
I don't think there is really a downside; it's just readability versus time spent typing. In general, if you don't have namespaces with ambiguous types, I don't think it's really needed. Another thing to consider is the level of use: if you have one method that uses reflection and you're fine with typing System.Reflection ten times, it's not a big deal, but if you plan on using a namespace a lot, then I would recommend an import.
Depending on your situation, extra qualifiers will generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC, for example:
struct A {
    int A::b;  // warning: extra qualification 'A::' on member 'b'
};
