How do you convert System::String to std::string in C++ .NET?
There is a cleaner syntax if you're using a recent version of .NET:
#include "stdafx.h"
#include <string>
#include <msclr\marshal_cppstd.h>
using namespace System;
int main(array<System::String ^> ^args)
{
System::String^ managedString = "test";
msclr::interop::marshal_context context;
std::string standardString = context.marshal_as<std::string>(managedString);
return 0;
}
This also gives you better clean-up in the face of exceptions.
There is an MSDN article covering various other conversions.
And in response to the "easier way" in later versions of C++/CLI, you can do it without the marshal_context. I know this works in Visual Studio 2010; not sure about prior to that.
#include "stdafx.h"
#include <string>
#include <msclr\marshal_cppstd.h>
using namespace msclr::interop;
int main(array<System::String ^> ^args)
{
System::String^ managedString = "test";
std::string standardString = marshal_as<std::string>(managedString);
return 0;
}
C# uses the UTF-16 format for its strings. So, besides just converting the types, you should also be conscious of the string's actual encoding.
When compiling for Multi-byte Character Set, Visual Studio and the Win API assume an 8-bit Windows code page (Windows-28591, i.e. Latin-1) rather than UTF-8.
When compiling for Unicode Character Set, Visual Studio and the Win API assume UTF-16.
So, you must convert the string from UTF-16 to UTF-8 as well, not just convert it to std::string. This becomes necessary when working with multi-byte characters, as in some non-Latin languages.
The idea is to decide that std::wstring always represents UTF-16, and std::string always represents UTF-8. This isn't enforced by the compiler; it's more of a good policy to have.
#include "stdafx.h"
#include <string>
#include <msclr\marshal_cppstd.h>
using namespace System;
int main(array<System::String ^> ^args)
{
System::String^ managedString = "test";
msclr::interop::marshal_context context;
//Actual format is UTF16, so represent as wstring
std::wstring utf16NativeString = context.marshal_as<std::wstring>(managedString);
//C++11 format converter
std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> convert;
//convert to UTF8 and std::string
std::string utf8NativeString = convert.to_bytes(utf16NativeString);
return 0;
}
Or, in a more compact syntax:
int main(array<System::String ^> ^args)
{
    System::String^ managedString = "test";

    msclr::interop::marshal_context context;
    std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> convert;
    std::string utf8NativeString = convert.to_bytes(context.marshal_as<std::wstring>(managedString));

    return 0;
}
stdString = toss(systemString);

static std::string toss( System::String ^ s )
{
    // convert .NET System::String to std::string
    // (note: StringToHGlobalAnsi converts to the ANSI code page,
    // so characters outside that code page are lost)
    const char* cstr = (const char*) (Marshal::StringToHGlobalAnsi(s)).ToPointer();
    std::string sstr = cstr;
    Marshal::FreeHGlobal(System::IntPtr((void*)cstr));
    return sstr;
}
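If you need to keep non-ANSI characters, a sketch of the Unicode-preserving analogue (not part of the original answer) uses Marshal::StringToHGlobalUni and std::wstring instead:

static std::wstring tossWide( System::String ^ s )
{
    using System::Runtime::InteropServices::Marshal;
    // StringToHGlobalUni copies the string as null-terminated UTF-16
    System::IntPtr p = Marshal::StringToHGlobalUni(s);
    std::wstring ws(static_cast<const wchar_t*>(p.ToPointer()));
    Marshal::FreeHGlobal(p);
    return ws;
}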
I had too many ambiguous errors showing up with the above answers (yes, I'm a C++ noob).
This worked for me for sending a string from C# to C++/CLI.
C#:

bool result;
result = mps.Import(mpsToolName);

C++/CLI, the receiving function:

bool ManagedMPS::Import(System::String^ mpsToolNameTest)
{
    std::string mpsToolName;
    mpsToolName = toStandardString(mpsToolNameTest);
    // ...
}

And the function that converts String^ to std::string:

static std::string toStandardString(System::String^ string)
{
    using System::Runtime::InteropServices::Marshal;
    System::IntPtr pointer = Marshal::StringToHGlobalAnsi(string);
    char* charPointer = reinterpret_cast<char*>(pointer.ToPointer());
    // Construct from the null-terminated buffer; using string->Length here
    // would be wrong whenever the ANSI conversion changes the byte count.
    std::string returnString(charPointer);
    Marshal::FreeHGlobal(pointer);
    return returnString;
}
ON FURTHER RESEARCH, it appears that this is cleaner and safer.
I switched to using this method instead.

// requires #include <msclr\marshal_cppstd.h> and using namespace msclr::interop;
std::string Utils::ToUnmanagedString(String^ stringIncoming)
{
    std::string unmanagedString = marshal_as<std::string>(stringIncoming);
    return unmanagedString;
}
If you are creating a Windows Runtime Component, you can use:

String^ systemString = "Hello";
std::wstring ws1(systemString->Data());
// Caution: this byte-wise narrowing only preserves ASCII characters.
std::string standardString(ws1.begin(), ws1.end());
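For non-ASCII content, here is a sketch (not from the original answer) that converts the UTF-16 data to UTF-8 with the Win32 API instead:

#include <windows.h>
#include <string>

// Convert a UTF-16 wide string to a UTF-8 std::string.
// Minimal error handling, for illustration only.
std::string ToUtf8(const std::wstring& utf16)
{
    if (utf16.empty()) return std::string();
    // First call computes the required buffer size in bytes.
    int bytes = ::WideCharToMultiByte(CP_UTF8, 0, utf16.data(),
                                      static_cast<int>(utf16.size()),
                                      nullptr, 0, nullptr, nullptr);
    std::string utf8(bytes, '\0');
    ::WideCharToMultiByte(CP_UTF8, 0, utf16.data(),
                          static_cast<int>(utf16.size()),
                          &utf8[0], bytes, nullptr, nullptr);
    return utf8;
}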
I'm trying to include <arpa/inet.h> in a low-level library so that I have access to the hton* and ntoh* functions in the library. The low-level library gets called into by higher-level code running a Boost asio socket. I'm aware Boost asio contains the hton* and ntoh* functions, but I'd like to avoid linking all of Boost asio to the library since hton*/ntoh* are all I need.
However, if I simply include <arpa/inet.h> in the low-level library, 0 bytes will always be sent from the Boost asio socket. Confirmed by Wireshark.
Here's the class where I'd like to include <arpa/inet.h> but not Boost. If <arpa/inet.h> is included, 0 bytes will be sent.
#pragma pack(push, 1)
#include "PduHeader.h"
#include <arpa/inet.h>

class ClientInfoPdu
{
public:
    ClientInfoPdu(const uint16_t _client_receiver_port)
    {
        set_client_receiver_port(_client_receiver_port);
    }

    PduHeader pdu_header{CLIENT_INFO_PDU, sizeof(client_receiver_port)};

    inline void set_client_receiver_port(const uint16_t _client_receiver_port)
    {
        //client_receiver_port = htons(_client_receiver_port);
        client_receiver_port = _client_receiver_port;
    }

    inline uint16_t get_client_receiver_port()
    {
        return client_receiver_port;
    }

    inline size_t get_total_size()
    {
        return sizeof(PduHeader) + pdu_header.get_pdu_payload_size();
    }

private:
    uint16_t client_receiver_port;
};
#pragma pack(pop)
Here's the higher-level code that includes Boost and attempts to send the data via a socket. The printout indicates 5 bytes were sent; however, 0 bytes were actually sent.
#include "ServerConnectionThread.h"
#include "config/ClientConfig.h"
#include "protocol_common/ClientInfoPdu.h"
#include <boost/asio.hpp>
#include <unistd.h>
using boost::asio::ip::udp;
void ServerConnectionThread::execute()
{
boost::asio::io_service io_service;
udp::endpoint remote_endpoint =
udp::endpoint(boost::asio::ip::address::from_string(SERVER_IP), SERVER_PORT);
udp::socket socket(io_service);
socket.open(udp::v4());
ClientInfoPdu client_info_pdu = ClientInfoPdu(RECEIVE_PORT);
while (true)
{
uint16_t total_size = client_info_pdu.get_total_size();
socket.send_to(boost::asio::buffer(&client_info_pdu, total_size), remote_endpoint);
printf("sent %u bytes\n", total_size);
usleep(1000000);
}
}
Again, simply removing "#include <arpa/inet.h>" will cause this code to function as expected and send 5 bytes per packet.
How is ClientInfoPdu defined? This looks like it is likely UB:

boost::asio::buffer(&client_info_pdu, total_size)

The thing is, total_size is sizeof(PduHeader) + pdu_header.get_pdu_payload_size() (so sizeof(PduHeader) + 2).
The first problem is that you're mixing access modifiers, which kills the POD/standard-layout properties of your types.
#include <type_traits>
static_assert(std::is_standard_layout_v<PduHeader> && std::is_trivial_v<PduHeader>);
static_assert(std::is_standard_layout_v<ClientInfoPdu> && std::is_trivial_v<ClientInfoPdu>);
This will fail to compile. Treating the types as POD (as you do) invokes
Undefined Behaviour.
This is likely the explanation for the fact that "it stops working" with some changes. It never worked: it might just accidentally have appeared to work, but it was undefined behaviour.
It's not easy to achieve POD-ness while still getting the convenience of the
constructors. In fact, I don't think that's possible. In short, if you want to
treat your structs as C-style POD types, make them... C-style POD types.
Another thing: a possible implementation of PduHeader that I can see working for you looks a bit like this:
enum MsgId { CLIENT_INFO_PDU = 0x123 };

struct PduHeader {
    MsgId  id;
    size_t payload_size;

    size_t get_pdu_payload_size() const { return payload_size; }
};
Here, again you might have/need endianness conversions.
Suggestion
In short, if you want this to work, I'd say keep it simple.
Instead of creating non-POD types all over the place that are responsible for endianness conversion by adding getters/setters for each value, why not create a simple user-defined type that does this always, and use it instead?
struct PduHeader {
    Short id;            // or e.g. uint8_t
    Long  payload_size;
};

struct ClientInfoPdu {
    PduHeader pdu_header; // or inheritance, same effect
    Short     client_receiver_port;
};
Then just use it as a POD struct:
while (true) {
    ClientInfoPdu client_info_pdu;
    init_pdu(client_info_pdu);

    auto n = socket.send_to(boost::asio::buffer(&client_info_pdu, sizeof(client_info_pdu)), remote_endpoint);
    printf("sent %zu bytes\n", n);
    std::this_thread::sleep_for(1s);
}
The function init_pdu can be implemented with overloads per submessage:
void init_pdu(ClientInfoPdu& msg) {
    msg.pdu_header.id = CLIENT_INFO_PDU;
    msg.pdu_header.payload_size = sizeof(msg);
}
There are variations on this where it can become a template or take a PduHeader& (if your message inherits instead of aggregates), as sketched below, but the basic principle is the same.
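For instance, a hypothetical template variant (building on the struct definitions above; not part of the original answer) could fill the header for any message type that aggregates a PduHeader named pdu_header:

template <typename Msg>
void init_pdu(Msg& msg, MsgId id) {
    msg.pdu_header.id = id;
    msg.pdu_header.payload_size = sizeof(Msg);
}

// usage: init_pdu(client_info_pdu, CLIENT_INFO_PDU);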
Endianness Conversion
Now you'll notice I avoided using uint32_t/uint16_t directly (though uint8_t is fine because it doesn't need byte ordering). Instead, you could define Long and Short as simple POD wrappers around them:
struct Short {
    operator uint16_t() const { return ntohs(value); }
    Short& operator=(uint16_t v) { value = htons(v); return *this; }

private:
    uint16_t value;
};

struct Long {
    operator uint32_t() const { return ntohl(value); }
    Long& operator=(uint32_t v) { value = htonl(v); return *this; }

private:
    uint32_t value;
};
The assignment and conversion operators mean that you can use them just like another uint32_t/uint16_t, except that the necessary conversions are always done.
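A minimal usage sketch of what that buys you:

Short port;
port = 6868;                // stored internally in network order via htons
uint16_t host_order = port; // read back via ntohs, yields 6868 again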
If you want to stand on the shoulders of giants instead, you can use the better types from Boost Endian, which also has lots more advanced facilities.
DEMO
Live On Coliru
#include <type_traits>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <arpa/inet.h>

using namespace std::chrono_literals;

#pragma pack(push, 1)
enum MsgId { CLIENT_INFO_PDU = 0x123 };

struct Short {
    operator uint16_t() const { return ntohs(value); }
    Short& operator=(uint16_t v) { value = htons(v); return *this; }

private:
    uint16_t value;
};

struct Long {
    operator uint32_t() const { return ntohl(value); }
    Long& operator=(uint32_t v) { value = htonl(v); return *this; }

private:
    uint32_t value;
};

static_assert(std::is_standard_layout_v<Short>);
static_assert(std::is_trivial_v<Short>);
static_assert(std::is_standard_layout_v<Long>);
static_assert(std::is_trivial_v<Long>);

struct PduHeader {
    Short id; // or e.g. uint8_t
    Long  payload_size;
};

struct ClientInfoPdu {
    PduHeader pdu_header; // or inheritance, same effect
    Short     client_receiver_port;
};

void init_pdu(ClientInfoPdu& msg) {
    msg.pdu_header.id = CLIENT_INFO_PDU;
    msg.pdu_header.payload_size = sizeof(msg);
}

static_assert(std::is_standard_layout_v<PduHeader> && std::is_trivial_v<PduHeader>);
static_assert(std::is_standard_layout_v<ClientInfoPdu> && std::is_trivial_v<ClientInfoPdu>);
#pragma pack(pop)

#include <boost/asio.hpp>
//#include <unistd.h>

using boost::asio::ip::udp;

#define SERVER_IP "127.0.0.1"
#define SERVER_PORT 6767
#define RECEIVE_PORT 6868

struct ServerConnectionThread {
    void execute() {
        boost::asio::io_service io_service;
        udp::endpoint const remote_endpoint =
            udp::endpoint(boost::asio::ip::address::from_string(SERVER_IP), SERVER_PORT);
        udp::socket socket(io_service);
        socket.open(udp::v4());

        while (true) {
            ClientInfoPdu client_info_pdu;
            init_pdu(client_info_pdu);

            auto n = socket.send_to(boost::asio::buffer(&client_info_pdu, sizeof(client_info_pdu)), remote_endpoint);
            printf("sent %zu bytes\n", n);
            std::this_thread::sleep_for(1s);
        }
    }
};

int main() {}
I am using the following code for a class project, but for some reason the #include <string> is not working, and the compiler is flagging every declaration that uses string. What did I do wrong?
#ifndef MEMORY_H
#define MEMORY_H
#include <string>

class Memory
{
private:
    string mem[1000];

public:
    Memory()
    {
        for each(string s in mem)
        {
            s = "nop";
        }
    };

    string get(int loc)
    {
        return mem[loc];
    };

    void set(int loc, string input)
    {
        mem[loc] = input;
    }
};
#endif
string is part of the std namespace, so instead of string, you need:
std::string
Add this after your include statement:
using namespace std;
(Be aware that using namespace std; in a header propagates to every file that includes it, so qualifying names with std:: is generally the safer habit.)
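Note also that for each is a Microsoft extension, and its loop variable is a copy, so the array is never actually initialized. A standard C++ sketch that fixes both issues along with the namespace problem:

#include <string>

class Memory
{
private:
    std::string mem[1000];

public:
    Memory()
    {
        // Range-based for with a reference, so the assignment
        // actually modifies the array element.
        for (std::string& s : mem)
            s = "nop";
    }

    std::string get(int loc)
    {
        return mem[loc];
    }

    void set(int loc, std::string input)
    {
        mem[loc] = input;
    }
};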
The following short program runs perfectly in VS 2013 and reaches the marked point. But in Xcode the compiler reports an error due to an ambiguous constructor. How can I work around this?
#include <iostream>
#include <string>

class atest
{
public:
    explicit operator const char *()
    {
        return "";
    }

    template<class T> operator T()
    {
    }

    operator std::string()
    {
        return std::string("Huhuhu");
    }

    template<class T> atest &operator =(T value)
    {
    }

    atest &operator =(const std::string &value)
    {
        return *this; // I want to reach this point
    }
};
int main(int argc, char* argv[])
{
    atest tst;
    // auto a = (std::string)tst;
    std::string astr;
    // do some stuff
    astr = tst; // I want to keep this line
    return 0;
}
Clang is not able to distinguish between the different constructors, whereas VS2013 picks the right one. I am now searching for a way to exclude the "const char *" template from the assignment operator.
std::string has multiple constructors taking single arguments, and since you provide both a conversion operator to std::string and a generic any-type conversion operator, the compiler simply doesn't know which one to pick.
I think you have written far too many overloaded functions. The only function you need is this:
operator std::string()
{
    return std::string("Huhuhu");
}
Comment out all the rest and your code will work just fine.
I broke down a problem I already tried to explain here into the following example:
#include <iostream>
#include <string>

class atest
{
public:
    operator std::string()
    {
        return std::string("Huhuhu");
    }

    operator int()
    {
        return 42;
    }
};

int main(int argc, char* argv[])
{
    atest tst;

    std::string astr;
    astr = tst;

    int i = 0;
    i = tst;

    return 0;
}
std::string seems to have several constructors and assignment overloads, which even cover int (via its char overload). I have a class which needs to be convertible to std::string but also to an integral type. As the assignment (=) operator cannot be overloaded outside a class definition, I have no idea how to get the above program running.
It is bad design, but it is worth noting that VS2013 has no problem with the above code.
You can use an explicit conversion:

explicit operator std::string() // note the added "explicit"
{
    return std::string("Huhuhu");
}
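Note that once the operator is explicit, implicit assignment like astr = tst will no longer use it; the conversion has to be requested explicitly. A small usage sketch:

atest tst;

// explicit conversions still work in direct initialization and casts
std::string a(tst);
std::string astr;
astr = static_cast<std::string>(tst);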
The Microsoft documentation for Bluetooth APIs such as BluetoothGetDeviceInfo provides instructions for calling these functions using either static or dynamic imports.
The static import, linking with bthprops.lib, works fine:
#include <windows.h>
#include <BluetoothAPIs.h>
#include <iostream>

int main(int argc, char** argv)
{
    BLUETOOTH_DEVICE_INFO binfo = {};
    binfo.dwSize = sizeof binfo;
    binfo.Address.ullLong = 0xBAADDEADF00Dull;

    auto result = ::BluetoothGetDeviceInfo(nullptr, &binfo);

    std::wcout << L"BluetoothGetDeviceInfo returned " << result
               << L"\nand the name is \"" << binfo.szName << "\"\n";
    return 0;
}
But this isn't ideal in ultra-portable code, because the documentation says these functions are not supported prior to Windows XP SP2. So one should use dynamic linking and recover from missing functions. However, dynamically loading bthprops.dll as instructed by the MSDN docs fails:
decltype(::BluetoothGetDeviceInfo)* pfnBluetoothGetDeviceInfo;

bool LoadBthprops( void )
{
    auto dll = ::LoadLibraryW(L"bthprops.dll");
    if (!dll) return false;
    pfnBluetoothGetDeviceInfo = reinterpret_cast<decltype(pfnBluetoothGetDeviceInfo)>(::GetProcAddress(dll, "BluetoothGetDeviceInfo"));
    return pfnBluetoothGetDeviceInfo != nullptr;
}
How should one dynamically link to these functions?
Apparently this fact is pretty well known to Google, but not to MSDN. If you want to dynamically load these functions, use LoadLibrary("bthprops.cpl"), which is the correct DLL name, contrary to the nice table in the function documentation.
This works:
decltype(::BluetoothGetDeviceInfo)* pfnBluetoothGetDeviceInfo;

bool LoadBthprops( void )
{
    auto dll = ::LoadLibraryW(L"bthprops.cpl");
    if (!dll) return false;
    pfnBluetoothGetDeviceInfo = reinterpret_cast<decltype(pfnBluetoothGetDeviceInfo)>(::GetProcAddress(dll, "BluetoothGetDeviceInfo"));
    return pfnBluetoothGetDeviceInfo != nullptr;
}
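A minimal usage sketch (hypothetical; it mirrors the static-import example above) once the loader succeeds:

#include <iostream>

int main()
{
    if (LoadBthprops())
    {
        BLUETOOTH_DEVICE_INFO binfo = {};
        binfo.dwSize = sizeof binfo;
        binfo.Address.ullLong = 0xBAADDEADF00Dull;

        // Call through the function pointer instead of the import library.
        auto result = pfnBluetoothGetDeviceInfo(nullptr, &binfo);
        std::wcout << L"BluetoothGetDeviceInfo returned " << result << L"\n";
    }
    // else: Bluetooth APIs unavailable (e.g. pre-XP SP2); degrade gracefully.
    return 0;
}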