Access Weight and Bias with nn::sequential - libtorch

If I define std::vector<torch::nn::Linear> linear_layers; and fill this vector with some torch::nn::Linear objects, then I can access the weight and bias values via linear_layers[k].weight and linear_layers[k].bias. The same is available with other layer types, e.g. torch::nn::Conv2d.
If I create my network using nn::Sequential and then push back either Linear or Conv2d layers, I cannot access the weight and bias directly. Now, my question is: how can I access the weight and bias values of each layer when I have used nn::Sequential?
Thanks,
Afshin

Here is the solution [see the link https://discuss.pytorch.org/t/common-class-of-linear-conv-etc/39987/8 ]:
#include <torch/torch.h>
#include <iostream>

using namespace torch;
using namespace torch::nn;

int main()
{
    auto net = Sequential(Conv2d(1 /*input channels*/, 1 /*output channels*/, 2 /*kernel size*/),
                          Conv2d(1, 1, 2));
    for (auto& p : net->named_parameters()) {
        NoGradGuard no_grad;
        // Access the name.
        std::cout << p.key() << std::endl;
        // Access the weight and bias tensors.
        p.value().zero_(); // set all entries to zero
        std::cout << p.value() << std::endl;
    }
    return 0;
}
The layers of a Sequential have the naming convention <layer index>.<parameter name>, e.g. see the console output:
0.weight # name of the layer
(1,1,.,.) =
0 0
0 0
[ Variable[CPUFloatType]{1,1,2,2} ]
0.bias
0
[ Variable[CPUFloatType]{1} ]
1.weight
(1,1,.,.) =
0 0
0 0
[ Variable[CPUFloatType]{1,1,2,2} ]
1.bias
0
[ Variable[CPUFloatType]{1} ]

Related

Boost size of largest connected component

I know how to calculate the total number of connected components with Boost, but is there an efficient way to compute the size of the largest connected component using Boost's graph library?
I think the most efficient way is to replace the component map with a custom type.
I created a small WritePropertyMap to do that:
template <typename V>
struct Mapper {
    using Id          = int; // component id
    using Cardinality = int;
    using Map         = boost::container::flat_map<Id, Cardinality>;
    using Value       = Map::value_type;

    Map& storage;

    friend void put(Mapper& m, V const& /*v*/, Id id) { m.storage[id] += 1; }

    Value largest() const {
        return not storage.empty()
            ? *max_element(begin(storage), end(storage),
                           [](Value const& a, Value const& b) {
                               return a.second < b.second;
                           })
            : Value{};
    }
};
We need to tell Boost about our property map:
template <typename V> struct boost::property_traits<Mapper<V>> {
    using category   = boost::writable_property_map_tag;
    using key_type   = V;
    using value_type = int;
};
Note
The separation between storage and property map is because property maps are passed by value - and should be cheap to copy.
Now we can use it, adapting the library example slightly:
Live On Coliru
Mapper<V>::Map result;
Mapper<V>      mapper{result};

int num = connected_components(g, mapper);

auto [id, cardinality] = mapper.largest();
std::cout << "Largest component #" << id << " (out of " << num
          << " components) has " << cardinality << " vertices\n";
Prints
Largest component #0 (out of 3 components) has 3 vertices
This matches the expected output.
BONUS
If you have an expected number of components, you may be able to optimize storage by using small_vector/static_vector, e.g.
using Value = std::pair<Id, Cardinality>;
using Map = boost::container::flat_map<
Id, Cardinality, std::less<>,
boost::container::small_vector<Value, 10>>;
This way, unless you have more than 10 components, you will never see a dynamic allocation for the mapper storage.

boost::asio::async_read_until with custom match_char to accept only JSON format

I've been trying to change the match_char function to accept only JSON messages when reading data from a socket.
I have 2 implementations (one does not work and the other one works but I don't think it's efficient).
1- First approach (working)
typedef boost::asio::buffers_iterator<boost::asio::streambuf::const_buffers_type> buffer_iterator;

static std::pair<buffer_iterator, bool> match_json2(const buffer_iterator begin,
                                                    const buffer_iterator end) {
    buffer_iterator i = begin;
    while (i != end) {
        if ((*i == ']') || (*i == '}')) {
            return std::make_pair(i, true);
        }
        ++i;
    }
    return std::make_pair(i, false);
}
With this definition I read in a loop and reconstruct the JSON. This version works, but if I receive a message that is not valid JSON, I stay in the loop, can never clear tmp_response, and never recover from it...
std::string read_buffer_string() {
    std::string response;
    bool keepReading = true;
    while (keepReading) {
        std::string tmp_response;
        async_read_until(s, ba::dynamic_buffer(tmp_response), match_json2, yc);
        if (!tmp_response.empty()) {
            response += tmp_response;
            if (nlohmann::json::accept(response)) {
                keepReading = false;
            }
        }
    }
    return response;
}
2- Second approach (not working). Ideally I would like something like this one. (This implementation doesn't work because the begin iterator doesn't always point to the start of the message - I guess some data has already been transferred to the buffer - and therefore match_json returns invalid values.)
static std::pair<buffer_iterator, bool> match_json(const buffer_iterator begin,
                                                   const buffer_iterator end) {
    buffer_iterator i = begin;
    while (i != end) {
        if ((*i == ']') || (*i == '}')) {
            std::string _message(begin, i);
            std::cout << _message << std::endl;
            if (nlohmann::json::accept(_message)) {
                return std::make_pair(i, true);
            }
        }
        ++i;
    }
    return std::make_pair(i, false);
}
And then call it like this:
std::string read_buffer_string() {
    std::string response;
    async_read_until(s, ba::dynamic_buffer(response), match_json, yc);
    return response;
}
Does anybody know a more efficient way to do it?
Thanks in advance! :)
Of course, right after posting my other answer I remembered that Boost has accepted Boost JSON in 1.75.0.
It does stream parsing way more gracefully: https://www.boost.org/doc/libs/1_75_0/libs/json/doc/html/json/ref/boost__json__stream_parser.html#json.ref.boost__json__stream_parser.usage
It actually deals with trailing data as well!
stream_parser p; // construct a parser
std::size_t n; // number of characters used
n = p.write_some( "[1,2" ); // parse some of a JSON
assert( n == 4 ); // all characters consumed
n = p.write_some( ",3,4] null" ); // parse the remainder of the JSON
assert( n == 6 ); // only some characters consumed
assert( p.done() ); // we have a complete JSON
value jv = p.release(); // take ownership of the value
I would also submit that this could be a better match for a CompletionCondition: see https://www.boost.org/doc/libs/1_75_0/doc/html/boost_asio/reference/read/overload3.html
Here's an implementation that I tested with:
template <typename Buffer, typename SyncReadStream>
static size_t read_json(SyncReadStream& s, Buffer buf,
                        boost::json::value& message, boost::json::parse_options options = {})
{
    boost::json::stream_parser p{{}, options};
    size_t total_parsed = 0;

    boost::asio::read(s, buf, [&](boost::system::error_code ec, size_t /*n*/) {
        size_t parsed = 0;
        for (auto& contiguous : buf.data()) {
            parsed += p.write_some(
                boost::asio::buffer_cast<char const*>(contiguous),
                contiguous.size(), ec);
        }
        buf.consume(parsed);
        total_parsed += parsed;
        return ec || p.done(); // true means done
    });

    message = p.release(); // throws if incomplete
    return total_parsed;
}
Adding a delegating overload for streambufs:
template <typename SyncReadStream, typename Alloc>
static size_t read_json(SyncReadStream& s,
                        boost::asio::basic_streambuf<Alloc>& buf,
                        boost::json::value& message,
                        boost::json::parse_options options = {})
{
    return read_json(s, boost::asio::basic_streambuf_ref<Alloc>(buf), message, options);
}
Demo Program
This demo program adds the test-cases from earlier as well as a socket client with some benchmark statistics added. Arguments:
test to run the tests instead of the socket client
streambuf to use the streambuf overload instead of std::string dynamic buffer
comments to allow comments in the JSON
trailing_commas to allow trailing commas in the JSON
invalid_utf8 to allow invalid utf8 in the JSON
Live On Compiler Explorer¹
With test prints:
----- valid test cases
Testing {} -> Success {}
Testing {"a":4, "b":5} -> Success {"a":4,"b":5}
Testing [] -> Success []
Testing [4, "b"] -> Success [4,"b"]
----- incomplete test cases
Testing { -> (incomplete...)
Testing {"a":4, "b" -> (incomplete...)
Testing [ -> (incomplete...)
Testing [4, " -> (incomplete...)
----- invalid test cases
Testing } -> syntax error
Testing "a":4 } -> Success "a" -- remaining `:4 }`
Testing ] -> syntax error
----- excess input test cases
Testing {}{"a":4, "b":5} -> Success {} -- remaining `{"a":4, "b":5}`
Testing []["a", "b"] -> Success [] -- remaining `["a", "b"]`
Testing {} bogus trailing data -> Success {} -- remaining `bogus trailing data`
With socket client some demos:
Mean packet size: 16 in 2 packets
Request: 28 bytes
Request: {"a":4,"b":"5"} bytes
Remaining data: "bye
"
took 0.000124839s, ~0.213899MiB/s
With a large (448MiB) location_history.json:
Mean packet size: 511.999 in 917791 packets
Request: 469908167 bytes
(large request output suppressed)
took 3.30509s, ~135.59MiB/s
¹ linking non-header-only libraries is not supported on Compiler Explorer
TL/DR;
Seriously, just add framing to your wire protocol. E.g. even HTTP responses do this (e.g. via the content length headers, and maybe chunked encoding)
UPDATE:
Instead of handrolling you can go with Boost JSON as I added in another answer
The first approach is flawed, because you are using "async_read_until" yet treat the operation as if it were synchronous.
The second problem is, neither json::parse nor json::accept can report the location of a complete/broken parse. This means that you really do need framing in your wire protocol, because you CANNOT detect message boundaries.
The rest of this answer will first dive in to expose how the limitations of the nlohmann::json library make your task impossible¹.
So even though it's commendable for you to use an existing library, we look for alternatives.
Making It Work(?)
You could use the approach that Beast uses (http::read(s, buf, m) with an http::message<> m). That is: have a reference to the entire buffer.
flat_buffer buf;
http::request<http::empty_body> m;
read(s, buf, m); // s is a SyncStream, like a socket
Here, read is a composed operation over the message as well as the buffer. This makes it easy to check the completion criteria. In our case, let's make a reader that also serves as a match-condition:
template <typename DynamicBuffer_v1>
struct JsonReader {
    DynamicBuffer_v1 _buf;
    nlohmann::json message;

    JsonReader(DynamicBuffer_v1 buf) : _buf(buf) {}

    template <typename It>
    auto operator()(It dummy, It) {
        using namespace nlohmann;
        auto f = buffers_begin(_buf.data());
        auto l = buffers_end(_buf.data());

        bool ok = json::accept(f, l);
        if (ok) {
            auto n = [&] {
                std::istringstream iss(std::string(f, l));
                message = json::parse(iss);
                return iss.tellg(); // detect consumed
            }();
            _buf.consume(n);
            assert(n);
            std::advance(dummy, n);
            return std::pair(dummy, ok);
        } else {
            return std::pair(dummy, ok);
        }
    }
};

namespace boost::asio {
    template <typename T>
    struct is_match_condition<JsonReader<T>> : public boost::true_type { };
}
This is peachy and works on the happy path. But you run into big trouble on edge/error cases:
you can't distinguish incomplete data from invalid data, so you MUST assume that unaccepted input is just incomplete (otherwise you would never wait for data to be complete)
you will wait until infinity for data to become "valid" if it's just invalid or
worse still: keep reading indefinitely, possibly running out of memory (unless you limit the buffer size; this could lead to a DoS)
perhaps worst of all, if you read more data than the single JSON message (which you cannot in general prevent with stream sockets), the original message will be rejected due to "excess input". Oops
Testing It
Here are the test cases that confirm the conclusions the analysis predicted:
Live On Compiler Explorer
#include <boost/asio.hpp>
#include <nlohmann/json.hpp>
#include <iostream>
#include <iomanip>

template <typename Buffer>
struct JsonReader {
    static_assert(boost::asio::is_dynamic_buffer_v1<Buffer>::value);
    Buffer _buf;
    nlohmann::json message;

    JsonReader() = default;
    JsonReader(Buffer buf) : _buf(buf) {}

    template <typename It>
    auto operator()(It dummy, It) {
        using namespace nlohmann;
        auto f = buffers_begin(_buf.data());
        auto l = buffers_end(_buf.data());

        bool ok = json::accept(f, l);
        if (ok) {
            auto n = [&] {
                std::istringstream iss(std::string(f, l));
                message = json::parse(iss);
                return iss.tellg(); // detect consumed
            }();
            _buf.consume(n);
            assert(n);
            //std::advance(dummy, n);
            return std::pair(dummy, ok);
        } else {
            return std::pair(dummy, ok);
        }
    }
};

namespace boost::asio {
    template <typename T>
    struct is_match_condition<JsonReader<T>> : public boost::true_type { };
}

static inline void run_tests() {
    std::vector<std::string> valid {
        R"({})",
        R"({"a":4, "b":5})",
        R"([])",
        R"([4, "b"])",
    },
    incomplete {
        R"({)",
        R"({"a":4, "b")",
        R"([)",
        R"([4, ")",
    },
    invalid {
        R"(})",
        R"("a":4 })",
        R"(])",
    },
    excess {
        R"({}{"a":4, "b":5})",
        R"([]["a", "b"])",
        R"({} bogus trailing data)",
    };

    auto run_tests = [&](auto& cases) {
        for (std::string buf : cases) {
            std::cout << "Testing " << std::left << std::setw(22) << buf;
            bool ok = JsonReader { boost::asio::dynamic_buffer(buf) }
                          (buf.begin(), buf.end())
                          .second;
            std::cout << " -> " << std::boolalpha << ok << std::endl;
            if (ok && !buf.empty()) {
                std::cout << " -- remaining buffer " << std::quoted(buf) << "\n";
            }
        }
    };

    std::cout << " ----- valid test cases \n";
    run_tests(valid);
    std::cout << " ----- incomplete test cases \n";
    run_tests(incomplete);
    std::cout << " ----- invalid test cases \n";
    run_tests(invalid);
    std::cout << " ----- excess input test cases \n";
    run_tests(excess);
}

template <typename SyncReadStream, typename Buffer>
static void read(SyncReadStream& s, Buffer bufarg, nlohmann::json& message) {
    using boost::asio::buffers_begin;
    using boost::asio::buffers_end;
    JsonReader reader{bufarg};
    read_until(s, bufarg, reader);
    message = reader.message;
}

int main() {
    run_tests();
}
Prints
----- valid test cases
Testing {} -> true
Testing {"a":4, "b":5} -> true
Testing [] -> true
Testing [4, "b"] -> true
----- incomplete test cases
Testing { -> false
Testing {"a":4, "b" -> false
Testing [ -> false
Testing [4, " -> false
----- invalid test cases
Testing } -> false
Testing "a":4 } -> false
Testing ] -> false
----- excess input test cases
Testing {}{"a":4, "b":5} -> false
Testing []["a", "b"] -> false
Testing {} bogus trailing data -> false
Looking For Alternatives
You could roll your own as I did in the past:
Parse a substring as JSON using QJsonDocument
Or we can look at another library that DOES allow us to either detect boundaries of valid JSON fragments OR detect and leave trailing input.
Hand-Rolled Approach
Here's a simplistic translation to more modern Spirit X3 of that linked answer:
// Note: the first iterator gets updated.
// Throws on known invalid input (like starting with `]` or `%`).
template <typename It>
bool tryParseAsJson(It& f, It l)
{
    try {
        return detail::x3::parse(f, l, detail::json);
    } catch (detail::x3::expectation_failure<It> const& ef) {
        throw std::runtime_error("invalid JSON data");
    }
}
The crucial point is that, in addition to returning true/false, this updates the start iterator according to how far it consumed the input.
namespace JsonDetect {
    namespace detail {
        namespace x3 = boost::spirit::x3;

        static const x3::rule<struct value_> value{"value"};

        static auto primitive_token
            = x3::lexeme[ x3::lit("false") | "null" | "true" ];

        static auto expect_value
            = x3::rule<struct expect_value_> { "expect_value" }
            // array, object, string, number or other primitive_token
            = x3::expect[&(x3::char_("[{\"0-9.+-") | primitive_token | x3::eoi)]
            >> value
            ;

        // 2.4. Numbers
        // Note our spirit grammar takes a shortcut, as the RFC specification is
        // more restrictive. However, none of the shortcuts affect any structure
        // characters (:,{}[] and double quotes), so it doesn't matter for the
        // current purpose. For full compliance, this remains TODO:
        //
        //   Numeric values that cannot be represented as sequences of digits
        //   (such as Infinity and NaN) are not permitted.
        //   number        = [ minus ] int [ frac ] [ exp ]
        //   decimal-point = %x2E        ; .
        //   digit1-9      = %x31-39     ; 1-9
        //   e             = %x65 / %x45 ; e E
        //   exp           = e [ minus / plus ] 1*DIGIT
        //   frac          = decimal-point 1*DIGIT
        //   int           = zero / ( digit1-9 *DIGIT )
        //   minus         = %x2D        ; -
        //   plus          = %x2B        ; +
        //   zero          = %x30        ; 0
        static auto number = x3::double_; // shortcut :)

        // 2.5 Strings
        static const x3::uint_parser<uint32_t, 16, 4, 4> _4HEXDIG;
        static auto char_ = ~x3::char_("\"\\") |
            x3::char_(R"(\)") >> (              // \ (reverse solidus)
                x3::char_(R"(")") |             // " quotation mark  U+0022
                x3::char_(R"(\)") |             // \ reverse solidus U+005C
                x3::char_(R"(/)") |             // / solidus         U+002F
                x3::char_(R"(b)") |             // b backspace       U+0008
                x3::char_(R"(f)") |             // f form feed       U+000C
                x3::char_(R"(n)") |             // n line feed       U+000A
                x3::char_(R"(r)") |             // r carriage return U+000D
                x3::char_(R"(t)") |             // t tab             U+0009
                x3::char_(R"(u)") >> _4HEXDIG ) // uXXXX             U+XXXX
            ;
        static auto string = x3::lexeme [ '"' >> *char_ >> '"' ];

        // 2.2 Objects
        static auto member
            = x3::expect [ &(x3::eoi | '"') ]
            >> string
            >> x3::expect [ x3::eoi | ':' ]
            >> expect_value;
        static auto object
            = '{' >> ('}' | (member % ',') >> '}');

        // 2.3 Arrays
        static auto array
            = '[' >> (']' | (expect_value % ',') >> ']');

        // 2.1 Values
        static auto value_def = primitive_token | object | array | number | string;
        BOOST_SPIRIT_DEFINE(value)

        // entry point
        static auto json = x3::skip(x3::space)[expect_value];
    } // namespace detail
} // namespace JsonDetect
Obviously you put the implementation in a TU, but on Compiler Explorer we can't: Live On Compiler Explorer, using an adjusted JsonReader prints:
SeheX3Detector
==============
----- valid test cases
Testing {} -> true
Testing {"a":4, "b":5} -> true
Testing [] -> true
Testing [4, "b"] -> true
----- incomplete test cases
Testing { -> false
Testing {"a":4, "b" -> false
Testing [ -> false
Testing [4, " -> false
----- invalid test cases
Testing } -> invalid JSON data
Testing "a":4 } -> true -- remaining `:4 }`
Testing ] -> invalid JSON data
----- excess input test cases
Testing {}{"a":4, "b":5} -> true -- remaining `{"a":4, "b":5}`
Testing []["a", "b"] -> true -- remaining `["a", "b"]`
Testing {} bogus trailing data -> true -- remaining ` bogus trailing data`
NlohmannDetector
================
----- valid test cases
Testing {} -> true
Testing {"a":4, "b":5} -> true
Testing [] -> true
Testing [4, "b"] -> true
----- incomplete test cases
Testing { -> false
Testing {"a":4, "b" -> false
Testing [ -> false
Testing [4, " -> false
----- invalid test cases
Testing } -> false
Testing "a":4 } -> false
Testing ] -> false
----- excess input test cases
Testing {}{"a":4, "b":5} -> false
Testing []["a", "b"] -> false
Testing {} bogus trailing data -> false
Note how we now achieved some of the goals.
accepting trailing data - so we don't clobber any data after our message
failing early on some inputs that cannot possibly become valid JSON
However, we can't fix the problem of waiting indefinitely on /possibly/ incomplete valid data
Interestingly, one of our "invalid" test cases was wrong (!). (It is always a good sign when test cases fail). This is because "a" is actually a valid JSON value on its own.
Conclusion
In the general case it is impossible to make such a "complete message" detection work without at least limiting buffer size. E.g. a valid input could start with a million spaces. You don't want to wait for that.
Also, a valid input could open a string, object or array², and not terminate that within a few gigabytes. If you stop parsing before hand you'll never know whether it was ultimately a valid message.
Though you'll inevitably have to deal with network timeout anyways you will prefer to be proactive about knowing what to expect. E.g. send the size of the payload ahead of time, so you can use boost::asio::transfer_exactly and validate precisely what you expected to get.
¹ practically. If you don't care about performance, you could iteratively run accept on increasing lengths of buffer
² god forbid, a number like 0000....00001 though that's subject to parser implementation differences

Unexpected value returned by use_count() of shared_ptr while retrieving from vector

The program below outputs an unexpected use_count() value when the shared pointer is printed using iterator dereference of std::vector:
#include <iostream>
#include <memory>
#include <vector>

class A;
typedef std::shared_ptr<A> sharedPtr;
typedef std::vector<sharedPtr> sharedPtrVect;
typedef sharedPtrVect::const_iterator vectItr;

class A
{
public:
    A(int inp): m_Val(inp) { /*std::cout << "*** A ctor called: " << m_Val << " ***" << std::endl;*/ }
    ~A() { /*std::cout << "### A dtor called: " << m_Val << " ###" << std::endl;*/ }
    int getVal() const { return m_Val; }
private:
    int m_Val;
};

int main()
{
    sharedPtrVect myVect1, myVect2;
    vectItr myVectItr;
    std::shared_ptr<A> tmpPtr;

    for(int i = 1 ; i <= 5 ; i++ ) {
        std::cout << "Pushed back: " << i << std::endl;
        tmpPtr = std::make_shared<A>(i);
        myVect1.push_back(tmpPtr);
    }

    myVectItr = myVect1.begin();
    for( ; myVectItr != myVect1.end() ; ++myVectItr) {
        std::cout << "-----------------------------" << std::endl;
        std::cout << "Element number: " << (*myVectItr).get()->getVal() << std::endl;
        std::cout << "Element use count: " << (*myVectItr).use_count() << std::endl;
        std::cout << "-----------------------------" << std::endl;
    }
    return 0;
}
The output of the above code is:
Pushed back: 1
Pushed back: 2
Pushed back: 3
Pushed back: 4
Pushed back: 5
-----------------------------
Element number: 1
Element use count: 1
-----------------------------
-----------------------------
Element number: 2
Element use count: 1
-----------------------------
-----------------------------
Element number: 3
Element use count: 1
-----------------------------
-----------------------------
Element number: 4
Element use count: 1
-----------------------------
-----------------------------
Element number: 5
Element use count: 2 //I am not sure why or how this is 2?
-----------------------------
I don't understand how the use_count() for the last vector element is 2. Shouldn't it be 1 like others? I am not creating any copies of the shared pointer stored in the last element of the vector.
What am I missing here?
EDIT: I have good experience in C++98, but less experience in C++11.
Shouldn't it be 1 like others? I am not creating any copies of the shared pointer stored in the last element of the vector. What am I missing here?
But you are creating a copy. You push_back() from tmpPtr. push_back() puts a copy of its argument into the vector, unless you tell it to move instead. (More on that later!)
Therefore, what happens for all but the last element is this:
tmpPtr holds the only reference to the shared resource
You push_back() a copy, so the copy-constructor of shared_ptr increments the use count to 2
You then assign the next element to tmpPtr, releasing the reference to, and thereby decrementing the use count of, the previous element's resource.
But, of course, there is no subsequent assignment on the last iteration of the loop. So, at the point of printing, tmpPtr is still in scope, and it retains a reference to the last resource that was allocated. Hence the 1-higher refcount on the last element. This seems perfectly expected to me. ;)
To see the results you expected, you need to either destroy tmpPtr after you copy it but before you print, or simply avoid the copy from it in the first place. The former could be done by moving its declaration into the for loop, as SirGuy pointed out in the comments.
However, clearly, the latter is superior. How do we do that? Well, C++11 lets us move instead. So, you could emplace_back( std::move(tmpPtr) ), in which the move will cast to an rvalue and thus invoke the move-constructor of the vector element. This will cause tmpPtr to release its reference upon being moved into the vector, effectively ensuring the use count is always 1. This leaves tmpPtr (like any moved-from object) in a valid-but-unspecified state, i.e. useful only to be reassigned-to.
(Note: push_back() will achieve the same thing, but I generally prefer using emplace_back() wherever possible, as it's more efficient in other situations, so it's a better default.)
Of course, you can then combine both of these: declare tmpPtr within the scope of the for loop, and move from it. However... you don't even need tmpPtr at all! It does not appear to serve any useful purpose. So, you could just not use it, and instead directly emplace_back() the result of make_shared(). Because the return value thereof will be an rvalue, it will implicitly be moved into the vector; no cast by std::move is needed.

Accessing return by pointer out of function

I have the following function
std::tuple<int, val*> Socket::recv(val* values) // const
{
    char buf[MAXRECV + 1];
    memset(buf, 0, MAXRECV + 1);
    int status = ::recv(m_sock, buf, MAXRECV, 0);
    if (status == -1)
    {
        std::cout << "status == -1 errno == " << errno << " in Socket::recv\n";
        // return std::make_tuple(0, NULL); // this is not working
    }
    else if (status == 0)
    {
        // return std::make_tuple(0, NULL); // this is not working
    }
    else
    {
        struct val* values = (struct val*)buf;
        if (!std::isnan(values->val1) &&
            !std::isnan(values->val2) &&
            !std::isnan(values->val3) &&
            !std::isnan(values->val4) &&
            !std::isnan(values->val5) &&
            !std::isnan(values->val6))
            printf("received: %f %f %f %f %f %f\n", values->val1, values->val2,
                   values->val3, values->val4, values->val5, values->val6);
        return std::make_tuple(status, values);
    }
}
The received values are printed out to standard output correctly within the function.
But when I try to access these received values outside the function, calling it as follows, all I get is zeros [after creating a Socket object named rcvd].
Would you tell me how to access these values outside the function?
1.
std::cout << std::get<1>(rcvd.recv(&values)->val1)
<< std::get<1>(rcvd.recv(&values)->val2)
<< std::get<1>(rcvd.recv(&values)->val3)
<< std::get<1>(rcvd.recv(&values)->val4)
<< std::get<1>(rcvd.recv(&values)->val5)
<< std::get<1>(rcvd.recv(&values)->val6)
<< std::endl;
2.
std::cout << std::get<1>(rcvd.recv(&values).val1)
<< std::get<1>(rcvd.recv(&values).val2)
<< std::get<1>(rcvd.recv(&values).val3)
<< std::get<1>(rcvd.recv(&values).val4)
<< std::get<1>(rcvd.recv(&values).val5)
<< std::get<1>(rcvd.recv(&values).val6)
<< std::endl;
3.
std::cout << std::get<1>(rcvd.recv(&values)[0])
<< std::get<1>(rcvd.recv(&values)[1])
<< std::get<1>(rcvd.recv(&values)[2])
<< std::get<1>(rcvd.recv(&values)[3])
<< std::get<1>(rcvd.recv(&values)[4])
<< std::get<1>(rcvd.recv(&values)[5])
<< std::endl;
where "values" comes from
struct val {
    double val1;
    double val2;
    double val3;
    double val4;
    double val5;
    double val6;
} values;
None of the three ways of calling the function and accessing struct val worked for me.
Would you tell me
how to access these received values outside the function?
how to return a null struct pointer [NULL is not working] when status is 0 or -1?
Try
return std::make_tuple<int, val*>(0, nullptr);
The tuple type is deduced from the arguments, so by using (0, NULL) you are actually passing the null constant, which evaluates to an integer, and hence the deduced type is std::tuple<int, int>.
By the way, I see no reason for using NULL in C++11; if you really need it for some reason, then cast NULL to val*:
static_cast<val*>(NULL);
EDIT:
Other viable alternatives are
val* nullval = nullptr;
return std::make_tuple(0, nullval);
Or
return std::make_tuple(0, static_cast<val*>(nullptr));
Or (as comment suggest)
return {0, nullptr};
Choose the one that seems more clear to you.
You are lucky that the outside function is printing zeroes. It might as well have just dumped core on you :)
What you are doing is accessing a buffer that was created on the stack, after that stack frame was released (once the function finished executing). That is HIGHLY UNSAFE and, pretty much, illegal.
What you should do instead is allocate your data buffer in "free memory", using functions like malloc (in C) or operator new/new[] (in C++).
The quick fix is to replace the line
char buf [ MAXRECV + 1 ];
with
char * buf = new char [ MAXRECV + 1 ];
And when you do a type casting on line
struct val* values=(struct val*) buf;
you really ought to be sure that what you do is correct. If the sizeof() of your struct val is more than sizeof(char[MAXRECV + 1]), you'll get into memory access trouble.
After you are done using the returned data buffer don't forget to release it with a call to free (in C) or delete/delete[] (in C++). Otherwise you'd have what is called a memory leak.

possible to share work when parsing multiple files with libclang?

If I have multiple files in a large project, all of which share a large number of included header files, is there any way to share the work of parsing those header files? I had hoped that creating one Index and then adding multiple translation units to it would cause some work to be shared - however, even code along the lines of (pseudocode)
index = clang_createIndex();
clang_parseTranslationUnit(index, "myfile");
clang_parseTranslationUnit(index, "myfile");
seems to take the full amount of time for each call to parseTranslationUnit, performing no better than
index1 = clang_createIndex();
clang_parseTranslationUnit(index1, "myfile");
index2 = clang_createIndex();
clang_parseTranslationUnit(index2, "myfile");
I am aware that there are specialized functions for reparsing the exact same file; however, what I really want is for parsing "myfile1" and "myfile2" to share the work of parsing "myheader.h", and the reparsing-specific functions won't help there.
As a sub-question, is there any meaningful difference between reusing an index and creating a new index for each translation unit?
One way of doing this consists in creating Precompiled Headers (PCH file) from the shared header in your project.
Something along these lines seems to work (you can see the whole example here):
auto Idx = clang_createIndex(0, 0);
CXTranslationUnit TU;
Timer t;
{
    char const *args[] = { "-xc++", "foo.hxx" };
    int nargs = 2;
    t.reset();
    TU = clang_parseTranslationUnit(Idx, 0, args, nargs, 0, 0, CXTranslationUnit_ForSerialization);
    std::cerr << "PCH parse time: " << t.get() << std::endl;
    displayDiagnostics(TU);
    clang_saveTranslationUnit(TU, "foo.pch", clang_defaultSaveOptions(TU));
    clang_disposeTranslationUnit(TU);
}
{
    char const *args[] = { "-include-pch", "foo.pch", "foo.cxx" };
    int nargs = 3;
    t.reset();
    TU = clang_createTranslationUnitFromSourceFile(Idx, 0, nargs, args, 0, 0);
    std::cerr << "foo.cxx parse time: " << t.get() << std::endl;
    displayDiagnostics(TU);
    clang_disposeTranslationUnit(TU);
}
{
    char const *args[] = { "-include-pch", "foo.pch", "foo2.cxx" };
    int nargs = 3;
    t.reset();
    TU = clang_createTranslationUnitFromSourceFile(Idx, 0, nargs, args, 0, 0);
    std::cerr << "foo2.cxx parse time: " << t.get() << std::endl;
    displayDiagnostics(TU);
    clang_disposeTranslationUnit(TU);
}
yielding the following output:
PCH parse time: 5.35074
0 diagnostics
foo1.cxx parse time: 0.158232
0 diagnostics
foo2.cxx parse time: 0.143654
0 diagnostics
I did not find much information about libclang and precompiled headers in the API documentation, but here are a few pages where the keyword appears: CINDEX and TRANSLATION_UNIT
Please note that this solution is not optimal by any means. I'm looking forward to seeing better answers. In particular:
each source file can have at most one precompiled header
nothing here is libclang-specific; this is the exact same strategy that is used for build-time optimization with the standard clang command line.
it is not really automated, in that you have to explicitly create the precompiled header (and must thus know the name of the shared header file)
I don't think using different CXIndex objects would have made any difference here
