What's the purpose of ANYSIZE_ARRAY, located in WinNT.h?
I see an MSDN blog post about it from 2004 but it doesn't make sense to me.
I assume you are talking about this blog post.
It is often used when a variable-sized array (size unknown at compile time) is the last member of a struct:
typedef struct {
    int CommonFlags;
    int CountOfThings;
    THING Things[ANYSIZE_ARRAY]; // i.e. Things[1]
} THINGSANDFLAGS;
To work with those structures you often first call the desired API to get the size of the data, then allocate a block of memory big enough and finally call the same API again so it can fill in the data...
From this page:
In C, a variable-size array is declared as a[1] or a[ANYSIZE_ARRAY], where ANYSIZE_ARRAY is defined as 1. Then it is used as if it were bigger.
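As a concrete sketch of that two-call pattern, here is how it typically looks with GetTokenInformation and TOKEN_GROUPS (a real Windows struct whose Groups member is declared with ANYSIZE_ARRAY; error handling elided):

#include <windows.h>
#include <stdlib.h>

void EnumerateGroups(HANDLE hToken)
{
    DWORD size = 0;
    // First call: only retrieves the required buffer size.
    GetTokenInformation(hToken, TokenGroups, NULL, 0, &size);

    // Allocate a block big enough for the whole variable-sized struct.
    TOKEN_GROUPS *groups = (TOKEN_GROUPS *)malloc(size);
    if (groups && GetTokenInformation(hToken, TokenGroups, groups, size, &size))
    {
        // Indexing past the declared bound of 1 is fine here:
        // the allocation really is large enough.
        for (DWORD i = 0; i < groups->GroupCount; ++i)
        {
            PSID sid = groups->Groups[i].Sid;
            (void)sid; // ... use the SID ...
        }
    }
    free(groups);
}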
I am currently trying to run TensorFlow inference (C backend) using Boost::GIL (challenging). I have been able to load my PNG image (rgb8_image_t) and convert it to rgb32f_image_t.
I still need three things: the raw pointer to the data, the amount of memory allocated, and the dimensions.
For the memory allocated, unfortunately the function total_allocated_size_in_bytes() is private, so I did this:
boost::gil::view(dest).size() * boost::gil::view(dest).num_channels() * sizeof(value_type);
which is valid as long as there is no extra padding for alignment. But is there a nicer alternative?
For the dimensions, I need to match numpy (via Pillow); I hope both libraries use the same memory layout pattern. From my understanding the data is interleaved and contiguous by default, so it should be fine.
Last, the raw pointer _memory: it is a private data member of the image class with no dedicated accessor. boost::gil::view(dest).row_begin(0) returns an iterator to the first pixel, but I am not sure how I could get the _memory data pointer. Any suggestions?
Thank you very much,
++t
PS: TensorFlow offers a C++ backend; however, it is not installable from any package manager, and wrangling Bazel is beyond my strength.
The GIL documentation pretty accurately describes the various memory layouts.
The point of the library, though, is to abstract away the memory layouts. If you require a specific representation (planar/interleaved, packed or unpacked), you are doing things "the hard way" as far as the library interface is concerned.
So, I think you can read and convert in one go, e.g. for a JPEG:
#include <boost/gil.hpp>
#include <boost/gil/extension/io/jpeg.hpp>
namespace gil = boost::gil;

gil::rgb32f_image_t img;
gil::image_read_settings<gil::jpeg_tag> settings;
gil::read_and_convert_image("input.jpg", img, settings);
Now getting the raw data is possible:
auto* raw_data = gil::interleaved_view_get_raw_data(gil::view(img));
It happens to be the case that the preferred implementation storage is interleaved, which is likely what you're expecting. If your particular image storage is planar, the call will not compile (and you'd probably want planar_view_get_raw_data(vw, plane_index) instead).
Note that you'll have to reinterpret_cast to float [const]* if you need that, because there is no public interface to get a reference to scoped_channel_value<>::value_, but the BaseChannelValue type is indeed float and you can assert that the wrapper doesn't add any additional weight:
static_assert(sizeof(float) == sizeof(raw_data[0]));
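For instance, a minimal sketch of that cast (reusing raw_data from the snippet above):

// Only safe because of the static_assert above: the scoped_channel_value
// wrapper is layout-compatible with its underlying float.
auto const* floats = reinterpret_cast<float const*>(raw_data);
float first_channel = floats[0]; // first channel of the first pixel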
Alternative Approach:
Alternatively, you can set up your own raw pixel buffer, mount a mutable view onto it, and use that as the target of your initial read/convert:
#include <memory> // std::make_unique

// get dimensions
gil::image_read_settings<gil::jpeg_tag> settings;
auto info = gil::read_image_info("input.jpg", settings).get_info();

// set up raw pixel buffer & view
using pixel = gil::rgb32f_pixel_t;
auto data = std::make_unique<pixel[]>(info._width * info._height);
auto vw = gil::interleaved_view(info._width, info._height, data.get(),
                                info._width * sizeof(pixel));

// load into buffer
gil::read_and_convert_view("input.jpg", vw, settings);
I've actually checked that it works correctly by writing out the resulting view:
//// just for test - doesn't work for 32f, so choose another pixel format
//gil::write_view("output.png", vw, gil::png_tag());
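To connect this back to the TensorFlow C API: a hedged sketch of wrapping the buffer from the alternative approach above (info, data and pixel are the names from that snippet) in a TF_FLOAT tensor. It assumes the model expects NHWC input, which matches GIL's interleaved layout:

#include <tensorflow/c/c_api.h>
#include <cstddef>
#include <cstdint>

// Shape [1, height, width, 3]: one image, interleaved RGB channels last.
int64_t dims[4] = {1, (int64_t)info._height, (int64_t)info._width, 3};
std::size_t len = info._width * info._height * sizeof(pixel);
TF_Tensor* tensor = TF_NewTensor(
    TF_FLOAT, dims, 4, data.get(), len,
    [](void*, size_t, void*) { /* buffer owned by the unique_ptr */ }, nullptr);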
I'm aware of many similar questions on this site. I really like the solution mentioned in the following link:
https://stackoverflow.com/a/25021520/884553
With some modification, you can include a text file at compile time. For example:
constexpr const char* s =
#include "file.txt"
BUT to make this work you have to add a raw string literal prefix and suffix to your original file, for example:
R"(
This is the original content,
and I don't want this file to be modified. but i
don't know how to do it.
)";
My question is: is there a way to make this work without modifying file.txt?
(I know I can use command line tools to make a copy, prepend and append to the copy, and remove the copy after compiling. I'm looking for a more elegant solution than this, hopefully one that needs no other tools.)
Here's what I've tried (it doesn't work):
#include <iostream>

int main() {
    constexpr const char* s =
#include "bra.txt" // R"(
#include "file.txt" // original file without R"( and )";
#include "ket.txt" // )";
    std::cout << s << "\n";
    return 0;
}
/opt/gcc8/bin/g++ -std=c++1z a.cpp
In file included from a.cpp:5:
bra.txt:1:1: error: unterminated raw string
R"(
^
a.cpp: In function ‘int main()’:
a.cpp:4:27: error: expected primary-expression at end of input
constexpr const char* s =
^
a.cpp:4:27: error: expected ‘}’ at end of input
a.cpp:3:12: note: to match this ‘{’
int main() {
^
No, this cannot be done.
There is a C++2a proposal to allow inclusion of such resources at compile time, called std::embed.
The motivation part of the p1040r1 proposal:
Motivation
Every C and C++ programmer -- at some point -- attempts to #include large chunks of non-C++ data into their code. Of course, #include expects the format of the data to be source code, and thusly the program fails with spectacular lexer errors. Thusly, many different tools and practices were adapted to handle this, as far back as 1995 with the xxd tool. Many industries need such functionality, including (but hardly limited to):
Financial Development
- representing coefficients and numeric constants for performance-critical algorithms;
Game Development
- assets that do not change at runtime, such as icons, fixed textures and other data;
- shader and scripting code;
Embedded Development
- storing large chunks of binary, such as firmware, in a well-compressed format;
- placing data in memory on chips and systems that do not have an operating system or file system;
Application Development
- compressed binary blobs representing data;
- non-C++ script code that is not changed at runtime; and
Server Development
- configuration parameters which are known at build-time and are baked in to set limits and give compile-time information to tweak performance under certain loads;
- SSL/TLS Certificates hard-coded into your executable (requiring a rebuild and potential authorization before deploying new certificates).
In the pursuit of this goal, these tools have proven to have inadequacies and contribute poorly to the C++ development cycle as it continues to scale up for larger and better low-end devices and high-performance machines, bogging developers down with menial build tasks and trying to cover-up disappointing differences between platforms.
MongoDB has been kind enough to share some of their code below. Other companies have had their example code anonymized or simply not included directly out of shame for the things they need to do to support their workflows. The author thanks MongoDB for their courage and their support for std::embed.
The request for some form of #include_string or similar dates back quite a long time, with one of the oldest stack overflow questions asked-and-answered about it dating back nearly 10 years. Predating even that is a plethora of mailing list posts and forum posts asking how to get script code and other things that are not likely to change into the binary.
This paper proposes <embed> to make this process much more efficient, portable, and streamlined. Here’s an example of the ideal:
#include <embed>

int main (int, char*[]) {
    constexpr std::span<const std::byte> fxaa_binary = std::embed( "fxaa.spirv" );

    // assert this is a SPIRV file, compile-time
    static_assert( fxaa_binary[0] == 0x03 && fxaa_binary[1] == 0x02
        && fxaa_binary[2] == 0x23 && fxaa_binary[3] == 0x07
        , "given wrong SPIRV data, check rebuild or check the binaries!" );

    auto context = make_vulkan_context();

    // data kept around and made available for binary
    // to use at runtime
    auto fxaa_shader = make_shader( context, fxaa_binary );

    for (;;) {
        // ...
        // and we’re off!
        // ...
    }

    return 0;
}
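Until something like std::embed is available, the usual workaround remains a small generator step such as the xxd tool mentioned in the motivation above. A sketch (file_txt and file_txt_len are the identifiers xxd derives from the file name):

// Generated beforehand with:  xxd -i file.txt > file_txt.h
// which emits roughly:
//   unsigned char file_txt[] = { 0x54, 0x68, ... };
//   unsigned int file_txt_len = ...;
#include "file_txt.h"
#include <string_view>

std::string_view contents{reinterpret_cast<const char*>(file_txt),
                          file_txt_len};

Note this does not require touching file.txt itself, but it does reintroduce an external tool into the build.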
While trying to create an object-oriented OpenACC implementation I stumbled upon this question.
From there I took the code provided by @mat-colgrove at GTC15 (code available at http://www.pgroup.com/lit/samples/gtc15_S5233.tar).
Since I am interested in how to use objects to manage data with OpenACC, I posted another question.
I was quite impressed by the ease of the OpenACCArray::swap function, so I created a small example to test it (see gist).
First I tried to just swap, hoping that swapping the pointers on the host would be sufficient, but this ends in a fatal memory error (presumably because the size and capacity members are not updated on the device).
A safer approach that I assumed would work is to update the host, swap the arrays, and update the device. This runs but produces wrong results.
I am compiling for NVIDIA accelerators.
Looks like this is my fault since I didn't test the swap routine.
The problem here is that while the code swaps the data on the host, the device copies of the objects still point to the old arrays. The fix is to re-attach the lists, i.e. set the objects' device pointers to the correct arrays.
void swap(OpenACCArray<type>& x)
{
    // swap the host-side members
    type* tmp_list = list;
    int tmp_size = _size;
    int tmp_capacity = _capacity;
    list = x.list;
    _size = x._size;
    _capacity = x._capacity;
    x.list = tmp_list;
    x._size = tmp_size;
    x._capacity = tmp_capacity;
#ifdef _OPENACC
    // mirror the new sizes on the device...
    #pragma acc update device(_size,_capacity,x._size,x._capacity)
    // ...and re-attach so the device copies of the objects
    // point at the swapped arrays
    acc_attach((void**)&list);
    acc_attach((void**)&x.list);
#endif
}
"acc_attach" is a PGI extension that hopefully will be adopted in the OpenACC 3.0 standard.
Thanks for trying things out and let me know if you encounter other issues.
- Mat
Now that we've reached Swift 2.0, I've decided to convert my as-yet-unfinished OS X app to Swift. I'm making progress, but I've run into some issues using termios and could use some clarification and advice.
The termios struct is treated as a struct in Swift, no surprise there, but what is surprising is that the array of control characters in the struct is now a tuple. I was expecting it to just be an array. As you might imagine, it took me a while to figure this out. Working in a Playground, if I do:
var settings: termios = termios()
print(settings)
then I get the correct details printed for the struct.
In Obj-C to set the control characters you would use, say,
cfmakeraw(&settings);
settings.c_cc[VMIN] = 1;
where VMIN is a #define equal to 16 in termios.h. In Swift I have to do
cfmakeraw(&settings)
settings.c_cc.16 = 1
which works, but is a bit more opaque. I would prefer to use something along the lines of
settings.c_cc.vmin = 1
instead, but I can't seem to find any documentation describing the Swift "version" of termios. Does anyone know if the tuple has pre-assigned names for its elements, or if not, is there a way to assign names after the fact? Should I just create my own tuple with named elements and then assign it to settings.c_cc?
Interestingly, despite the fact that pre-processor directives are not supposed to work in Swift, if I do
print(VMIN)
print(VTIME)
then the correct values are printed and no compiler errors are produced. I'd be interested in any clarification or comments on that. Is it a bug?
The remaining issues have to do with further configuration of the termios.
The definition of cfsetspeed is given as
func cfsetspeed(_: UnsafeMutablePointer<termios>, _: speed_t) -> Int32
and speed_t is typedef'ed as an unsigned long. In Obj-C we'd do
cfsetspeed(&settings, B38400);
but since B38400 is a #define in termios.h we can no longer do that. Has Apple set up replacement global constants for things like this in Swift, and if so, can anyone tell me where they are documented? The alternative seems to be to just plug in the raw values and lose readability, or to create my own versions of the constants previously defined in termios.h. I'm happy to go that route if there isn't a better choice.
Let's start with your second problem, which is easier to solve.
B38400 is available in Swift, it just has the wrong type.
So you have to convert it explicitly:
var settings = termios()
cfsetspeed(&settings, speed_t(B38400))
Your first problem has no "nice" solution that I know of.
Fixed-size arrays are imported into Swift as tuples, and, as far as I know, you cannot address a tuple element with a variable.
However, Swift preserves the memory layout of structures imported from C, as confirmed by Apple engineer Joe Groff. Therefore you can take the address of the tuple and “rebind” it to a pointer to the element type:
var settings = termios()
withUnsafeMutablePointer(to: &settings.c_cc) { (tuplePtr) -> Void in
    tuplePtr.withMemoryRebound(to: cc_t.self, capacity: MemoryLayout.size(ofValue: settings.c_cc)) {
        $0[Int(VMIN)] = 1
    }
}
(Code updated for Swift 4+.)
I want to get, from the GENERIC representation, the name of the field as it was declared. I have a BIT_FIELD_REF tree and its DECL_NAME is zero. For example:
struct {
    int a;
    unsigned b:1;
} s;
...
if (s.b)
    ...
For s.b I'll get a BIT_FIELD_REF, and there's no obvious way to get the “b”, which is the original name of the field. How do I do it?
Try calling debug_c_tree(tree_var) or debug_tree(tree_var) from GDB, and see if either of them knows the name. If it does, reverse-engineer the pretty-printer.
What exactly did I do: investigating tree-dump.c, I came to understand that the names of bit fields, where they are known, come from the struct's DIEs and are hard to track.
Then I decided to get the name from the type of BIT_FIELD_REF's argument 0 (the reference to the structure): that type is a RECORD_TYPE, and it stores all the fields' sizes and offsets.
The problem was to understand that BIT_FIELD_REF doesn't reference the bits itself: it is used like BIT_FIELD_REF & INTEGER_CST, where the constant acts as a mask. After understanding this, I quickly computed the offsets and got the name from the type.
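A minimal sketch of that lookup as a hypothetical GCC-plugin helper (it assumes you have already folded the mask into a final bit offset as described above; exact accessor availability such as tree_to_shwi depends on your GCC version):

/* Walk the RECORD_TYPE's FIELD_DECLs (record_type comes from
   TREE_TYPE of BIT_FIELD_REF's operand 0) and return the name of
   the field at the given bit offset, or NULL if unnamed/not found. */
static const char *
field_name_at_bit (tree record_type, HOST_WIDE_INT bit)
{
  for (tree f = TYPE_FIELDS (record_type); f; f = DECL_CHAIN (f))
    if (TREE_CODE (f) == FIELD_DECL
        && DECL_NAME (f)
        && tree_fits_shwi_p (bit_position (f))
        && tree_to_shwi (bit_position (f)) == bit)
      return IDENTIFIER_POINTER (DECL_NAME (f));
  return NULL;
}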