Rust manual memory management

When I began learning C, I implemented common data structures such as lists, maps and trees, using malloc, calloc, realloc and free to manage the memory manually as needed. I did the same thing with C++, using new and delete.
Now comes Rust. It seems that Rust doesn't offer any functions or operators that correspond to those of C or C++, at least in the stable release.
Are the Heap structure and the ptr module (marked with experimental) the ones to look at for this kind of thing?
I know that these data structures are already available in the standard library; this is purely for the sake of learning.

Although it's really not recommended to do this ever, you can use malloc and free like you are used to from C. It's not very useful, but here's how it looks:
extern crate libc; // 0.2.65
use std::mem;

fn main() {
    unsafe {
        // Ask the C allocator for enough space to hold one i32.
        let my_num: *mut i32 = libc::malloc(mem::size_of::<i32>() as libc::size_t) as *mut i32;
        if my_num.is_null() {
            panic!("failed to allocate memory");
        }
        // Hand the memory back to the C allocator.
        libc::free(my_num as *mut libc::c_void);
    }
}
A better approach is to use Rust's standard library:
use std::alloc::{alloc, dealloc, handle_alloc_error, Layout};

fn main() {
    unsafe {
        let layout = Layout::new::<u16>();
        let ptr = alloc(layout);
        // alloc returns a null pointer on failure.
        if ptr.is_null() {
            handle_alloc_error(layout);
        }
        *(ptr as *mut u16) = 42;
        assert_eq!(*(ptr as *mut u16), 42);
        dealloc(ptr, layout);
    }
}

It's very unusual to directly access the memory allocator in Rust. You generally want to use the smart pointer constructors (Box::new, Rc::new, Arc::new) for single objects and just use Vec or Box<[T]> if you want a heap-based array.
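As a quick illustration, here is a minimal sketch using only those standard-library constructors:

use std::rc::Rc;

fn main() {
    let single = Box::new(42);                     // one value on the heap
    let shared = Rc::new(String::from("shared"));  // reference-counted heap value
    let growable: Vec<u8> = vec![1, 2, 3];         // growable heap array
    let fixed: Box<[u8]> = growable.clone().into_boxed_slice(); // fixed-size heap array
    println!("{} {} {:?} {:?}", single, shared, growable, fixed);
}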
If you really want to allocate memory and get a raw pointer to it, you can look at the implementation of Rc. (Not Box. Box is magical.) To get its backing memory, it actually creates a Box and then uses its into_raw_non_null function to get the raw pointer out. For destroying, it uses the allocator API, but could alternatively use Box::from_raw and then drop that.
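A rough sketch of that pattern on stable Rust (using Box::into_raw rather than the unstable into_raw_non_null):

fn main() {
    // Move a heap-allocated value out of its Box and take the raw pointer.
    let ptr: *mut i32 = Box::into_raw(Box::new(42));
    unsafe {
        assert_eq!(*ptr, 42);
        // Reconstruct the Box from the raw pointer; dropping it frees the allocation.
        drop(Box::from_raw(ptr));
    }
}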

Are the Heap structure and the ptr module (marked with experimental) the ones to look at for this kind of thing?
No, as a beginner you absolutely shouldn't start there. When you started learning C, malloc was all there was, and it's still a hugely error-prone part of the language - but you can't write any non-trivial program without it. It's very important for C programmers to learn about malloc and how to avoid all the pitfalls (memory leaks, use-after-free, and so on).
In modern C++, people are taught to use smart pointers to manage memory, instead of using delete by hand, but you still need to call new to allocate the memory for your smart pointer to manage. It's a lot better, but there's still some risk there. And still, as a C++ programmer, you need to learn how new and delete work, in order to use the smart pointers correctly.
Rust aims to be much safer than C or C++. Its smart pointers encapsulate all the details of how memory is handled at low-level. You only need to know how to allocate and deallocate raw memory if you're implementing a smart pointer yourself. Because of the way ownership is managed, you actually need to know a lot more details of the language to be able to write correct code. It can't be lesson one or two like it is in C or C++: it's a very advanced topic, and one many Rust programmers never need to learn about.
If you want to learn how to allocate memory on the heap, the Box type is the place to start. In the Rust book, the chapter about smart pointers is the chapter about memory allocation.
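A minimal example in that spirit:

fn main() {
    let b = Box::new(5); // the value 5 is stored on the heap; b owns it
    println!("b = {}", b);
} // b goes out of scope here and the heap allocation is freed automatically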

Related

Is allocating C memory to hold a Go struct a supported use case for cgo?

I've been exploring strategies around not passing nested Go pointers into C. Here's an example of how I'm trying out allocating a block of C memory with the intent of holding a Go struct:
(*MyGoStruct)(C.calloc(1, unsafe.Sizeof(MyGoStruct{})))
Does anyone know if this is a supported use case? And if not, could someone explain how wrong this approach is?

Synthesize to smart pointers with Boost Spirit X3

I need to parse a complex AST, and it would be impossible to allocate this AST on heap memory, and the AST nodes must support polymorphism. One solution would be to allocate the AST nodes using smart pointers.
To simplify the question, how would I synthesize the following struct (std::unique_ptr<GiantIntegerStruct> giantIntegerStruct), with Boost Spirit X3 for example?
struct GiantIntegerStruct {
    std::vector<std::unique_ptr<int>> manyInts;
};
My tentative solution is to use semantic actions. Is there an alternative?
You can use semantic actions, or you can define traits for your custom types. However, see Semantic actions runs multiple times in boost::spirit parsing (especially the two links there) - basically, consider not doing that.
I need to parse a complex AST, and it would be impossible to allocate this AST on heap memory
This somewhat confusing statement leads me to the logical conclusion that you merely need to allocate from a shared memory segment instead.
In the good old spirit of the Rule Of Zero, you could make a value-wrapper that does the allocation using whatever method you prefer and still enjoy automatic attribute propagation with "value semantics" (the wrappers will serve as mere "handles" for the actual objects in shared memory).
If you need any help getting this set up, feel free to post a new question.

Encoding stronger safety of the WinApi through the Rust FFI

I'm playing around with the winapi crate, but it doesn't seem to me to add safety to the Windows API - it seems merely to provide the types and signatures and allow us to program in mostly the same unsafe paradigms, just with Rust syntax.
Is it possible to, say, subdivide the native types further in the Rust FFI to encode the implicit lifetime information so that winapi programming is actually safer? When the winapi allocates to a pointer or a handle which must be deallocated/released with some call, can we attach the correct Drop behavior for that value? Is Rust expressive enough?
Of course we could completely wrap the winapi calls with safer objects that map between caller and the winapi, but that incurs a runtime hit during the copy/mapping and that's no fun.
(Perhaps it's clear, but I'm new to Rust and to the WinApi and even to native programming.)
I realize that string data would usually have to be converted to Rust's UTF-8. But then I wonder if it would be possible to automatically wrap a native string in a memoizing struct where the string doesn't get converted to UTF-8 (transparently) unless it's needed in Rust code (vs just being passed back to the WinApi as the same format).
Handles and pointers, though, wouldn't need any conversion; they just need the right lifetimes. But there are many kinds of pointers and many kinds of handles, and those type differences ought to be preserved in Rust. But then, to encode the library-specific free() with a Drop trait impl, I think there would be many permutations, and we'd need overloads for other winapi functions which don't care who allocated the value. Right?
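For the Drop part of the question, the usual pattern is an owning newtype that releases the handle when it is dropped. A minimal sketch, declaring the raw Win32 CloseHandle signature by hand (in practice it would come from the winapi crate); OwnedHandle is a made-up name for illustration:

use std::os::raw::c_void;

type HANDLE = *mut c_void;

#[link(name = "kernel32")]
extern "system" {
    fn CloseHandle(handle: HANDLE) -> i32;
}

// Owning wrapper: the handle is released exactly once, when the wrapper is dropped.
struct OwnedHandle(HANDLE);

impl Drop for OwnedHandle {
    fn drop(&mut self) {
        unsafe {
            CloseHandle(self.0);
        }
    }
}

Each kind of handle or pointer would get its own newtype with the matching release call in its Drop impl, which is one way the type differences could be preserved without a runtime cost.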

Do pointers in C++ perform low-level memory manipulation?

So I was reading something about C++ on Wikipedia and came across the phrase "low-level memory manipulation": it said C++ facilitates low-level memory manipulation, so the first thing that came to my head was pointers.
Can someone give me a brief and correct description of what low-level memory manipulation actually means, and examples of C++ features that do that? Don't comment if you are not sure.
https://en.wikipedia.org/wiki/C%2B%2B
I guess that the text you are referring to is saying that raw pointer manipulation is low-level in genuine C++, and that idiomatic C++11 programs should use smart pointers (like std::unique_ptr or std::shared_ptr) and standard containers (both use raw pointers internally, but this is hidden from the programmer).
Low-level memory manipulation would mean explicitly using raw pointers in your code, like YourType* ptr;, raw memory allocation like ptr = new YourType(something);, and later explicit deletion with delete ptr;.
You should read Programming: Principles and Practice Using C++.
C++ evolved from C, which is by far the dominant language for coding operating systems. The fundamental variable types closely adhere to those of the compiler's target machine, and C is used as a kind of high-level assembly language. A subset of C++ is increasingly used for the same purpose, as it provides close-to-the-metal programming. When C++ programmers look at the assembly code, they find a close correlation to their program source.

What is the Go language garbage collection approach compared to others?

I do not know much about the Go programming language, but I have seen several claims that Go has latency-free garbage collection and that it is much better than other garbage collectors (like the JVM garbage collector). I have developed applications for the JVM, and I know that the JVM garbage collector is not latency-free (especially with large heaps).
I was wondering: what is the difference between the garbage collection approach in Go and the others that makes it latency-free?
Thanks in advance.
Edit:
@All: I edited this question entirely; please vote to reopen it if you find it constructive.
Go does not have latency-free garbage collection. If you can point out where those claims are, I'd like to try to correct them.
One advantage that we believe Go has over Java is that it gives you more control over memory layout. For example, a simple 2D graphics package might define:
type Rect struct {
    Min Point
    Max Point
}

type Point struct {
    X int
    Y int
}
In Go, a Rect is just four integers contiguous in memory. You can still pass &r.Max to a function expecting a *Point; that's just a pointer into the middle of the Rect variable r.
In Java, the equivalent would be to make Rect and Point classes, in which case the Min and Max fields in Rect would be pointers to separately allocated objects. This requires more allocated objects, takes up more memory, and gives the garbage collector more to track and more to do. On the other hand, it does avoid ever needing to create a pointer to the middle of an object.
Compared to Java, then, Go gives you the programmer more control over memory layout, and you can use that control to reduce the load on the garbage collector. That can be very important in programs with large amounts of data. Control over memory layout may also be important for extracting performance from the hardware due to cache effects and such, but that's tangential to the original question.
The collector in the current Go distributions is reasonable but by no means state of the art. We have plans to spend more effort improving it over the next year or two. To be clear,
Go's garbage collector is certainly not as good as modern Java garbage collectors, but we believe it is easier in Go to write programs that don't need as much garbage collection to begin with, so the net effect can still be that garbage collection is less of an issue in a Go program than in an equivalent Java program.
