I want to get the memory of a virtual machine using the Hyper-V WMI classes.
There are four memory-related classes, but I could not find any property on them that gives the memory value.
The Msvm_Memory class has BlockSize and NumberOfBlocks properties.
When I multiply them, I do not get the correct memory size.
According to https://msdn.microsoft.com/en-us/library/hh850175(v=vs.85).aspx, this is the wrong approach anyway:
BlockSize
Data type: uint64
Access type: Read-only
The size, in bytes, of the blocks that form the storage extent. If variable block size, then the maximum block size, in bytes, should be specified. If the block size is unknown, or if a block concept is not valid (for example, for aggregate extents, memory, or logical disks), enter a 1 (one). This property is inherited from CIM_StorageExtent, and it is always set to 1048576.
Which class and property should I use?
You can use the Msvm_MemorySettingData class to access the defined memory properties of an instance. You may filter the results by InstanceID and parse AllocationUnits together with Limit to get the configured maximum memory amount.
In the following example, 1 TB of memory can be allocated for the instance "4764334E-E001-4176-82EE-5594EC9B530E": the Limit value is expressed in the units given by AllocationUnits.
Example InstanceID: "Microsoft:Definition\\4764334E-E001-4176-82EE-5594EC9B530E\\Default"
AllocationUnits: "bytes * 2^20"
Limit: 1048576
Msvm_MemorySettingData: https://msdn.microsoft.com/en-us/library/hh850176(v=vs.85).aspx
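To make the arithmetic explicit, here is a minimal sketch of the conversion (values hardcoded from the example above; the actual WQL query and COM plumbing are omitted, and AllocationUnits of "bytes * 2^20" is assumed to mean one mebibyte per unit):

#include <cstdint>
#include <iostream>

int main() {
    // Values as returned for the example instance above.
    const std::uint64_t limit = 1048576;         // Msvm_MemorySettingData.Limit
    const std::uint64_t unitBytes = 1ULL << 20;  // AllocationUnits "bytes * 2^20"
    const std::uint64_t maxBytes = limit * unitBytes;
    std::cout << "Configured maximum: " << maxBytes << " bytes\n"; // 1 TiB
    return 0;
}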
I built a RAM-based virtual block device driver with the blk-mq API that uses the none I/O scheduler. I am running fio to perform random reads and writes on the device and noticed that bv_len in each bio request is always 1024 bytes. I am not aware of any place in the code that sets this value explicitly. The file system is ext4.
Is this a default config or something I could change in code?
I am not aware of any place in the code that sets this [bv_len] value explicitly.
In a 5.7 kernel, isn't it set explicitly in __bio_add_pc_page() and __bio_add_page() (which are within block/bio.c)? You'll have to trace back through the callers to see how the passed len was set, though.
(I found this by searching for the bv_len identifier in LXR and then going through the results.)
However, #stark's comment about tune2fs is the key to any answer. You never told us the filesystem block size, and if your block device is "small", it's likely your filesystem is also small; by default, the choice of block size depends on that. If you read the mke2fs man page you will see it says the following:
-b block-size
Specify the size of blocks in bytes. Valid block-size values are 1024, 2048 and 4096 bytes per block. If omitted, block-size is heuristically determined by the filesystem size and the expected usage of the filesystem (see the -T option).
[...]
-T usage-type[,...]
[...]
If this option is not specified, mke2fs will pick a single default usage type based on the size of the filesystem to be created. If the filesystem size is less than or equal to 3 megabytes, mke2fs will use the filesystem type floppy. [...]
And if you look in the default mke2fs.conf, the blocksize for the floppy type is 1024.
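One quick way to confirm what block size your filesystem actually ended up with (tune2fs -l on the device also reports it) is to query statvfs on the mount point, for example:

#include <sys/statvfs.h>
#include <cstdio>

int main(int argc, char **argv) {
    // Print the block size of the filesystem containing 'path' (default ".").
    const char *path = (argc > 1) ? argv[1] : ".";
    struct statvfs vfs;
    if (statvfs(path, &vfs) != 0) {
        std::perror("statvfs");
        return 1;
    }
    std::printf("filesystem block size: %lu bytes\n",
                static_cast<unsigned long>(vfs.f_bsize));
    return 0;
}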
I'm working on a project involving writing packets to a memory-mapped file. Our current strategy is to create a packet class containing the following members
uint32_t packetHeader;
uint8_t packetPayload[];
uint32_t packetChecksum;
When we create a packet, first we'd like its address in memory to be a specified offset within the memory-mapped file, which I think can be done with placement new. However, we'd also like packetPayload not to be a pointer to some memory from the heap, but contiguous with the rest of the class (so we can avoid memcpying from the heap to our eventual output file).
i.e. the desired memory layout is:

Offset:  beginning of class (BOC) | BOC + 4 | BOC + 4 + length of payload
Field:   Header                   | Payload | Checksum
Would this be achievable using a length argument for the Packet class constructor? Or would we have to template this class for variably sized payloads?
Forget about trying to make that the layout of your class. You'll be fighting against the C++ language all day long. Instead, write a class that provides access to that binary layout (in shared/mapped memory). The class instance itself will not be in shared memory, and the byte range in shared/mapped memory will not be a C++ object at all; it just exists within the file mapping's address range.
Presumably the length is fixed from the moment of creation? If so, then you can safely cache the length, a pointer to the checksum, etc. in your accessor object. Since this cache isn't inside the file, you can store whatever you want, however you want, without concern for its layout. You can even use virtual member functions, because the v-table pointer goes in the class instance, not in the mapped range of the file.
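As a rough illustration (the accessor's names and the exact on-file layout are my assumptions: a 4-byte header, the payload, then a 4-byte checksum), something along these lines keeps only ordinary members in the object itself and reads and writes through a pointer into the mapping:

#include <cstdint>
#include <cstddef>
#include <cstring>

// Accessor for a packet that lives somewhere inside a mapped file.
// The PacketView object itself is an ordinary C++ object; only 'base_'
// points into the mapping.
class PacketView {
public:
    PacketView(std::uint8_t *base, std::size_t payloadLen)
        : base_(base), payloadLen_(payloadLen) {}

    std::uint32_t header() const {
        std::uint32_t h;
        std::memcpy(&h, base_, sizeof h);  // memcpy avoids alignment/aliasing trouble
        return h;
    }
    std::uint8_t *payload() { return base_ + sizeof(std::uint32_t); }
    std::size_t payloadLength() const { return payloadLen_; }

    std::uint32_t checksum() const {
        std::uint32_t c;
        std::memcpy(&c, base_ + sizeof(std::uint32_t) + payloadLen_, sizeof c);
        return c;
    }
    void setChecksum(std::uint32_t c) {
        std::memcpy(base_ + sizeof(std::uint32_t) + payloadLen_, &c, sizeof c);
    }

    std::size_t totalSize() const { return 2 * sizeof(std::uint32_t) + payloadLen_; }

private:
    std::uint8_t *base_;      // start of this packet inside the mapped file
    std::size_t payloadLen_;  // cached in the accessor, not read from the file
};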
Also, given that this lives in shared memory, if there are multiple writers you'll have to be very careful to synchronize between them. If you're just prepositioning a buffer in shared/mapped memory to avoid a copy later, but totally handing off ownership between processes so that the data is never shared by simultaneous accesses, it will be easier. You also probably want to calculate the checksum once after all the data is written, instead of trying to keep it up-to-date (and risking data races in the process) for every single write into the buffer.
First, remember that you need to know the payload length somehow: either you store it in your instance somewhere, or you template your class on the payload length.
Having said that, you will need one of:
packetOffset being a pointer
A payload length member
A checksum offset member
and you'll want to use the named constructor idiom: a static function which takes the allocation length and performs both the allocation and the setup of the offset/length/pointer member to a value corresponding to that length.
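For example, a sketch of that idea (member names are invented; 'region' is assumed to point at the packet's destination inside the mapped file, and the checksum here is only a placeholder):

#include <cstdint>
#include <cstddef>
#include <cstring>

class Packet {
public:
    // Named constructor: lays the packet out at 'region' and returns an
    // accessor whose length member is already set up.
    static Packet create(std::uint8_t *region, std::uint32_t header,
                         const std::uint8_t *payload, std::size_t payloadLen) {
        Packet p;
        p.base_ = region;
        p.payloadLen_ = payloadLen;
        std::memcpy(region, &header, sizeof header);
        std::memcpy(region + sizeof header, payload, payloadLen);
        p.writeChecksum();
        return p;
    }

    std::size_t checksumOffset() const { return sizeof(std::uint32_t) + payloadLen_; }

private:
    Packet() = default;

    void writeChecksum() {
        // Placeholder: simple byte sum of the payload; substitute your real algorithm.
        std::uint32_t sum = 0;
        for (std::size_t i = 0; i < payloadLen_; ++i)
            sum += base_[sizeof(std::uint32_t) + i];
        std::memcpy(base_ + checksumOffset(), &sum, sizeof sum);
    }

    std::uint8_t *base_ = nullptr;
    std::size_t payloadLen_ = 0;
};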
I am running systemd version 219.
root@EVOvPTX1_RE0-re0:/var/log# systemctl --version
systemd 219
+PAM -AUDIT -SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID -ELFUTILS +KMOD -IDN
I have a service, let's call it foo.service, which has the following:
[Service]
MemoryLimit=1G
I have deliberately added code that allocates 1 MB of memory 4,096 times (about 4 GB in total) when a certain event is received. The idea is that after the process consumes 1 GB of address space, memory allocation would start failing. However, this does not seem to be the case: I am able to allocate 4 GB of memory without any issues, which tells me that the memory limit specified in the service file is not being enforced.
Can anyone let me know what I am missing?
I looked at the limits file in the proc file system. It shows that the max address space is unlimited, which also suggests that the memory limit is not being enforced.
The distinction is that you have allocated memory, but you haven't actually used it. In the output of top, this is the difference between the "VIRT" column (allocated) and the "RES" column (actually used).
Try modifying your experiment to assign values to elements of a large array instead of just allocating memory and see if you hit the memory limit that way.
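For example, a small test along these lines should drive the resident set (and the cgroup's memory accounting) up, instead of only growing the address space:

#include <cstdlib>
#include <cstring>

int main() {
    const std::size_t chunk = 1 << 20;        // 1 MiB per allocation
    for (int i = 0; i < 4096; ++i) {          // nominally 4 GiB in total
        void *p = std::malloc(chunk);
        if (p == nullptr)
            return 1;
        std::memset(p, 0xA5, chunk);          // actually touch the pages
        // intentionally leaked so resident memory keeps growing
    }
    return 0;
}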
Reference: Resident and Virtual memory on Linux: A short example
Can we access memory through a struct page structure?
Note: The page belongs to high memory and has not been mapped to kernel logical address space.
Yes, we can access a page belonging to highmem through the struct page's virtual field. But in your case you can't, because, as you mentioned, the highmem page has not been mapped into kernel virtual memory.
To access it you need to create a mapping, either permanent or temporary.
To create a permanent mapping, map the page through kmap:
void *kmap(struct page *page)
This function works on either high or low memory. If the page structure belongs to a page in low memory, the page's virtual address is simply returned. If the page resides in high memory, a permanent mapping is created and the address is returned. The function may sleep, so kmap() works only in process context. Because the number of permanent mappings is limited (if not, we would not be in this mess and could just permanently map all memory), high memory should be unmapped when no longer needed. This is done via the following function, which unmaps the given page:
void kunmap(struct page *page)
The temporary mapping can be created via:
void *kmap_atomic(struct page *page, enum km_type type)
This is an atomic function, so it cannot sleep, and it can be called in interrupt context. The mapping is called temporary because the next call to kmap_atomic will overwrite the previous mapping.
If there is no value in the virtual field, then you cannot access that specific physical frame. The simple reason is that struct page records the mapping between physical and virtual addresses, and a system with a large amount of memory cannot map all of it into kernel space, so high memory is mapped dynamically. But to access that memory it must be mapped, i.e. void *virtual should not be NULL.
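For illustration, a typical kernel-side usage pattern looks roughly like this (sketch only; kmap/kunmap come from linux/highmem.h):

/* Copy the contents of a (possibly highmem) page into dst. */
static void copy_page_contents(struct page *pg, void *dst)
{
    void *vaddr = kmap(pg);        /* creates a mapping if pg is in high memory */
    memcpy(dst, vaddr, PAGE_SIZE);
    kunmap(pg);                    /* release the mapping when done */
}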
Is it possible to wrap up memory mapped files something like this?
TVirtualMemoryManager = class
public
function AllocMem (Size : Integer) : Pointer;
procedure FreeMem (Ptr : Pointer);
end;
Since the memory-mapped file API functions all take offsets, I don't know how to manage the free areas in the memory-mapped files. My only idea is to implement some kind of basic memory management (maintaining free lists for different block sizes), but I don't know how efficient this will be.
EDIT: What I really want (as David made clear to me) is this:
IVirtualMemory = interface
function ReadMem (Addr : Int64) : TBytes;
function AllocateMem (Data : TBytes) : Int64;
procedure FreeMem (Addr : Int64);
end;
I need to store contiguous blocks of bytes (each relatively small) in virtual memory and be able to read them back into memory using a 64-bit address. Most of the time access is read-only. If a write is necessary I would just use FreeMem followed by AllocMem, since the size will be different anyway.
I want a wrapper for a memory-mapped file with this interface. Internally it has a handle to a memory-mapped file and uses MapViewOfFile on each ReadMem request. The Addr 64-bit integers are just offsets into the memory-mapped file. The open question is how to assign those addresses; I currently keep a list of free blocks that I maintain.
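Roughly, what I have in mind for ReadMem looks like this (shown in C++/WinAPI terms for illustration; hFileMapping would come from CreateFileMapping, and the view offset passed to MapViewOfFile must be a multiple of the system allocation granularity):

#include <windows.h>
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> ReadMem(HANDLE hFileMapping, std::uint64_t addr, std::size_t len)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const std::uint64_t gran = si.dwAllocationGranularity;
    const std::uint64_t viewStart = addr - (addr % gran);   // align the view down
    const std::size_t delta = static_cast<std::size_t>(addr - viewStart);

    void *view = MapViewOfFile(hFileMapping, FILE_MAP_READ,
                               static_cast<DWORD>(viewStart >> 32),
                               static_cast<DWORD>(viewStart & 0xFFFFFFFFu),
                               delta + len);
    if (view == nullptr)
        return {};

    const std::uint8_t *p = static_cast<const std::uint8_t *>(view) + delta;
    std::vector<std::uint8_t> result(p, p + len);
    UnmapViewOfFile(view);
    return result;
}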
Your proposal that "internally it has a handle to a memory-mapped file and uses MapViewOfFile on each ReadMem request" will just be a waste of CPU resources, IMHO.
It is worth saying that your GetMem / FreeMem requirement won't be able to break the 3/4 GB barrier. Since all allocated memory will stay mapped into the address space until a call to FreeMem, you'll run short of memory space, just as with the regular Delphi memory manager. The best you can do is to rely on FastMM4 and change your program to reduce its memory use.
IMHO you'll have to change/update your specification. For instance, your "updated" question sounds just like a regular storage problem.
What you want is to be able to allocate more than 3/4 GB of data for your application. You have a working implementation of such a feature in our SynBigTable open source unit. This is a fast and light NoSQL solution in pure Delphi.
It is able to create a file of any size (limited only by 64-bit offsets), then maps the content of each record into memory on request. It will use a memory mapping of the file, if possible. You can implement your interface very directly with the TSynBigTable methods: ReadMem=Get, AllocMem=Add, FreeMem=Delete. The IDs will be your pointer-like values, and RawByteString will be used instead of TBytes.
You can access any block of data using an integer ID, or a string ID, or even use a sophisticated field layout (inside the record, or as in-memory metadata - including indexes and fast search).
Or rely on a regular embedded SQL database. For instance, SQLite3 is very good at handling BLOB fields and is able to store huge amounts of data. With a simple in-memory caching mechanism for the most used records, it could be a powerful solution.