I have a document class hierarchy such as the following:
Document ----- L1 ----- L2 ----- L3
My current setup does not allow creation of instances of L1 or L2; only L3 has allowInstance=true, and hence all the document instances in the repository are of class type L3.
I have a requirement to eliminate all L2 classes so that the class hierarchy becomes: Document ----- L1 ----- L3.
Now, my question is: is this possible (maybe by changing the Superclass Definition class property)? And if yes, should I expect reclassification of the current instances of type L3 (even though their class did not change)?
One option I could think of:
1. Create a new copy of the L3 class below L1, called L3temp (with all the properties of L3).
2. Reclassify the L3 documents to L3temp.
3. Remove the L2 and L3 classes (they should have no documents by then).
4. Rename class L3temp to L3.
When testing cache access latency on my gem5, the access latency of the L1 is 100 cycles lower than that of the L2. My modification was to change tag_latency, data_latency, and response_latency in the L2 class in gem5/configs/common/Caches.py. Their original value was 20; I changed them all to 5, or all to 0. Every time I recompile gem5 and run it again, the timing does not change. Why is that?
I am using the classic cache.
By the way, do data_latency, tag_latency, and response_latency mean the data access delay, the tag lookup delay, and the delay in responding to the CPU?
gem5/build/X86/gem5.opt --debug-flags=O3CPUAll --debug-start=120000000000
--outdir=gem5/results/test/final gem5/configs/example/attack_code_config.py
--cmd=final
--benchmark_stdout=gem5/results/test/final/final.out
--benchmark_stderr=gem5/results/test/final/final.err
--mem-size=4GB --l1d_size=32kB --l1d_assoc=8 --l1i_size=32kB --l1i_assoc=8
--l2_size=256kB --l2_assoc=8 --l1d_replacement=LRU --l1i_replacement=LRU
--caches --cpu-type=DerivO3CPU
--cmd, --l1d_replacement, etc. are options I added to the config script myself.
If a process writes an immediate operand to an address
int a;
a = 5;
what happens in the L1 data cache and DRAM?
Is "5" filled into DRAM first, or into the L1 data cache first?
The compiler assigns some memory address to variable a. When the second statement, a = 5, is executed, if the system is a multi-processor system, a request is sent downstream to invalidate all other copies of the line and give the processor executing the code that particular cache line in a unique (exclusive) coherency state. The value 5 is then written to the L1 cache (assuming the compiler emits an ordinary store that keeps the line in the cache, rather than one that forces it to be written back to memory/DRAM). So the L1 data cache receives the value first; DRAM is only updated later, when the line is eventually written back.
I'm wondering if anyone could help me understand how a virtual address is mapped to its address on the backing store, which is used to hold paged-out pages of all user processes.
Is it a static mapping or a hash algorithm? If it's static, where is such a mapping kept? It seems it can't be in the TLB or page table, since according to https://en.wikipedia.org/wiki/Page_table, the PTE is removed from both the TLB and the page table when a page is moved out. A description of the algorithm and the C structs containing such info would be helpful.
Whether it's a static mapping or a hash algorithm, how do you guarantee that no two processes will map their addresses to the same location on the swap partition, since the virtual address space of each process is so big (2^64) and the swap space is so small?
So:
During page-in, how does the OS know where on the swap partition to find the page (corresponding to the virtual address accessed by the user process) to move in?
When a physical page needs to be paged out, how does the OS know where to put it on the swap partition?
For the first part of your question: it is actually hardware dependent, but the generic way is to keep a reference to the swap block containing the swapped-out page (depending on the implementation of the swap subsystem, it could be a pointer, a block number, or an offset into a table) in its corresponding page table entry.
EDIT: The TLB is a fast associative cache that helps do the virtual-to-physical page mapping very quickly. When a page is swapped out, its entry in the TLB can be replaced by a newly active page. But the entry in the page table cannot be replaced, because page tables are not associative memory. A page table remains persistent in memory for the entire duration of the process, and no entry can be removed or replaced (by another virtual page). Entries in page tables can only be mapped or unmapped. When they are unmapped (because of swapping or freeing), the content of the entry can either hold a reference to the swap block or just an invalid value.
For the second part of your question: the kernel maintains a list of free blocks in the swap partition. Whenever it needs to evict a RAM page, it allocates a free block, and the block reference is returned so that it can be inserted into the PTE. When the page comes back to RAM, the disk block is freed so that it can be used by other pages.
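For illustration only, here is a minimal C sketch of that idea (the field and function names are made up; real kernels such as Linux pack this information into a single PTE word): the page table entry holds a physical frame number while the page is resident and a swap block number once it has been paged out, and a trivial bitmap allocator hands out free swap blocks.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical PTE: when 'present' is set, 'frame' is the physical frame
 * number; when it is clear, 'swap_block' remembers where the page lives
 * on the swap partition. */
struct pte {
    bool present;
    union {
        uint32_t frame;       /* physical frame number (present == true)  */
        uint32_t swap_block;  /* block index in swap   (present == false) */
    };
};

/* Trivial swap-space allocator: a bitmap of free blocks. */
#define SWAP_BLOCKS 4096
static uint8_t swap_used[SWAP_BLOCKS];

static int swap_alloc(void)
{
    for (int i = 0; i < SWAP_BLOCKS; i++)
        if (!swap_used[i]) { swap_used[i] = 1; return i; }
    return -1;                /* swap full */
}

static void swap_free(int block) { swap_used[block] = 0; }

/* Page-out: write the frame to a freshly allocated swap block and record
 * that block in the PTE (error handling omitted). */
static void page_out(struct pte *p)
{
    int block = swap_alloc();
    /* ... write frame p->frame to disk block 'block' ... */
    p->present = false;
    p->swap_block = (uint32_t)block;
}

/* Page-in: the PTE itself tells us which swap block to read back. */
static void page_in(struct pte *p, uint32_t new_frame)
{
    /* ... read disk block p->swap_block into frame 'new_frame' ... */
    swap_free((int)p->swap_block);
    p->present = true;
    p->frame = new_frame;
}

Because the swap block is taken from the free list at page-out time and recorded per PTE, no two pages (whether from the same or different processes) can be assigned the same swap location, which also addresses the collision question above.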
During page-in, how does the OS know where on the swap device to find the page (corresponding to the virtual address accessed by the user process) to move in?
That can actually be a fairly complicated process. The operating system has to maintain a table of where each process's pages are mapped to. This can be complicated because pages can be mapped to multiple devices, and even to multiple files on the same device. Some systems use the executable file itself for paging.
When a physical page needs to be paged out, after the virtual address for the physical page has been looked up in the TLB, how does the OS know where to put it on the swap device?
On a rationally designed operating system, the secondary storage is allocated (or determined) when the virtual page is mapped to the process. That location remains fixed for the duration of the program being run.
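As a rough sketch of that idea (hypothetical C structures, not any particular kernel's), each mapped region can simply remember which backing object and offset it was given when the mapping was created, so the on-disk location of every page is fixed for the life of the program:

#include <stdint.h>

/* Hypothetical description of one mapped region: the backing object (an
 * executable file, a data file, or the swap device) and the offset of the
 * region inside it are decided when the region is mapped and never change
 * while the program runs. */
struct backing_object {
    const char *name;      /* e.g. "/bin/prog" or "swap" */
    /* ... device / inode handle ... */
};

struct vm_region {
    uintptr_t start, end;            /* virtual address range        */
    struct backing_object *backing;  /* where its pages live on disk */
    uint64_t offset;                 /* offset of 'start' in backing */
};

/* Given a faulting virtual address, the fixed mapping tells us exactly
 * which disk location to read the page from (or write it back to). */
static uint64_t backing_offset(const struct vm_region *r, uintptr_t vaddr)
{
    return r->offset + (vaddr - r->start);
}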
I am working with the Allwinner H3 SoC. It has four Cortex-A7 cores.
U-Boot brings up just one core. I am in the process of bringing up the other cores. However, I am getting stuck trying to initialize the caches and MMUs for the other cores.
In the secondary core's boot code, I start by disabling the caches and the MMU. I then invalidate the branch prediction array and the I-cache. Next I try to set TTBR0 to point to a pre-made page table with VA=PA flat mappings for my entire address space. However, writing TTBR0 to point to my page table causes the core to crash, even with the MMU disabled. Any ideas?
Is there some preferred or correct order in initializing this stuff in a secondary core?
Thank you :)
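For reference, here is a minimal C sketch of this kind of secondary-core init sequence on an ARMv7-A core (GCC inline assembly; the CP15 register encodings come from the ARMv7-A architecture, while the page-table symbol boot_pagetable and the exact SCTLR bits kept are assumptions on my part, and D-cache invalidation by set/way is omitted for brevity):

#include <stdint.h>

extern uint32_t boot_pagetable[];   /* 16 KiB-aligned L1 translation table */

static void secondary_mmu_init(void)
{
    uint32_t sctlr;

    /* 1. Make sure MMU (M), data cache (C) and instruction cache (I) are off. */
    __asm__ volatile("mrc p15, 0, %0, c1, c0, 0" : "=r"(sctlr));
    sctlr &= ~((1u << 0) | (1u << 2) | (1u << 12));
    __asm__ volatile("mcr p15, 0, %0, c1, c0, 0" :: "r"(sctlr) : "memory");

    /* 2. Invalidate the instruction cache, branch predictor and TLBs. */
    __asm__ volatile("mcr p15, 0, %0, c7, c5, 0" :: "r"(0));   /* ICIALLU */
    __asm__ volatile("mcr p15, 0, %0, c7, c5, 6" :: "r"(0));   /* BPIALL  */
    __asm__ volatile("mcr p15, 0, %0, c8, c7, 0" :: "r"(0));   /* TLBIALL */
    __asm__ volatile("dsb\n\tisb");

    /* 3. Program the translation regime: TTBCR = 0 (use TTBR0 only, short
     *    descriptors), DACR = client access for domain 0, then TTBR0. */
    __asm__ volatile("mcr p15, 0, %0, c2, c0, 2" :: "r"(0));    /* TTBCR */
    __asm__ volatile("mcr p15, 0, %0, c3, c0, 0" :: "r"(0x1));  /* DACR  */
    __asm__ volatile("mcr p15, 0, %0, c2, c0, 0"
                     :: "r"((uint32_t)(uintptr_t)boot_pagetable) : "memory");
    __asm__ volatile("dsb\n\tisb");

    /* 4. Finally enable the MMU (and, once stable, the caches). */
    sctlr |= (1u << 0);
    __asm__ volatile("mcr p15, 0, %0, c1, c0, 0" :: "r"(sctlr) : "memory");
    __asm__ volatile("isb");
}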
How is a write to a memory location that's not in the cache handled in the MESI protocol? The state diagrams I have seen mark it as a Write Miss, but I can't follow what happens in reality.
I think this results in a load operation on the bus to ensure that the processor trying to do the write gets exclusive access to the location, and then the block is modified. Is this how it's done in reality, or is the handling of a write in the Invalid state implementation-defined?
If the policy is allocate on a write miss:
If the block is not present in any other cache but only in main memory, the block is first fetched into the cache, marked as M (Modified) state, and then the write proceeds.
If the block is present in some other caches, the copies in those caches are first invalidated, so that this cache gains the only copy of the block, and then the write proceeds.
If the policy is no allocate on write miss: all write misses go directly to main memory; a copy is not fetched into the cache. If main memory does not hold the only copy of the block (some other cache has a copy), then the other copies are first invalidated and the write takes place in main memory.
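As a toy illustration of both policies, here is a small C sketch of what a coherence controller does on a write miss (the function and type names are made up, and real protocols add request queues, acknowledgement counting, and so on):

#include <stdbool.h>
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_state_t;

/* Stand-ins for bus/interconnect actions. */
static void bus_read_for_ownership(unsigned addr)   /* fetch block and invalidate other copies */
{
    printf("BusRdX  0x%x\n", addr);
}
static void bus_invalidate(unsigned addr)           /* invalidate other copies, no data needed */
{
    printf("BusInv  0x%x\n", addr);
}
static void memory_write(unsigned addr, int data)
{
    printf("MemWr   0x%x = %d\n", addr, data);
}

/* Handle a store that misses in this cache (the line is currently INVALID). */
static mesi_state_t handle_write_miss(unsigned addr, int data,
                                      bool write_allocate, bool others_have_copy)
{
    if (write_allocate) {
        /* Fetch the block with exclusive ownership; any other copies are
         * invalidated as part of the same transaction.  The local write
         * then completes and the line ends up Modified. */
        bus_read_for_ownership(addr);
        /* ... install block in the cache, perform the write locally ... */
        return MODIFIED;
    }

    /* No-allocate: the write goes straight to main memory.  If other caches
     * hold the block, their copies must be invalidated first so they do not
     * keep a stale value. */
    if (others_have_copy)
        bus_invalidate(addr);
    memory_write(addr, data);
    return INVALID;          /* this cache still does not hold the block */
}

int main(void)
{
    handle_write_miss(0x1000, 5, true,  true);   /* allocate-on-write-miss    */
    handle_write_miss(0x2000, 7, false, true);   /* no-allocate-on-write-miss */
    return 0;
}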