ARM MMU Debugging

I have been working on a bare-metal Raspberry Pi project, and I am now attempting to initialize the memory management unit, following the documentation and examples.
When I run it on the Pi, however, nothing happens afterwards, and when I run it in QEMU under gdb, gdb either crashes or I get a Prefetch Abort as exception 3. Have I incorrectly set some property such as shareability, incorrectly used isb, or is there something else I have missed?
Here is my code:
pub unsafe fn init_mmu(start: usize) {
    let tcr_el1 =
        (0b10 << 30) | // 4 KiB granule
        (0b10 << 28) | // TTBR1 outer shareable
        (25 << 16)   | // TTBR1 size
        (0b10 << 12) | // TTBR0 outer shareable
        25;            // TTBR0 size
    let sctlr_el1 =
        (1 << 4) | // Enforce EL0 stack alignment
        (1 << 3) | // Enforce EL1 stack alignment
        (1 << 1) | // Enforce access alignment
        1;         // Enable MMU
    // 0000_0000: nGnRnE device memory
    // 0100_0100: non-cacheable
    let mair = 0b0000_0000_0100_0100;
    let mut table = core::slice::from_raw_parts_mut(start as *mut usize, 2048);
    for i in 0..(512 - 8) {
        table[i] = (i << 9) |
            (1 << 10) | // AF
            (0 << 2)  | // MAIR index
            1;          // Block entry
    }
    for i in (512 - 8)..512 {
        table[i] = (i << 9) |
            (1 << 10) | // AF
            (1 << 2)  | // MAIR index
            1;          // Block entry
    }
    table[512] = (512 << 9) |
        (1 << 10) | // AF
        (1 << 2)  | // MAIR index
        1;          // Block entry
    table[1024] = (0x8000000000000000) | start | 3;
    table[1025] = (0x8000000000000000) | (start + 512 * 64) | 3;
    write!("mair_el1", mair);
    write!("ttbr0_el1", start + 1024 * 64);
    asm!("isb");
    write!("tcr_el1", tcr_el1);
    asm!("isb");
    write!("sctlr_el1", sctlr_el1);
    asm!("isb");
}

Related

How can I modify a bit at a certain position in Go?

I tried to modify a bit at a certain position, but ran into a problem.
For example I have 1000000001, how can I modify it to 0000000001?
You can apply a bitmask to only keep the bits you are interested in.
In this case, if you only want the last bit, you apply the bitmask 0b0000000001.
https://go.dev/play/p/RNQEcON7sw1
// 'x' is your value
x := 0b1000000001
// Make the bitmask
mask := 0b0000000001
// Apply the bitmask with bitwise AND
output := x&mask
fmt.Println("This value == 1: ", output)
Explanation
& is a bitwise operator for "AND". Which means it goes through both values bit by bit and sets the resulting bit to 1 if and only if both input bits are 1. I included a truth table for the AND operator below.
+-----------+----------+--------------+
| Input Bit | Mask Bit | Input & Mask |
+-----------+----------+--------------+
|     0     |    0     |      0       |
|     0     |    1     |      0       |
|     1     |    0     |      0       |
|     1     |    1     |      1       |
+-----------+----------+--------------+
Because the mask only has a 1 in the last position, only the last bit of the original input is kept. All preceding bits will always be 0.
Construct a mask that has a one in every place you want to manipulate
Use bitwise OR to set bits.
Use bitwise AND with the inverse mask to clear bits.
Use XOR to toggle bits.
package main

import "fmt"

func main() {
    k := 3                      // manipulate the 3rd bit ...
    mask := uint8(1) << (k - 1) // ... using 0b00000100 as a mask

    var n uint8 = 0b10101010
    fmt.Printf("0b%08b\n", n) // 0b10101010

    // set kth bit
    n |= mask
    fmt.Printf("0b%08b\n", n) // 0b10101110

    // clear kth bit
    n &^= mask // &^ is Go's AND NOT operator
    fmt.Printf("0b%08b\n", n) // 0b10101010

    // toggle kth bit
    n ^= mask
    fmt.Printf("0b%08b\n", n) // 0b10101110
}

func test() {
    i := 1 << 9      //1000000000
    i = i | (1 << 8) //1000000000 | 0100000000 == 1100000000
    i = i | (1 << 7) //1100000000 | 0010000000 == 1110000000
    i = i | (1 << 0) //1110000000 | 0000000001 == 1110000001
    fmt.Printf("BEFORE: %010b\n", i) // 1110000001

    i = i & ((1 << 9) - 1) // 1110000001 & ((1000000000) - 1) == 1110000001 & (0111111111) == 0110000001
    fmt.Printf("AFTER: %010b\n", i) // 0110000001
}

The data method of vector gives an unexpected result

I am using the data method of vector in C++, but I have a problem. The code is below:
#include <iostream>
#include <vector>

int main ()
{
    std::vector<int> myvector (5);
    int* p = myvector.data();

    *p = 10;
    ++p;
    *p = 20;
    p[2] = 100;

    std::cout << "myvector contains:";
    for (unsigned i = 0; i < myvector.size(); ++i)
        std::cout << ' ' << myvector[i];
    std::cout << '\n';

    return 0;
}
The result is myvector contains: 10 20 0 100 0, but why is the result not myvector contains: 10 20 100 0 0? The first write, *p = 10;, gives 10, and the second, ++p; *p = 20;, gives 20; that is all correct. But the third, p[2] = 100;, should put 100 in the third element, yet that element is 0. Why?
With visuals:
std::vector<int> myvector (5);
// ---------------------
// | 0 | 0 | 0 | 0 | 0 |
// ---------------------

int* p = myvector.data();
// ---------------------
// | 0 | 0 | 0 | 0 | 0 |
// ---------------------
//   ^
//   p

*p = 10;
// ----------------------
// | 10 | 0 | 0 | 0 | 0 |
// ----------------------
//   ^
//   p

++p;
// ----------------------
// | 10 | 0 | 0 | 0 | 0 |
// ----------------------
//        ^
//        p

*p = 20;
// -----------------------
// | 10 | 20 | 0 | 0 | 0 |
// -----------------------
//        ^
//        p

p[2] = 100;
// -------------------------
// | 10 | 20 | 0 | 100 | 0 |
// -------------------------
//        ^         ^
//        p        p[2]
It's helpful to remember that p[2] is a shorter way to say *(p + 2).
Because you are modifying p itself.
After ++p (which, as a reminder, is equivalent to p = p + 1), p points to the element at index 1, so p[2] points to the element at index 3 from the beginning of the vector, which is why the fourth element is changed instead.
After ++p, pointer p is pointing to myvector[1].
Then we have:
p[0] pointing to myvector[1]
p[1] pointing to myvector[2]
p[2] pointing to myvector[3]

VB.NET enum declaration syntax

I recently saw a declaration of enum that looks like this:
<Serializable()>
<Flags()>
Public Enum SiteRoles
    ADMIN = 10 << 0
    REGULAR = 5 << 1
    GUEST = 1 << 2
End Enum
I was wondering if someone can explain what does "<<" syntax do or what it is used for? Thank you...
The enum has a Flags attribute, which means that the values are used as bit flags.
Bit Flags are useful when representing more than one attribute in a variable
These are the flags for a 16-bit (attribute) variable (hope you see the pattern, which can continue on to any number of bits, limited by the platform/variable type of course):
BIT1 = 0x1 (1 << 0)
BIT2 = 0x2 (1 << 1)
BIT3 = 0x4 (1 << 2)
BIT4 = 0x8 (1 << 3)
BIT5 = 0x10 (1 << 4)
BIT6 = 0x20 (1 << 5)
BIT7 = 0x40 (1 << 6)
BIT8 = 0x80 (1 << 7)
BIT9 = 0x100 (1 << 8)
BIT10 = 0x200 (1 << 9)
BIT11 = 0x400 (1 << 10)
BIT12 = 0x800 (1 << 11)
BIT13 = 0x1000 (1 << 12)
BIT14 = 0x2000 (1 << 13)
BIT15 = 0x4000 (1 << 14)
BIT16 = 0x8000 (1 << 15)
To set a bit (attribute) you simply use the bitwise or operator:
UInt16 flags;
flags |= BIT1; // set bit (Attribute) 1
flags |= BIT13; // set bit (Attribute) 13
To determine of a bit (attribute) is set you simply use the bitwise and operator:
bool bit1 = (flags & BIT1) > 0; // true;
bool bit13 = (flags & BIT13) > 0; // true;
bool bit16 = (flags & BIT16) > 0; // false;
In your example above, ADMIN and REGULAR share the same value: (10 << 0) and (5 << 1) both equal 10, which has bits 2 and 4 set. GUEST is bit number 3 (1 << 2 = 4).
Therefore you could determine the SiteRole by using the bitwise AND operator, as shown above:
UInt32 SiteRole = ...;
IsAdmin = (SiteRole & ADMIN) > 0;
IsRegular = (SiteRole & REGULAR) > 0;
IsGuest = (SiteRole & GUEST) > 0;
Of course, you can also set the SiteRole by using the bitwise OR operator, as shown above:
UInt32 SiteRole = 0x00000000;
SiteRole |= ADMIN;
The real question is why do ADMIN and REGULAR have the same values? Maybe it's a bug.
These are bitwise shift operations. Bitwise shifts are used here to transform the integer value of the enum members to a different number. Each enum member will actually have the bit-shifted value. This is probably an obfuscation technique and is the same as setting a fixed integer value for each enum member.
Each integer has a binary representation (like 0111011); bit shifting allows bits to move to the left (<<) or right (>>) depending on which operator is used.
For example:
10 << 0 means:
1010 (10 in binary form) moved with 0 bits left is 1010
5 << 1 means:
101 (5 in binary form) moved one bit to the left = 1010 (added a zero to the right)
so 5 << 1 is 10 (because 1010 represents the number 10)
and so on.
In general the x << y operation can be seen as a fast way to calculate x * Pow(2, y);
You can read this article for more detailed info on bit shifting in .NET http://www.blackwasp.co.uk/CSharpShiftOperators.aspx

Go << and >> operators

Could someone please explain to me the usage of << and >> in Go? I guess it is similar to some other languages.
The super (possibly over) simplified definition is just that << is used for "times 2" and >> is for "divided by 2" - and the number after it is how many times.
So n << x is "n times 2, x times". And y >> z is "y divided by 2, z times".
For example, 1 << 5 is "1 times 2, 5 times" or 32. And 32 >> 5 is "32 divided by 2, 5 times" or 1.
From the spec at http://golang.org/doc/go_spec.html, it seems that at least with integers, it's a binary shift. For example, binary 0b00001000 >> 1 would be 0b00000100, and 0b00001000 << 1 would be 0b00010000.
Go apparently doesn't accept the 0b notation for binary integers. I was just using it for the example. In decimal, 8 >> 1 is 4, and 8 << 1 is 16. Shifting left by one is the same as multiplication by 2, and shifting right by one is the same as dividing by two, discarding any remainder.
The << and >> operators are Go Arithmetic Operators.
<<   left shift    integer << unsigned integer
>>   right shift   integer >> unsigned integer
The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. The shift count must be an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
They are basically arithmetic operators, and they work the same in other languages. Here is a basic Go, C, and PHP example:
GO
package main

import (
    "fmt"
)

func main() {
    var t, i uint
    t, i = 1, 1
    for i = 1; i < 10; i++ {
        fmt.Printf("%d << %d = %d \n", t, i, t<<i)
    }
    fmt.Println()
    t = 512
    for i = 1; i < 10; i++ {
        fmt.Printf("%d >> %d = %d \n", t, i, t>>i)
    }
}
GO Demo
C
#include <stdio.h>

int main()
{
    int t = 1;
    int i = 1;
    for (i = 1; i < 10; i++) {
        printf("%d << %d = %d \n", t, i, t << i);
    }
    printf("\n");
    t = 512;
    for (i = 1; i < 10; i++) {
        printf("%d >> %d = %d \n", t, i, t >> i);
    }
    return 0;
}
C Demo
PHP
$t = $i = 1;
for ($i = 1; $i < 10; $i++) {
    printf("%d << %d = %d \n", $t, $i, $t << $i);
}
print PHP_EOL;
$t = 512;
for ($i = 1; $i < 10; $i++) {
    printf("%d >> %d = %d \n", $t, $i, $t >> $i);
}
PHP Demo
They would all output
1 << 1 = 2
1 << 2 = 4
1 << 3 = 8
1 << 4 = 16
1 << 5 = 32
1 << 6 = 64
1 << 7 = 128
1 << 8 = 256
1 << 9 = 512
512 >> 1 = 256
512 >> 2 = 128
512 >> 3 = 64
512 >> 4 = 32
512 >> 5 = 16
512 >> 6 = 8
512 >> 7 = 4
512 >> 8 = 2
512 >> 9 = 1
n << x = n * 2^x   Example: 3 << 5 = 3 * 2^5 = 96
y >> z = y / 2^z   Example: 512 >> 4 = 512 / 2^4 = 32
<< is left shift. >> is sign-extending right shift when the left operand is a signed integer, and is zero-extending right shift when the left operand is an unsigned integer.
To better understand >> think of
var u uint32 = 0x80000000;
var i int32 = -2;
u >> 1; // Is 0x40000000 similar to >>> in Java
i >> 1; // Is -1 similar to >> in Java
So when applied to an unsigned integer, the bits at the left are filled with zero, whereas when applied to a signed integer, the bits at the left are filled with the leftmost bit (which is 1 when the signed integer is negative as per 2's complement).
Go's << and >> are similar to shifts (that is, division or multiplication by a power of 2) in other languages, but because Go is a safer language than C/C++ it does some extra work when the shift count is larger than the width of the type.
Shift instructions in x86 CPUs consider only 5 bits (6 bits on 64-bit x86 CPUs) of the shift count. In languages like C/C++, the shift operator translates into a single CPU instruction.
The following Go code
x := 10
y := uint(1025) // A big shift count
println(x >> y)
println(x << y)
prints
0
0
while a C/C++ program would print
5
20
In decimal math, when we multiply or divide by 10, we affect the zeros on the end of the number.
In binary, 2 has the same effect: shifting adds a zero to the end or removes the last digit.
<< is the bitwise left shift operator, which shifts the bits of the corresponding integer to the left, the rightmost bit being '0' after the shift.
For example:
In gcc we have 4-byte integers, which means 32 bits.
like binary representation of 3 is
00000000 00000000 00000000 00000011
3<<1 would give
00000000 00000000 00000000 00000110 which is 6.
In general 1<<x would give you 2^x
In gcc, 1<<20 would give 2^20, that is 1048576, but in tcc it would give 0 as the result, because integers are 2 bytes in tcc.
In simple terms, we can take it like this in Go: n << x is "n times 2, x times", and y >> z is "y divided by 2, z times".
n << x = n * 2^x   Example: 3 << 5 = 3 * 2^5 = 96
y >> z = y / 2^z   Example: 512 >> 4 = 512 / 2^4 = 32
These are the left and right bitwise shift operators.

What's the best way to handle an "all combinations" project?

I've been assigned a school project in which I need to come up with as many integers as possible using only the integers 2 3 4 and the operators + - * / %. I then have to output the integers with cout along with how I got that answer. For example:
cout << "2 + 3 - 4 = " << 2 + 3 - 4;
I can only use each integer once per cout statement, and there can be no duplicate answers.
Everyone else seems to be doing the "brute force" method (i.e., copying and pasting the same statements and changing the numbers and operators), but that hardly seems efficient. I figured I would try cycling through each number and operator one-by-one and checking to see if the answer has already been found, but I'm unsure of what the easiest way to do this would be.
I suppose I could use nested loops, but there's still the problem of checking to see if the answer has already been found. I tried storing the answers in a vector, but I couldn't pass the vector to a user-defined function that checked to see if a value existed in the vector.
You could use a std::map or std::unordered_map from the C++ standard library. These structures store key-value pairs efficiently. Read up on them before you use them, but they might give you a good starting point. Hint: the integers you compute would probably make good keys.
Assuming you can use each of the numbers in the set (2, 3, 4) only once, there are 3! ways of arranging these 3 numbers. Then there are 2 places for an operator, and you have 5 operators (+ - * / %), so there are 5 * 5 = 25 ways to fill them. In total that gives 3! * 25 expressions.
Then you can create a hash map where the key is the number and the value is the expression. If the hash map already contains a key, you skip that expression.
You could try a bit of meta-programming, as follows. It has the advantage of using C++ itself to calculate the expressions rather than you trying to write your own evaluator (and possibly getting it wrong):
#include <stdlib.h>
#include <iostream>
#include <fstream>

using namespace std;

int main (void) {
    int n1, n2, n3;
    const char *ops[] = {" + ", " - ", " * ", " / ", " % ", 0};
    const char **op1, **op2;
    ofstream of;

    of.open ("prog2.cpp", ios::out);
    of << "#include <iostream>\n";
    of << "using namespace std;\n";
    of << "#define IXCOUNT 49\n\n";
    of << "static int mkIdx (int tot) {\n";
    of << "    int ix = (IXCOUNT / 2) + tot;\n";
    of << "    if ((ix >= 0) && (ix < IXCOUNT)) return ix;\n";
    of << "    cout << \"Need more index space, "
       << "try \" << IXCOUNT + 1 + (ix - IXCOUNT) * 2 << \"\\n\";\n";
    of << "    return -1;\n";
    of << "}\n\n";
    of << "int main (void) {\n";
    of << "    int tot, ix, used[IXCOUNT];\n\n";
    of << "    for (ix = 0; ix < sizeof(used)/sizeof(*used); ix++)\n";
    of << "        used[ix] = 0;\n\n";

    for (n1 = 2; n1 <= 4; n1++) {
        for (n2 = 2; n2 <= 4; n2++) {
            if (n2 != n1) {
                for (n3 = 2; n3 <= 4; n3++) {
                    if ((n3 != n1) && (n3 != n2)) {
                        for (op1 = ops; *op1 != 0; op1++) {
                            for (op2 = ops; *op2 != 0; op2++) {
                                of << "    tot = " << n1 << *op1 << n2 << *op2 << n3 << ";\n";
                                of << "    if ((ix = mkIdx (tot)) < 0) return ix;\n";
                                of << "    if (!used[ix])\n";
                                of << "        cout << " << n1 << " << \"" << *op1 << "\" << "
                                   << n2 << " << \"" << *op2 << "\" << " << n3
                                   << " << \" = \" << tot << \"\\n\";\n";
                                of << "    used[ix] = 1;\n\n";
                            }
                        }
                    }
                }
            }
        }
    }

    of << "    return 0;\n";
    of << "}\n";
    of.close();

    system ("g++ -o prog2 prog2.cpp ; ./prog2");
    return 0;
}
This gives you:
2 + 3 + 4 = 9
2 + 3 - 4 = 1
2 + 3 * 4 = 14
2 + 3 / 4 = 2
2 + 3 % 4 = 5
2 - 3 + 4 = 3
2 - 3 - 4 = -5
2 - 3 * 4 = -10
2 - 3 % 4 = -1
2 * 3 + 4 = 10
2 * 3 * 4 = 24
2 / 3 + 4 = 4
2 / 3 - 4 = -4
2 / 3 * 4 = 0
2 % 3 + 4 = 6
2 % 3 - 4 = -2
2 % 3 * 4 = 8
2 * 4 + 3 = 11
2 / 4 - 3 = -3
I'm not entirely certain of the wisdom of handing this in as an assignment however :-)
