I'm using a memory-editing application known as Cheat Engine. I attach Cheat Engine to a game. In my game, I have a 32-bit integer known as HP. HP is stored at a memory address A. If I restart my game, HP is stored at a new memory address B. Using Cheat Engine, I can do a pointer scan and find a static memory address, C, that points to another memory address, D, with an accompanying offset, such that [D + offset] always stores HP's memory address during that session. So if I dereference [D + offset], I always get the memory address that stores HP.
Here is a diagram:
A or B --> HP
D + offset --> A or B
C --> D
What is the benefit of using offsets? Why can't C just point to A or B directly? I know that offsets are useful when dealing with arrays in the C language. Does that mean that whenever I see an offset applied to a pointer, the pointer points to the first element of an array and the offset refers to one of its elements?
It should be easy to imagine and understand if you know the C programming language. Anything you write in C is very close to the actual machine code that gets generated when compiling the program.
The abstraction of an object in C is often done with a "struct".
In your example imagine a very simple "Player" struct:
struct Player {
    int id;
    float xposition;
    float yposition;
    int health;
    int maxhealth;
};
If you want to create an object like this you could do something like this:
struct Player *myPlayer = malloc(sizeof(struct Player));
What is a nice-looking, structured thing in the high-level language is actually just a block of memory in the compiled program.
To access, for example, "health", you write myPlayer->health; in C. But how does this look in a compiled program that doesn't know about the beautiful names and just has a block of memory to work with?
It has to work with offsets from the base pointer. In the above example (assuming the Windows operating system and any sanely configured default compiler), an access in some pseudo machine code will look something like this:
move myHealth, read4bytes[myPlayer + 12]
If you reverse-engineer a program, you can't tell from an offset access alone whether the block of memory was a struct, an array, a class (from C++ code), or something else entirely.
In Linux kernel programming, I see that get_user and copy_from_user both read from user space; the former reads a fixed 1, 2, or 4 bytes while the latter reads an arbitrary number of bytes. What was the need for get_user? Did copy_from_user come after get_user, and was get_user kept for backward compatibility? Are there specific applications of get_user, or is it obsolete now? The same questions apply to put_user and copy_to_user.
You can think about
copy_from_user(dest, src, size);
as some sort of
memcpy(dest, src, size);
and about
get_user(x, ptr);
as some sort of simple assignment:
x = *ptr;
Just as a simple assignment is cleaner (easier to understand), shorter, and faster than a memcpy() function call, get_user is cleaner, shorter, and faster than copy_from_user.
The best-known case where the size of the data is constant and small (so get_user is applicable) is an ioctl implementation for devices. You can find many get_user usages by grepping the kernel sources for get_user, or by using an online kernel code search service like Linux Cross Reference.
I want to know the clear cut difference between static and dynamic in C#.
I have seen many posts on different blogs, but I was not satisfied by their answers.
Please explain it clearly.
These terms are used in a number of ways, depending on the specific context. But in general, static refers to something that's specified early, or hard-coded into a program, and is not easily changed. Dynamic refers to something that's intended to be updated on the fly.
For instance, in C, if you declare an array like:
int arr[100];
the size of the array is static: it's always 100 elements. Even if you use a macro, like this:
int arr[SIZE];
you would have to update the macro definition and recompile the program to change the array's size. The compiler will set aside a fixed block of memory to hold the array: if it's a local variable, the memory is allocated in the function's stack frame; if it's a global variable, it is allocated at program start-up in the BSS segment (the specific details are implementation-dependent, but this is the typical arrangement).
On the other hand, if you use:
int *arr = malloc(n * sizeof(int));
the array's size is dynamic -- it depends on the current value of the variable n, which can depend on program inputs and other state. You can also use realloc() to change the array's size.
So basically I am trying to use some Prolog code to simulate pointer like behavior.
I asked a related question here, and after around one month, I finally have time to start.
Here is a simple example in C:
int a = 1;
int* p = &a;
int b = *p;
And I want to translate this code into Prolog like this (or other better strategies?):
A is 1,
assert(ref(p, a)),         % <- this can be dynamic fact generation
ref(p, TEMP),              % <- now I want to use a!!
to_lowercase(TEMP, TEMP1), % <- I don't know how to implement to_lowercase
B is TEMP1.                % <- reflection?
So in the above code, I am confused with
In my understanding, after ref(p, TEMP), TEMP will be equal to a, which is just an atom; so how can I reuse it as a variable name? That sounds like reflection...?
How to implement the to_lowercase function?
Am I clear?
If you are really that determined to simulate a computer from within Prolog, you should take into account the answers to your previous questions before moving on. Either way, this answer makes a lot of assumptions about what your ultimate goal is. I am guessing that you are trying to simulate a machine, and write a simulator that takes source code written in a C-style language and executes it.
So let's say you have a very simple processor with a flat memory space (some small embedded microcontrollers are like this). Your whole memory then would be just one chunk of 16-bit words, let's say 1000 of them:
functor(Memory, memory, 1000).
Taking your C code above, a compiler might come up with:
Pick an address Addr1 for a, which is an int, and write the value 1 at that address
Pick an address Addr2 for p, which is an int *, and write the value of the address of a at that address
Pick an address Addr3 for b, which is an int, and write to it the value stored at the address to which p points.
This could translate to machine code for the machine you are simulating (assuming the actual addresses have been already picked appropriately by the compiler):
arg(Addr1, Memory, 1), % int a = 1;
arg(Addr2, Memory, Addr1), % int *p = &a;
arg(Addr2, Memory, Tmp1), %% put the value at p in Tmp1; this is an address
arg(Tmp1, Memory, Tmp2), %% read the value at the address Tmp1 into Tmp2
arg(Addr3, Memory, Tmp2). % int b = *p;
So of course all addresses should be within your memory space. Some of the calls to arg/3 above are reads and some are writes; it should be clear which is which. Note that in this particular conjunction all three arguments to the term memory/1000 are still free variables. To modify the values at an address that has been already set you would need to copy it accordingly, freeing the address you need to reuse.
Please read carefully all the answers to your questions before pressing on.
You need to read a good book on Prolog. I'd suggest The Art of Prolog.
Prolog doesn't have anything like pointers, or addresses or variables. It's got terms. An unbound term is variable because it's not yet bound. Once bound (unified), it ceases to be variable and it becomes that with which it unified. It cannot be assigned a new value — unless the unification is undone via backtracking and an alternative path taken. Hence the term unification.
Trying to map the concept of pointers and memory addresses onto prolog is somewhat akin to putting fish gills on a bicycle.
As far as implementing a predicate for converting a string to lower-case, you should realize that Prolog doesn't really have strings: the Prolog string "ABC" is exactly identical to the list [65,66,67], a list of integers representing ASCII/Unicode code points. It is what is called syntactic sugar. So... given that identity...
Something like
to_lower( [] , [] ).
to_lower( [C|Cs] , [L|Ls] ) :-
    char_code('A', A),
    char_code('Z', Z),
    C >= A,
    C =< Z,
    !,
    char_code(a, Base),
    Offset is C - A,
    L is Base + Offset,
    to_lower(Cs, Ls).
to_lower( [C|Cs] , [C|Ls] ) :-
    to_lower(Cs, Ls).
Should do you.
Since you tagged the question SWI-Prolog, I assume you are aware that the string concept has undergone some important changes in recent times, mainly for efficiency reasons.
Look at downcase_atom/2 or string_lower/2, depending on your intended usage (I linked to the string-processing page, because the string_lower one has a typo).
For storing pointer-like objects, I suggest using global variables (nb_setval/2, nb_getval/2, nb_current/2) instead of assert/retract. First, they are much more efficient (I measured, some time ago, a factor of 3 in favour of the nb_ predicate family), and they make the intended usage clearer. assert/retract are better used to update a dynamic knowledge base.
I have been fiddling with OpenCL recently, and I have run into a serious limitation: You cannot pass an array of pointers to a kernel. This makes it difficult to pass an arbitrarily sized list of, say, images to a kernel. I had a couple of thoughts toward this, and I was wondering if anybody could say for sure whether or not they would work, or offer better suggestions.
Let's say you had x image objects that you wanted to be passed to the kernel. If they were all only 2D, one solution might be to pack them all into a 3D image, and just index the slices. The problem with this is, if the images are different sizes, then space will be wasted, because the 3D image has to have the width of the widest image, the height of the tallest image, and the depth would be the number of images.
However, I was also thinking that when you pass a buffer object to a kernel, it appears in the kernel as a pointer. If you had a kernel that took an arbitrary data buffer, and a buffer designated just for storing pointers, and then appended the pointer to the first buffer to the end of the second buffer, (provided there was enough allocated space of course) then maybe you could keep a buffer of pointers to other buffers on the device. This buffer could then be passed to other kernels, which would then, with some interesting casting, be able to access these arbitrary buffers on the device. The only problem is whether or not a given buffer pointer would remain the same throughout the life of the buffer. Also, when you pass an image, you get a struct as an argument. Now, does this struct actually have a home in device memory? Or is it around just long enough to be passed to the kernel? These things are important in that they would determine whether or not the pointer buffer trick would work on images too, assuming it would work at all.
Does anybody know if the buffer trick would work? Are there any other ways anybody can think of to pass a list of arbitrary size to a kernel?
EDIT: The buffer trick does NOT work. I have tested it. I am not sure why exactly, but the pointers on the device don't seem to stay the same from one invocation to another.
Passing an array of pointers to a kernel does not make sense, because the pointers would point to host memory, which the OpenCL device does not know anything about. You would have to transfer the data to a device buffer and then pass the buffer pointer to the kernel. (There are some more complicated options with mapped/pinned memory and especially in the case of APUs, but they don't change the main fact, that host pointers are invalid on the device).
I can suggest one approach, although I have never actually used it myself. If you have a large device buffer preallocated, you could fill it up with images back to back from the host. Then call the kernel with the buffer and a list of offsets as arguments.
This is easy, and I've done it. You don't use pointers, so much as references, and you do it like this. In your kernel, you can provide two arguments:
kernel void my_kernel(
    global int *rowoffsets,
    global float *data
) {
Now, in your host code, you simply take your 2D data, copy it into a 1D array, and put the index of the start of each row into rowoffsets.
For the last row, you append one additional offset, pointing one past the end of data.
Then in your kernel, to read the data from a row, you can do things like:
kernel void my_kernel(
    global int *rowoffsets,
    global float *data,
    const int N
) {
    for( int n = 0; n < N; n++ ) {
        const int rowoffset = rowoffsets[n];
        const int rowlen = rowoffsets[n+1] - rowoffset;
        for( int col = 0; col < rowlen; col++ ) {
            // do stuff with data[rowoffset + col] here
        }
    }
}
Obviously, how you're actually going to assign the data to each workitem is up to you, so whether you're using actual loops, or giving each workitem a single row and column is part of your own application design.
In an std::vector of a non POD data type, is there a difference between a vector of objects and a vector of (smart) pointers to objects? I mean a difference in the implementation of these data structures by the compiler.
E.g.:
class Test {
public:
    std::string s;
    Test *other;
};
std::vector<Test> vt;
std::vector<Test*> vpt;
Could be there no performance difference between vt and vpt?
In other words: when I define a vector<Test>, internally will the compiler create a vector<Test*> anyway?
In other words: when I define a vector<Test>, internally will the compiler create a vector<Test*> anyway?
No, this is not allowed by the C++ standard. The following code is legal C++:
vector<Test> vt;
Test t1; t1.s = "1"; t1.other = NULL;
Test t2; t2.s = "2"; t2.other = NULL;
vt.push_back(t1);
vt.push_back(t2);
Test* pt = &vt[0];
pt++;
Test q = *pt; // q now equal to Test(2)
In other words, a vector's elements are laid out contiguously (accessing them through a raw pointer, like a C array, is legal), so the compiler effectively has to store the elements internally as an array and may not just store pointers.
But beware that the array pointer is valid only as long as the vector is not reallocated (which normally only happens when the size grows beyond capacity).
In general, whatever the type being stored in the vector is, instances of that may be copied. This means that if you are storing a std::string, instances of std::string will be copied.
For example, when you push a Type into a vector, the Type instance is copied into an instance housed inside the vector. The copying of a pointer will be cheap, but, as Konrad Rudolph pointed out in the comments, this should not be the only thing you consider.
For simple objects like your Test, copying is going to be so fast that it will not matter.
Additionally, with C++11, moving allows avoiding creating an extra copy if one is not necessary.
So in short: A pointer will be copied faster, but copying is not the only thing that matters. I would worry about maintainable, logical code first and performance when it becomes a problem (or the situation calls for it).
As for your question about an internal pointer vector: no, vectors are implemented as arrays that are reallocated to a larger size when necessary. You can find GNU's libstdc++ implementation of vector online.
The answer gets a lot more complicated below the C++ level. Pointers will of course be involved, since an entire program cannot fit into registers. I don't know enough about that low a level to elaborate further, though.