Could anyone explain the following code to me?
adjustedbuf = (void *) ((uint64_t) buf & ~(pagesize - 1))
I don't get the idea of this statement.
Assuming pagesize is a power of two, its binary representation will be something like:
0000 1000 0000 // assume page size = 2^7 = 128
So pagesize - 1 will be:
0000 0111 1111 // 127
The bitwise complement of that (~) is a mask with all the upper bits set, down to the "page size" bit:
1111 1000 0000
If you & that mask with any address, you end up with an address "rounded down" to a multiple of the page size:
  1100 1011 0110
& 1111 1000 0000
= 1100 1000 0000
Which is what that statement is doing. It aligns buf to a page size boundary.
(If pagesize isn't a power of two, the whole thing doesn't make much sense.)
It's page-aligning the buffer using bit operations. See "How to allocate aligned memory only using the standard library?" for more depth.