Question:
#define SwapByte4(ldata) \
(((ldata & 0x000000FF) << 24) | \
((ldata & 0x0000FF00) << 8) | \
((ldata & 0x00FF0000) >> 8) | \
((ldata & 0xFF000000) >> 24))
What does that 0x000000FF represent? I know that decimal 15 is represented in hex as F, but why is it << 24?
Best Answer:
Here is a hex value, 0x12345678, written as binary, and annotated with some bit positions:
|31           24|23           16|15            8|7         bit 0|
+---------------+---------------+---------------+---------------+
|0 0 0 1 0 0 1 0|0 0 1 1 0 1 0 0|0 1 0 1 0 1 1 0|0 1 1 1 1 0 0 0|
+---------------+---------------+---------------+---------------+
...and here is 0x000000FF:
+---------------+---------------+---------------+---------------+
|0 0 0 0 0 0 0 0|0 0 0 0 0 0 0 0|0 0 0 0 0 0 0 0|1 1 1 1 1 1 1 1|
+---------------+---------------+---------------+---------------+
So a bitwise AND selects just the bottom 8 bits of the original value:
+---------------+---------------+---------------+---------------+
|0 0 0 0 0 0 0 0|0 0 0 0 0 0 0 0|0 0 0 0 0 0 0 0|0 1 1 1 1 0 0 0|
+---------------+---------------+---------------+---------------+
...and shifting it left by 24 bits moves it from the bottom 8 bits to the top:
+---------------+---------------+---------------+---------------+
|0 1 1 1 1 0 0 0|0 0 0 0 0 0 0 0|0 0 0 0 0 0 0 0|0 0 0 0 0 0 0 0|
+---------------+---------------+---------------+---------------+
...which is 0x78000000 in hex.
The other parts work on the remaining 8-bit portions of the input:
  0x12345678
& 0x000000FF
------------
  0x00000078 << 24 = 0x78000000   (as shown above)

  0x12345678
& 0x0000FF00
------------
  0x00005600 << 8  = 0x00560000

  0x12345678
& 0x00FF0000
------------
  0x00340000 >> 8  = 0x00003400

  0x12345678
& 0xFF000000
------------
  0x12000000 >> 24 = 0x00000012

OR-ing the four results together:

  0x78000000 | 0x00560000 | 0x00003400 | 0x00000012 = 0x78563412
so the overall effect is to consider the 32-bit value
ldata
as a sequence of four 8-bit bytes, and reverse their order.