## Is my understanding of Blockchain Endianness correct?


I know questions on this topic have been asked to death in a number of different ways. I also know that what endian order refers to specifically is dependent on context.

What I want are some specific clarifications, though, for peace of mind to ensure that I have the right idea.

### Endianness with Respect to C and x86 Byte Ordering

Typically when I think of endianness I'm thinking of the ordering of arbitrary integer values using 8-bit bytes in memory. In other words, 0x3FE17C has no explicit endianness on its own, but if we were to consider the value as stored in big-endian order, in memory it would look like 3F E1 7C, and in little-endian memory order it would be 7C E1 3F.

So, endianness to me is a convention of memory ordering. I've never, ever explicitly heard of it used in another form.

However, it appears that endianness can also refer to character orderings in a string. One might say that if we have the string "3FE17C", we could state that this order is supposed to be big-endian. If this is the case, with respect to hex literals, if I wanted to take the same value in C and initialize a variable with that same value, I would do:

```c
uint32_t x = 0x3FE17C;
```

And in C on an x86 machine that would of course be stored in memory as "7C E1 3F 00" (four bytes, since the value occupies a full uint32_t).

### RPC Byte Ordering

Now, suppose I'm reading documentation on the block header, and I notice that the nBits field is displayed in RPC byte ordering. After a getblocktemplate request, I then parse the corresponding nBits hex string, which is in the same order as it was displayed in the debug console of whatever wallet I'm using.

If I use the following hex2bin conversion subroutine, which reads the string from left to right and writes each converted hex character-pair to the buffer from beginning to end, would I need to do any further endian conversion of the nBits value it produces?

My understanding is that nBits is displayed in RPC byte ordering, and that all other hex-string fields are displayed in exactly the same form, the only exception being transaction hashes (not the data blobs provided adjacent to the hashes).

Here's the routine, for completeness:

```c
bool hex2bin(void *output, const char *hexstr, size_t len)
{
    uchar *p = (uchar *) output;   /* uchar and applog come from the miner codebase */
    char hex_byte[4];
    char *ep;

    hex_byte[2] = '\0';

    /* len counts output bytes; each byte consumes two hex characters. */
    while (*hexstr && len) {
        if (!hexstr[1]) {
            applog(LOG_ERR, "hex2bin str truncated");
            return false;
        }
        hex_byte[0] = hexstr[0];
        hex_byte[1] = hexstr[1];
        *p = (uchar) strtol(hex_byte, &ep, 16);
        if (*ep) {
            applog(LOG_ERR, "hex2bin failed on '%s'", hex_byte);
            return false;
        }
        p++;
        hexstr += 2;
        len--;
    }

    /* Succeed only if the string and the output length ran out together. */
    return len == 0 && *hexstr == 0;
}
```


Specifically,

• A hex-to-binary conversion of the JSON elements "previousblockhash": "000007c5df566d7ddaf863f11970086f2c6b8aff2925083a179251ffa547c3da" and "bits": "1e0a3618" should not require any manipulation of byte ordering, assuming the string is read directly and passed to the hex2bin function, because RPC byte ordering is designed to implicitly prevent this (for performance reasons).

• However, any hash field found within the transactions array should be read as a string, converted to binary (using the same function), and then the ordering should be swapped byte by byte.

• Furthermore, if I take the data field of an entry in the transactions array, after running hex2bin on it I shouldn't have to do any order manipulation before performing a double SHA-256 on the value to calculate the merkle root.
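For the hash case in the second bullet, the byte-by-byte swap I have in mind is just an in-place reversal of the decoded buffer, something like this (a sketch; reverse_bytes is my own illustrative helper name, not from any library):

```c
#include <stddef.h>

/* Reverse a byte buffer in place -- what I believe is needed for hash
   fields (e.g. txids) after hex2bin, since they are displayed reversed. */
static void reverse_bytes(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len / 2; i++) {
        unsigned char tmp = buf[i];
        buf[i] = buf[len - 1 - i];
        buf[len - 1 - i] = tmp;
    }
}
```

So a 32-byte hash would be hex2bin'd and then passed through this once, while the data blobs would skip this step entirely.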

Are my assumptions correct, here?