I think maybe a small mistake was made in a previous post in this thread.
The decimal values for the bits in a byte start with 1, not 2:
128 64 32 16 8 4 2 1
so
0000 0000 = 0 in decimal
0000 0001 = 1
0000 0010 = 2
0000 0011 = 3
...
0111 1111 = 127
1000 0000 = 128
1000 0001 = 129
...
1111 1111 = 255
That's as high as a byte can go.
All of that assumes the byte is interpreted as unsigned, which means it can go from 0 to 255.
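The unsigned reading above can be sketched in a few lines of Python (the function name unsigned_byte is just mine for illustration):

```python
# Interpret an 8-bit string as an unsigned byte (0..255).
def unsigned_byte(bits):
    value = 0
    for bit in bits.replace(" ", ""):
        value = value * 2 + int(bit)  # shift left one place, add the new bit
    return value

print(unsigned_byte("0000 0011"))  # 3
print(unsigned_byte("1111 1111"))  # 255
```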
If the byte is signed, it can go from -128 to 127, like this:
0000 0000 = 0
0000 0001 = 1
0000 0010 = 2
0000 0011 = 3
...
0111 1111 = 127
1000 0000 = -128
1000 0001 = -127
1000 0010 = -126
1000 0011 = -125
...
1111 1111 = -1
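If you want to check the signed table without doing it by hand, Python's int.from_bytes can interpret a raw byte as a signed 8-bit integer:

```python
# Print a few rows of the signed table using Python's built-in
# signed interpretation of a single byte.
for b in (0b00000000, 0b01111111, 0b10000000, 0b11111111):
    print(f"{b:08b} = {int.from_bytes(bytes([b]), 'big', signed=True)}")
```

This reproduces the endpoints above: 0111 1111 is 127, 1000 0000 is -128, and 1111 1111 is -1.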
It's easy to remember it generally like this:
the least significant (=low=first=right) 7 bits are always interpreted normally: 64 32 16 8 4 2 1
in unsigned mode, the most significant (=highest=last=left) bit is interpreted as +128
in signed mode, it's interpreted as -128
all you have to do is write each bit's value above it, then add up the values wherever there's a 1.
for example:
to find the value of 10110100 in signed mode, you do:
-128*1 + 64*0 + 32*1 + 16*1 + 8*0 + 4*1 + 2*0 + 1*0 = -76
if that was an unsigned byte, its value would be 180.
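The write-the-weights-and-add method translates directly into code. This is a sketch (byte_value is my name for it, not something standard): only the weight of the top bit changes between the two modes.

```python
# Weighted-sum interpretation of an 8-bit string.
# signed=True gives the top bit weight -128; signed=False gives it +128.
def byte_value(bits, signed):
    b = bits.replace(" ", "")
    value = (-128 if signed else 128) * int(b[0])  # most significant bit
    for weight, bit in zip([64, 32, 16, 8, 4, 2, 1], b[1:]):
        value += weight * int(bit)
    return value

print(byte_value("1011 0100", signed=True))   # -76
print(byte_value("1011 0100", signed=False))  # 180
```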
This way of interpreting signed/unsigned integer values agrees with the two's complement concept. What I mean is: these aren't two different forms, they're the same thing. (I'm just making sure I'm not confusing anyone.)
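One way to see that agreement: for a byte whose unsigned value is u, the two's complement signed value is u - 256 whenever the top bit is set, and u itself otherwise. A minimal sketch (to_signed is just an illustrative name):

```python
# Signed value of a byte from its unsigned value, two's complement style.
def to_signed(u):
    return u - 256 if u >= 128 else u  # top bit set -> subtract 2**8

print(to_signed(0b10110100))  # -76, matching the weighted-sum result
print(to_signed(0b01111111))  # 127
```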