# assembly language

I'm currently doing a project on the 8031 microcontroller using the A51 assembler, but there are two terms I don't understand: least significant byte (LSB) and most significant byte (MSB). Can anyone help?

LSB can stand for either Least Significant Bit or Least Significant Byte, depending on context. I think it's easiest with an example. Take the number 13 as an 8-bit binary value: 00001101. Here the least significant bit is the 1 on the far right, and the most significant bit is the 0 on the far left. The LSB is the bit that has the least impact on the magnitude of the number when it changes (flipping it changes the value by 1), and the MSB is the bit that has the most impact (flipping it changes the value by 128).

Now for bytes, imagine a 16-bit number: 10110100 01100011. The most significant byte is the left 8 bits (10110100, or 0xB4) and the least significant byte is the right 8 bits (01100011, or 0x63).

Now, to add confusion, there is also something called little-endian vs. big-endian byte order, which shows up when you compare processors like Intel x86 CPUs (little-endian) against, say, the older PowerPC-based Macs (big-endian). It just means that multi-byte values are stored in memory with their bytes in a different order: little-endian stores the LSByte at the lowest address, big-endian stores the MSByte there. You won't have to worry about this as long as you stick to one architecture.

Thank you for your reply, but I still need your help with some code.
Here is the code that I don't understand; could you help me with it?
```asm
.equ wflg,00h           ;bit flag used to notify the main program

.org 0000h
mov  sp,#30h            ;set stack above the bit-addressable area
sjmp over               ;jump over the /INT1 interrupt vector area

;place the /INT1 interrupt routine here
.org 0013h
jb   int1,noise         ;if the pin is not low, it was noise -- leave
jbc  tr1,stop           ;if T1 is running, stop it and jump to stop
setb tr1                ;if T1 is not running, start it

noise:
reti                    ;return with T1 enabled to time the pulse

stop:
setb wflg               ;set flag to indicate measurement done
clr  ex1                ;disable INT1 until the next measurement
reti                    ;return with T1 stopped and wflg set

;the main program is placed here; it monitors wflg and displays the
;high pulse width on P1 (LSByte) and P2 (MSByte).
;the width is accurate to the nearest microsecond

over:
mov  tmod,#90h          ;T1 in 16-bit mode, gated: counts only while INT1 pin is high
setb it1                ;INT1 interrupt on negative (falling) edge
mov  tl1,#00h           ;reset T1 to 0000h
mov  th1,#00h
mov  ie,#84h            ;enable global and INT1 interrupts

simulate:
jbc  wflg,getwidth      ;test flag until the measurement is made
sjmp simulate           ;loop until finished

getwidth:
mov  p1,tl1             ;display the LS byte on P1
mov  p2,th1             ;display the MS byte on P2
sjmp over
.end
```

The problem is in the last part of the code, in the "getwidth" section. How does the pulse width get saved into P1 (port 1) and P2 (port 2)? I mean, in what form? Say the pulse width is 45.6 milliseconds; how would it be kept as an MSB and LSB on P1 and P2? Help, anyone?