Palettes are only used in modes with a pixel width of 8 bits or less (e.g. the 256- and 16-color modes). In 15-bit (32k colors), 16-bit (64k colors), 24-bit (16M colors) and 32-bit (24-bit color plus 8 unused or alpha bits) modes, the values you write to the graphics card's memory are the colors themselves.
Indexed modes (those 8-bit and less): each pixel value in the graphics card's memory is an index into the palette. Each index has its own color setting, which can be read and written through ports 0x3C7, 0x3C8 and 0x3C9.

The purpose of a palette in an 8-bit mode is to display a picture with the best possible color quality. The only problem is if you try to put two different pictures with different palettes on screen at once: it won't look good, and you'd have to calculate a new combined palette to make it look better. The more pictures (each with a different palette) are displayed, the worse it looks.

That's why 16-bit and 24-bit modes are used. You don't need any palette; each pixel's value is its color. In 16-bit mode, R and B get 5 bits each and G gets 6 (in 15-bit mode all three get 5 bits, with one bit unused); in 24-bit mode, R, G and B use 8 bits each, so the color quality is very good, although it's a bit slower and consumes more memory.

So the only real problem with the palette in 8-bit modes is manipulating it.
One reason why 24-bit is used (as opposed to 16-bit) is that it's easier (and faster) to manipulate whole bytes (each of R, G and B is an 8-bit number, aligned on a byte boundary) than to shift bits around (at 5 or 6 bits each, the components don't fall on byte boundaries) to fit them into a 16-bit word. On the other hand, 24-bit mode requires a faster graphics card and more memory. I don't think the human eye can really see the difference between 24-bit and 16-bit modes if the stuff on screen is moving.