advice regarding parsing a 128-bit bit-field

spamiam
Posts: 739
Joined: 13 November 2005, 6:39 AM

advice regarding parsing a 128-bit bit-field

Post by spamiam »

I am trying to parse the SD/MMC "Card Specific Data" (CSD) register.

It is a string of 16 bytes, but it is really a 128-bit structure that is not necessarily byte-aligned.

I cannot be sure if the data is transferred most significant byte first or last, and I cannot tell if it is transferred most significant bit first or last.

I believe that it is transferred in this order: Bit 0, Bit 1, Bit 2, Bit 3 ... Bit 127

But maybe they are counting backward from that: 127.....0.

When I parse the data, I just get crazy values, nothing like what I expect.

I am quite sure that the bytes I am reading are the "correct" bytes for the CSD, so I assume I am parsing it wrong.

I am using GetBit(), passing it the array of 16 bytes as the starting point.


If it is passed in Bit0 ... Bit127, then GetBit() might be a little hard to use. I believe that it will read the LSB first. Is the LSB of the first byte actually bit 7 in the 128-bit structure? Will GetBit(OArray, 0) give me the last bit sent, or the 8th-to-last?

Also, any suggestions on a parsing algorithm?

-Tony
Last edited by spamiam on 26 September 2006, 16:27 PM, edited 1 time in total.
dkinzer
Site Admin
Posts: 3120
Joined: 03 September 2005, 13:53 PM
Location: Portland, OR

Post by dkinzer »

GetBit() indexes the bits in each byte from least significant to most significant and indexes bytes as they are arranged in memory, from lower addresses to higher addresses. Hence, GetBit(var, 0) will return the least significant bit of the first byte of the variable, GetBit(var, 8) will return the least significant bit of the second byte, etc.
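
For example, a quick sketch of the indexing just described, applied to a 16-byte buffer (passing the array name to GetBit(), as above):

Code: Select all

Dim IArray(0 to 15) as Byte
Dim b as Byte
b = GetBit(IArray, 0)      'least significant bit of IArray(0)
b = GetBit(IArray, 7)      'most significant bit of IArray(0)
b = GetBit(IArray, 8)      'least significant bit of IArray(1)
b = GetBit(IArray, 127)    'most significant bit of IArray(15)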

If it is easier to comprehend, you can declare a Bit array alias overlaying the bytes in question. The indexing is the same in either case, though.
Last edited by dkinzer on 26 September 2006, 12:29 PM, edited 1 time in total.
- Don Kinzer
spamiam
Posts: 739
Joined: 13 November 2005, 6:39 AM

Post by spamiam »

GetBit(var, 8) will return the least significant bit of the second byte, etc.
That value after "var" did not show up properly on my screen. I got an emoticon! (The forum turned the "8)" into a smiley.)

I really did want to see what that index was going to be; from the description it is 8, the least significant bit of the second byte.

In the CSD bit array, the TAAC value is bits 119:112. Presumably they are listing it MSB to LSB. I am not positive about the order in which the bits are sent; I presume 0 to 127.

So, this would mean that the first bit of the first byte in my received array is bit zero, and I can just index GetBit(IArray, 0) to get the first bit sent. I would count up from there for the following bits.

But if their numbering system is the opposite, then the last bit sent is bit zero. Therefore the MSB of the last byte is the zeroth bit.

To access these bits in order, I think I would need to subtract the ZBasic bit location from 127 to get the MMC card's bit position, and count down from there.

Can you see any scenario where I would need to reverse the bits within each byte to get the correct overall bit ordering?

-Tony
spamiam
Posts: 739
Joined: 13 November 2005, 6:39 AM

Post by spamiam »

you can declare a Bit array alias overlaying the bytes in question.
Is it possible to declare such an alias for a 12-bit value that overlaps a byte boundary? (It actually crosses two byte boundaries! The bits split 2|8|2.)

Or do I need to use a Long and then mask off the unused high bits and right shift away the unused low bits?

I will have to check the manuals on declaring bit arrays..... I never did that before.

-Tony
spamiam
Posts: 739
Joined: 13 November 2005, 6:39 AM

Post by spamiam »

Hence, GetBit(var, 0) will return the least significant bit of the first byte of the variable
I did more thinking about the mapping of the bits in a 128-bit field and accessing them by GetBit().

Code: Select all

I am fairly confident that the documented CSD bits are numbered as follows:

MSB                         LSB   MSB                         LSB
127 126 125 124 123 122 121 120 | 119 118 117 116 115 114 113 112 | ...

But the GetBit() numbering will be:

MSB                         LSB   MSB                         LSB
  7   6   5   4   3   2   1   0 |  15  14  13  12  11  10   9   8 | ...
Is this correct?

If this is true, then I will need to reverse the bit order in each byte (and subtract the documented bit # from 127 to get the ZX bit #) to be able to read the bits in sequential order.

Does any of this make sense?

My brain feels as if I am asking it to do the mental equivalent of kissing the point of its elbow in figuring this one out.

-Tony

[EDIT]

I looked up how to alias the bit array and the byte array.

Code: Select all

Dim Bit_Array(0 to 127) as Bit
Dim Byte_Array(0 to 15) as Byte Alias Bit_Array(0)
But from what was said about the bit ordering, it would appear that Bit_Array(0) will not be the MSB of Byte_Array(0).

Is this the recurring problem of Big-Endian vs. Little-Endian conversion?

ADDENDUM:

OK, I think I have the formula to convert a bit location from the numbering in the SD/MMC documents to where it resides in ZX memory.

Code: Select all

Dim Doc_Bit as Byte       'the bit number as listed in the SD/MMC document
Dim Reverse_Bit as Byte   'interim variable
Dim Mod_Bit as Byte       'interim variable
Dim ZX_Bit as Byte        'the bit to reference in GetBit() or in the Bit array

Doc_Bit = 113                                'the 15th bit transmitted
Reverse_Bit = 127 - Doc_Bit
Mod_Bit = Reverse_Bit MOD 8
ZX_Bit = 7 + Reverse_Bit - (2 * Mod_Bit)     'assumes zero-based bit numbering
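
Working the example through: Reverse_Bit = 127 - 113 = 14, Mod_Bit = 14 MOD 8 = 6, so ZX_Bit = 7 + 14 - 12 = 9. That is bit 1 of the second byte received, which is exactly where documented bit 113 lands if the first byte received carries bits 127:120 MSB-first.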
dkinzer
Site Admin
Posts: 3120
Joined: 03 September 2005, 13:53 PM
Location: Portland, OR

Post by dkinzer »

If you will need to access many or most of the bits, it would seem more efficient to do a FlipBits() on each of the 16 bytes comprising the 128-bit stream. Then doing a GetBit() with 127 - idx will index the bits in order.
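
Something like this, as a minimal sketch (Buf is a hypothetical 16-byte receive buffer):

Code: Select all

Dim Buf(0 to 15) as Byte
Dim i as Integer
For i = 0 to 15
    Buf(i) = FlipBits(Buf(i))   'reverse the bit order within each byte
Next i
'documented CSD bit D can now be read with GetBit(Buf, 127 - D)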

As for the 12-bit field distributed across three bytes, you'll probably be better off extracting each of the three pieces and constructing a composite value.
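
As a concrete sketch of that approach (hypothetical names; it assumes CSD_Bytes(0) holds documented bits 127:120 with bit 127 as its MSB, and uses CUInt() so the intermediate products don't overflow a Byte):

Code: Select all

'assemble the 12-bit C_Size field (documented bits 73:62, split 2|8|2)
Dim CSD_Bytes(0 to 15) as Byte
Dim C_Size as UnsignedInteger
C_Size = (CUInt(CSD_Bytes(6)) And 3) * 1024   'bits 73:72 become the top two bits
C_Size = C_Size + CUInt(CSD_Bytes(7)) * 4     'bits 71:64 fill the middle eight
C_Size = C_Size + CUInt(CSD_Bytes(8)) \ 64    'bits 63:62 are the bottom two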
- Don Kinzer
spamiam
Posts: 739
Joined: 13 November 2005, 6:39 AM

Post by spamiam »

Don, I took your advice and flipped the bits in each of the bytes to allow simple linear reading of the bits across byte boundaries.

It worked well. I checked several bit fields and they give correct results. The Big-Endians strike again.

So, aside from some weird math errors that I am working out, I get reasonable calculations for memory size of my SD cards.

Interestingly, my old "8 meg" card only holds 6.7 million bytes according to the math on its reported capacity. I really wonder where the makers get off calling it 8 megs!

My "256 Meg" card states its capacity as 252.8 million.

My "1 Gig" card states its capacity as 1,029.8 million.

It would appear that the smaller the card, the worse the "rounding error" between the advertised capacity and what the card itself reports.

Interestingly, the SPEED of the big card is inferior to both of the other cards (even my really old one), unless my interpretation of the speed numbers is wrong. I must add, the big card was a CHEAP one.

-Tony
JC
Posts: 56
Joined: 19 February 2006, 20:23 PM
Location: Hudson,OH
Contact:

Post by JC »

Hi Tony,
Do you have a copy of the SanDisk MultiMediaCard and Reduced-Size MultiMediaCard Product Manual? I have V1.0 (May 2004), and can email it to you if you do not have this reference. Likewise for the SanDisk MultiMediaCard Technical Reference Application Note, V1.0 (Oct. 2003). Newer versions no doubt exist.
Section 3.5.3 defines the Card Specific Data Register, CSD.
My notes indicate that when reading it in SPI mode, one is reading bits 127, 126, 125, ... 3, 2, 1, 0 (in that order).
C_Size is bits 73 - 62.
C_Size_Multiplier is bits 49 - 47.
Read_Block_Length is bits 83 - 80.
Memory capacity = BlockNR * Block_Len, where
BlockNR = (C_Size+1) * Mult
Mult = 2^(C_Size_Mult + 2)
Block_Len = 2^(Read_Bl_Len)

There is an error in many printed versions of the reference: the equation is formatted incorrectly, with the exponent lost, so that it reads Mult = 2C_Size_Mult + 2.

The above equations are valid for C_Size_Mult < 8
and Read_Bl_Len < 12.
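
Putting those formulas together, a minimal sketch (hypothetical variable names; the three fields are assumed to be already extracted, and the powers of two are built with plain multiplication):

Code: Select all

Dim C_Size as UnsignedLong        '12-bit field, CSD bits 73:62
Dim C_Size_Mult as UnsignedLong   '3-bit field, CSD bits 49:47
Dim Read_Bl_Len as UnsignedLong   '4-bit field, CSD bits 83:80
Dim Mult as UnsignedLong
Dim Block_Len as UnsignedLong
Dim Capacity as UnsignedLong
Dim i as UnsignedLong

Mult = 1                          'Mult = 2^(C_Size_Mult + 2)
For i = 1 to C_Size_Mult + 2
    Mult = Mult * 2
Next i

Block_Len = 1                     'Block_Len = 2^(Read_Bl_Len)
For i = 1 to Read_Bl_Len
    Block_Len = Block_Len * 2
Next i

Capacity = (C_Size + 1) * Mult * Block_Len   'raw capacity in bytes

For example, C_Size = 4095, C_Size_Mult = 7 and Read_Bl_Len = 9 give 4096 * 512 * 512 = 1,073,741,824 bytes.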

I experienced great difficulty in correctly reading these values and calculating meaningful card sizes, even when the rest of the MMC I/O was working correctly. It bothered me greatly, as Win XP always read the card size correctly, hence I knew it was my interpretation of the format, my implementation of the access, and my calculations that were at fault.

Note also the difference between the marketing hype 1 K = 1000 and the engineering 1 K = 1024. Scaled up to multi-meg or gig cards the actual number of bytes is quite different: the 2.4% gap per "K" (1024 is 24 more than 1000) compounds to roughly 4.9% at the megabyte level and 7.4% at the gigabyte level.

You can verify much of your register reading techniques by reading the CID register, and parsing the Manufacturer ID, serial #, and manufacturing date codes. On some cards the MID is in straight ASCII text, matching the label on the card. On others it is a coded value. It is very gratifying to plug in several different cards and have your program tell you the correct make, and size!

Good luck.
JC
spamiam
Posts: 739
Joined: 13 November 2005, 6:39 AM

Post by spamiam »

I experienced great difficulty in correctly reading these values and calculating meaningful card sizes, even when the rest of the MMC I/O was working correctly. It bothered me greatly, as Win XP always read the card size correctly, hence I knew it was my interpretation of the format, my implementation of the access, and my calculations that were at fault.
Well, I had trouble too, but I think I have it licked. I think I have the whole MMC interface functioning. I have not bothered with the multi-block read/write.

I have not experimented with small blocks of read/write either.



In reading the CSD, there were a few sources of difficulty.

In which byte do bits 127:120 reside? The first byte that is read? The last?

In which order are those bits within that byte?

Once I had good answers to those two questions, I could work with the bit field.

Previously I had good success reading the CID bytes, but that was easier since all the data is byte-aligned. It did confirm that bit 127 is the MSB of the first byte read in these structures.

After that, the issue was big-endian vs. little-endian translation.

I simply made the ZX bytes into the other format by using FlipBits.

This was an issue because I was reading the bits individually and sequentially.

If I had used a "simpler" technique of aliasing Integers and Longs superimposed on the 128-bit bit-field, then I could have gotten the correct reading by masking off the high bits and right-shifting away the low bits.

I wanted something simpler. I use the following function:

Code: Select all

CSD_Register = CByte(GetCSDBits(Starting_Bit, NumBits))   'GetCSDBits() returns an unsigned Long
This works really well, and I just plug in the SD/MMC manual's starting bit location and the size of the field.
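
For reference, a guess at what such a function might look like (a sketch only; it assumes the 16 received bytes live in a global CSD_Bytes() array that has already been run through FlipBits() byte by byte, so that documented bit D sits at GetBit index 127 - D):

Code: Select all

Function GetCSDBits(ByVal Starting_Bit as Byte, ByVal NumBits as Byte) as UnsignedLong
    'Starting_Bit is the documented MSB of the field (e.g. 119 for TAAC),
    'NumBits is its width
    Dim result as UnsignedLong
    Dim i as Byte
    result = 0
    For i = 0 to NumBits - 1
        result = result * 2                   'make room for the next bit
        If GetBit(CSD_Bytes, 127 - Starting_Bit + i) <> 0 Then
            result = result + 1
        End If
    Next i
    GetCSDBits = result
End Function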

BTW, I do have 2 versions of the MMC manual. Most recent is '04 or '05.

I have gotten fully sensible readings for all the CSD fields. I have noted that the Max and Min current specifications seem backwards on some cards, but I have quadruple-checked my translation and it is correct.

I did find that the reported card capacity in Windows is very close to that reported by the card. Windows gives the formatted capacity, and I am getting the raw capacity from the card.

Interestingly, my old old old 8 meg card has a raw and formatted capacity of a little over 6 meg.

The new 256 Meg card is a few meg short even with the most imaginative marketing calculations. But close enough.

The new 1Gig card gives an honest 1 gig, even if it is 1024*1024*1024 to make a gig.


Now I need to clean up my code and I will submit it.

These same decoding algorithms can be used for the bit-banged interface.

I have managed to get the bit-banged interface to run much faster in actual bit rate while communicating with the cards.

Further optimization can be had by using independent send and receive bit-banging, which spares the need to manipulate the unneeded incoming or outgoing bits.

I might have 3 bit-banger routines: SPI_Out(), SPI_In(), and SPI_InOut().
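
For illustration, a rough sketch of what the send-only routine might look like (MOSI_pin and SCK_pin are hypothetical pin assignments; it assumes SPI mode 0, MSB first, and ZBasic's PutPin() and Shl() routines):

Code: Select all

Const MOSI_pin as Byte = 11   'hypothetical pin assignments
Const SCK_pin as Byte = 12

Sub SPI_Out(ByVal b as Byte)
    Dim i as Byte
    For i = 1 to 8
        'present the current MSB on MOSI, then pulse the clock
        If (b And &H80) <> 0 Then
            Call PutPin(MOSI_pin, zxOutputHigh)
        Else
            Call PutPin(MOSI_pin, zxOutputLow)
        End If
        Call PutPin(SCK_pin, zxOutputHigh)
        Call PutPin(SCK_pin, zxOutputLow)
        b = Shl(b, 1)         'move the next bit up to the MSB
    Next i
End Sub

No MISO sampling happens here, which is where the savings over a combined SPI_InOut() come from.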

Right now my SPI_InOut() runs at about 1000 bytes/sec. The SLOWEST setting on the hardware SPI is 14,400 bytes/sec! Admittedly, H/W SPI does require 2 external support chips, and this is a pain in the neck. Therefore optimizing the bit-banged interface is a meaningful objective too.

-Tony