Solid State Drives: Part 5

Wed, 02/05/2014 - 4:59am
John J. Barbara

SSD Architecture and Function
Controllers, NAND non-volatile memory, and Program/Erase (P/E) cycles were discussed in Part 2. Pages, Blocks, Planes, Dies, TSOPs, Wear-Leveling (WL), and Garbage Collection (GC) were discussed in Part 3. Write Amplification (WA), Over-Provisioning (OP), and Bad Block Management (BBM) were discussed in Part 4.

Brief Discussion of Cylinders, Heads, and Sectors
Early traditional hard drives were supported by a PC’s BIOS using Cylinder, Head, and Sector (CHS) addressing. Data was written using movable recording heads, which were controlled via drive control commands. Once stored, the data could easily be read by moving the heads over a particular cylinder. However, to read from or write to a specific sector, that sector had to be specified in terms of its CHS. The combined limitations of the BIOS Int 13h routines and the IDE/ATA standard restricted the capacity of early hard drives to 504 MB (1024 Cylinders * 63 Sectors per Track * 16 Heads * 512 Bytes per Sector = 528 Million Bytes, or 504 MB). To circumvent the 504 MB size limit, Extended CHS addressing was implemented. Although this added a translation step that changed the way the hard drive geometry appeared to the BIOS, CHS addressing was still used. Unfortunately, this introduced another size-limiting factor for hard drives, namely the 8 GB barrier (1024 Cylinders * 63 Sectors per Track * 256 Heads * 512 Bytes per Sector = approximately 8 GB).
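The two capacity barriers above fall directly out of the CHS arithmetic. A minimal sketch (the geometry values are the ones cited in the text; the function name is illustrative):

```python
# Sketch: computing the classic CHS capacity barriers from the geometry
# limits cited above. BYTES_PER_SECTOR and SECTORS_PER_TRACK are the
# standard values; chs_capacity is an illustrative helper, not a real API.

BYTES_PER_SECTOR = 512
SECTORS_PER_TRACK = 63

def chs_capacity(cylinders, heads):
    """Total addressable bytes for a given CHS geometry."""
    return cylinders * heads * SECTORS_PER_TRACK * BYTES_PER_SECTOR

# BIOS Int 13h + IDE/ATA combined limit: 1024 cylinders, 16 heads
limit_504mb = chs_capacity(1024, 16)
print(limit_504mb)                    # 528482304 bytes (528 million)
print(limit_504mb / (1024 * 1024))   # 504.0 (exactly 504 MB, binary)

# Extended CHS limit: 256 heads
limit_8gb = chs_capacity(1024, 256)
print(limit_8gb / (1024 ** 3))       # 7.875 (just under the "8 GB barrier")
```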

Logical Block Addressing and Physical Block Addressing
Logical Block Addressing (LBA) was developed to circumvent this issue and is now the method used with conventional hard drives to translate the CHS of the drive into addresses that can be used by an enhanced system BIOS. Instead of referring to CHS, each sector is assigned a unique “sector number,” starting at “0” and ending at “N-1,” where “N” represents the number of sectors on the disk. (As an analogy, CHS can be considered as an individual’s home address, which is comprised of the street number, street name, city name, and state name. LBA would be analogous to every house in every state having a unique identifying number.) LBA itself is a run-time function of a system’s BIOS, which uses LBA for commands such as reading, writing, formatting tracks, and so forth. Information pertaining to the hard drive’s actual geometry is stored in the system CMOS. The LBA-aware BIOS performs a translation from the traditional MS-DOS Track, Head, and Sector to the logical block numbers used by the drive.
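The translation the BIOS performs follows the standard CHS-to-LBA formula. A sketch, using an illustrative 16-head, 63-sector geometry (not a value taken from any particular drive):

```python
# Sketch of the standard CHS-to-LBA translation formula. The geometry
# constants are illustrative; real drives report their own geometry.

HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder, head, sector):
    """Map a (C, H, S) triple to a zero-based logical block number.
    Sectors are 1-based in CHS addressing, hence the (sector - 1)."""
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))   # 0   -> the very first sector on the disk
print(chs_to_lba(0, 1, 1))   # 63  -> first sector under the next head
print(chs_to_lba(1, 0, 1))   # 1008 -> first sector of the next cylinder
```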

Although it functions in a totally different manner, from the perspective of the host OS an SSD appears similar to a conventional hard drive with rotating discs. The Logical to Physical Sector Block Address Translation Layer manages the placement of sectors. The SSD’s Controller constantly writes new data, or updates to previous data, to the first available free block that has sustained the fewest writes. This ensures that the number of write cycles per block is minimized, thereby maximizing the drive’s longevity. Blocks containing old data are marked as “not in use” by the host OS. However, the data remains in the blocks until eventually erased by the GC function. The constant movement of data between blocks and pages can result in parts of any file being stored in any physical sector. The data’s location, its Physical Block Address (PBA), must be tracked. To maintain organization, the Controller uses a mapping table to remap the LBA to the PBA. The table is referred to as the Logical to Physical Block Address Translation Table, or LBA–PBA Translation Table, and has to be continually updated so that it can properly identify the correct address or location of data. As long as the index is changed when the data is physically moved, the data can still be located. (This is somewhat analogous to the index of a book, which points to the page number or location of a specific topic.) It is important to note that the physical location of any block will inevitably not match the external Logical Block Address.
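The remapping behavior described above can be sketched in a few lines. This is a toy model, not a real controller: the class and field names are invented, and a real flash translation layer chooses free blocks by wear level rather than sequentially.

```python
# Minimal sketch of an LBA-to-PBA translation table. Each write of a logical
# block lands in a fresh physical block, and the table entry is remapped so
# the same logical address still resolves. All names here are illustrative.

class TranslationTable:
    def __init__(self):
        self.lba_to_pba = {}       # logical block -> physical block
        self.next_free_pba = 0     # stand-in for the controller's free-block picker
        self.stale_pbas = set()    # old locations awaiting garbage collection

    def write(self, lba):
        """Write (or update) a logical block; return its new physical location."""
        old = self.lba_to_pba.get(lba)
        if old is not None:
            self.stale_pbas.add(old)   # old copy is now invalid; GC erases it later
        pba = self.next_free_pba
        self.next_free_pba += 1
        self.lba_to_pba[lba] = pba     # remap: the index moves with the data
        return pba

    def read(self, lba):
        return self.lba_to_pba[lba]

t = TranslationTable()
t.write(5)           # first write of LBA 5 lands in PBA 0
t.write(5)           # update: data moves to PBA 1, PBA 0 becomes stale
print(t.read(5))     # 1 -> same logical address, new physical location
print(t.stale_pbas)  # {0}
```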

“TRIM” Command
A traditional hard drive with an NTFS file system contains a Master File Table (MFT). The MFT is essentially an index file which maps everything on the hard drive. All file, directory, and metafile data (size, date and time stamp, data location, data content, permissions, etc.) is stored in MFT entries or in space outside the MFT that is described by MFT entries. When a user deletes a file, the file’s MFT entry is marked as free and available for reuse. However, the actual disk space where the file is located is not reallocated and the data is not deleted, removed, or relocated. Essentially, all the hard drive “knows” is that this space can be reused at some future time. When additional space is needed, the OS will send new data to that location, directly overwriting the old data.
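The hard drive deletion model above can be illustrated with a toy sketch. These dictionaries stand in for the MFT and the disk surface; they are purely illustrative and bear no resemblance to real NTFS on-disk structures.

```python
# Toy sketch of the HDD deletion model described above: deleting a file only
# marks its index entry as free; the sectors keep the old bytes until new
# data directly overwrites them. Illustrative names, not real NTFS layouts.

mft = {"report.txt": {"free": False, "sectors": [10, 11]}}   # MFT-style index
disk = {10: b"old data A", 11: b"old data B"}                # sector contents

# "Delete" the file: flag the entry as reusable, touch nothing on disk
mft["report.txt"]["free"] = True
print(disk[10])   # b'old data A' -> contents still present and recoverable

# Later, when space is needed, the OS writes new data over the old sectors
disk[10] = b"new data"
print(disk[10])   # b'new data'
```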

This is not the case with an SSD. An SSD uses OP to improve its longevity and overall performance. However, at some point an SSD can fill up with both valid and invalid data, which reduces its OP functionality and its performance. NAND memory pages containing old or invalid data cannot be directly overwritten. Rather, they must first be erased at the block level by the Garbage Collection function. Unlike the traditional hard drive, an SSD does need to “know” which data is old or invalid so it can be moved and eventually deleted. The TRIM command (an innovation in storage architecture) is used by the OS to identify which addresses no longer hold valid data and which are available for clearing and reuse. The SSD takes those addresses and updates the LBA–PBA Translation Table, marking the addresses as invalid. During GC, the SSD does not move that invalid data. The net effect is a reduction in the number of write cycles and an increase in the SSD’s longevity. This also provides additional space for OP. The TRIM command does not actually erase the contents of the blocks; rather, it adds them to a queue of pending blocks which are eventually cleared by the GC function.
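The TRIM flow above can be sketched as follows. This is a simplified model under invented names: a real SSD receives TRIM ranges through the ATA DATA SET MANAGEMENT command and manages erasure at NAND block granularity.

```python
# Sketch of the TRIM flow described above: the OS reports logical addresses
# that no longer hold valid data; the SSD invalidates those mapping entries
# and queues the physical blocks for GC. Nothing is erased by TRIM itself.

class SSD:
    def __init__(self):
        self.mapping = {}         # LBA -> PBA for valid data
        self.trim_queue = set()   # physical blocks pending erase by GC

    def trim(self, lbas):
        """Handle a TRIM from the OS: mark entries invalid, queue blocks."""
        for lba in lbas:
            pba = self.mapping.pop(lba, None)
            if pba is not None:
                self.trim_queue.add(pba)   # queued only; contents untouched

    def garbage_collect(self):
        """GC relocates only valid data; trimmed blocks are simply erased."""
        erased = sorted(self.trim_queue)
        self.trim_queue.clear()
        return erased

ssd = SSD()
ssd.mapping = {0: 100, 1: 101, 2: 102}
ssd.trim([0, 2])                # OS deleted the files behind LBAs 0 and 2
print(sorted(ssd.mapping))      # [1] -> only LBA 1 still holds valid data
print(ssd.garbage_collect())    # [100, 102] -> erased without being copied
```

Because the trimmed blocks are never copied during GC, those writes are saved, which is the longevity gain the text describes.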

This discussion will continue in the next Digital Forensic Insider column.

John J. Barbara owns Digital Forensics Consulting, LLC, providing consulting services for companies and laboratories seeking digital forensics accreditation. An ASCLD/LAB inspector since 1993, John has conducted inspections in several forensic disciplines including Digital Evidence.

