WO2008042593A1 - Nonvolatile memory with error correction based on the likehood the error may occur - Google Patents

Nonvolatile memory with error correction based on the likehood the error may occur

Info

Publication number
WO2008042593A1
WO2008042593A1 (PCT/US2007/078819, US2007078819W)
Authority
WO
WIPO (PCT)
Prior art keywords
likelihood
likelihood values
bits
data
output
Prior art date
Application number
PCT/US2007/078819
Other languages
French (fr)
Inventor
Yigal Brandman
Kevin M. Conley
Original Assignee
Sandisk Corporation
Priority date
Filing date
Publication date
Priority claimed from US11/536,286 (US7818653B2)
Priority claimed from US11/536,327 (US7904783B2)
Application filed by Sandisk Corporation
Publication of WO2008042593A1

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/43Majority logic or threshold decoding
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C16/00Erasable programmable read-only memories
    • G11C16/02Erasable programmable read-only memories electrically programmable
    • G11C16/06Auxiliary circuits, e.g. for writing into memory
    • G11C16/26Sensing or reading circuits; Data output circuits
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1008Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/56Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G11C11/5621Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency using charge storage in a floating gate
    • G11C11/5642Sensing or reading circuits; Data output circuits
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C16/00Erasable programmable read-only memories
    • G11C16/02Erasable programmable read-only memories electrically programmable
    • G11C16/06Auxiliary circuits, e.g. for writing into memory
    • G11C16/08Address circuits; Decoders; Word-line control circuits
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/04Detection or location of defective memory elements, e.g. cell constructional details, timing of test signals
    • G11C29/08Functional testing, e.g. testing during refresh, power-on self testing [POST] or distributed testing
    • G11C29/12Built-in arrangements for testing, e.g. built-in self testing [BIST] or interconnection details
    • G11C29/38Response verification devices
    • G11C29/42Response verification devices using error correcting codes [ECC] or parity check
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076Parity data used in redundant arrays of independent storages, e.g. in RAID systems

Definitions

  • This invention relates to nonvolatile memory systems and to methods of operating nonvolatile memory systems.
  • Nonvolatile memory systems are used in various applications. Some nonvolatile memory systems are embedded in a larger system such as a personal computer. Other nonvolatile memory systems are removably connected to a host system and may be interchanged between different host systems. Examples of such removable memory systems include memory cards and USB flash drives. Electronic circuit cards, including non-volatile memory cards, have been commercially implemented according to a number of well-known standards. Memory cards are used with personal computers, cellular telephones, personal digital assistants (PDAs), digital still cameras, digital movie cameras, portable audio players and other host electronic devices for the storage of large amounts of data.
  • Such cards usually contain a re-programmable non-volatile semiconductor memory cell array along with a controller that controls and supports operation of the memory cell array and interfaces with a host to which the card is connected.
  • Several of the same type of card may be interchanged in a host card slot designed to accept that type of card.
  • a card made according to one standard is usually not useable with a host designed to operate with a card of another standard.
  • Memory card standards include PC Card, CompactFlash™ card (CF™ card), SmartMedia™ card, MultiMediaCard (MMC™), Secure Digital (SD) card, a miniSD™ card, Subscriber Identity Module (SIM), Memory Stick™, Memory Stick Duo card and microSD/TransFlash™ memory module standards.
  • There are several USB flash drive products commercially available from SanDisk Corporation under its trademark “Cruzer®.” USB flash drives are typically larger and shaped differently than the memory cards described above.
  • Data stored in a nonvolatile memory system may contain erroneous bits when data is read.
  • Traditional ways to reconstruct corrupted data include the application of Error Correction Codes (ECCs).
  • Simple Error Correction Codes encode data by storing additional parity bits, which set the parity of groups of bits to a required logical value, when the data is written into the memory system. If during storage the data is erroneous, the parity of groups of bits may change. Upon reading the data from the memory system, the parity of the group of the bits is computed once again by the ECC. Because of the data corruption the computed parity may not match the required parity condition, and the ECC may detect the corruption.
  • ECCs can have at least two functions: error detection and error correction. Capability for each of these functions is typically measured in the number of bits that can be detected as erroneous and subsequently corrected. Detection capability can be the same as or greater than the correction capability. A typical ECC can detect a higher number of error bits than it can correct. A collection of data bits and parity bits is sometimes called a word.
  • An early example is the (7,4) Hamming code, which has the capability of detecting up to two errors per word (seven bits in this example) and has the capability of correcting one error in such a seven-bit word.
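
As an illustration of the (7,4) Hamming code mentioned above, the sketch below encodes four data bits into a seven-bit word and corrects a single bit error by computing a syndrome. The bit ordering and parity equations follow one common textbook convention, assumed here for illustration rather than taken from the patent.

```python
# (7,4) Hamming code: 4 data bits -> 7-bit codeword, corrects any single-bit error.
# Bit ordering and parity equations follow one common convention (an illustrative
# choice, not necessarily the convention assumed elsewhere in this document).

def hamming74_encode(d):
    """d is a list of 4 data bits; returns the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c is a 7-bit word (possibly with one flipped bit); returns the corrected 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 means no error; otherwise it is the 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]

if __name__ == "__main__":
    data = [1, 0, 0, 1]
    word = hamming74_encode(data)
    word[4] ^= 1                     # inject a single-bit error
    assert hamming74_decode(word) == data
```
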
  • More sophisticated ECCs can correct more than a single error per word, but it becomes computationally increasingly complex to reconstruct the data.
  • Common practice is to recover the data with some acceptably small likelihood of incorrect recovery.
  • However, with an increasing number of errors, the probability of reliable data recovery decreases rapidly, or the associated costs in additional hardware and/or performance become prohibitively high.
  • In semiconductor memory devices, including EEPROM systems, data can be represented by the threshold voltages of transistors.
  • different digital data storage values correspond to different voltage ranges. If, for some reason, before or during the read operation the voltage levels shift from their programmed ranges, an error occurs. The error may be detected by the ECC and in some cases these errors may be corrected.
  • a nonvolatile memory array is connected to a decoder so that encoded data read from the memory array is used to calculate likelihood values associated with bits stored in the memory array.
  • a decoder is a Soft-Input Soft-Output (SISO) decoder.
  • the encoded data may be read with a high resolution that gives an indication of likelihood associated with a data bit, not just the logical value of the data bit. For example, where binary data is encoded as +1/-1 volt in a memory, the actual voltage read may be used by the ECC decoder instead of just the sign.
  • Likelihood values may be derived from the values read or other sources. Likelihood values may be provided as a soft-input to a SISO decoder.
  • the output of the SISO decoder may be converted to a hard-output by a converter.
  • the hard-output represents corrected data.
  • a SISO decoder may perform calculations in multiple iterations until some predetermined condition is met.
  • a high resolution read may be achieved by selecting appropriate voltages for individual read steps so that a higher density of reads occurs for a certain portion of a particular threshold voltage function than occurs at another portion. This provides additional resolution for areas of interest, for example, where threshold voltage functions have significant overlap.
  • a demodulator may convert voltages from a memory array into likelihood values. Where more than one bit is stored in a cell, a separate likelihood value may be obtained for each bit. Such likelihood values may be used as a soft-input for a SISO decoder.
  • Figure 1 shows likelihood functions of threshold voltages of cells programmed to a logic 1 state and a logic 0 state in a nonvolatile memory, including a voltage V D used to discriminate logic 1 and logic 0 states.
  • Figure 2 shows components of a memory system including a memory array, modulator/demodulator circuits and encoder/decoder circuits.
  • Figure 3 shows likelihood functions of read threshold voltages of cells programmed to a logic 1 state and a logic 0 state, showing threshold voltage values.
  • Figure 4 shows components of a memory system including a memory array, modulator/demodulator circuits and encoder/decoder circuits, a demodulator providing likelihood values to a decoder.
  • Figure 5 shows a NAND string connected to a sense amplifier to read the state of a memory cell.
  • Figure 6A shows likelihood functions of read threshold voltages of cells programmed to a logic 1 state and a logic 0 state including three threshold voltages.
  • Figure 6B shows likelihood functions of read threshold voltages of cells programmed to four states and shows threshold voltages where cells are read.
  • Figure 7 shows individual likelihood values for both a first and a second bit as a function of threshold voltage in a memory that stores two bits per cell.
  • Figure 8 shows an encoder/decoder unit having a Soft-Input Soft-Output (SISO) decoder.
  • Figure 9 shows an exemplary encoding scheme where the input data is arranged in a square matrix and a parity bit is calculated for each row and column.
  • Figure 10 shows a particular example of a signal that is subject to noise causing errors in data that are not correctable using a hard-input decoder but are correctable using a SISO decoder.
  • Figure 11 shows an alternative encoding scheme where parity bits are calculated for input data, the input data arranged in rows and columns, a parity bit calculated for each row and column.
  • Figure 12 shows components of a memory system including an encoder that provides the encoding shown in Figure 11 and a demodulator that provides raw likelihood values to a SISO decoder.
  • Figure 13A shows a first horizontal iteration performed by the SISO decoder of Figure 12.
  • Figure 13B shows a first vertical iteration performed by the SISO decoder of Figure 12.
  • Figure 13C shows a second horizontal iteration performed by the SISO decoder of Figure 12.
  • Figure 13D shows a second vertical iteration performed by the SISO decoder of Figure 12.
  • Figure 14 shows a Low Density Parity Check (LDPC) parity check matrix used in a SISO decoder.
  • Figure 15 shows an encoder/decoder having concatenated encoders and having concatenated decoders.
  • Figure 1 shows the relationship between a physical parameter indicating a memory cell state (threshold voltage, VT) and the logical values to which the memory cell may be programmed.
  • the cell stores one bit of data.
  • Cells programmed to the logic 0 state generally have a higher threshold voltage than cells in the logic 1 (unprogrammed) state.
  • the logic 1 state is the unprogrammed state of the memory cell.
  • the vertical axis of Figure 1 indicates the likelihood of reading a cell at any particular threshold voltage based upon expected threshold voltage distribution.
  • a first likelihood function is shown for cells programmed to logic 1 and a second for cells programmed to logic 0. However, these functions have some degree of overlap between them.
  • a discrimination voltage VD is used in reading such cells. Cells having a threshold voltage below VD are considered to be in state 1, while those having a threshold voltage above VD are considered to be in state 0. As Figure 1 shows, this may not always be correct. Because of the overlap between functions, there is a non-zero likelihood that a memory cell programmed to a logic 1 state will be read as having a threshold voltage greater than VD and so will be read as being in a logic 0 state.
  • Nonvolatile memory systems commonly employ ECC methods to overcome errors that occur in data that is read from a memory array. Such methods generally calculate some additional ECC bits from input data to be stored in a memory array according to an encoding system. Other ECC schemes may map input data to output data in a more complex way. The ECC bits are stored generally along with the input data or may be stored separately. The input data and ECC bits are later read from the nonvolatile memory array together and a decoder uses both the data and ECC bits to check if any errors are present. In some cases, such ECC bits may also be used to identify a bit that is in error.
  • Adding ECC bits to data bits is not the only way to encode data before storing it in a nonvolatile memory.
  • data bits may be encoded according to a scheme that provides the following transformations: 00 to 1111, 01 to 1100, 10 to 0011 and 11 to 0000.
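
The transformation just listed is simply a fixed mapping. The toy sketch below applies it and inverts it by exact lookup; because the four code words differ in at least two positions, a single corrupted bit is detected (though not uniquely corrected). This is an illustrative reading of the mapping, not a statement of how the patent's encoder is built.

```python
# The four-bit-per-two-bit mapping described above, written as a lookup table.
# With a minimum Hamming distance of 2 between code words, this toy code can
# detect (but not uniquely correct) any single-bit error.

ENCODE = {"00": "1111", "01": "1100", "10": "0011", "11": "0000"}
DECODE = {v: k for k, v in ENCODE.items()}

def encode(bits):
    """Encode a bit string two input bits at a time."""
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(coded):
    """Decode four coded bits at a time; report a detected error as None."""
    out = []
    for i in range(0, len(coded), 4):
        out.append(DECODE.get(coded[i:i + 4]))   # None if the stored word was corrupted
    return out

print(encode("0110"))          # "11000011"
print(decode("11000011"))      # ["01", "10"]
print(decode("11010011"))      # [None, "10"] -- single-bit error detected in the first word
```
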
  • Figure 2 shows an example of input data being stored in a memory system 200.
  • Input data is first received by an ECC unit 201 that includes an encoder 203.
  • the input data may be host data to be stored in memory system 200 or may be data generated by a memory controller.
  • the example of Figure 2 shows four input data bits 1001.
  • Encoder 203 then calculates ECC bits (1111) from the input data bits using an encoding scheme.
  • One example of an encoding scheme is to generate ECC bits that are parity bits for selected groups of data bits.
  • Modulator 207 converts the digital data sent by ECC unit 201 to a form in which it is written in a memory array 209.
  • the digital data is converted to a plurality of threshold voltage values in a plurality of memory cells.
  • various circuits used to convert digital data to a stored threshold voltage in a memory cell may be considered to form a modulator.
  • each memory cell may hold one bit of data.
  • each memory cell may have a threshold voltage in one of two ranges, one signifying a logic "1" state and the other signifying a logic "0" state as shown in Figure 1.
  • the memory cells storing a logic "1" state have a threshold voltage that is less than VD (<VD) while the memory cells storing a logic "0" state have a threshold voltage that is greater than VD (>VD).
  • Cells may be programmed and verified to a nominal threshold voltage higher than VD to ensure that, at least initially, there is some preferred separation between cells programmed to the two logic states.
  • Data may be stored in memory array 209 for some period of time. During this time, various events may occur to cause threshold voltages of memory cells to change. In particular, operations involving programming and reading may require voltages to be applied to word lines and bit lines in a manner that affects other previously programmed cells. Such disturbs are particularly common where dimensions of devices are reduced so that the interaction between adjacent cells is significant. Charge may also be lost over long periods of time. Such data retention failures can also cause data to change when read. As a result of such changes, data bits may be read out having different states than the data bits originally programmed. In the example of Figure 2, one input data bit 211 is read as having a threshold value less than VD (<VD) when it was originally written having a threshold value greater than VD (>VD).
  • the threshold voltages of memory cells are converted to bits of data by a demodulator 213 in modulation/demodulation unit 205. This is the reverse of the process performed by the modulator.
  • Demodulator 213 may include sense amplifiers that read a voltage or current from a memory cell in memory array 209 and derive the state of the cell from the reading.
  • a memory cell having a threshold voltage less than VD (<VD) gives a demodulated output of "1"
  • a memory cell having a threshold voltage that is greater than VD (>VD) gives a demodulated output of "0."
  • the second bit 208 of this sequence is in error as a result of being stored in the memory array 209.
  • the output of demodulator 213 is sent to a decoder 215 in the ECC unit 201.
  • Decoder 215 determines from data bits and ECC bits if there are any errors. If a small number of errors is present that is within the correction capability of the code, the errors are corrected. If large numbers of errors are present, they may be identified but not corrected if they are within the detection capability of the code. If the number of errors exceeds the detection capability of the code, the errors may not be detected, or may result in an erroneous correction. In the example of Figure 2, the error in the second bit is detected and is corrected. This provides an output (1001) from decoder 215 that is identical to the input sequence.
  • decoding of memory system 200 is considered to be hard-input hard-output decoding because decoder 215 receives only data bits representing input data bits and ECC bits, and decoder 215 outputs a corrected sequence of data bits corresponding to input data bits (or fails to give an output if the number of errors is too high).
  • Figure 4 shows a memory system 421 using a data storage process that is similar to that of memory system 200 (using the same input data bits and ECC bits) with a different read process.
  • memory system 421 instead of simply determining whether a threshold voltage is above or below a particular value, memory system 421 reads threshold voltages as shown in Figure 3. It will be understood that actual threshold voltage is not necessarily read. Other means of cell operation may be used to store and retrieve data (e.g. current sensing). Voltage sensing is merely used as an example. Generally, threshold voltage refers to a gate voltage at which a transistor turns on. Figure 4 shows a read occurring that provides more detailed information than the previous example.
  • This may be considered a read with a higher resolution than that of Figure 2 (and a resolution that resolves more states than are used for programming).
  • errors occur in the read data.
  • the readings corresponding to the second and third bits are in error.
  • the raw voltages read from memory array 423 of Figure 4 by a series of read operations are sent to a demodulator 425 in a modulation/demodulation circuit 427.
  • the raw voltages have a finite resolution dictated by the resolution of the Analog-to-Digital conversion.
  • raw data is converted into likelihood data.
  • each cell reading is converted into a likelihood that the corresponding bit is a one or a zero.
  • the series of readings from the memory array (0.75, 0.05, 0.10, 0.15, 1.25, 1.0, 3.0, and 0.5 volts) can indicate not only the state of the cell, but can also be used to provide a degree of certainty as to that state. This may be expressed as a likelihood that a memory cell was programmed with a particular bit. Thus, readings that are close to 0 volts may give low likelihood values, while readings that are farther from 0 volts give higher likelihood values.
  • the likelihood values shown are log likelihood ratios (explained in detail below). This provides negative numbers for cells in a logic 0 state and positive numbers for cells in a logic 1 state, with the magnitude of the number indicating the likelihood that the state is correctly identified.
  • the signs of the second and third likelihood values (0.1, 0.2) indicate logic "1", but their small magnitudes indicate that these readings have quite low likelihoods of being correct.
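
One common way to turn a raw cell reading into a log likelihood ratio of the kind described above is to assume each programmed state produces a roughly Gaussian spread of read values. The sketch below does this for a hypothetical single-bit cell whose logic 1 and logic 0 states are written as nominal +1 V and -1 V (as in the +1/-1 V example earlier); the sigma value is an illustrative assumption, not a figure from the patent.

```python
import math

# A minimal sketch of demodulating a raw cell voltage into a log likelihood ratio (LLR),
# assuming logic 1 and logic 0 are written as nominal +1 V and -1 V and that each
# state's read voltage is Gaussian with the same spread. SIGMA is a made-up value.

MEAN_1, MEAN_0, SIGMA = +1.0, -1.0, 0.6

def llr_from_voltage(v):
    """LLR = ln P(v | bit=1) / P(v | bit=0); positive favors logic 1, negative logic 0."""
    log_p1 = -((v - MEAN_1) ** 2) / (2 * SIGMA ** 2)
    log_p0 = -((v - MEAN_0) ** 2) / (2 * SIGMA ** 2)
    return log_p1 - log_p0          # the shared Gaussian normalization cancels out

for v in (0.9, 0.1, -0.05, -1.2):
    print(f"{v:+.2f} V -> LLR {llr_from_voltage(v):+.2f}")
# Readings near the midpoint (0 V) give small-magnitude LLRs (low confidence);
# readings near the nominal levels give large-magnitude LLRs (high confidence).
```
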
  • Likelihood values are sent to a decoder 429 in an ECC unit 431 (in some cases, obtaining likelihood values from raw values may be considered as being performed in the decoder).
  • the decoder 429 performs decoding operations on likelihood values. Such a decoder may be considered a soft-input decoder.
  • soft-input refers to an input that includes some quality information related to data that are to be decoded.
  • the additional information provided as a soft-input generally allows a decoder to obtain better results.
  • a decoder may perform decoding calculations using a soft-input to provide calculated likelihood values as an output. This is considered a soft-output and such a decoder is considered a Soft-Input Soft-Output (SISO) decoder. This output can then be used again as input to the SISO decoder to iterate the decoding and improve results.
  • A SISO decoder may form part of a larger decoder that provides a hard output to another unit.
  • SISO decoders generally provide good performance and in some cases may provide better performance than is possible with hard-input hard-output decoding. In particular, for the same amount of overhead (number of ECC bits) a SISO decoder may provide greater error correction capability.
  • a suitable encoding/decoding scheme may be implemented and demodulation is adapted to efficiently obtain a soft-input without excessive complexity and without requiring excessive time for reading data from the memory array.
  • a soft-input for a SISO decoder is provided by reading data in a nonvolatile memory array with a resolution that resolves a larger number of states than were used in programming the memory.
  • data may be written by programming a memory cell to one of two threshold voltage ranges and subsequently read by resolving three or more threshold voltage ranges.
  • the number of threshold voltage ranges used in reading will be some multiple of the number of threshold voltage ranges used in programming (for example, twice as many). However, this is not always the case.
  • An encoder/decoder circuit may be formed as a dedicated circuit, or this function may be performed by firmware in a controller.
  • a controller is an Application Specific Integrated Circuit (ASIC) that has circuits designed for specific functions such as ECC and also has firmware to manage controller operations.
  • an encoder/decoder may be formed by a combination of hardware and firmware in the memory controller.
  • the modulator/demodulator circuits may be on a memory chip, on a controller chip, on a separate chip or some combination.
  • modulation circuits will include at least some components on the memory chip (such as peripheral circuits connected to a memory array). While Figure 4 indicates threshold voltages being read to a high resolution (an analog read), the degree of resolution chosen may depend on a number of factors including the type of nonvolatile memory used.
  • Figure 5 shows a string 541 of a NAND flash memory array undergoing a read operation.
  • a NAND flash memory is composed of strings of memory cells connected in series, isolated by select transistors and grouped into blocks, the basic unit of erase.
  • the other cells of the string are turned on hard so that the current flowing through the string depends on the selected cell.
  • Appropriate bias voltages are placed on the gates of the string select transistors 543, 545 at either end of string 541 (typically, one end is connected to ground) and one or more voltages are sequentially applied to the word line that extends over the selected cell. For a cell holding one bit of data, only a single voltage may be needed.
  • a voltage sequence typically consists of sequentially increasing voltage steps or a binary search pattern. Each step corresponds to a discrimination voltage.
  • a cell storing two bits requires four states and a cell storing three bits requires eight states etc.
  • a sense amplifier 547 attached to a bit line determines when the cell switches on and the word line voltage that first causes such switching indicates the threshold voltage range of the cell.
  • the resolution of the read operation depends on the number of voltage steps provided. For example, a single-bit read may require 25 microseconds to complete a sensing operation, while a two-bit read for the same memory requires 75 microseconds to complete the three sensing operations to fully resolve four states. More voltage steps provide a higher resolution but this requires more time.
  • Figure 6A shows an example of a single-bit memory cell that is read with a high resolution that resolves more states than the number of states used in programming the memory.
  • the horizontal axis indicates threshold voltage (VT) and the vertical axis indicates the likelihood of a cell having this threshold voltage for a given programmed state.
  • a single read was performed to determine if a cell was programmed into one of two states.
  • three reads are performed to determine if the cell is in one of four read threshold voltage ranges, 651-654.
  • the cell is programmed to one of two threshold voltage ranges (corresponding to two logic states) and is later read with a resolution that identifies the cell as being in one of four threshold voltage ranges (four read states).
  • Voltages V1, V2, V3 chosen for performing reads are such that the four threshold voltage ranges 651-654 are not equal in size and reads are concentrated near where the two functions (for logic 1 and logic 0) overlap.
  • One read (at a discrimination voltage V2) is similar to that of Figure 1 and indicates which state (0 or 1) the memory cell is in.
  • the other two reads (at V1 and V3) are within the threshold voltage ranges of logic 0 and logic 1 but are not centered in these threshold voltage ranges. Instead, these reads are arranged closer to V2.
  • the four read threshold voltage ranges 651-654 give an indication of the likelihood that a particular read bit is correct.
  • a reading below V1 has a high likelihood of being correct
  • a reading between V1 and V2 has a lower likelihood of being correct
  • a reading between V2 and V3 has a comparatively low likelihood of being correct
  • a reading above V3 has a higher likelihood of being correct. It can be seen that reading with a resolution that resolves a higher number of states than were used in programming allows a read operation to obtain likelihood information regarding the data being read.
  • Figure 6B shows an example of a two-bit MLC memory cell being read with a high resolution that resolves more states than the number of programmed states.
  • Figure 6B shows a series of read operations being performed with increasing resolution.
  • the threshold voltage of the cell is resolved into one of four states corresponding to a threshold voltage less than V1, between V1 and V2, between V2 and V3, and greater than V3.
  • This first read resolves the same number of states as were used in programming.
  • a second read READ 2 is performed to give a higher resolution.
  • READ 2 resolves a programmed state such as "10" into three read states that correspond to a central portion of the threshold voltage function (between V5 and V6) and two outer portions of the threshold voltage function (one between V1 and V5, the other between V6 and V2).
  • a third read, READ 3, is performed to give higher resolution again.
  • READ 3 resolves the read states of READ 2 so that outer portions are further resolved. In this example, read states corresponding to central portions are not further resolved.
  • the read operations may be performed in the order READ 1, then READ 2, then READ 3 or in any other order. Alternatively, individual read steps may be performed in some other order so that they are combined in a single read operation.
  • read steps may be performed starting from the lowest threshold voltage and going up sequentially according to threshold voltage.
  • the read steps of READ 1, READ 2 and READ 3 are arranged in a pattern having a higher density of read operations for outer portions of the threshold voltage function of a particular programmed state than for a central portion. This provides more information regarding outer portions of threshold voltage functions than central portions of such functions. This is because a cell having a threshold voltage in a central portion of a threshold voltage function for a particular state may be assumed to have a high likelihood of being in that state (close to zero likelihood of being in another state) so that further resolution is not required. Outer portions of a particular function may overlap a neighboring function. More information about such an overlap region (where likelihood values change) is desirable.
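
The denser-near-overlap arrangement described above can be expressed simply as a list of word-line step voltages per pass. The sketch below uses made-up voltages for a hypothetical two-bit cell and reports the width of each resolved interval to show where the resolution is spent; all numbers are illustrative assumptions.

```python
# Hypothetical read schedule for a two-bit cell: READ 1 resolves the four programmed
# states, READ 2 and READ 3 add extra steps only near the state boundaries, where the
# threshold-voltage functions of neighboring states overlap. Voltages are invented.

READ_1 = [1.0, 2.0, 3.0]                      # coarse pass
READ_2 = [0.8, 1.2, 1.8, 2.2, 2.8, 3.2]       # extra steps straddling each boundary
READ_3 = [0.9, 1.1, 1.9, 2.1, 2.9, 3.1]       # finer steps, again only near boundaries

steps = sorted(set(READ_1 + READ_2 + READ_3))
for lo, hi in zip(steps, steps[1:]):
    print(f"{lo:.1f}-{hi:.1f} V  width {hi - lo:.1f} V")
# Intervals near 1.0, 2.0 and 3.0 V come out 0.1 V wide, while the centers of the
# state distributions remain wide intervals: most of the read resolution is spent
# where misreads are actually likely.
```
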
  • a single read step may provide information regarding the programmed level of a memory cell.
  • the state of a memory cell is read by measuring the current through the cell under certain biasing conditions.
  • a current mirror can be used to replicate the current from the cell, the replicated current can then be compared with several reference currents in parallel.
  • a high resolution read may be performed in a single step.
  • Figure 7 shows how likelihood values related to individual bits may be derived from threshold voltage information from a cell storing more than one bit of data. In this case, individual likelihood values are assigned to each bit.
  • Figure 7 shows a likelihood function for the four states (11, 10, 00, 01) of Figure 6.
  • Figure 7 also shows a likelihood across all four threshold voltage ranges for the first bit (leftmost bit). Likelihood here is shown as the likelihood that a particular bit is a "1"; it could also be given in terms of the likelihood that a bit is a "0."
  • a likelihood level of 0 is shown. This is the level at which there is an equal likelihood of a 1 or a 0. Below the 0 level, there is a larger likelihood of a 0.
  • Figure 7 shows a likelihood across all four threshold voltage ranges for the second bit (rightmost bit). This likelihood is high at either end of the threshold voltage range and low in the middle. Thus, the likelihood values for the two bits have very different patterns.
  • Figure 7 shows a threshold voltage V1 that gives a first-bit likelihood P1 and a second-bit likelihood P2.
  • P1 is large because threshold voltage V1 is not close to a threshold voltage associated with a state having a "0" as the first bit.
  • a demodulator may use correlation between threshold voltage and individual bit likelihood values like those shown in Figure 7 to provide raw likelihood values based on readings from a memory array. Where a read operation identifies a threshold voltage range for a cell, likelihood values for each bit may be associated with each such range. Likelihood values may be derived from a variety of information such as characteristics of memory cell behavior for a given technology or from experience of a given memory device. In some cases, likelihood values may vary at different stages during the lifetime of a device.
  • The Log Likelihood Ratio (LLR) is a convenient way to express likelihood, though other systems may also be used.
  • To obtain likelihood values from raw readings, some conversion or demodulation is generally performed.
  • One convenient manner of performing such demodulation is to use a lookup table that tabulates the relationship between threshold voltage (or some measured parameter in the memory) and the likelihood values of one or more bits. Where the resolution of the read operation divides the threshold voltage range of a memory cell into N read states and the cell stores R bits, a table may have N x R entries so that a likelihood is given for each bit for each read state.
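
The lookup-table demodulation just described can be as simple as a table with one row per resolved read state and one LLR per stored bit. The sketch below shows such a table for a hypothetical two-bit cell resolved into eight read states (N = 8, R = 2); the LLR values are invented placeholders and would in practice come from device characterization.

```python
# Hypothetical N x R demodulation table: N = 8 resolved read states, R = 2 bits per cell.
# Each row holds (LLR of first bit, LLR of second bit); positive favors "1".
# The numbers are illustrative placeholders, not characterized values.

LLR_TABLE = [
    (+4.0, +4.0),   # read state 0: deep in the "11" distribution
    (+4.0, +1.0),   # state 1: near the 11/10 boundary, second bit uncertain
    (+4.0, -1.0),   # state 2
    (+1.0, -4.0),   # state 3: near the 10/00 boundary, first bit uncertain
    (-1.0, -4.0),   # state 4
    (-4.0, -4.0),   # state 5: deep in the "00" distribution
    (-4.0, -1.0),   # state 6: near the 00/01 boundary
    (-4.0, +4.0),   # state 7: deep in the "01" distribution
]

def demodulate(read_state):
    """Map one resolved read state (0..7) to per-bit soft values for the decoder."""
    return LLR_TABLE[read_state]

print(demodulate(3))   # (+1.0, -4.0): first bit is a weak "1", second bit a strong "0"
```
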
  • a high resolution read may provide likelihood information regarding data stored in the memory.
  • Such raw likelihood data may be provided to a decoder as a soft-input to the decoder.
  • Figure 8 shows soft-input data such as described above being supplied to a decoder system 861 that includes a SISO decoder 863.
  • SISO decoders generally accept raw likelihood data and perform ECC calculations on the raw likelihood data to provide calculated likelihood data.
  • the calculated likelihood data may be considered a soft-output.
  • such a soft-output is then provided as an input to the SISO decoder so that a second decoding iteration is performed.
  • a SISO decoder may perform successive iterations until at least one predetermined condition is achieved.
  • a predetermined condition may be that all bits have a likelihood that is greater than a certain minimum value.
  • a predetermined condition could also be an aggregate of likelihood values such as a mean likelihood value.
  • a predetermined condition may be convergence of results from one iteration to the next (i.e. keep iterating until there is little improvement from additional iterations).
  • a predetermined condition may be that a predetermined number of iterations are completed. Combinations of these conditions may also be used. Decoding is performed using an encoded pattern in the data that is the result of encoding performed by encoder 865 on the data before it was stored.
  • Encoder 865 and decoder system 861 are both considered parts of ECC unit 867, which may be implemented in a memory system such as memory system 421 (ECC unit 867 is an example of a unit that may be used as ECC unit 431). Various encoding schemes are possible for use with a SISO decoder. Decoder system 861 also includes a soft-hard converter 864 to convert a soft-output from SISO decoder 863 to a hard-output.
  • Figure 9 shows an exemplary encoding scheme that may be used in a SISO decoder such as SISO decoder 863.
  • Data entries D11-D33 are arranged in rows and columns with parity bits calculated for each row and column. For example, P1 is calculated from the row comprising entries D11, D12 and D13. Similarly, P2 is calculated from the row comprising entries D21, D22 and D23. P4 is calculated from the column comprising entries D11, D21 and D31.
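
A short sketch of the Figure 9 arrangement: the data bits are laid out in a 3 x 3 matrix and one even-parity bit is computed per row and per column. The layout follows the description above; details such as the choice of even parity and the storage order of the parity bits are assumptions made for illustration.

```python
# Encode a 3 x 3 block of data bits with one even-parity bit per row (P1..P3)
# and one per column (P4..P6), as in the Figure 9 arrangement described above.

def encode_product(data):
    """data is a 3 x 3 list of 0/1 bits; returns (row_parities, column_parities)."""
    row_par = [sum(row) % 2 for row in data]           # P1, P2, P3
    col_par = [sum(col) % 2 for col in zip(*data)]     # P4, P5, P6
    return row_par, col_par

def check_product(data, row_par, col_par):
    """Return the indices of rows and columns whose parity no longer matches."""
    bad_rows = [i for i, row in enumerate(data) if sum(row) % 2 != row_par[i]]
    bad_cols = [j for j, col in enumerate(zip(*data)) if sum(col) % 2 != col_par[j]]
    return bad_rows, bad_cols

data = [[1, 0, 1],
        [0, 1, 1],
        [1, 1, 0]]
row_par, col_par = encode_product(data)
data[1][2] ^= 1                                  # flip one stored bit
print(check_product(data, row_par, col_par))     # ([1], [2]): the error sits at row 1, column 2
```
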
  • Figure 10 shows the outcome when the data encoded as shown in Figure 9 is later decoded in both a hard-input hard-output decoder and in a SISO decoder.
  • the two logical states of a binary system are represented by -1 and +1 instead of 0 and 1 respectively. It will be understood that any suitable notation may be used and this notation is simply convenient for this example.
  • the sign indicates whether the bit is most likely a 0 or a 1 and the magnitude of the number indicates the likelihood that this is the correct value.
  • a signal 101 is shown as a group of bits that include data bits and parity bits calculated according to the encoding scheme of Figure 9.
  • the signal is generally the output of an encoder that has suitable circuits to calculate parity bits.
  • the signal may be sent to a modulator which then provides suitable voltages to memory cells to program the memory cells to states to record the signal data.
  • Noise 103 is shown affecting two data bits in this example.
  • Noise is not limited to data bits and may also affect parity bits.
  • Noise may be the result of some physical characteristic of particular cells or may be the result of disturbs that occur in memory when one cell is affected by operations carried out on other cells in the array.
  • noise is considered additive so that data read from the memory reflects the signal data plus the noise, which then becomes the input data for decoding. But noise may have either positive or negative effects on the read value.
  • Input 105 is the raw data obtained from a demodulator connected to a memory. For example, where a read operation is performed with a high resolution, input data may be generated in this form so that a bit of either signal or parity data is represented by 0.1 instead of a 1 or a 0.
  • Input 105 may be considered a soft-input because it includes more than a simple 0 or 1 value.
  • input 105 is converted to hard input 107 by replacing all positive values by +1 and replacing all negative values by -1.
  • the most significant bit represents the sign and may be used as the means of conversion.
  • the hard-input hard-output decoder may attempt to correct the data using this hard input.
  • parity calculations indicate an error in each of the second and third rows and an error in each of the second and third columns. There is no unique solution in this situation because D22 and D33 could be in error or alternatively D32 and D23 could be in error.
  • the hard-input hard-output decoder cannot determine which of these solutions is correct. Therefore, a hard-input hard-output decoder is unable to correct the data in this situation.
  • soft-input correction data 109 is generated for the first row from the input.
  • Each entry of soft-input correction data 109 is calculated from the sign of the product of the other entries in the same row and the magnitude of the smallest entry in the row (calculation of likelihood values is described in more detail below). This gives an indication of what the other entries in the row indicate the entry should be and may be considered a calculated likelihood or an extrinsic likelihood (as opposed to the intrinsic likelihood of the Input).
  • the soft-input correction data 109 is then added to Input 105 to obtain the soft-output 111 of the first row-iteration.
  • the soft-output 111 of the first row-iteration thus combines the intrinsic and extrinsic likelihood values.
  • the soft-output reflects both intrinsic likelihood information from the raw data and extrinsic likelihood information derived from the other entries in the same row that share the same parity bit. Looking at the sign of the soft-output 113 at this point (converting the soft-output to a hard-output) shows that the data is fully corrected.
  • the soft-input soft-output decoding can correct this data where a hard-input hard-output decoder could not.
  • the soft-input soft-output decoder can make this correction in one iteration using only row parity calculations. If correction was not completed, then further calculations could be performed using column parity calculations. If such a first column-iteration did not provide full correction, then a second row-iteration could be performed. Thus, a SISO decoder may continue to work towards a solution where a hard-input decoder stops without finding a solution.
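
The row update just described, taking the sign of the product of the other entries and the magnitude of the smallest of them, then adding the result back to the input, is essentially a min-sum update. The sketch below applies it to one row of soft values; the sample numbers are invented, and the sign convention (no extra -1 factor) is one common choice, so this illustrates the idea rather than reproducing the Figure 10 figures.

```python
# Min-sum style row update, as sketched above: for each entry, the extrinsic value is
# the sign of the product of the *other* entries in the row times the magnitude of the
# smallest of those other entries, and the soft output is input + extrinsic.
# Sample values are invented; the exact sign convention depends on how 0/1 are mapped
# to -1/+1, so treat this as an illustration of the update, not of Figure 10's numbers.

def sign(x):
    return -1.0 if x < 0 else 1.0

def row_update(row):
    """row is a list of soft values (data and parity bits) sharing one parity check."""
    out = []
    for i, value in enumerate(row):
        others = row[:i] + row[i + 1:]
        extrinsic_sign = 1.0
        for other in others:
            extrinsic_sign *= sign(other)
        extrinsic = extrinsic_sign * min(abs(o) for o in others)
        out.append(value + extrinsic)
    return out

noisy_row = [0.9, -0.2, 1.1, -1.0]       # soft inputs for three data bits and a parity bit
print(row_update(noisy_row))             # [1.1, -1.1, 1.3, -1.2]
# The weak -0.2 entry is reinforced by its stronger neighbors, while entries that were
# already confident are only nudged.
```
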
  • parity bits are calculated for both rows and columns of the input data (just two input data entries per row or column in this example).
  • parity bits are calculated to be the sum modulo 2 of the input data bits of the row or column.
  • Figure 12 shows the input data being received by an encoder 121 in an ECC unit 123 that calculates the parity bits and appends them to the data.
  • four bits of parity data are appended to four bits of input data.
  • the input data and parity bits thus form encoded signal data that is sent to a modulator 125.
  • Modulator 125 programs individual memory cells according to the signal data. In this case, two bits are stored in each memory cell of the memory array, so the eight bits of signal data are stored in four cells having respective threshold voltage levels V1-V4.
  • the memory cells are read as having threshold voltage ranges V1'-V4'.
  • the read threshold voltage ranges are demodulated in a demodulator 127 to provide raw likelihood data (1.5, 0.1, 0.2, 0.3, 2.5, 2.0, 6.0, 1.0). This may be obtained using a lookup table or otherwise. In some cases, providing raw likelihood data is considered as a function within a decoder, but for the present case it is considered as taking place within demodulator 127.
  • demodulator 127 provides likelihood values as Log Likelihood Ratio values, though likelihood may be expressed in other formats also.
  • the raw likelihood values are positive for all data entries indicating that, if a hard-output was obtained directly from these entries, all data entries would be considered to be 1s at this point (providing two errors).
  • By providing these likelihood values to SISO decoder 129 in ECC unit 123, the data may be fully corrected.
  • Figures 13A-13D show how SISO decoder 129 corrects input data.
  • Decoder 129 is a particular example of a SISO decoder that may be used in a memory system such as memory system 421 (decoder 129 may be used as decoder 429).
  • Figure 13A shows a first horizontal iteration using row parity bits 131 to obtain first calculated likelihood values 133 from row likelihood values 132.
  • LLRs are added to obtain calculated likelihood values 133. It can be shown that the sum of two LLRs in this example is given by the product of the signs of the two LLRs and (-1), multiplied by the smaller LLR value.
  • the calculated likelihood corresponding to entry D11 is the LLR sum of 0.1 and 2.5, which is approximately -0.1.
  • the calculated likelihood corresponding to entry D12 is the LLR sum of 1.5 and 2.5, which is approximately -1.5.
  • Calculated (extrinsic) likelihood values 133 are then added to the raw (intrinsic) likelihood values 132 to obtain output likelihood values 135 from the first horizontal iteration. So the output likelihood value corresponding to entry D11 is the raw likelihood value 1.5 plus the calculated likelihood value -0.1, giving 1.4.
  • the output likelihood values 135 of the top row (1.4, -1.4) indicate relatively high likelihood values indicating that the correct bits are 1 and 0.
  • likelihood values 135 on the bottom row (-0.1, 0.1) indicate low likelihood values that the bits are 0 and 1 respectively. These likelihood values indicate the correct input bits. However, such low likelihood values may not be considered good enough to terminate decoding at this point. So, additional iterations may be performed.
  • Figure 13B shows the output likelihood values 135 from the first horizontal iteration being subjected to a first vertical iteration using column parity bits 137.
  • Calculated likelihood values 139 of the first vertical iteration are calculated in the same manner as before, this time along columns using column parity entries 137.
  • D11 is obtained from the LLR sum of D21 and P3 (-0.1 and 6.0, giving 0.1).
  • D21 is obtained from the LLR sum of D11 and P3 (1.4 and 6.0, giving -1.4).
  • calculated likelihood values 139 are obtained for each entry.
  • the calculated (extrinsic) likelihood values 139 are added to the input likelihood values 135 (the output likelihood values of the first horizontal iteration) to obtain output likelihood values 141 of the first vertical iteration.
  • the output likelihood values 141 obtained from the first vertical iteration (1.5, -1.5, -1.5, 1.1) may be considered sufficiently good to terminate decoding at this point. However, depending on the predetermined condition required to terminate decoding, more decoding iterations may be performed.
  • Figure 13C shows a second horizontal iteration being performed.
  • the calculated likelihood values 139 from the first vertical iteration are first added to the raw likelihood values 132 to obtain input values 143.
  • the input values 143 are then used with row parity entries 131 as before to obtain second horizontal calculated likelihood values 145.
  • the second horizontal likelihood values 145 are then added to the input values 132 to obtain the output likelihood values 147 of the second horizontal iteration.
  • the output likelihood values 147 of the second horizontal iteration do not provide an overall improvement in likelihood values from the output of the first vertical iteration.
  • Figure 13D shows a second vertical iteration being performed. The output values 147 from the second horizontal iteration are used as input values for this iteration.
  • Iterative decoding may cycle through iterations until some predetermined condition is met.
  • the predetermined condition may be that each likelihood value in an output set of likelihood values exceeds some minimum likelihood value.
  • the predetermined condition may be some parameter derived from more than one likelihood value, such as a mean or average likelihood.
  • the predetermined condition may simply be that a certain number of iterations are performed.
  • a SISO decoder provides output likelihood values that are then subject to another operation that indicates whether additional SISO iterations should be performed or not.
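
The stopping rules listed above (a per-bit minimum likelihood, an aggregate such as a mean, convergence, or an iteration cap) amount to a small control loop around the SISO update. The skeleton below shows one way such a loop might be organized; decode_iteration is a hypothetical placeholder for whichever SISO pass is in use, and the thresholds are illustrative values.

```python
# Skeleton of the iterative-decoding control loop discussed above. decode_iteration is
# a hypothetical placeholder for one SISO pass (e.g. one horizontal plus one vertical
# iteration); min_abs_llr, max_iters and epsilon are illustrative settings.

def run_siso(llrs, decode_iteration, min_abs_llr=2.0, max_iters=10, epsilon=0.05):
    previous = list(llrs)
    for iteration in range(1, max_iters + 1):
        current = decode_iteration(previous)
        if min(abs(v) for v in current) >= min_abs_llr:           # every bit confident enough
            return current, iteration
        if max(abs(c - p) for c, p in zip(current, previous)) < epsilon:
            return current, iteration                             # converged: little change left
        previous = current
    return previous, max_iters                                    # iteration cap reached

# Hard decisions are then taken from the signs of the returned soft values.
```
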
  • Figures 13A-13D may be considered an example of a technique known as Turbo decoding.
  • Horizontal and vertical parity bits provide two alternative encoding schemes that can be separately decoded. Using two such decoding schemes together and using the output from one decoding scheme as the input for the other, turbo coding generally provides high error correction capability.
  • Efficient decoding depends on having a suitable encoding/decoding scheme.
  • Various schemes are known for encoding data in a manner that is suitable for subsequent decoding in a SISO decoder.
  • Encoding/decoding schemes include, but are not limited to, turbo codes, product codes, BCH codes, Reed-Solomon codes, convolutional codes (see U.S. Patent Application Nos. 11/383,401 and 11/383,405), Hamming codes, and Low Density Parity Check (LDPC) codes.
  • LDPC codes are codes that have a parity check matrix that meets certain requirements, resulting in a sparse parity check matrix. This means that each parity check is performed over a relatively small number of bits.
  • An example of a parity check matrix H for an LDPC code is shown in Figure 14.
  • the conditions for an LDPC code are: (1) The number of 1s in each row is the same and the number is small in comparison to the total number of entries in the row. (2) The number of 1s in each column is the same and the number is small in comparison with the total number of entries in the column. (3) The number of 1s in common between any two columns is not greater than 1 (the number of 1s in common may only be zero or one).
  • Irregular LDPC codes allow some deviation in the number of 1s in columns and rows. Looking at H, each row has three 1s, out of a total of seven entries in a row, so that condition (1) is met. Each column has three 1s, out of a total of seven entries in a column, so that condition (2) is met. No two columns have more than one 1 in common, so that condition (3) is met. For example, the first, second and fourth columns all have a 1 as the top entry, but none of these columns has another 1 in common.
  • matrix H defines an LDPC code. The code consists of all code words that satisfy matrix H. This means that seven different parity check conditions (defined by the seven rows) must be met. Each parity check condition looks at three entries in a word. For example, the first row indicates that the first, second and fourth entries in a word must have a sum modulo two of zero.
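
The three conditions above are easy to check mechanically. The matrix in the sketch below is a 7 x 7 example built from cyclic shifts of 1101000; its first row constrains the first, second and fourth entries of a word, matching the description of H, but it is offered only as a plausible stand-in for Figure 14, not as the matrix actually shown there.

```python
# A 7 x 7 candidate parity-check matrix built from cyclic shifts of [1,1,0,1,0,0,0].
# It is a plausible stand-in for the Figure 14 matrix (its first row checks the first,
# second and fourth entries, as described above); the code verifies the three LDPC
# conditions explicitly and then evaluates a parity check.

from itertools import combinations

first_row = [1, 1, 0, 1, 0, 0, 0]
H = [first_row[-i:] + first_row[:-i] for i in range(7)]   # cyclic right shifts

row_weights = {sum(row) for row in H}
col_weights = {sum(col) for col in zip(*H)}
max_overlap = max(
    sum(a & b for a, b in zip(col_i, col_j))
    for col_i, col_j in combinations(list(zip(*H)), 2)
)
print(row_weights, col_weights, max_overlap)   # {3} {3} 1 -> conditions (1)-(3) hold

def satisfies(H, word):
    """True if every parity check (row of H) sums to zero modulo 2 over the word."""
    return all(sum(h * w for h, w in zip(row, word)) % 2 == 0 for row in H)

print(satisfies(H, [0] * 7))                  # True: the all-zero word is always a codeword
print(satisfies(H, [1, 1, 0, 1, 0, 0, 0]))    # False: this word violates the first parity check
```
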
  • Data may be encoded according to an LDPC code by calculating certain parity bits to form a codeword.
  • a codeword of the parity check matrix H may be formed of four data bits and three parity bits calculated from the four data bits. Each parity bit is calculated from a relatively small number of data bits, so encoding may be relatively simple, even where a large number of entries are encoded as a block.
  • a suitable LDPC code for memory applications uses a word of about 4,000-8,000 bits (1-2 sectors, where a sector is 512 bytes). For example, encoding according to the LDPC code may add approximately 12% to the unencoded data. The number of Is in a row of a parity check matrix for such a code may be about 32 out of about 4000, so that even though the word is large, the parity calculations are not excessively long. Thus, parity bits may be relatively easily calculated during encoding and parity may also be relatively easily checked during decoding.
  • LDPC codes may be used with hard-input hard-output decoding or SISO decoding. As shown earlier, SISO decoding can sometimes improve performance over hard-input hard- output decoding.
  • Raw likelihood values may be supplied to a SISO decoder as LLRs or in some other form.
  • An LDPC code can use a SISO decoder in an iterative manner. An entry is common to several parity groups, so a calculated likelihood value obtained from one group provides improved data for another parity group. Such calculations may be iteratively performed until some predetermined condition is met. LDPC decoding may sometimes provide poor results when the number of errors is very low; correcting errors below a certain number becomes difficult, creating an "error floor."
  • One solution to this problem is to combine LDPC decoding with some other form of decoding.
  • a hard-input hard-output decoder using BCH or some similar algebraic code may be added to an LDPC decoder.
  • the LDPC decoder reduces the number of errors to some low level, and then the BCH decoder decodes the remaining errors. Decoders operating in series in this manner are referred to as "concatenated.” Concatenated encoding is also performed before data is stored in the memory array in this case.
  • Figure 15 shows an example of concatenated encoding and decoding in an ECC unit 155.
  • encoding scheme A is a BCH encoding scheme that adds parity bits to the input data, increasing the amount of data by approximately 4% in one example.
  • Encoding scheme B is an LDPC encoding scheme that adds additional parity bits to the encoded data from encoder A, adding an additional 12% to the data in this example.
  • the doubly encoded data is then sent to a modulation/demodulation unit and is programmed to a nonvolatile memory array. Subsequently, the doubly encoded data is read from the memory array and is demodulated to provide a soft-input to decoder B. Decoder B decodes data using encoding scheme B. Similarly, decoder A uses encoding scheme A. Decoder B of this example is a SISO decoder that performs one or more decoding iterations on the doubly encoded data. When some predetermined condition is met, decoder B sends output data to decoder A. The output data from decoder B does not generally include entries for parity bits added by encoder B.
  • The output of decoder B is a soft-output.
  • This soft-output may be converted to hard data in a soft-hard converter 161 that removes likelihood information and converts the data to binary information.
  • This hard data is then provided as a hard-input to decoder A which performs hard-input, hard-output decoding.
  • the hard-output from the second decoder is then sent out of ECC unit 155 as corrected data.
  • the predetermined condition for terminating iterative decoding in decoder B is that decoder A indicates that the data is good. After an iteration is completed in decoder B, a soft-output may be converted to hard data and provided as a hard-input to decoder A. Decoder A then attempts to decode the data. If decoder A cannot decode the data, then decoder B performs at least one additional iteration. If decoder A can decode the data, then no more iterations are required in decoder B. Thus, in this example, decoder A provides a feedback 163 to decoder B to indicate when the decoder B should terminate.
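
The feedback arrangement just described can be sketched as a small loop: run one more SISO (decoder B) iteration, convert to hard bits, try the algebraic outer decoder (decoder A), and stop as soon as it succeeds. The function names below are hypothetical placeholders, not APIs of any particular library.

```python
# Sketch of the concatenated decoding loop described above. siso_iteration and
# algebraic_decode are hypothetical placeholders: the first performs one decoder B
# (e.g. LDPC SISO) pass over soft values, the second attempts hard-input decoding
# (e.g. BCH) and returns corrected bits, or None on failure.

def concatenated_decode(soft_values, siso_iteration, algebraic_decode, max_iters=20):
    for _ in range(max_iters):
        soft_values = siso_iteration(soft_values)             # one more decoder B iteration
        hard_bits = [1 if v > 0 else 0 for v in soft_values]  # soft-hard conversion
        corrected = algebraic_decode(hard_bits)               # decoder A attempt
        if corrected is not None:                             # feedback: decoder A succeeded,
            return corrected                                  # so decoder B stops iterating
    raise ValueError("data could not be corrected within the iteration limit")
```
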
  • While Figure 15 deals with concatenation of hard-input hard-output and SISO decoding, other combinations may also be used.
  • Two or more SISO decoders may be used in series and two or more hard-input hard-output decoders may also be used.
  • Soft-input in the examples described above is obtained by reading data with a higher resolution than was used to program the data.
  • other information may be used to derive soft-input data. Any quality information provided in addition to a simple 1 or 0 determination regarding a data bit stored in memory may be used to provide a soft-input.
  • a count is maintained of the number of times a block has been erased. Physical properties of the memory may change in a predictable manner as erase count increases, making certain errors more likely. An erase count may be used to obtain likelihood data where such a pattern is known. Other factors known to affect programmed data in a predictable way may also be used to obtain likelihood information.
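
As a toy illustration of the erase-count idea, the sketch below scales the magnitude of the read LLRs by a factor taken from a small wear table, so that heavily cycled blocks feed the decoder less confident values. The break points and factors are invented assumptions, not figures from the patent.

```python
# Hypothetical wear table: the more a block has been erased, the less confidence
# is attached to its raw readings. Break points and factors are invented.

WEAR_FACTORS = [(1_000, 1.0), (10_000, 0.8), (100_000, 0.5)]

def confidence_factor(erase_count):
    for limit, factor in WEAR_FACTORS:
        if erase_count < limit:
            return factor
    return 0.3                     # very heavily cycled blocks get the least confidence

def derate_llrs(llrs, erase_count):
    """Scale LLR magnitudes down for worn blocks before handing them to the SISO decoder."""
    f = confidence_factor(erase_count)
    return [v * f for v in llrs]

print(derate_llrs([4.0, -3.0, 0.5], erase_count=25_000))   # [2.0, -1.5, 0.25]
```
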
  • data may be read from the memory array with the same resolution used to program it and still be used to provide a soft-input.
  • Various sources of likelihood information may be combined.
  • likelihood information from reading data with a high resolution may be combined with likelihood data from another source.
  • a soft-input is not limited to likelihood information obtained directly from reading the memory array.
  • Various nonvolatile memories are currently in use and the techniques described here may be applied to any suitable nonvolatile memory system.
  • Such memory systems may include, but are not limited to, memory systems based on ferroelectric storage (FRAM or FeRAM), memory systems based on magnetoresistive storage (MRAM), and memories based on phase change (PRAM or "OUM" for "Ovonic Unified Memory").

Abstract

In a nonvolatile memory system, data is read from a memory array and used to obtain likelihood values, which are then provided to a soft-input soft-output decoder. The soft-input soft-output decoder calculates output likelihood values from input likelihood values and from parity data that was previously added according to an encoding scheme. The likelihood is derived by comparing a measured memory cell voltage with more than one reference voltage.

Description

NONVOLATILE MEMORY WITH ERROR CORRECTION BASED ON THE LIKEHOOD THE ERROR MAY OCCUR
BACKGROUND OF THE INVENTION

This invention relates to nonvolatile memory systems and to methods of operating nonvolatile memory systems.
Nonvolatile memory systems are used in various applications. Some nonvolatile memory systems are embedded in a larger system such as a personal computer. Other nonvolatile memory systems are removably connected to a host system and may be interchanged between different host systems. Examples of such removable memory systems include memory cards and USB flash drives. Electronic circuit cards, including non-volatile memory cards, have been commercially implemented according to a number of well-known standards. Memory cards are used with personal computers, cellular telephones, personal digital assistants (PDAs), digital still cameras, digital movie cameras, portable audio players and other host electronic devices for the storage of large amounts of data. Such cards usually contain a re-programmable non-volatile semiconductor memory cell array along with a controller that controls and supports operation of the memory cell array and interfaces with a host to which the card is connected. Several of the same type of card may be interchanged in a host card slot designed to accept that type of card. However, the development of the many electronic card standards has created different types of cards that are incompatible with each other in various degrees. A card made according to one standard is usually not useable with a host designed to operate with a card of another standard. Memory card standards include PC Card, CompactFlash™ card (CF™ card), SmartMedia™ card, MultiMediaCard (MMC™), Secure Digital (SD) card, a miniSD™ card, Subscriber Identity Module (SIM), Memory Stick™, Memory Stick Duo card and microSD/TransFlash™ memory module standards. There are several USB flash drive products commercially available from SanDisk Corporation under its trademark "Cruzer®." USB flash drives are typically larger and shaped differently than the memory cards described above.
Data stored in a nonvolatile memory system may contain erroneous bits when the data is read. Traditional ways to reconstruct corrupted data include the application of Error Correction Codes (ECCs). Simple Error Correction Codes encode data by storing additional parity bits, which set the parity of groups of bits to a required logical value, when the data is written into the memory system. If the data becomes corrupted during storage, the parity of groups of bits may change. Upon reading the data from the memory system, the parity of the groups of bits is computed again by the ECC. Because of the data corruption, the computed parity may not match the required parity condition, and the ECC may detect the corruption.
ECCs can have at least two functions: error detection and error correction. Capability for each of these functions is typically measured by the number of bits that can be detected as erroneous and subsequently corrected. Detection capability can be the same as or greater than the correction capability. A typical ECC can detect a higher number of error bits than it can correct. A collection of data bits and parity bits is sometimes called a word. An early example is the (7,4) Hamming code, which has the capability of detecting up to two errors per word (seven bits in this example) and has the capability of correcting one error in such a seven-bit word.
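By way of illustration only, the following sketch implements the (7,4) Hamming code mentioned above: three parity bits are computed over a four-bit word, and the recomputed parity checks (the syndrome) locate and correct a single-bit error in the seven-bit word. The bit layout chosen here is the conventional one and is not taken from this disclosure.
```python
# Minimal (7,4) Hamming code sketch -- illustrative only, not an encoding
# scheme of this disclosure. Bit positions 1..7; positions 1, 2 and 4 hold
# the parity bits.

def hamming74_encode(d):
    """d is a list of four data bits; returns the seven-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recomputes the parity checks; a nonzero syndrome gives the position
    of a single-bit error, which is then flipped."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 means no detected error
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1           # flip the erroneous bit
    return c

codeword = hamming74_encode([1, 0, 0, 1])
corrupted = list(codeword)
corrupted[4] ^= 1                       # introduce a single-bit error
assert hamming74_correct(corrupted) == codeword
```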
More sophisticated ECCs can correct more than a single error per word, but reconstructing the data becomes increasingly complex computationally. Common practice is to recover the data with some acceptably small likelihood of incorrect recovery. However, as the number of errors increases, the probability of reliable data recovery decreases rapidly, or the associated costs in additional hardware and/or performance become prohibitively high.
In semiconductor memory devices, including EEPROM systems, data can be represented by the threshold voltages of transistors. Typically, different digital data storage values correspond to different voltage ranges. If, for some reason, before or during the read operation the voltage levels shift from their programmed ranges, an error occurs. The error may be detected by the ECC and in some cases these errors may be corrected.
SUMMARY OF THE INVENTION
A nonvolatile memory array is connected to a decoder so that encoded data read from the memory array is used to calculate likelihood values associated with bits stored in the memory array. An example of such a decoder is a Soft-Input Soft-Output (SISO) decoder. The encoded data may be read with a high resolution that gives an indication of likelihood associated with a data bit, not just the logical value of the data bit. For example, where binary data is encoded as +1/-1 volt in a memory, the actual voltage read may be used by the ECC decoder instead of just the sign. Likelihood values may be derived from the values read or other sources. Likelihood values may be provided as a soft-input to a SISO decoder. The output of the SISO decoder may be converted to a hard-output by a converter. The hard-output represents corrected data. In some cases, a SISO decoder may perform calculations in multiple iterations until some predetermined condition is met. In a nonvolatile memory, a high resolution read may be achieved by selecting appropriate voltages for individual read steps so that a higher density of reads occurs for a certain portion of a particular threshold voltage function than occurs at another portion. This provides additional resolution for areas of interest, for example, where threshold voltage functions have significant overlap.
In a nonvolatile memory, a demodulator may convert voltages from a memory array into likelihood values. Where more than one bit is stored in a cell, a separate likelihood value may be obtained for each bit. Such likelihood values may be used as a soft-input for a SISO decoder.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows likelihood functions of threshold voltages of cells programmed to a logic 1 state and a logic 0 state in a nonvolatile memory, including a voltage VD used to discriminate logic 1 and logic 0 states.
Figure 2 shows components of a memory system including a memory array, modulator/demodulator circuits and encoder/decoder circuits.
Figure 3 shows likelihood functions of read threshold voltages of cells programmed to a logic 1 state and a logic 0 state, showing threshold voltage values.
Figure 4 shows components of a memory system including a memory array, modulator/demodulator circuits and encoder/decoder circuits, a demodulator providing likelihood values to a decoder.
Figure 5 shows a NAND string connected to a sense amplifier to read the state of a memory cell.
Figure 6A shows likelihood functions of read threshold voltages of cells programmed to a logic 1 state and a logic 0 state including three threshold voltages.
Figure 6B shows likelihood functions of read threshold voltages of cells programmed to four states and shows threshold voltages where cells are read.
Figure 7 shows individual likelihood values for both a first and a second bit as a function of threshold voltage in a memory that stores two bits per cell.
Figure 8 shows an encoder/decoder unit having a Soft-Input Soft-Output (SISO) decoder.
Figure 9 shows an exemplary encoding scheme where the input data is arranged in a square matrix and a parity bit is calculated for each row and column.
Figure 10 shows a particular example of a signal that is subject to noise causing errors in data that are not correctable using a hard-input decoder but are correctable using a SISO decoder.
Figure 11 shows an alternative encoding scheme where parity bits are calculated for input data, the input data arranged in rows and columns, a parity bit calculated for each row and column.
Figure 12 shows components of a memory system including an encoder that provides the encoding shown in Figure 11 and a demodulator that provides raw likelihood values to a SISO decoder.
Figure 13A shows a first horizontal iteration performed by the SISO decoder of Figure 12.
Figure 13B shows a first vertical iteration performed by the SISO decoder of Figure 12.
Figure 13C shows a second horizontal iteration performed by the SISO decoder of Figure 12.
Figure 13D shows a second vertical iteration performed by the SISO decoder of Figure 12.
Figure 14 shows a Low Density Parity Check (LDPC) parity check matrix used in a SISO decoder.
Figure 15 shows an encoder/decoder having concatenated encoders and having concatenated decoders.
DETAILED DESCRIPTION OF ILLUSTRATED EMBODIMENTS
In many nonvolatile memories, data read from a memory array may have errors. That is, individual bits of input data that are programmed to a memory array may later be read as having a different logical value. Figure 1 shows the relationship between a physical parameter indicating a memory cell state (threshold voltage, VT) and the logical values to which the memory cell may be programmed. In this example, only two states are stored in the cell. Thus, the cell stores one bit of data. Cells programmed to the logic 0 state generally have a higher threshold voltage than cells in the logic 1 (unprogrammed) state. In an alternative scheme, the logic 1 state is the unprogrammed state of the memory cell. The vertical axis of Figure 1 indicates the likelihood of reading a cell at any particular threshold voltage based upon expected threshold voltage distribution. A first likelihood function is shown for cells programmed to logic 1 and a second for cells programmed to logic 0. However, these functions have some degree of overlap between them. A discrimination voltage VD is used in reading such cells. Cells having a threshold voltage below VD are considered to be in state 1, while those having a threshold voltage above VD are considered to be in state 0. As Figure 1 shows, this may not always be correct. Because of the overlap between functions, there is a non-zero likelihood that a memory cell programmed to a logic 1 state will be read as having a threshold voltage greater than VD and so will be read as being in a logic 0 state. Similarly, there is a non-zero likelihood that a memory cell programmed to a logic 0 state will be read as having a logic 1 state. Overlap between functions occurs for a number of reasons including physical defects in the memory array and disturbance caused to programmed cells by later programming or reading operations in the memory array. Overlap may also occur due to a general lack of ability to keep a large number of cells within a very tight threshold voltage range. Certain programming techniques may allow functions of threshold voltages to be narrowed (have smaller standard deviations). However, such programming may take more time.
In some memory systems, more than one bit is stored in a memory cell. In general, it is desirable to store as many bits as possible in a memory cell. In order to efficiently use the available threshold voltage range, functions for adjacent states may be such that they significantly overlap.
Nonvolatile memory systems commonly employ ECC methods to overcome errors that occur in data that is read from a memory array. Such methods generally calculate some additional ECC bits from input data to be stored in a memory array according to an encoding system. Other ECC schemes may map input data to output data in a more complex way. The ECC bits are generally stored along with the input data or may be stored separately. The input data and ECC bits are later read from the nonvolatile memory array together and a decoder uses both the data and ECC bits to check if any errors are present. In some cases, such ECC bits may also be used to identify a bit that is in error. The erroneous bit is then corrected by changing its state (changed from a "0" to a "1" or from a "1" to a "0"). Appending ECC bits to data bits is not the only way to encode data before storing it in a nonvolatile memory. For example, data bits may be encoded according to a scheme that provides the following transformations: 00 to 1111, 01 to 1100, 10 to 0011 and 11 to 0000.
Figure 2 shows an example of input data being stored in a memory system 200. Input data is first received by an ECC unit 201 that includes an encoder 203. The input data may be host data to be stored in memory system 200 or may be data generated by a memory controller. The example of Figure 2 shows four input data bits 1001. Encoder 203 then calculates ECC bits (1111) from the input data bits using an encoding scheme. One example of an encoding scheme is to generate ECC bits that are parity bits for selected groups of data bits.
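The parity-bit style of encoding mentioned above can be sketched as follows. The particular groupings are hypothetical and chosen only to illustrate the idea; Figure 2 does not specify how encoder 203 derives the ECC bits 1111 from the input bits 1001.
```python
# Sketch of parity-bit encoding over selected groups of data bits. The group
# definitions are hypothetical and are not the scheme used by encoder 203.

def encode_with_parity(data_bits, groups):
    """Appends one parity bit per group; each group is a tuple of indices
    into data_bits and its parity bit is the XOR of those bits."""
    parity_bits = []
    for group in groups:
        p = 0
        for i in group:
            p ^= data_bits[i]
        parity_bits.append(p)
    return data_bits + parity_bits

data = [1, 0, 0, 1]
groups = [(0, 1), (1, 2), (2, 3), (0, 3)]    # hypothetical groupings
print(encode_with_parity(data, groups))      # data bits followed by parity bits
```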
Both the input data bits and the ECC bits are then sent to a modulation/demodulation unit 205 that includes a modulator 207. Modulator 207 converts the digital data sent by ECC unit 201 to a form in which it is written in a memory array 209. In one scheme, the digital data is converted to a plurality of threshold voltage values in a plurality of memory cells. Thus, various circuits used to convert digital data to a stored threshold voltage in a memory cell may be considered to form a modulator. In the example of Figure 2, each memory cell may hold one bit of data. Thus, each memory cell may have a threshold voltage in one of two ranges, one signifying a logic "1" state and the other signifying a logic "0" state as shown in Figure 1. The memory cells storing a logic "1" state have a threshold voltage that is less than VD (<VD) while the memory cells storing a logic "0" state have a threshold voltage that is greater than VD (>VD). Cells may be programmed and verified to a nominal threshold voltage higher than VD to ensure that, at least initially, there is some preferred separation between cells programmed to the two logic states.
Data may be stored in memory array 209 for some period of time. During this time, various events may occur to cause threshold voltages of memory cells to change. In particular, operations involving programming and reading may require voltages to be applied to word lines and bit lines in a manner that affects other previously programmed cells. Such disturbs are particularly common where dimensions of devices are reduced so that the interaction between adjacent cells is significant. Charge may also be lost over long periods of time. Such data retention failures can also cause data to change when read. As a result of such changes, data bits may be read out having different states than the data bits originally programmed. In the example of Figure 2, one input data bit 211 is read as having a threshold value less than VD (<VD) when it was originally written having a threshold value greater than VD (>VD).
The threshold voltages of memory cells are converted to bits of data by a demodulator 213 in modulation/demodulation unit 205. This is the reverse of the process performed by the modulator. Demodulator 213 may include sense amplifiers that read a voltage or current from a memory cell in memory array 209 and derive the state of the cell from the reading. In the example of Figure 2, a memory cell having a threshold voltage less than VD (<VD) gives a demodulated output of "1" and a memory cell having a threshold voltage that is greater than VD (>VD) gives a demodulated output of "0." This gives the output sequence 11011111 shown. The second bit 208 of this sequence is in error as a result of being stored in the memory array 209.
The output of demodulator 213 is sent to a decoder 215 in the ECC unit 201. Decoder 215 determines from data bits and ECC bits if there are any errors. If a small number of errors is present that is within the correction capability of the code, the errors are corrected. If large numbers of errors are present, they may be identified but not corrected if they are within the detection capability of the code. If the number of errors exceeds the detection capability of the code, the errors may not be detected, or may result in an erroneous correction. In the example of Figure 2, the error in the second bit is detected and is corrected. This provides an output (1001) from decoder 215 that is identical to the input sequence. The decoding of memory system 200 is considered to be hard-input hard-output decoding because decoder 215 receives only data bits representing input data bits and ECC bits, and decoder 215 outputs a corrected sequence of data bits corresponding to input data bits (or fails to give an output if the number of errors is too high).
An alternative memory system to memory system 200 is shown in Figures 3 and 4. Figure 3 shows similar functions to those of Figure 1 with VD=0 and with threshold voltages below VD representing logic 0 and voltages above VD representing logic 1. Instead of showing a single voltage VD dividing threshold voltages into two different ranges, here the threshold voltages are indicated by actual voltage numbers. The function corresponding to logic "1" is centered above 0 volts and the function corresponding to logic "0" is centered below 0 volts.
Figure 4 shows a memory system 421 using a data storage process that is similar to that of memory system 200 (using the same input data bits and ECC bits) with a different read process. In particular, instead of simply determining whether a threshold voltage is above or below a particular value, memory system 421 reads threshold voltages as shown in Figure 3. It will be understood that actual threshold voltage is not necessarily read. Other means of cell operation may be used to store and retrieve data (e.g. current sensing). Voltage sensing is merely used as an example. Generally, threshold voltage refers to a gate voltage at which a transistor turns on. Figure 4 shows a read occurring that provides more detailed information than the previous example. This may be considered a read with a higher resolution than that of Figure 2 (and a resolution that resolves more states than are used for programming). As in the previous example, errors occur in the read data. Here, the readings corresponding to the second and third bits are in error. The second and third bits were logic "0" and were stored by programming a cell to have a threshold voltage less than VD but the cells are read as having threshold voltages of 0.05 volts and 0.10 volts which are higher than VD (VD = 0 volts).
The raw voltages read from memory array 423 of Figure 4 by a series of read operations are sent to a demodulator 425 in a modulation/demodulation circuit 427. The raw voltages have a finite resolution dictated by the resolution of the Analog-to-Digital conversion. Here, raw data is converted into likelihood data. In particular, each cell reading is converted into a likelihood that the corresponding bit is a one or a zero. The series of readings from the memory array (0.75, 0.05, 0.10, 0.15, 1.25, 1.0, 3.0, and 0.5 volts) can indicate not only the state of the cell, but can also be used to provide a degree of certainty as to that state. This may be expressed as a likelihood that a memory cell was programmed with a particular bit. Thus, readings that are close to 0 volts may give low likelihood values, while readings that are farther from 0 volts give higher likelihood values. The likelihood values shown are log likelihood ratios (explained in detail below). This provides negative numbers for cells in a logic 0 state and positive numbers for cells in a logic 1 state, with the magnitude of the number indicating the likelihood that the state is correctly identified. The second and third likelihood values (0.1, 0.2) indicate logic "1". The second and third values indicate likelihoods that are quite low. Likelihood values are sent to a decoder 429 in an ECC unit 431 (in some cases, obtaining likelihood values from raw values may be considered as being performed in the decoder). The decoder 429 performs decoding operations on likelihood values. Such a decoder may be considered a soft-input decoder. In general, soft-input refers to an input that includes some quality information related to data that are to be decoded. The additional information provided as a soft-input generally allows a decoder to obtain better results. A decoder may perform decoding calculations using a soft-input to provide calculated likelihood values as an output. This is considered a soft-output and such a decoder is considered a Soft-Input Soft-Output (SISO) decoder. This output can then be used again as input to the SISO decoder to iterate the decoding and improve results.
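One way a demodulator might convert such raw voltage readings into log likelihood ratios is sketched below. The Gaussian model and the state centers at +1 V (logic 1) and -1 V (logic 0) are assumptions made for this sketch; the description above only fixes the sign convention and the general behavior that readings near VD = 0 volts receive low-magnitude values. With these assumed parameters, the readings 0.05 and 0.10 volts happen to map to 0.1 and 0.2, matching the low second and third likelihood values discussed above.
```python
# Illustrative demodulation of raw cell voltages into log likelihood ratios.
# Assumes Gaussian threshold-voltage functions centered at +1 V (logic 1) and
# -1 V (logic 0) with a common standard deviation sigma; these parameters are
# assumptions for this sketch, not values given in the description.

def voltage_to_llr(v, mean1=1.0, mean0=-1.0, sigma=1.0):
    """LLR = log( P(read v | logic 1) / P(read v | logic 0) ). For two
    equal-variance Gaussians this difference of log-densities grows linearly
    with v, so readings near VD = 0 V give low-confidence (small) LLRs."""
    log_p1 = -((v - mean1) ** 2) / (2 * sigma ** 2)
    log_p0 = -((v - mean0) ** 2) / (2 * sigma ** 2)
    return log_p1 - log_p0

readings = [0.75, 0.05, 0.10, 0.15, 1.25, 1.0, 3.0, 0.5]   # volts, per Figure 4
print([round(voltage_to_llr(v), 2) for v in readings])
```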
A SISO decoder may form part of a larger decoder that provides a hard output to another unit. SISO decoders generally provide good performance and in some cases may provide better performance than is possible with hard-input hard-output decoding. In particular, for the same amount of overhead (number of ECC bits) a SISO decoder may provide greater error correction capability. In order to efficiently use a SISO decoder, a suitable encoding/decoding scheme may be implemented and demodulation is adapted to efficiently obtain a soft-input without excessive complexity and without requiring excessive time for reading data from the memory array.
In one embodiment, a soft-input for a SISO decoder is provided by reading data in a nonvolatile memory array with a resolution that resolves a larger number of states than were used in programming the memory. Thus, data may be written by programming a memory cell to one of two threshold voltage ranges and subsequently read by resolving three or more threshold voltage ranges. Typically, the number of threshold voltage ranges used in reading will be some multiple of the number of threshold voltage ranges used in programming (for example, twice as many). However, this is not always the case.
An encoder/decoder circuit (ECC unit) may be formed as a dedicated circuit, or this function may be performed by firmware in a controller. Typically, a controller is an Application Specific Integrated Circuit (ASIC) that has circuits designed for specific functions such as ECC and also has firmware to manage controller operations. Thus, an encoder/decoder may be formed by a combination of hardware and firmware in the memory controller. The modulator/demodulator circuits may be on a memory chip, on a controller chip, on a separate chip or some combination. Generally, modulation circuits will include at least some components on the memory chip (such as peripheral circuits connected to a memory array). While Figure 4 indicates threshold voltages being read to a high resolution (an analog read), the degree of resolution chosen may depend on a number of factors including the type of nonvolatile memory used.
Figure 5 shows a string 541 of a NAND flash memory array undergoing a read operation. A NAND flash memory is comprised of strings of memory cells connected in series, isolated by select transistors in groups collectively called blocks, the basic unit of erase. In order to read the selected cell, the other cells of the string are turned on hard so that the current flowing through the string depends on the selected cell. Appropriate bias voltages are placed on the gates of the string select transistors 543, 545 at either end of string 541 (typically, one end is connected to ground) and one or more voltages are sequentially applied to the word line that extends over the selected cell. For a cell holding one bit of data, only a single voltage may be needed. For cells holding more than one bit (Multi Level Cells, or MLC), a voltage sequence typically consists of sequentially increasing voltage steps or a binary search pattern. Each step corresponds to a discrimination voltage. A cell storing two bits requires four states and a cell storing three bits requires eight states, etc. A sense amplifier 547 attached to a bit line determines when the cell switches on and the word line voltage that first causes such switching indicates the threshold voltage range of the cell. The resolution of the read operation depends on the number of voltage steps provided. For example, a single bit read may require 25 microseconds to complete a sensing operation, while a two bit read for the same memory requires 75 microseconds to complete the three sensing operations to fully resolve four states. More voltage steps provide a higher resolution but this requires more time. Fewer voltage steps increase speed but provide poorer resolution. Typically, read operations are performed with the same resolution used to perform program operations. Thus, if a program operation programs and verifies cells to one of four states, the read operation has sufficient resolution to resolve four threshold voltage ranges. This may require three voltage steps for a cell that has four possible states. Various structures of NAND flash memory systems and methods of operating NAND flash memory systems are described in US Patent Nos. 7,888,621; 7,092,290 and 6,983,428.
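The stepped sensing described above can be sketched as a simple search for the threshold voltage range of a cell; the word-line step values and the idealized sense-amplifier model below are stand-ins, not parameters of any particular device.
```python
# Sketch of resolving a cell's threshold-voltage range by stepping the word
# line through increasing discrimination voltages. The step values and the
# cell model are illustrative stand-ins, not device parameters from the text.

def read_threshold_range(cell_vt, word_line_steps):
    """Returns the index of the threshold-voltage range the cell falls in.
    The sense amplifier sees the cell conduct once the word-line voltage
    exceeds the cell's threshold voltage, so the first step that turns the
    cell on identifies its range; index len(word_line_steps) means the cell
    did not conduct at any step (the highest range)."""
    for index, v_step in enumerate(sorted(word_line_steps)):
        if cell_vt < v_step:             # idealized sensing decision
            return index
    return len(word_line_steps)

# One step resolves two states; three steps resolve four states, and so on.
print(read_threshold_range(cell_vt=1.7, word_line_steps=[1.0]))             # 1
print(read_threshold_range(cell_vt=1.7, word_line_steps=[0.5, 1.5, 2.5]))   # 2
```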
Figure 6A shows an example of a single-bit memory cell that is read with a high resolution that resolves more states than the number of states used in programming the memory. As before, the horizontal axis indicates threshold voltage (VT) and the vertical axis indicates likelihood of a cell having this threshold voltage for a given programmed state. In the example of Figure 1, a single read was performed to determine if a cell was programmed into one of two states. In contrast, here three reads are performed to determine if the cell is in one of four read threshold voltage ranges, 651-654. Thus, the cell is programmed to one of two threshold voltage ranges (corresponding to two logic states) and is later read with a resolution that identifies the cell as being in one of four threshold voltage ranges (four read states).
Voltages V1, V2, V3 chosen for performing reads are such that the four threshold voltage ranges 651-654 are not equal in size and reads are concentrated near where the two functions (for logic 1 and logic 0) overlap. One read (at a discrimination voltage V2) is similar to that of Figure 1 and indicates which state (0 or 1) the memory cell is in. The other two reads (at V1 and V3) are within the threshold voltage ranges of logic 0 and logic 1 but are not centered in these threshold voltage ranges. Instead, these reads are arranged closer to V2. The four read threshold voltage ranges 651-654 give an indication of the likelihood that a particular read bit is correct. Thus, for a logic 0, a reading below V1 (threshold voltage range 651) has a high likelihood of being correct, while a reading between V1 and V2 (threshold voltage range 652) has a lower likelihood of being correct. For logic 1, a reading between V2 and V3 (threshold voltage range 653) has a comparatively low likelihood of being correct, while a reading above V3 (threshold voltage range 654) has a higher likelihood of being correct. It can be seen that reading with a resolution that resolves a higher number of states than were used in programming allows a read operation to obtain likelihood information regarding the data being read.
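The mapping from the four read ranges 651-654 to likelihood values can be sketched as a small table. The discrimination voltages and LLR magnitudes below are invented for the example; the description only requires that the ranges adjacent to V2 carry lower confidence, with negative values for logic 0 and positive values for logic 1.
```python
# Sketch: map the four read threshold voltage ranges 651-654 of Figure 6A to
# coarse LLR values. The voltages and magnitudes are invented placeholders.

V1, V2, V3 = -0.3, 0.0, 0.3            # reads concentrated near the overlap

def range_to_llr(cell_vt):
    if cell_vt < V1:                   # range 651: confidently logic 0
        return -4.0
    elif cell_vt < V2:                 # range 652: weakly logic 0
        return -0.5
    elif cell_vt < V3:                 # range 653: weakly logic 1
        return +0.5
    else:                              # range 654: confidently logic 1
        return +4.0

print([range_to_llr(v) for v in (-1.0, -0.1, 0.2, 1.1)])
```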
Figure 6B shows an example of a two-bit MLC memory cell being read with a high resolution that resolves more states than the number of programmed states. Figure 6B shows a series of read operations being performed with increasing resolution. During READ 1, the threshold voltage of the cell is resolved into one of four states corresponding to a threshold voltage less than V1, between V1 and V2, between V2 and V3, and greater than V3. This first read resolves the same number of states as were used in programming. A second read READ 2 is performed to give a higher resolution. READ 2 resolves a programmed state such as "10" into three read states that correspond to a central portion of the threshold voltage function (between V5 and V6) and two outer portions of the threshold voltage function (one between V1 and V5, the other between V6 and V2). A third read, READ 3, is performed to give higher resolution again. READ 3 resolves the read states of READ 2 so that outer portions are further resolved. In this example, read states corresponding to central portions are not further resolved. The read operations may be performed in the order READ 1, then READ 2, then READ 3 or in any other order. Alternatively, individual read steps may be performed in some other order so that they are combined in a single read operation. For example, read steps may be performed starting from the lowest threshold voltage and going up sequentially according to threshold voltage. The read steps of READ 1, READ 2 and READ 3 are arranged in a pattern having a higher density of read operations for outer portions of the threshold voltage function of a particular programmed state than for a central portion. This provides more information regarding outer portions of threshold voltage functions than central portions of such functions. This is because a cell having a threshold voltage in a central portion of a threshold voltage function for a particular state may be assumed to have a high likelihood of being in that state (close to zero likelihood of being in another state) so that further resolution is not required. Outer portions of a particular function may overlap a neighboring function. More information about such an overlap region (where likelihood values change) is desirable.
The above description relates to particular techniques for reading NAND flash memory cells. Other reading techniques may also be used. In some memories, a single read step may provide information regarding the programmed level of a memory cell. For example, in some NOR flash memories, the state of a memory cell is read by measuring the current through the cell under certain biasing conditions. In such a memory, a current mirror can be used to replicate the current from the cell; the replicated current can then be compared with several reference currents in parallel. Thus, a high resolution read may be performed in a single step.
Figure 7 shows how likelihood values related to individual bits may be derived from threshold voltage information from a cell storing more than one bit of data. In this case, individual likelihood values are assigned to each bit. Figure 7 shows a likelihood function for the four states (11, 10, 00, 01) of Figure 6B. Figure 7 also shows a likelihood across all four threshold voltage ranges for the first bit (leftmost bit). Likelihood here is shown as the likelihood that a particular bit is a "1"; likelihood could also be given in terms of likelihood that a bit is a "0." A likelihood level of 0 is shown. This is the level at which there is an equal likelihood of a 1 or a 0. Below the 0 level, there is a larger likelihood of a 0. Because the two states on the left "11" and "10" both have a "1" as the first bit, the likelihood on the left of this graph is high (>0). The two states on the right "00" and "01" both have a "0" as the first bit, so the likelihood on this side is low (<0). Figure 7 also shows a likelihood across all four threshold voltage ranges for the second bit (rightmost bit). This likelihood is high at either end of the threshold voltage range and low in the middle. Thus, the likelihood values for the two bits have very different patterns. Figure 7 shows a threshold voltage V1 that gives a first-bit likelihood P1 and a second-bit likelihood P2. Here, P1 is large because threshold voltage V1 is not close to a threshold voltage associated with a state having a "0" as the first bit. However, P2 is small because V1 is close to the threshold voltage range for the "10" state. This indicates that, while this bit is probably a 1, the likelihood is only a little higher than the likelihood that it is a 0. Figure 7 shows that likelihood may be very different for different bits stored in the same cell. Even where a read operation indicates a threshold voltage that is in a region of overlap between threshold voltage ranges of neighboring states and thus has an increased risk of being misread, an individual bit may have a very high likelihood value where the overlap is between two states that both have the individual bit in common. Thus, it may be beneficial to determine likelihood on a bit-by-bit basis instead of a cell-by-cell basis. A demodulator may use correlation between threshold voltage and individual bit likelihood values like those shown in Figure 7 to provide raw likelihood values based on readings from a memory array. Where a read operation identifies a threshold voltage range for a cell, likelihood values for each bit may be associated with each such range. Likelihood values may be derived from a variety of information such as characteristics of memory cell behavior for a given technology or from experience of a given memory device. In some cases, likelihood values may vary at different stages during the lifetime of a device.
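The derivation of separate per-bit likelihood values from a single cell reading can be sketched as below for a two-bit cell. The state centers, the common standard deviation and the Gaussian model are assumptions made only for this example; the bit-to-state mapping 11, 10, 00, 01 follows Figure 7.
```python
# Sketch of per-bit likelihood extraction for a two-bit cell, following the
# idea of Figure 7. The state centers, sigma and the Gaussian model are
# assumptions for this example only.
import math

STATE_CENTERS = {"11": -2.0, "10": -0.7, "00": 0.7, "01": 2.0}   # assumed VT centers
SIGMA = 0.5

def state_likelihood(vt, center):
    return math.exp(-((vt - center) ** 2) / (2 * SIGMA ** 2))

def per_bit_llrs(vt):
    """For each bit position, LLR = log of (sum of likelihoods of states whose
    bit is 1) over (sum of likelihoods of states whose bit is 0)."""
    llrs = []
    for bit_pos in range(2):
        p_one = sum(state_likelihood(vt, c)
                    for s, c in STATE_CENTERS.items() if s[bit_pos] == "1")
        p_zero = sum(state_likelihood(vt, c)
                     for s, c in STATE_CENTERS.items() if s[bit_pos] == "0")
        llrs.append(math.log(p_one / p_zero))
    return llrs

# A reading midway between the "11" and "10" centers: the first bit is still
# known with high confidence (both nearby states share it), while the second
# bit is almost completely uncertain.
print([round(x, 2) for x in per_bit_llrs(-1.35)])
```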
Likelihood may be expressed in different ways. One common way to express likelihood for binary data is as a Log Likelihood Ratio (LLR). The LLR associated with a particular bit is the log of the ratio of the likelihood that the bit is a "1" to the likelihood that the bit is a "0" for a particular reading x. Thus:
LLR(x) = log [ P(d = 1|x) / P(d = 0|x) ]
where P(d = 1|x) is the likelihood that the bit is a "1" and P(d = 0|x) is the likelihood that the bit is a "0." LLR is a convenient way to express likelihood though other systems may also be used. In order to obtain likelihood information from readings from a memory, some conversion or demodulation is generally performed. One convenient manner of performing such demodulation is to use a lookup table that tabulates the relationship between threshold voltage (or some measured parameter in the memory) and the likelihood values of one or more bits. Where the resolution of the read operation divides the threshold voltage range of a memory cell into N read states and the cell stores R bits, a table may have N x R entries so that a likelihood is given for each bit for each read state. In this way, a high resolution read may provide likelihood information regarding data stored in the memory. Such raw likelihood data may be provided to a decoder as a soft-input to the decoder.
Figure 8 shows soft-input data such as described above being supplied to a decoder system 861 that includes a SISO decoder 863. SISO decoders generally accept raw likelihood data and perform ECC calculations on the raw likelihood data to provide calculated likelihood data. The calculated likelihood data may be considered a soft-output. In many cases, such a soft-output is then provided as an input to the SISO decoder so that a second decoding iteration is performed. A SISO decoder may perform successive iterations until at least one predetermined condition is achieved. For example, a predetermined condition may be that all bits have a likelihood that is greater than a certain minimum value. A predetermined condition could also be an aggregate of likelihood values such as a mean likelihood value. A predetermined condition may be convergence of results from one iteration to the next (i.e. keep iterating until there is little improvement from additional iterations). A predetermined condition may be that a predetermined number of iterations are completed. Combinations of these conditions may also be used. Decoding is performed using an encoded pattern in the data that is the result of encoding performed by encoder 865 on the data before it was stored. Encoder 865 and decoder system 861 are both considered parts of ECC unit 867, which may be implemented in a memory system such as memory system 421 (ECC unit 867 is an example of a unit that may be used as ECC unit 431). Various encoding schemes are possible for use with a SISO decoder. Decoder system 861 also includes a soft-hard converter 864 to convert a soft-output from SISO decoder 863 to a hard output.
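A lookup-table demodulator of the kind described above can be sketched as follows. The numeric entries are invented placeholders; in practice they would come from characterization of the memory, as noted above.
```python
# Sketch of lookup-table demodulation: for a cell resolved into one of N read
# states and storing R bits, an N x R table gives one LLR per bit per read
# state. Here N = 8 and R = 2, giving 16 entries; the values are invented.

LLR_TABLE = [
    (+5.0, +5.0),   # read state 0
    (+5.0, +1.0),   # read state 1
    (+4.0, -1.0),   # read state 2
    (+0.8, -4.0),   # read state 3
    (-0.8, -4.0),   # read state 4
    (-4.0, -1.0),   # read state 5
    (-5.0, +1.0),   # read state 6
    (-5.0, +5.0),   # read state 7
]

def demodulate(read_states):
    """Maps a sequence of per-cell read states to per-bit LLRs (a soft-input)."""
    soft_input = []
    for state in read_states:
        soft_input.extend(LLR_TABLE[state])
    return soft_input

print(demodulate([0, 3, 4, 7]))
```
Note that the second column of the table is high at both ends of the read-state range and low in the middle, echoing the second-bit likelihood pattern of Figure 7.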
In some cases, SISO decoding may give better error correction for the same amount of overhead data than hard-input hard-output decoding does. Figure 9 shows an exemplary encoding scheme that may be used in a SISO decoder such as SISO decoder 863. Data entries D11-D33 are arranged in rows and columns with parity bits calculated for each row and column. For example, P1 is calculated from the row comprising entries D11, D12 and D13. Similarly, P2 is calculated from the row comprising entries D21, D22 and D23. P4 is calculated from the column D11, D21 and D31. Figure 10 shows the outcome when the data encoded as shown in Figure 9 is later decoded in both a hard-input hard-output decoder and in a SISO decoder. In this example, the two logical states of a binary system are represented by -1 and +1 instead of 0 and 1 respectively. It will be understood that any suitable notation may be used and this notation is simply convenient for this example. For soft numbers, the sign indicates whether the bit is most likely a 0 or a 1 and the magnitude of the number indicates the likelihood that this is the correct value.
A signal 101 is shown as a group of bits that include data bits and parity bits calculated according to the encoding scheme of Figure 9. The signal is generally the output of an encoder that has suitable circuits to calculate parity bits. The signal may be sent to a modulator which then provides suitable voltages to memory cells to program the memory cells to states to record the signal data.
Noise 103 is shown affecting two data bits in this example. Noise is not limited to data bits and may also affect parity bits. Noise may be the result of some physical characteristic of particular cells or may be the result of disturbs that occur in memory when one cell is affected by operations carried out on other cells in the array. In this example, noise is considered additive so that data read from the memory reflects the signal data plus the noise, which then becomes the input data for decoding. But noise may have either positive or negative effects on the read value. Input 105 is the raw data obtained from a demodulator connected to a memory. For example, where a read operation is performed with a high resolution, input data may be generated in this form so that a bit of either signal or parity data is represented by 0.1 instead of a 1 or a 0. This may be considered a likelihood value with a positive value indicating a 1 and a negative value indicating a 0, the magnitude of the value indicating the likelihood that the indicated state is correct. Input 105 may be considered a soft-input because it includes more than a simple 0 or 1 value.
For a hard-input hard-output decoder, input 105 is converted to hard input 107 by replacing all positive values by +1 and replacing all negative values by -1. In a system using one's complement (1's complement) logic to represent the soft-input likelihood values, the most significant bit represents the sign and may be used as the means of conversion. The hard-input hard-output decoder may attempt to correct the data using this hard input. However, parity calculations indicate an error in each of the second and third rows and an error in each of the second and third columns. There is no unique solution in this situation because D22 and D33 could be in error or alternatively D32 and D23 could be in error. The hard-input hard-output decoder cannot determine which of these solutions is correct. Therefore, a hard-input hard-output decoder is unable to correct the data in this situation.
In a first SISO decoding step, soft-input correction data 109 is generated for the first row from the input. Each entry of soft-input correction data 109 is calculated from the sign of the product of the other entries in the same row and the magnitude of the smallest entry in the row (calculation of likelihood values is described in more detail below). This gives an indication of what the other entries in the row indicate the entry should be and may be considered a calculated likelihood or an extrinsic likelihood (as opposed to the intrinsic likelihood of the Input). The soft-input correction data 109 is then added to Input 105 to obtain the soft-output 111 of the first row-iteration. The soft-output 111 of the first row-iteration thus combines the intrinsic and extrinsic likelihood values. The soft-output reflects both intrinsic likelihood information from the raw data and extrinsic likelihood information derived from the other entries in the same row that share the same parity bit. Looking at the sign of the soft-output 113 at this point (converting the soft-output to a hard-output) shows that the data is fully corrected. Thus, the soft-input soft-output decoding can correct this data where a hard-input hard-output decoder could not. Furthermore, the soft-input soft-output decoder can make this correction in one iteration using only row parity calculations. If correction was not completed, then further calculations could be performed using column parity calculations. If such a first column-iteration did not provide full correction, then a second row-iteration could be performed. Thus, a SISO decoder may continue to work towards a solution where a hard-input decoder stops without finding a solution.
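The single-row correction step described above can be sketched directly. Two details are assumptions about the intended reading of the description: the minimum is taken over the magnitudes of the other entries in the row (excluding the entry being corrected), and even parity is taken to correspond to a positive product of signs; the worked LLR example later in this description uses a convention with an additional factor of -1.
```python
# Sketch of the single-row soft correction of Figure 10: the extrinsic value
# for each entry is the sign of the product of the other entries in the row
# times the smallest magnitude among those other entries, and the corrected
# soft value is the raw input plus the extrinsic value.

def row_extrinsic(row):
    """row: soft values for the data entries plus the row parity entry."""
    extrinsic = []
    for i in range(len(row)):
        others = row[:i] + row[i + 1:]
        sign = 1.0
        for value in others:
            sign *= 1.0 if value >= 0 else -1.0
        extrinsic.append(sign * min(abs(value) for value in others))
    return extrinsic

def correct_row(row):
    return [raw + ext for raw, ext in zip(row, row_extrinsic(row))]

# A low-confidence entry is pulled toward the value implied by the rest of
# the row that shares the same parity bit.
noisy_row = [0.9, -0.2, 0.3, 1.1]       # hypothetical soft inputs, +/-1 signaling
print([round(v, 1) for v in correct_row(noisy_row)])
```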
A second example is shown in Figure 11. As in the previous example, parity bits are calculated for both rows and columns of the input data (just two input data entries per row or column in this example). Here parity bits are calculated to be the sum modulo 2 of the input data bits of the row or column.
Figure 12 shows the input data being received by an encoder 121 in an ECC unit 123 that calculates the parity bits and appends them to the data. In this case, four bits of parity data are appended to four bits of input data. The input data and parity bits thus form encoded signal data that is sent to a modulator 125. Modulator 125 programs individual memory cells according to the signal data. In this case, two bits are stored in a memory cell of memory array 126, so the eight bits of signal data are stored in four cells having respective threshold voltage levels V1-V4. Subsequently the memory cells are read as having threshold voltage ranges V1'-V4'. The read threshold voltage ranges are demodulated in a demodulator 127 to provide raw likelihood data (1.5, 1.0, 0.2, 0.3, 2.5, 2.0, 6.0, 1.0). This may be obtained using a lookup table or otherwise. In some cases, providing raw likelihood data is considered as a function within a decoder, but for the present case it is considered as taking place within demodulator 127.
A raw likelihood value is obtained for each bit so that even though bits are stored in the same cell, they may have different raw likelihood values. In the present example, demodulator 127 provides likelihood values as Log Likelihood Ratio values, though likelihood may be expressed in other formats also. The raw likelihood values are positive for all data entries indicating that, if a hard-output was obtained directly from these entries, all data entries would be considered to be 1s at this point (providing two errors). However, using a SISO decoder 129 in ECC unit 123, the data may be fully corrected. Figures 13A-13D show how SISO decoder 129 corrects input data. Decoder 129 is a particular example of a SISO decoder that may be used in a memory system such as memory system 421 (decoder 129 may be used as decoder 429).
Figure 13A shows a first horizontal iteration using row parity bits 131 to obtain first calculated likelihood values 133 from row likelihood values 132. In this case, LLRs are added to obtain calculated likelihood values 133. It can be shown that the sum of two LLRs in this example is given by the product of the signs of the two LLRs and (-1), multiplied by the smaller LLR value: LLR(D1) ⊕ LLR(D2) ≈ (-1) x sgn[LLR(D1)] x sgn[LLR(D2)] x min[LLR(D1), LLR(D2)], where ⊕ indicates LLR addition. Applying this LLR addition to the entries provides the calculated likelihood values shown. For example, the calculated likelihood corresponding to entry D11 is 0.1 ⊕ 2.5 ≈ -0.1, and the calculated likelihood corresponding to entry D12 is 1.5 ⊕ 2.5 ≈ -1.5. Calculated (extrinsic) likelihood values 133 are then added to the raw (intrinsic) likelihood values 132 to obtain output likelihood values 135 from the first horizontal iteration. So the output likelihood value corresponding to entry D11 is the raw likelihood value 1.5 plus the calculated likelihood value -0.1, giving 1.4. As can be seen, the output likelihood values 135 of the top row (1.4, -1.4) indicate relatively high likelihood values indicating that the correct bits are 1 and 0. However, likelihood values 135 on the bottom row (-0.1, 0.1) indicate low likelihood values that the bits are 0 and 1 respectively. These likelihood values indicate the correct input bits. However, such low likelihood values may not be considered good enough to terminate decoding at this point. So, additional iterations may be performed.
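The LLR addition used in this worked example follows directly from the formula above; the factor of -1 reflects the sign convention of this example, and the minimum is taken over magnitudes, as in the earlier description of the row correction.
```python
# Sketch of the LLR addition used in the worked example above:
# LLR(D1) (+) LLR(D2) ~ (-1) x sgn[LLR(D1)] x sgn[LLR(D2)] x min[|LLR(D1)|, |LLR(D2)|]

def sgn(x):
    return 1.0 if x >= 0 else -1.0

def llr_add(a, b):
    return -1.0 * sgn(a) * sgn(b) * min(abs(a), abs(b))

# Reproduces the first-horizontal-iteration values quoted above.
print(llr_add(0.1, 2.5))   # calculated value for D11: -0.1
print(llr_add(1.5, 2.5))   # calculated value for D12: -1.5
```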
Figure 13B shows the output likelihood values 135 from the first horizontal iteration being subjected to a first vertical iteration using column parity bits 137. Calculated likelihood values 139 of the first vertical iteration are calculated in the same manner as before, this time along columns using column parity entries 137. Thus, the calculated value for D11 is obtained from the LLR sum of D21 and P3 (-0.1 and 6.0, giving 0.1), and the calculated value for D21 is obtained from the LLR sum of D11 and P3 (1.4 and 6.0, giving -1.4). In this way, calculated likelihood values 139 are obtained for each entry. Next, the calculated (extrinsic) likelihood values 139 are added to the input likelihood values 135 (the output likelihood values of the first horizontal iteration) to obtain output likelihood values 141 of the first vertical iteration. The output likelihood values 141 obtained from the first vertical iteration (1.5, -1.5, -1.5, 1.1) may be considered sufficiently good to terminate decoding at this point. However, depending on the predetermined condition required to terminate decoding, more decoding iterations may be performed.
Figure 13C shows a second horizontal iteration being performed. The calculated likelihood values 139 from the first vertical iteration are first added to the raw likelihood values 132 to obtain input values 143. The input values 143 are then used with row parity entries 131 as before to obtain second horizontal calculated likelihood values 145. The second horizontal likelihood values 145 are then added to the input values 132 to obtain the output likelihood values 147 of the second horizontal iteration. The output likelihood values 147 of the second horizontal iteration do not provide an overall improvement in likelihood values from the output of the first vertical iteration. Figure 13D shows a second vertical iteration being performed. The output values 147 from the second horizontal iteration are used as input values for this iteration. Column parity entries 137 are used with the input values to obtain calculated likelihood values 149 as before. Calculated likelihood values 149 are then added to the input likelihood values 147 to obtain output likelihood values 151. Output values 151 from the second vertical iteration are shown to be improved compared with output values 141 from the first vertical iteration. Thus, it can be seen that additional iterations may provide additional improvement in the data.
Iterative decoding may cycle through iterations until some predetermined condition is met. For example, the predetermined condition may be that each likelihood value in an output set of likelihood values exceeds some minimum likelihood value. Alternatively, the predetermined condition may be some parameter derived from more than one likelihood value, such as a mean or average likelihood. The predetermined condition may simply be that a certain number of iterations are performed. In some cases (discussed later) a SISO decoder provides output likelihood values that are then subject to another operation that indicates whether additional SISO iterations should be performed or not.
The example of Figures 13A-13D may be considered an example of a technique known as Turbo decoding. Horizontal and vertical parity bits provide two alternative encoding schemes that can be separately decoded. Using two such decoding schemes together and using the output from one decoding scheme as the input for the other, turbo coding generally provides high error correction capability.
Efficient decoding depends on having a suitable encoding/decoding scheme. Various schemes are known for encoding data in a manner that is suitable for subsequent decoding in a SISO decoder. Encoding/decoding schemes include, but are not limited to, turbo codes, product codes, BCH codes, Reed-Solomon codes, convolutional codes (see U.S. Patent Application Nos. 11/383,401 and 11/383,405), Hamming codes, and Low Density Parity Check (LDPC) codes.
LDPC codes are codes that have a parity check matrix that meets certain requirements, resulting in a sparse parity check matrix. This means that each parity check is performed over a relatively small number of bits. An example of a parity check matrix H for an LDPC code is shown in Figure 14. The conditions for an LDPC code are: (1) The number of 1s in each row is the same and the number is small in comparison to the total number of entries in the row. (2) The number of 1s in each column is the same and the number is small in comparison with the total number of entries in the column. (3) The number of 1s in common between any two columns is not greater than 1 (the number of 1s in common may only be zero or one). Irregular LDPC codes allow some deviation in the number of 1s in columns and rows. Looking at H, each row has three 1s, out of a total of seven entries in a row, so that condition (1) is met. Each column has three 1s, out of a total of seven entries in a column, so that condition (2) is met. No two columns have more than one 1 in common, so that condition (3) is met. For example, the first, second and fourth columns all have a 1 as the top entry, but none of these columns has another 1 in common. Thus, matrix H defines an LDPC code. The code consists of all code words that satisfy matrix H. This means that seven different parity check conditions (defined by the seven rows) must be met. Each parity check condition looks at three entries in a word. For example, the first row indicates that the first, second and fourth entries in a word must have a sum modulo two of zero.
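The three conditions can be checked mechanically, as sketched below. The 7x7 matrix used here is only a stand-in with the stated properties (three 1s in every row and column, and no two columns with more than one 1 in common); it is not the matrix H of Figure 14, whose exact entries are not reproduced in this text.
```python
# Mechanical check of the three LDPC conditions stated above, using a 7x7
# stand-in matrix (not the H of Figure 14) that has the stated properties.

H = [
    [1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
]
rows, cols = len(H), len(H[0])

# Condition (1): the same small number of 1s in every row.
row_weights = [sum(row) for row in H]
assert len(set(row_weights)) == 1 and row_weights[0] < cols

# Condition (2): the same small number of 1s in every column.
col_weights = [sum(H[r][c] for r in range(rows)) for c in range(cols)]
assert len(set(col_weights)) == 1 and col_weights[0] < rows

# Condition (3): no two columns have more than one 1 in common.
for c1 in range(cols):
    for c2 in range(c1 + 1, cols):
        assert sum(H[r][c1] & H[r][c2] for r in range(rows)) <= 1

# A codeword must satisfy every parity check: each row of H selects entries
# of the word whose sum modulo two must be zero.
def satisfies(parity_matrix, word):
    return all(sum(h * w for h, w in zip(row, word)) % 2 == 0
               for row in parity_matrix)

print(satisfies(H, [0] * 7))   # the all-zero word satisfies any parity matrix
```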
Data may be encoded according to an LDPC code by calculating certain parity bits to form a codeword. Thus, a codeword of the parity check matrix H may be formed of four data bits and three parity bits calculated from the four data bits. Each parity bit is calculated from a relatively small number of data bits, so encoding may be relatively simple, even where a large number of entries are encoded as a block.
A suitable LDPC code for memory applications uses a word of about 4,000-8,000 bits (1-2 sectors, where a sector is 512 bytes). For example, encoding according to the LDPC code may add approximately 12% to the unencoded data. The number of 1s in a row of a parity check matrix for such a code may be about 32 out of about 4000, so that even though the word is large, the parity calculations are not excessively long. Thus, parity bits may be relatively easily calculated during encoding and parity may also be relatively easily checked during decoding. LDPC codes may be used with hard-input hard-output decoding or SISO decoding. As shown earlier, SISO decoding can sometimes improve performance over hard-input hard-output decoding. Raw likelihood values may be supplied to a SISO decoder as LLRs or in some other form. An LDPC code can use a SISO decoder in an iterative manner. An entry is common to several parity groups so that a calculated likelihood value obtained from one group provides improved data for another parity group. Such calculations may be iteratively performed until some predetermined condition is met. LDPC decoding may sometimes provide poor results when the number of errors is very low. Correcting errors below a certain number becomes difficult, creating an "error floor." One solution to this problem is to combine LDPC decoding with some other form of decoding. A hard-input hard-output decoder using BCH or some similar algebraic code may be added to an LDPC decoder. Thus, the LDPC decoder reduces the number of errors to some low level, and then the BCH decoder decodes the remaining errors. Decoders operating in series in this manner are referred to as "concatenated." Concatenated encoding is also performed before data is stored in the memory array in this case.
Figure 15 shows an example of concatenated encoding and decoding in an ECC unit 155 that includes an encoding system 157 and a decoding system 159. Data is received by the ECC unit 155 and is first encoded in encoder A where it is encoded according to encoding scheme A. Then, the encoded data is sent to encoder B where it is encoded according to encoding scheme B. In the present example, encoding scheme A is a BCH encoding scheme that adds parity bits to the input data, increasing the amount of data by approximately 4% in one example. Encoding scheme B is an LDPC encoding scheme that adds additional parity bits to the encoded data from encoder A, adding an additional 12% to the data in this example. The doubly encoded data is then sent to a modulation/demodulation unit and is programmed to a nonvolatile memory array. Subsequently, the doubly encoded data is read from the memory array and is demodulated to provide a soft-input to decoder B. Decoder B decodes data using encoding scheme B. Similarly, decoder A uses encoding scheme A. Decoder B of this example is a SISO decoder that performs one or more decoding iterations on the doubly encoded data. When some predetermined condition is met, decoder B sends output data to decoder A. The output data from decoder B does not generally include entries for parity bits added by encoder B. These entries have already been used by decoder B and are no longer necessary. The output of decoder B is a soft-output. This soft output may be converted to hard data in a soft-hard converter 161 that removes likelihood information and converts the data to binary information. This hard data is then provided as a hard-input to decoder A which performs hard-input hard-output decoding. The hard-output from the second decoder is then sent out of ECC unit 155 as corrected data.
In one embodiment, the predetermined condition for terminating iterative decoding in decoder B is that decoder A indicates that the data is good. After an iteration is completed in decoder B, a soft-output may be converted to hard data and provided as a hard-input to decoder A. Decoder A then attempts to decode the data. If decoder A cannot decode the data, then decoder B performs at least one additional iteration. If decoder A can decode the data, then no more iterations are required in decoder B. Thus, in this example, decoder A provides a feedback 163 to decoder B to indicate when decoder B should terminate. While the example of Figure 15 deals with concatenation of hard-input hard-output and SISO coding, other combinations may also be used. Two or more SISO decoders may be used in series and two or more hard-input hard-output decoders may also be used.
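The feedback arrangement described above amounts to the following control flow; the decoder functions here are placeholders rather than implementations of any particular LDPC or BCH code.
```python
# Control-flow sketch of concatenated decoding with feedback: an inner SISO
# decoder (decoder B) iterates, its soft output is converted to hard bits,
# and an outer hard-input hard-output decoder (decoder A) either succeeds,
# terminating the loop, or requests at least one more inner iteration.

def concatenated_decode(soft_input, siso_iteration, hard_decode, max_iterations=10):
    """siso_iteration(soft) -> updated soft values (one decoder-B iteration).
    hard_decode(bits) -> (success, corrected_bits) for decoder A."""
    soft = list(soft_input)
    for _ in range(max_iterations):
        soft = siso_iteration(soft)                    # one decoder-B iteration
        hard = [1 if llr > 0 else 0 for llr in soft]   # soft-hard conversion
        success, corrected = hard_decode(hard)         # decoder A attempt
        if success:                                    # feedback 163: stop iterating
            return corrected
    raise ValueError("data could not be corrected")

# Toy stand-ins: each SISO pass simply amplifies the current LLRs, and the
# "hard decoder" accepts any word with even parity.
def toy_siso(soft):
    return [llr * 1.5 for llr in soft]

def toy_hard_decode(bits):
    return (sum(bits) % 2 == 0, bits)

print(concatenated_decode([0.4, -0.7, 1.2, -0.2], toy_siso, toy_hard_decode))
```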
Soft-input in the examples described above is obtained by reading data with a higher resolution than was used to program the data. In other examples, other information may be used to derive soft-input data. Any quality information provided in addition to a simple 1 or 0 determination regarding a data bit stored in memory may be used to provide a soft-input. In some memory designs, a count is maintained of the number of times a block has been erased. Physical properties of the memory may change in a predictable manner as erase count increases, making certain errors more likely. An erase count may be used to obtain likelihood data where such a pattern is known. Other factors known to affect programmed data in a predictable way may also be used to obtain likelihood information. In this way, data may be read from the memory array with the same resolution used to program it and still be used to provide a soft-input. Various sources of likelihood information may be combined. Thus, likelihood information from reading data with a high resolution may be combined with likelihood data from another source. Thus, a soft-input is not limited to likelihood information obtained directly from reading the memory array.
The various examples above refer to flash memory. However, various other nonvolatile memories are currently in use and the techniques described here may be applied to any suitable nonvolatile memory systems. Such memory systems may include, but are not limited to, memory systems based on ferroelectric storage (FRAM or FeRAM), memory systems based on magnetoresistive storage (MRAM), and memories based on phase change (PRAM or "OUM" for "Ovonic Unified Memory").
All patents, patent applications, articles, books, specifications, other publications, documents and things referenced herein are hereby incorporated herein by this reference in their entirety for all purposes. To the extent of any inconsistency or conflict in the definition or use of a term between any of the incorporated publications, documents or things and the text of the present document, the definition or use of the term in the present document shall prevail. Although the various aspects of the present invention have been described with respect to certain preferred embodiments, it is understood that the invention is entitled to protection within the full scope of the appended claims.

Claims

THE CLAIMS
What is claimed is:
1. A method of decoding data stored in a nonvolatile memory array, the data encoded according to a predetermined scheme, comprising: reading the encoded data from the nonvolatile memory array to obtain a plurality of bits; obtaining likelihood information regarding the plurality of bits; and calculating output likelihood values using the predetermined scheme, the output likelihood values calculated from the plurality of bits and the likelihood information.
2. The method of claim 1 wherein obtaining likelihood information regarding the plurality of bits includes reading the encoded data with a resolution that resolves more read states than the number of program states used to program the encoded data.
3. The method of claim 1 wherein calculating the output likelihood values occurs in a first iteration, and the output likelihood values are subsequently used to calculate additional output likelihood values using the predetermined scheme in at least one additional iteration.
4. The method of claim 3 wherein the additional output likelihood values are calculated in additional iterations until a predetermined condition is met.
5. The method of claim 1 wherein the output likelihood values are used to obtain hard data that is provided to a hard-input hard-output decoder.
6. A method of storing and retrieving data in a nonvolatile memory array comprising:
receiving a plurality of input data bits to be stored in the nonvolatile memory array;
calculating a plurality of redundant data bits from the plurality of input data bits;
storing the plurality of input data bits and the plurality of redundant data bits in a plurality of cells in the nonvolatile memory array, where the plurality of cells are individually programmed to one of n states;
subsequently reading the plurality of cells to obtain raw data, the reading resolving more than n states per cell;
calculating a plurality of raw likelihood values from the raw data, the plurality of raw likelihood values corresponding to input data bits and to redundant data bits;
calculating a first plurality of calculated likelihood values from the plurality of raw likelihood values, an individual one of the first plurality of calculated likelihood values calculated from at least one raw likelihood value corresponding to an input data bit and from at least one raw likelihood value corresponding to a redundant data bit; and
calculating a plurality of output data bits from the first plurality of calculated likelihood values.
7. The method of claim 6 wherein calculating the plurality of output data bits includes calculating a second plurality of calculated likelihood values from the first plurality of calculated likelihood values.
8. The method of claim 7 wherein the first plurality of calculated likelihood values and the second plurality of calculated likelihood values are calculated as steps in a turbo code that repeatedly calculates pluralities of likelihood values until predetermined conditions are met.
9. The method of claim 6 wherein the plurality of redundant data bits are generated by a Low Density Parity Check (LDPC) encoder.
10. The method of claim 6 further comprising performing a hard-input hard-output Error Correcting Code (ECC) operation on the plurality of output data bits.
11. The method of claim 10 wherein, if the ECC operation indicates a number of errors that is greater than a threshold number, then calculating a second plurality of likelihood values from the first plurality of likelihood values.
12. The method of claim 10 wherein, if the ECC operation indicates a number of errors that is below the threshold number, then correcting errors in the plurality of output data bits.
13. A method of reading and storing data in a nonvolatile memory array that stores at least two bits of data per memory cell comprising:
receiving a plurality of input data bits to be stored in the nonvolatile memory array;
calculating a plurality of redundant data bits from the plurality of input data bits;
storing the plurality of input data bits and the plurality of redundant data bits in a plurality of cells in the nonvolatile memory, where the plurality of cells are individually programmed to one of a first number of program states that represent at least two bits per cell;
subsequently reading the plurality of cells to obtain raw data, the reading resolving a second number of read states, the second number being greater than the first number; and
subsequently calculating a plurality of raw likelihood values from the raw data, an individual one of the plurality of raw likelihood values corresponding to a single input data bit or to a single redundant data bit.
14. The method of claim 13 wherein the plurality of raw likelihood values are derived from a lookup table that individually correlates likelihood for each bit stored in a memory cell with raw data from the memory cell.
15. The method of claim 13 wherein the plurality of input data bits and the plurality of redundant data bits are stored by programming the plurality of cells to threshold voltage ranges that individually represent two or more bits.
16. The method of claim 15 wherein the threshold voltage ranges are mapped to bits according to a Gray code.
17. The method of claim 15 wherein the threshold voltage ranges are mapped to bits according to a binary encoding scheme.
18. The method of claim 13 wherein the reading is achieved by comparing a voltage from a cell with a predetermined pattern of reference voltages, the pattern having a higher density of reference voltages at an outer portion of a threshold voltage range of a program state than at the center of the threshold voltage range.
19. The method of claim 13 further comprising calculating a plurality of calculated likelihood values from the plurality of raw likelihood values, an individual calculated likelihood value derived from at least one raw likelihood value corresponding to an input data bit and from at least one raw likelihood value corresponding to a redundant data bit.
20. The method of claim 13 wherein a plurality of output data bits are calculated from the plurality of raw likelihood values according to a soft-input soft-output error correcting code.
21. The method of claim 20 wherein the error correcting code is a turbo code.
22. The method of claim 20 wherein the error correcting code is a LDPC code.
23. A method of reading data from a nonvolatile memory array in which data is stored in memory cells that are programmed to threshold voltage ranges that individually correspond to memory cell states, comprising:
performing a plurality of read operations on a memory cell that is programmed to a threshold voltage range, the plurality of read operations performed according to a predetermined pattern, the predetermined pattern providing a higher density of read operations for a first portion of the threshold voltage range than for a second portion of the threshold voltage range; and
deriving from the plurality of read operations at least one likelihood value that represents a likelihood of a bit programmed in the memory cell having a particular logical state.
24. The method of claim 23 wherein the at least one likelihood value is derived from the plurality of read operations using a lookup table that gives individual likelihood values for each bit stored in a memory cell.
25. The method of claim 23 wherein a likelihood value is obtained for each bit stored in a plurality of memory cells, the likelihood values providing a soft-input for a soft-input soft-output decoder.
26. The method of claim 25 further comprising passing output information from the soft-input soft-output decoder to a hard-input hard-output decoder that performs further error correction.
27. A nonvolatile memory system comprising:
a memory array including a plurality of cells that store a plurality of data bits and a plurality of parity bits that are calculated from the plurality of data bits according to an encoding scheme;
a demodulator that reads the plurality of cells and derives raw likelihood values corresponding to the plurality of data bits and the plurality of parity bits; and
a decoder that receives the raw likelihood values and calculates output likelihood values therefrom using the encoding scheme.
28. The nonvolatile memory system of claim 27 wherein the demodulator resolves a number of read states per cell that is greater than the number of program states used to program the plurality of cells.
29. The nonvolatile memory system of claim 27 wherein the decoder subsequently calculates additional likelihood values from the output likelihood values using the encoding scheme.
30. The nonvolatile memory system of claim 29 wherein the decoder calculates additional likelihood values in two or more iterations, the iterations performed until a predetermined condition is met.
31. The nonvolatile memory system of claim 30 further comprising a hard-input hard-output decoder.
32. The nonvolatile memory system of claim 27 wherein the encoding scheme uses turbo coding.
33. The nonvolatile memory system of claim 27 wherein the encoding scheme uses a Low Density Parity Check (LDPC) code.
34. The nonvolatile memory system of claim 27 further comprising a converter that converts a soft input to a hard output.
35. A nonvolatile memory system comprising:
a nonvolatile memory array that stores two or more bits in an individual memory cell; and
a demodulator that derives an individual likelihood value for each of the two or more bits stored in the individual memory cell.
36. The nonvolatile memory system of claim 35 further comprising a lookup table that is used to obtain the individual likelihood values for each of the two or more bits.
37. The nonvolatile memory system of claim 35 wherein the two or more bits include at least one parity bit that was added according to an encoding scheme.
38. The nonvolatile memory system of claim 35 wherein the demodulator derives the individual likelihood values by reading the individual memory cell with a resolution that identifies more than the number of program states of the individual memory cell.
39. The nonvolatile memory system of claim 35 further comprising a soft-input soft-output decoder that receives the individual likelihood values as input and calculates output likelihood values from the individual likelihood values.
40. The nonvolatile memory system of claim 35 wherein the demodulator reads the individual memory cell using a predetermined pattern of read operations, the pattern having at least one area of higher density and one area of lower density.
41. A nonvolatile memory system comprising:
an array of nonvolatile memory cells that are individually programmed to one of two or more threshold voltage ranges that represent two or more states; and
a demodulator that resolves an individual cell threshold voltage to an identified one of the two or more threshold voltage ranges and further resolves the individual cell threshold voltage within the identified threshold voltage range by providing a higher density of read operations for a first portion of the identified threshold voltage range than for a second portion of the identified threshold voltage range, the demodulator deriving likelihood values from the read operations for the first and second portions.
42. The nonvolatile memory system of claim 41 further comprising a soft-input soft-output decoder that receives the likelihood values as input.
43. The nonvolatile memory system of claim 42 wherein the soft-input soft-output decoder calculates output likelihood values in multiple iterations until a predetermined condition is met.
44. The nonvolatile memory system of claim 43 further comprising a soft-hard converter that converts the output likelihood values to hard output values when the predetermined condition is met.
45. The nonvolatile memory system of claim 43 further comprising a hard-input hard-output decoder that receives the output likelihood values from the soft-input soft-output decoder, the hard-input hard-output decoder determining when the predetermined condition is met.
46. The nonvolatile memory system of claim 41 wherein the nonvolatile memory cells are individually programmed to four or more threshold voltage ranges that represent four or more states to store two or more bits, the demodulator deriving likelihood values for each of the two or more bits.
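The claims above describe deriving a separate likelihood value for each bit stored in a multi-bit cell, for example by a lookup table (claims 13-14, 24, 35-36), and iterating a soft-input soft-output decoder until a predetermined condition is met before producing hard output (claims 1-5, 43-44). The sketch below is only a loose illustration of those ideas under simplifying assumptions: a 2-bit Gray-mapped cell resolved into twelve invented read bins, a made-up LLR lookup table, and a toy single-parity-check code decoded with a min-sum update. It is not the claimed decoder, which the claims leave open to LDPC and turbo codes among other schemes.

```python
# Hypothetical sketch (not the claimed implementation): demodulating a 2-bit
# cell into per-bit likelihood values via a lookup table, then running a
# minimal soft-input soft-output decode loop that stops once a predetermined
# condition (here: a single even-parity check) is satisfied.

# Gray mapping of the four program states, lowest to highest threshold voltage:
# 11, 10, 00, 01 (upper bit, lower bit). The demodulator resolves each state
# into three read bins (low edge / centre / high edge), so 12 bins per cell.
# LLR convention: positive favours 0, negative favours 1. Values are invented.
LLR_TABLE = {
    # read_bin: (llr_upper_bit, llr_lower_bit)
    0:  (-5.0, -5.0), 1:  (-6.0, -6.0), 2:  (-5.0, -2.0),   # state "11"
    3:  (-2.0, +2.0), 4:  (-6.0, +6.0), 5:  (-2.0, +5.0),   # state "10"
    6:  (+2.0, +5.0), 7:  (+6.0, +6.0), 8:  (+5.0, +2.0),   # state "00"
    9:  (+5.0, -2.0), 10: (+6.0, -6.0), 11: (+5.0, -5.0),   # state "01"
}

def demodulate(read_bins):
    """Per-bit raw likelihood values for a sequence of cell read results."""
    llrs = []
    for b in read_bins:
        upper, lower = LLR_TABLE[b]
        llrs.extend([upper, lower])
    return llrs

def decode_single_parity(llrs, max_iters=10):
    """Toy SISO loop: min-sum update for one even-parity check over all bits."""
    llrs = list(llrs)
    for _ in range(max_iters):
        hard = [0 if l >= 0 else 1 for l in llrs]
        if sum(hard) % 2 == 0:              # predetermined condition met
            return hard, llrs               # soft-to-hard conversion on exit
        updated = []
        for i in range(len(llrs)):          # extrinsic LLR from the other bits
            others = llrs[:i] + llrs[i + 1:]
            sign = -1.0 if sum(1 for o in others if o < 0) % 2 else 1.0
            updated.append(llrs[i] + sign * min(abs(o) for o in others))
        llrs = updated
    return [0 if l >= 0 else 1 for l in llrs], llrs   # give up after max_iters

if __name__ == "__main__":
    raw = demodulate([1, 4, 7, 6])          # four cells -> eight bit LLRs
    bits, soft = decode_single_parity(raw)
    # The weakly read bit (LLR +2.0) is flipped so the parity check is satisfied.
    print(bits)                             # [1, 1, 1, 0, 0, 0, 1, 0]
```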
PCT/US2007/078819 2006-09-28 2007-09-19 Nonvolatile memory with error correction based on the likehood the error may occur WO2008042593A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11/536,286 2006-09-28
US11/536,286 US7818653B2 (en) 2006-09-28 2006-09-28 Methods of soft-input soft-output decoding for nonvolatile memory
US11/536,327 2006-09-28
US11/536,327 US7904783B2 (en) 2006-09-28 2006-09-28 Soft-input soft-output decoder for nonvolatile memory

Publications (1)

Publication Number Publication Date
WO2008042593A1 true WO2008042593A1 (en) 2008-04-10

Family

ID=39015990

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/078819 WO2008042593A1 (en) 2006-09-28 2007-09-19 Nonvolatile memory with error correction based on the likehood the error may occur

Country Status (3)

Country Link
KR (1) KR20090086523A (en)
TW (1) TWI353521B (en)
WO (1) WO2008042593A1 (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008075351A2 (en) * 2006-12-21 2008-06-26 Ramot At Tel Aviv University Ltd. Soft decoding of hard and soft bits read from a flash memory
WO2008121553A1 (en) * 2007-03-29 2008-10-09 Sandisk Corporation Non-volatile storage with decoding of data using reliability metrics based on multiple reads
WO2010002948A1 (en) 2008-07-01 2010-01-07 Lsi Corporation Methods and apparatus for soft demapping and intercell interference mitigation in flash memories
WO2010039859A1 (en) 2008-09-30 2010-04-08 Lsi Corporation Methods and apparatus for soft data generation for memory devices based on performance factor adjustment
US7805663B2 (en) 2006-09-28 2010-09-28 Sandisk Corporation Methods of adapting operation of nonvolatile memory
US7818653B2 (en) 2006-09-28 2010-10-19 Sandisk Corporation Methods of soft-input soft-output decoding for nonvolatile memory
US7904783B2 (en) 2006-09-28 2011-03-08 Sandisk Corporation Soft-input soft-output decoder for nonvolatile memory
US7904793B2 (en) 2007-03-29 2011-03-08 Sandisk Corporation Method for decoding data in non-volatile storage using reliability metrics based on multiple reads
US8099652B1 (en) 2010-12-23 2012-01-17 Sandisk Corporation Non-volatile memory and methods with reading soft bits in non uniform schemes
US8107306B2 (en) 2009-03-27 2012-01-31 Analog Devices, Inc. Storage devices with soft processing
WO2012087805A2 (en) 2010-12-23 2012-06-28 Sandisk Il Ltd. Non-volatile memory and methods with asymmetric soft read points around hard read points
WO2012087815A1 (en) 2010-12-23 2012-06-28 Sandisk Il Ltd. Non-volatile memory and methods with soft-bit reads while reading hard bits with compensation for coupling
US8429500B2 (en) 2010-03-31 2013-04-23 Lsi Corporation Methods and apparatus for computing a probability value of a received value in communication or storage systems
US8458114B2 (en) 2009-03-02 2013-06-04 Analog Devices, Inc. Analog computation using numerical representations with uncertainty
US20130176778A1 (en) * 2011-03-14 2013-07-11 Lsi Corporation Cell-level statistics collection for detection and decoding in flash memories
US8504885B2 (en) 2010-03-31 2013-08-06 Lsi Corporation Methods and apparatus for approximating a probability density function or distribution for a received value in communication or storage systems
WO2014022518A1 (en) * 2012-08-03 2014-02-06 Micron Technology, Inc. Memory cell state in a valley between adjacent data states
EP2707879A2 (en) * 2011-05-12 2014-03-19 Micron Technology, Inc. Programming memory cells
WO2014051734A1 (en) * 2012-09-28 2014-04-03 Intel Corporation Endurance aware error-correcting code (ecc) protection for non-volatile memories
US8775913B2 (en) 2010-03-31 2014-07-08 Lsi Corporation Methods and apparatus for computing soft data or log likelihood ratios for received values in communication or storage systems
WO2014113161A1 (en) * 2013-01-21 2014-07-24 Micron Technology, Inc. Determining soft data using a classification code
JP2014160534A (en) * 2008-08-08 2014-09-04 Marvell World Trade Ltd Memory access utilizing partial reference voltage
US9292377B2 (en) 2011-01-04 2016-03-22 Seagate Technology Llc Detection and decoding in flash memories using correlation of neighboring bits and probability based reliability values
US9898361B2 (en) 2011-01-04 2018-02-20 Seagate Technology Llc Multi-tier detection and decoding in flash memories
US10304550B1 (en) 2017-11-29 2019-05-28 Sandisk Technologies Llc Sense amplifier with negative threshold sensing for non-volatile memory
US10573379B2 (en) 2014-06-03 2020-02-25 Micron Technology, Inc. Determining soft data
US10643695B1 (en) 2019-01-10 2020-05-05 Sandisk Technologies Llc Concurrent multi-state program verify for non-volatile memory
CN111110247A (en) * 2020-01-13 2020-05-08 广东高驰运动科技有限公司 Monitoring method and monitoring device for motion data indexes
US11024392B1 (en) 2019-12-23 2021-06-01 Sandisk Technologies Llc Sense amplifier for bidirectional sensing of memory cells of a non-volatile memory
CN113129993A (en) * 2020-01-16 2021-07-16 华邦电子股份有限公司 Memory device and data reading method thereof

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101671326B1 (en) * 2010-03-08 2016-11-01 삼성전자주식회사 Nonvolatile memory using interleaving technology and program method thereof
TWI492234B (en) 2014-04-21 2015-07-11 Silicon Motion Inc Method, memory controller, and memory system for reading data stored in flash memory
TWI685850B (en) * 2018-08-22 2020-02-21 大陸商深圳大心電子科技有限公司 Memory management method and storage controller
JP7423757B2 (en) * 2019-08-22 2024-01-29 グーグル エルエルシー Sharding for synchronous processors

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5172338A (en) * 1989-04-13 1992-12-15 Sundisk Corporation Multi-state EEprom read and write circuits and techniques
WO2000041507A2 (en) * 1999-01-11 2000-07-20 Ericsson Inc. Reduced-state sequence estimation with set partitioning
US6279133B1 (en) * 1997-12-31 2001-08-21 Kawasaki Steel Corporation Method and apparatus for significantly improving the reliability of multilevel memory architecture
US20060114814A1 (en) * 2004-11-30 2006-06-01 Kabushiki Kaisha Toshiba Orthogonal frequency division demodulator, method and computer program product
WO2007133963A2 (en) * 2006-05-15 2007-11-22 Sandisk Corporation Nonvolatile memory with convolutional coding for error correction

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5172338A (en) * 1989-04-13 1992-12-15 Sundisk Corporation Multi-state EEprom read and write circuits and techniques
US5172338B1 (en) * 1989-04-13 1997-07-08 Sandisk Corp Multi-state eeprom read and write circuits and techniques
US6279133B1 (en) * 1997-12-31 2001-08-21 Kawasaki Steel Corporation Method and apparatus for significantly improving the reliability of multilevel memory architecture
WO2000041507A2 (en) * 1999-01-11 2000-07-20 Ericsson Inc. Reduced-state sequence estimation with set partitioning
US20060114814A1 (en) * 2004-11-30 2006-06-01 Kabushiki Kaisha Toshiba Orthogonal frequency division demodulator, method and computer program product
WO2007133963A2 (en) * 2006-05-15 2007-11-22 Sandisk Corporation Nonvolatile memory with convolutional coding for error correction

Cited By (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7818653B2 (en) 2006-09-28 2010-10-19 Sandisk Corporation Methods of soft-input soft-output decoding for nonvolatile memory
US7904783B2 (en) 2006-09-28 2011-03-08 Sandisk Corporation Soft-input soft-output decoder for nonvolatile memory
US7805663B2 (en) 2006-09-28 2010-09-28 Sandisk Corporation Methods of adapting operation of nonvolatile memory
WO2008075351A3 (en) * 2006-12-21 2008-07-31 Univ Ramot Soft decoding of hard and soft bits read from a flash memory
WO2008075351A2 (en) * 2006-12-21 2008-06-26 Ramot At Tel Aviv University Ltd. Soft decoding of hard and soft bits read from a flash memory
WO2008121553A1 (en) * 2007-03-29 2008-10-09 Sandisk Corporation Non-volatile storage with decoding of data using reliability metrics based on multiple reads
US8468424B2 (en) 2007-03-29 2013-06-18 Sandisk Technologies Inc. Method for decoding data in non-volatile storage using reliability metrics based on multiple reads
US8966350B2 (en) 2007-03-29 2015-02-24 Sandisk Technologies Inc. Providing reliability metrics for decoding data in non-volatile storage
US7904793B2 (en) 2007-03-29 2011-03-08 Sandisk Corporation Method for decoding data in non-volatile storage using reliability metrics based on multiple reads
KR101628413B1 (en) 2008-07-01 2016-06-08 엘에스아이 코포레이션 Methods and apparatus for soft demapping and intercell interference mitigation in flash memories
US8788923B2 (en) 2008-07-01 2014-07-22 Lsi Corporation Methods and apparatus for soft demapping and intercell interference mitigation in flash memories
WO2010002948A1 (en) 2008-07-01 2010-01-07 Lsi Corporation Methods and apparatus for soft demapping and intercell interference mitigation in flash memories
KR20110041500A (en) * 2008-07-01 2011-04-21 엘에스아이 코포레이션 Methods and apparatus for soft demapping and intercell interference mitigation in flash memories
CN102132350A (en) * 2008-07-01 2011-07-20 Lsi公司 Methods and apparatus for soft demapping and intercell interference mitigation in flash memories
JP2014160534A (en) * 2008-08-08 2014-09-04 Marvell World Trade Ltd Memory access utilizing partial reference voltage
CN102203877A (en) * 2008-09-30 2011-09-28 Lsi公司 Method and apparatus for soft data generation for memory devices using decoder performance feedback
US9378835B2 (en) 2008-09-30 2016-06-28 Seagate Technology Llc Methods and apparatus for soft data generation for memory devices based using reference cells
WO2010039869A1 (en) 2008-09-30 2010-04-08 Lsi Corporation Methods and apparatus for soft data generation for memory devices using reference cells
KR101758192B1 (en) 2008-09-30 2017-07-14 엘에스아이 코포레이션 Methods and apparatus for soft data generation for memory devices
US9064594B2 (en) 2008-09-30 2015-06-23 Seagate Technology Llc Methods and apparatus for soft data generation for memory devices based on performance factor adjustment
WO2010039859A1 (en) 2008-09-30 2010-04-08 Lsi Corporation Methods and apparatus for soft data generation for memory devices based on performance factor adjustment
US8892966B2 (en) 2008-09-30 2014-11-18 Lsi Corporation Methods and apparatus for soft data generation for memory devices using decoder performance feedback
US8830748B2 (en) 2008-09-30 2014-09-09 Lsi Corporation Methods and apparatus for soft data generation for memory devices
WO2010039874A1 (en) 2008-09-30 2010-04-08 Lsi Corportion Method and apparatus for soft data generation for memory devices using decoder performance feedback
WO2010039866A1 (en) 2008-09-30 2010-04-08 Lsi Corporation Methods and apparatus for soft data generation for memory devices
US8458114B2 (en) 2009-03-02 2013-06-04 Analog Devices, Inc. Analog computation using numerical representations with uncertainty
US8179731B2 (en) 2009-03-27 2012-05-15 Analog Devices, Inc. Storage devices with soft processing
US8107306B2 (en) 2009-03-27 2012-01-31 Analog Devices, Inc. Storage devices with soft processing
US9036420B2 (en) 2009-03-27 2015-05-19 Analog Devices, Inc. Storage devices with soft processing
US8429500B2 (en) 2010-03-31 2013-04-23 Lsi Corporation Methods and apparatus for computing a probability value of a received value in communication or storage systems
US8775913B2 (en) 2010-03-31 2014-07-08 Lsi Corporation Methods and apparatus for computing soft data or log likelihood ratios for received values in communication or storage systems
US8504885B2 (en) 2010-03-31 2013-08-06 Lsi Corporation Methods and apparatus for approximating a probability density function or distribution for a received value in communication or storage systems
US8099652B1 (en) 2010-12-23 2012-01-17 Sandisk Corporation Non-volatile memory and methods with reading soft bits in non uniform schemes
WO2012087803A2 (en) 2010-12-23 2012-06-28 Sandisk Il Ltd. Non-volatile memory and methods with reading soft bits in non uniform schemes
US8782495B2 (en) 2010-12-23 2014-07-15 Sandisk Il Ltd Non-volatile memory and methods with asymmetric soft read points around hard read points
US8498152B2 (en) 2010-12-23 2013-07-30 Sandisk Il Ltd. Non-volatile memory and methods with soft-bit reads while reading hard bits with compensation for coupling
US9070472B2 (en) 2010-12-23 2015-06-30 Sandisk Il Ltd Non-volatile memory and methods with soft-bit reads while reading hard bits with compensation for coupling
WO2012087803A3 (en) * 2010-12-23 2012-08-23 Sandisk Il Ltd. Non-volatile multi-bit memory and methods with reading soft bits with non-uniformly arranged reference threshold voltages
WO2012087805A3 (en) * 2010-12-23 2012-08-09 Sandisk Il Ltd. Non-volatile memory and methods with asymmetric soft read points around hard read points
WO2012087805A2 (en) 2010-12-23 2012-06-28 Sandisk Il Ltd. Non-volatile memory and methods with asymmetric soft read points around hard read points
WO2012087815A1 (en) 2010-12-23 2012-06-28 Sandisk Il Ltd. Non-volatile memory and methods with soft-bit reads while reading hard bits with compensation for coupling
US9292377B2 (en) 2011-01-04 2016-03-22 Seagate Technology Llc Detection and decoding in flash memories using correlation of neighboring bits and probability based reliability values
US9898361B2 (en) 2011-01-04 2018-02-20 Seagate Technology Llc Multi-tier detection and decoding in flash memories
US10929221B2 (en) 2011-01-04 2021-02-23 Seagate Technology Llc Multi-tier detection and decoding in flash memories utilizing data from additional pages or wordlines
US20130176778A1 (en) * 2011-03-14 2013-07-11 Lsi Corporation Cell-level statistics collection for detection and decoding in flash memories
US9502117B2 (en) * 2011-03-14 2016-11-22 Seagate Technology Llc Cell-level statistics collection for detection and decoding in flash memories
EP2707879A4 (en) * 2011-05-12 2014-11-26 Micron Technology Inc Programming memory cells
US9030902B2 (en) 2011-05-12 2015-05-12 Micron Technology, Inc. Programming memory cells
EP2707879A2 (en) * 2011-05-12 2014-03-19 Micron Technology, Inc. Programming memory cells
EP3522166A1 (en) * 2011-05-12 2019-08-07 Micron Technology, Inc. Programming memory cells
US9064575B2 (en) 2012-08-03 2015-06-23 Micron Technology, Inc. Determining whether a memory cell state is in a valley between adjacent data states
US9990988B2 (en) 2012-08-03 2018-06-05 Micron Technology, Inc. Determining whether a memory cell state is in a valley between adjacent data states
EP2880658A4 (en) * 2012-08-03 2015-08-05 Micron Technology Inc Memory cell state in a valley between adjacent data states
US11450382B2 (en) 2012-08-03 2022-09-20 Micron Technology, Inc. Memory cell state in a valley between adjacent data states
US10811090B2 (en) 2012-08-03 2020-10-20 Micron Technology, Inc. Memory cell state in a valley between adjacent data states
WO2014022518A1 (en) * 2012-08-03 2014-02-06 Micron Technology, Inc. Memory cell state in a valley between adjacent data states
WO2014051734A1 (en) * 2012-09-28 2014-04-03 Intel Corporation Endurance aware error-correcting code (ecc) protection for non-volatile memories
US8990670B2 (en) 2012-09-28 2015-03-24 Intel Corporation Endurance aware error-correcting code (ECC) protection for non-volatile memories
US9065483B2 (en) 2013-01-21 2015-06-23 Micron Technology, Inc. Determining soft data using a classification code
WO2014113161A1 (en) * 2013-01-21 2014-07-24 Micron Technology, Inc. Determining soft data using a classification code
US9391645B2 (en) 2013-01-21 2016-07-12 Micron Technology, Inc. Determining soft data using a classification code
JP2016509420A (en) * 2013-01-21 2016-03-24 マイクロン テクノロジー, インク. Judgment of soft data using classification code
US10573379B2 (en) 2014-06-03 2020-02-25 Micron Technology, Inc. Determining soft data
US11170848B2 (en) 2014-06-03 2021-11-09 Micron Technology, Inc. Determining soft data
US11688459B2 (en) 2014-06-03 2023-06-27 Micron Technology, Inc. Determining soft data
US10304550B1 (en) 2017-11-29 2019-05-28 Sandisk Technologies Llc Sense amplifier with negative threshold sensing for non-volatile memory
US10643695B1 (en) 2019-01-10 2020-05-05 Sandisk Technologies Llc Concurrent multi-state program verify for non-volatile memory
US11024392B1 (en) 2019-12-23 2021-06-01 Sandisk Technologies Llc Sense amplifier for bidirectional sensing of memory cells of a non-volatile memory
CN111110247A (en) * 2020-01-13 2020-05-08 广东高驰运动科技有限公司 Monitoring method and monitoring device for motion data indexes
CN111110247B (en) * 2020-01-13 2023-05-26 广东高驰运动科技股份有限公司 Method and device for monitoring motion data index
CN113129993A (en) * 2020-01-16 2021-07-16 华邦电子股份有限公司 Memory device and data reading method thereof

Also Published As

Publication number Publication date
TW200823666A (en) 2008-06-01
TWI353521B (en) 2011-12-01
KR20090086523A (en) 2009-08-13

Similar Documents

Publication Publication Date Title
US7818653B2 (en) Methods of soft-input soft-output decoding for nonvolatile memory
US7904783B2 (en) Soft-input soft-output decoder for nonvolatile memory
WO2008042593A1 (en) Nonvolatile memory with error correction based on the likehood the error may occur
US8001441B2 (en) Nonvolatile memory with modulated error correction coding
US7805663B2 (en) Methods of adapting operation of nonvolatile memory
US7904780B2 (en) Methods of modulating error correction coding
US7558109B2 (en) Nonvolatile memory with variable read threshold
US7904788B2 (en) Methods of varying read threshold voltage in nonvolatile memory
US20080092015A1 (en) Nonvolatile memory with adaptive operation
JP5297380B2 (en) Statistical unit and adaptive operation in non-volatile memory with soft input soft output (SISO) decoder
EP2084709B1 (en) Nonvolatile memory with variable read threshold
US7840875B2 (en) Convolutional coding methods for nonvolatile memory
US8713401B2 (en) Error recovery storage along a memory string
US20070266296A1 (en) Nonvolatile Memory with Convolutional Coding
US8635508B2 (en) Systems and methods for performing concatenated error correction
US8990668B2 (en) Decoding data stored in solid-state memory
TWI385512B (en) Method of storing data and decoding data stored in a nonvolatile semiconductor memory array, nonvolatile semiconductor and flash memory systems, and method of managing data in flash memory
US9170881B2 (en) Solid state device coding architecture for chipkill and endurance improvement
CN112331244A (en) Soft-input soft-output component code decoder of generalized low-density parity-check code

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07842731

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 1020097008466

Country of ref document: KR

122 Ep: pct application non-entry in european phase

Ref document number: 07842731

Country of ref document: EP

Kind code of ref document: A1