5 NAND Flash Myths Not To Believe

By Imran Hirani, Director of Product Architecture, Phison Electronics, and Shane Green, Phison Electronics

April 21, 2023



NAND flash has been available for more than three decades. Today it’s used everywhere, from gaming consoles to mobile phones to server farms in data centers. You can find the technology in solid state drives (SSDs), USB drives, audio and video flash media devices, and much more. As NAND technology evolves and adoption increases across different market segments, it’s easy for people to pick up on bits of misinformation along the way and start believing things about NAND flash that are no longer—or never have been—true.

Myth 1: Temperature Doesn’t Age Flash Modules

Because heat accelerates physical and chemical processes, higher temperatures can have a significant effect on the lifespan, or endurance, of NAND flash modules. These modules store electrical charges that degrade over time, and the higher the heat, the faster they degrade. In fact, this relationship is well documented by the Arrhenius equation, which shows that the rate of such degradation processes grows exponentially as temperature rises.

Today, as industry associations such as the JEDEC Solid State Technology Association develop standards for flash memory, they take into consideration the way heat affects flash memory endurance. For instance, JEDEC specifies that client SSDs should be able to retain data for at least one year at 30°C (86°F). JEDEC has also set expectations for data retention at higher temperatures, and the differences are vast: at 52°C (125°F), an SSD should retain data for at least 500 hours, and at 66°C (150°F), that same drive should retain data for just 96 hours, or four days.
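
The JEDEC figures above line up with a standard Arrhenius calculation. The sketch below uses an activation energy of 1.1 eV, a common assumption in retention modeling; actual values vary by NAND process and vendor, so treat the output as illustrative only.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
    """How many times faster retention loss proceeds at t_stress_c
    compared with t_use_c, per the Arrhenius model.
    ea_ev = 1.1 eV is an assumed activation energy, not a spec value."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Retention that lasts one year at 30 C shrinks dramatically at 66 C:
af = acceleration_factor(30, 66)
hours_at_66c = 365 * 24 / af
print(f"Acceleration factor 30 C -> 66 C: {af:.0f}x")
print(f"1 year at 30 C is roughly {hours_at_66c:.0f} hours at 66 C")
```

With these assumptions the model lands in the same ballpark as the JEDEC figures quoted above, which is why the standard can express retention targets at elevated temperatures in hours rather than years.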

High temperatures also affect performance. SSDs can get hot simply by operating, and most drives include a safety measure called thermal throttling: when temperatures climb too high, the drive automatically slows its transfer rates to prevent overheating. For you, that means a noticeable drop in performance until the drive cools down.
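
Thermal throttling can be pictured as a simple control rule. The thresholds and rates below are made-up illustrative values, not any vendor's actual firmware behavior:

```python
# A toy model of thermal throttling. Real SSD firmware uses
# vendor-specific thresholds and often several throttle steps.

def throttled_rate(temp_c, full_rate_mbps=3500,
                   throttle_start_c=70, shutdown_c=85):
    """Return the allowed transfer rate for a given controller temperature.
    Below throttle_start_c the drive runs at full speed; between the two
    thresholds the rate ramps down linearly; at shutdown_c it drops to 0."""
    if temp_c <= throttle_start_c:
        return full_rate_mbps
    if temp_c >= shutdown_c:
        return 0.0
    span = shutdown_c - throttle_start_c
    return full_rate_mbps * (shutdown_c - temp_c) / span

for t in (50, 75, 84):
    print(f"{t} C -> {throttled_rate(t):.0f} MB/s")
```

The design point is simply that throttling trades speed for temperature: the hotter the controller gets, the less bandwidth it allows itself.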

Myth 2: The Amount of Saved Data Doesn’t Affect Performance

This one might seem counterintuitive at first glance. If the question is whether larger-capacity drives are faster, the answer is no; once a drive has a minimum number of NAND dies, adding more capacity doesn't speed it up. So when comparing SSD sizes, say 2 TB versus 4 TB, you won't see any real difference in performance. The difference comes from how much free capacity is left on the drive.

To explain, let's look at the inner workings of an SSD. NAND flash memory is organized into blocks, and when you save information to your SSD, the drive has to find the next empty block to hold that data. If you get on your computer the next day and revise that information, the SSD has to copy the data from the existing block, factor in the changes, and then deposit the new chunk of data in the next empty block; it can't simply overwrite the original data in place. The block that held the original data is tagged for deletion so it can later be erased and reused.
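
The copy-and-remap behavior described above can be sketched as a toy flash translation layer. This is a minimal illustration, not how any real SSD firmware works; real controllers map individual pages within blocks, batch erases during garbage collection, and wear-level across dies.

```python
# A minimal sketch of out-of-place updates in NAND flash.

class TinyFTL:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks  # simulated physical blocks
        self.mapping = {}                  # logical address -> block index
        self.stale = set()                 # blocks awaiting erase

    def _next_free(self):
        for i, data in enumerate(self.blocks):
            if data is None and i not in self.stale:
                return i
        raise RuntimeError("no free blocks; garbage collection needed")

    def write(self, logical_addr, data):
        new_block = self._next_free()
        if logical_addr in self.mapping:
            # NAND can't overwrite in place: mark the old block stale
            self.stale.add(self.mapping[logical_addr])
        self.blocks[new_block] = data
        self.mapping[logical_addr] = new_block

ftl = TinyFTL(num_blocks=4)
ftl.write(0, "v1")
ftl.write(0, "v2")  # the update lands in a NEW block; the old one is stale
print(ftl.mapping[0], ftl.stale)
```

Notice that revising one piece of data consumed a second block and left the first one waiting to be erased, which is exactly why free space matters for performance.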

Revising data causes the SSD to recalculate again and again to place the updated information where it belongs. When you’re editing a single file in Word, for instance, those recalculations aren’t a big deal. But what happens when you’re playing a video game? Things are constantly changing in the game and the SSD must continually recalculate and lay down updated information in memory blocks in mere fractions of seconds. Data rewrites can easily get up into the thousands per second.

More empty blocks make it easier and faster for the SSD to find a home for new and updated data, so the figure that really matters for performance is the proportion of the drive that's empty. That means a 2 TB SSD at 50% capacity will typically deliver better performance than a 4 TB SSD that's nearly full.

A longtime industry standard is to use the 70/30 rule of thumb. That means you should try to avoid performance problems by not using more than 70% of the drive’s total capacity. Keep at least 30% empty. If the drive gets close to 70% full, it’s time to upgrade to a larger drive.
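
The 70% threshold can be checked with a few lines of Python using the standard library. The 70% figure is this article's rule of thumb, not a drive specification:

```python
import shutil

def over_seventy_percent(path="/"):
    """Return (used fraction, True if the drive breaks the 70/30 rule)."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction, used_fraction > 0.70

fraction, warn = over_seventy_percent("/")
print(f"Drive is {fraction:.0%} full" + ("; consider upgrading" if warn else ""))
```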

It all boils down to this: the more data you save on an SSD, the slower it gets. Yes, in most cases that slowdown might be barely noticeable to you, but in write-intensive applications, such as gaming, animation, visual effects creation, code compiling, and cryptocurrency mining, the slowdown could matter.

Myth 3: Performance is Constant Over the Lifetime of the Flash Module

Over time, NAND flash drives degrade. Each block that stores data is designed to last for a finite number of writes and erases, typically measured in program/erase (P/E) cycles. Every time data is written to and then erased from a block counts as one P/E cycle, and every P/E cycle causes a small amount of damage to the memory cells' oxide layer.

Most SSDs designed for enterprise use last three to five years. Cell density, the number of bits stored per cell, largely determines how many P/E cycles the drive can handle; typical ratings are:

  • Single-level cell (SLC): 100,000 P/E cycles
  • Multi-level cell (MLC): 10,000 P/E cycles
  • Enterprise MLC (eMLC): 30,000 P/E cycles
  • Triple-level cell (TLC): 3,000 P/E cycles
  • Quad-level cell (QLC): 1,000 P/E cycles
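
Those P/E ratings can be turned into a rough endurance estimate. A common back-of-the-envelope formula is total bytes written equals capacity times P/E cycles divided by write amplification; the factor of 2 for write amplification below is an assumption for illustration, not a measured or vendor value:

```python
# Rough drive-endurance estimate from the P/E figures above.

PE_CYCLES = {"SLC": 100_000, "MLC": 10_000, "eMLC": 30_000,
             "TLC": 3_000, "QLC": 1_000}

def estimated_tbw(capacity_tb, cell_type, write_amplification=2.0):
    """Terabytes written before the drive reaches its rated P/E limit.
    write_amplification=2.0 is an assumed typical value; real drives
    vary with workload and controller design."""
    return capacity_tb * PE_CYCLES[cell_type] / write_amplification

print(f"2 TB TLC: ~{estimated_tbw(2, 'TLC'):,.0f} TBW")
print(f"2 TB QLC: ~{estimated_tbw(2, 'QLC'):,.0f} TBW")
```

This makes the density trade-off concrete: at the same capacity, a QLC drive's estimated endurance is a third of a TLC drive's.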

The more P/E cycles a drive accumulates, the more damage it accrues. Erase cycles are the toughest on NAND memory cells because erasing requires higher voltages than other operations. Eventually, the oxide layer of the cells can get so damaged that you start to see an increased number of bit errors, that is, incorrect bits in a stream of data. Error-correction techniques in the drive's controller can compensate as bit error rates rise, but ultimately the errors become too prevalent and the flash cell fails.

A brand-new NAND flash cell will perform the best. As P/E cycles degrade the cells, or wear-out occurs, the performance will gradually decrease until the cell reaches its designated limit and becomes unreliable. 

Myth 4: As PCIe Becomes More Popular, SATA is Disappearing

Serial ATA (SATA) and PCI Express (PCIe) are two types of interfaces for connecting a hard drive or SSD to a computer. SATA came first, with its initial specification arriving in the early 2000s, and PCIe followed in 2003. PCIe offered much better performance from the start, and today even the most basic PCIe SSD is two to three times faster than a SATA III SSD. That's partly because a PCIe SSD can transfer data over multiple lanes in parallel (typically four), while a SATA SSD has a single link.
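
The size of the gap follows directly from the published line rates and encoding overheads of the two interfaces:

```python
# Theoretical interface bandwidth, showing where the SATA/PCIe gap comes from.

def sata3_mbps():
    # SATA III: 6 Gb/s line rate with 8b/10b encoding
    # (8 usable bits for every 10 bits sent)
    return 6_000 * 8 / 10 / 8  # MB/s

def pcie3_mbps(lanes=4):
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
    return 8_000 * 128 / 130 / 8 * lanes  # MB/s

print(f"SATA III:    {sata3_mbps():.0f} MB/s")
print(f"PCIe 3.0 x4: {pcie3_mbps():.0f} MB/s")
```

These are interface ceilings, not drive speeds; real SSDs fall short of them, but the ratio explains why even a modest PCIe drive outruns any SATA drive.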

The rising popularity of PCIe SSDs has caused many computer users to wonder if SATA SSDs will become obsolete. But according to many industry experts, SATA SSDs aren’t on the way out quite yet.

Yes, SATA gets less attention overall these days, but for some use cases, it is still a smart choice. In fact, Greg Wong, founder and analyst at Forward Insights, has said that SATA technology is still in common use among original equipment manufacturers (OEMs) and many cloud providers, and that it’s often used for boot drives.

Wong also said that “Micron and others are going to continue to support SATA on the latest NAND technology,” and cited a recent report finding that around 20 million SATA SSDs still ship each year, a volume expected to hold through 2025.

In drives with capacities below 128 GB, PCIe and SATA show little difference in performance, so many enterprises use SATA, which is often less expensive, in industrial and IoT applications.

SATA drives also consume less power, which helps extend battery life in portable systems, and they don't run as hot as PCIe drives.

Myth 5: UFS Memory is Primarily for Smartphones

Universal Flash Storage (UFS) was developed for use in smartphones. In fact, in its early days, it was only used in high-end smartphones. It was meant to replace embedded multimedia card (eMMC) storage, which was previously the most commonly used storage in smartphones.

UFS was a significant improvement over eMMC right out of the gate because it can read and write data simultaneously, whereas eMMC has to perform read and write operations separately. UFS also offers more bandwidth. Today, most smartphones use UFS, as the price of the technology has steadily decreased.
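
The full-duplex advantage can be illustrated with a toy timing model. The transfer times below are hypothetical values chosen for illustration, not measurements of any real device:

```python
# Toy timing model of half-duplex (eMMC) vs full-duplex (UFS) storage.

def emmc_time(read_s, write_s):
    # eMMC is half-duplex: reads and writes must be serialized
    return read_s + write_s

def ufs_time(read_s, write_s):
    # UFS is full-duplex: reads and writes can overlap
    return max(read_s, write_s)

r, w = 1.2, 0.8  # seconds for a hypothetical mixed workload
print(f"eMMC: {emmc_time(r, w):.1f} s, UFS: {ufs_time(r, w):.1f} s")
```

On mixed read/write workloads, the busier the slower direction is, the more the overlap pays off; for a read-only or write-only workload the two models converge.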

It’s not just for smartphones, though, and the industry is now beginning to use UFS in more ways. The increased reliability and higher data transfer speeds of UFS benefit all types of devices, from tablets and gaming consoles to e-readers, wearable devices, cameras, servers, printers, and more.

The latest generation of the technology is UFS 4.0, and many industry experts foresee it being used in embedded automotive applications as well as augmented reality (AR) and virtual reality (VR) applications.