Intel's Six Technology Pillars: Memory
Updated: 2021-05-10 11:52:32
In this post, we continue our review of Memory, Intel's third technology pillar.
Intel believes that memory performance has grown only linearly over the past 40 years and therefore cannot keep pace with the exponential growth of computing power.
Memory refers to short-term storage (e.g. DRAM), while Storage refers to long-term storage (e.g. hard disk).
Memory offers high bandwidth (fast access) but low capacity, while Storage is the opposite; secondary and tertiary Storage follow the same pattern. It is worth noting that FPGAs have been shifting from traditional SRAM to DRAM-based storage (HBM) as on-chip storage becomes increasingly insufficient.
The most important indicator on the 3D NAND roadmap is the layer count. Much like constructing a building, the more layers, the greater the capacity. Mainstream manufacturers are currently working toward 128 layers, while Intel has gone straight to 144 layers.
Another important indicator is the number of bits of information stored in each cell, which can be interpreted as how many people can live in each room.
SLC (Single-Level Cell), the single room: each cell stores 1 bit, meaning only two voltage states (0 and 1). The structure is simple and voltage control is fast, giving long life and strong performance, with P/E endurance between 10,000 and 100,000 cycles. The drawback is low capacity and high cost, since each cell holds only 1 bit.
MLC (Multi-Level Cell), the double room: each cell stores 2 bits, requiring more complex control over four voltage states (00, 01, 10, 11), which in turn reduces write performance and reliability. Strictly speaking, any NAND flash beyond SLC is MLC; what we commonly call MLC is the 2-bit variant, which would more accurately be named DLC.
TLC (Triple-Level Cell), the triple room: each cell stores 3 bits across eight voltage states (000 to 111). Per-cell capacity is 50% higher than MLC and cost is lower, but the architecture is more complex, P/E programming takes longer, write speeds are slower, and P/E endurance drops to 1,000 to 3,000 cycles, sometimes lower.
QLC (Quad-Level Cell), the quad room: sixteen voltage states, with per-cell capacity another third higher than TLC, and write performance and P/E endurance reduced further still. Read speeds remain comparable: over SATA both can reach about 540 MB/s. Where QLC suffers is writes, because its P/E programming time is even longer than that of MLC and TLC: sequential write speed falls from 520 MB/s to 360 MB/s, and random performance drops from 9,500 IOPS to 5,000 IOPS, a loss of nearly half.
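The pattern behind the SLC/MLC/TLC/QLC progression can be sketched in a few lines of code: each extra bit per cell doubles the number of voltage states the cell must distinguish, which is exactly why write latency grows and endurance shrinks. The MLC and QLC endurance figures below are commonly cited ranges, not numbers from this article, and are marked as assumptions.

```python
# Sketch: how bits-per-cell drives voltage states and (roughly) endurance.
# SLC and TLC P/E ranges come from the article; MLC and QLC ranges are
# assumptions based on commonly cited industry figures.

CELL_TYPES = {
    # name: (bits per cell, rough P/E endurance range)
    "SLC": (1, "10,000-100,000"),
    "MLC": (2, "3,000-10,000"),   # assumption
    "TLC": (3, "1,000-3,000"),
    "QLC": (4, "100-1,000"),      # assumption
}

def voltage_states(bits: int) -> int:
    """Each extra bit doubles the voltage states a cell must resolve,
    so programming gets slower and error margins get thinner."""
    return 2 ** bits

for name, (bits, endurance) in CELL_TYPES.items():
    print(f"{name}: {bits} bit(s)/cell, {voltage_states(bits)} voltage states, "
          f"~{endurance} P/E cycles")
```

Running this prints the same "single room to quad room" trade-off the analogy describes: capacity per cell rises linearly while the control problem grows exponentially.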
Still trading speed for space
3D XPoint is a storage technology developed jointly by Intel and Micron (claimed to be 1,000 times faster than NAND flash), but on March 17 Micron announced that it would end all 3D XPoint development and sell its 3D XPoint fab in Lehi, Utah...
Optane SSDs, meanwhile, are built on 3D XPoint technology; the second generation uses a four-layer architecture that can reach millions of reads and writes per second (IOPS).
Intel's pitch: launch the right product at each industry pain point (the gaps), making the "staircase" of the memory hierarchy more even.
This is how Intel advertises its own Optane Persistent Memory: performs like Memory, persists like Storage, which is frankly a compromise between the two. It offers two modes: Memory Mode (the module behaves like volatile DRAM capacity) and App Direct Mode (billed as "data persistence at near-memory speed"), which requires persistent-memory-aware software. In App Direct Mode data remains persistent, yet it is byte-addressable just like ordinary memory.
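The "byte-addressable like memory" property of App Direct Mode can be illustrated with a minimal sketch. In real deployments the file would live on a DAX-mounted persistent-memory filesystem and persistence would go through a library such as PMDK's libpmem; here an ordinary temporary file stands in so the sketch runs anywhere, which is an assumption to keep the example self-contained.

```python
import mmap
import os
import tempfile

# Minimal sketch of App Direct-style access: map a file and touch it
# with plain byte loads/stores instead of read()/write() syscalls.
# On real Optane PMem the path would be on a DAX mount (e.g. /mnt/pmem0,
# a hypothetical mount point) and the flush would map to cache-line
# writeback instructions; an ordinary temp file stands in here.

SIZE = 4096
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pmem:
    pmem[0:5] = b"hello"   # store-like byte-granular write
    pmem.flush()           # on real pmem: the persistence point
    print(pmem[0:5])       # load-like byte-granular read

os.close(fd)
os.unlink(path)
```

The point of the sketch is the access model, not durability: the application addresses persistent data directly in its address space, which is what distinguishes App Direct Mode from a block-device SSD.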
Comparison of the two modes; image from TechTarget
Intel Optane Persistent Memory comes in much higher capacities than typical DRAM: 128 GB, 256 GB and 512 GB modules, versus the 4 GB to 32 GB DIMMs in common use (although larger 128 GB DRAM DIMMs do exist). Persistent memory shares the memory channel with DRAM and is placed in the slot closest to the CPU on each channel.
The name Rambo Cache was coined by Raja Koduri, Intel senior vice president and lead of its discrete GPU effort. Compared with Intel's usual technology naming, it is not serious at all and shows Koduri's sense of humor: Rambo is an icon of hard-boiled American action movies. Judging by the online reaction, people think Koduri chose the name partly to flex, to needle rivals and remind everyone that Intel is still very powerful.

Technically, Rambo Cache is an intermediate layer connecting the CPU, GPU and HBM. Built on Intel's EMIB packaging technology, it provides extreme memory bandwidth and FP64 floating-point performance, and supports memory/cache ECC error correction along with the highest level of RAS (Reliability, Availability & Serviceability). As the chart shows, Rambo Cache performs strongly on double-precision workloads at matrix sizes from 8×8 up to 4096×4096.
Step by step down this "staircase", every product finds its proper place in the final "demand pyramid".
These are Intel's ideas for bringing high-quality Memory solutions to the industry. Next, we'll look at what new and exciting technologies Intel has to offer for the Interconnect pillar.