The Angelbird Wings PX1 M.2 Adapter Review: Do M.2 SSDs Need Heatsinks?
by Billy Tallis on December 21, 2015 8:00 AM EST

Random Write
The random write test runs for a total of 18 minutes, starting with a queue depth of 1 and doubling QD every three minutes. The test is limited to a 16GB portion of the drive and only that portion is pre-filled with data, so this test doesn't reflect the steady-state behavior of a full drive. The main scores are based on the average of QD1, QD2 and QD4 results as larger queue depths are rare for client workloads. A more detailed breakdown is graphed further down the page.
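AnandTech's client benchmarks run under a custom harness, but the shape of the test is straightforward to reproduce. The sketch below is only a rough approximation, assuming a Linux machine with fio installed, 4KB transfers, and a disposable NVMe device at the hypothetical path /dev/nvme0n1: it steps the queue depth from 1 to 32 in three-minute stages over a 16GB span and averages the QD1, QD2 and QD4 results, mirroring how the headline score is weighted.

```python
#!/usr/bin/env python3
"""Rough approximation of the queue-depth-ramp random write test using fio.

This is NOT the review's actual harness; it assumes Linux, fio on the PATH,
4KB transfers, and a disposable NVMe device at the hypothetical path
/dev/nvme0n1 (everything on it will be overwritten).
"""
import json
import subprocess

DEVICE = "/dev/nvme0n1"   # hypothetical scratch device
SPAN = "16g"              # the test is confined to a 16GB slice of the drive
STEP_SECONDS = 180        # three minutes per queue depth

def run_step(qd: int) -> float:
    """Run one three-minute random write step at the given queue depth; return IOPS."""
    cmd = [
        "fio", "--output-format=json",
        "--name=qd%d" % qd,
        "--filename=%s" % DEVICE,
        "--ioengine=libaio", "--direct=1",
        "--rw=randwrite", "--bs=4k",
        "--size=%s" % SPAN,                  # restrict I/O to the 16GB span
        "--iodepth=%d" % qd,
        "--time_based", "--runtime=%d" % STEP_SECONDS,
    ]
    out = subprocess.run(cmd, capture_output=True, check=True).stdout
    return json.loads(out)["jobs"][0]["write"]["iops"]

if __name__ == "__main__":
    # Note: the real test pre-fills the 16GB span before timing starts;
    # that preconditioning step is omitted here for brevity.
    iops_by_qd = {qd: run_step(qd) for qd in (1, 2, 4, 8, 16, 32)}
    # The headline score only weights the shallow queue depths typical of client workloads.
    score = sum(iops_by_qd[qd] for qd in (1, 2, 4)) / 3
    print("IOPS by queue depth:", iops_by_qd)
    print("Client-weighted average (QD1/2/4): %.0f IOPS" % score)
```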
The heatsink allows the 950 Pro to reach significantly higher random write speeds and enables the larger 512GB model to pull ahead of the 256GB model. Power consumption increases slightly, but given the performance boost, efficiency improves.
[Graph: random write performance vs. queue depth for the 256GB and 512GB 950 Pro, with and without the heatsink]
Without the heatsink, performance drops off starting at QD4, indicating that severe thermal throttling sets in six to nine minutes into the test. With the heatsink, the 512GB model doesn't reach its full performance until QD4.
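One way to confirm that a mid-test slowdown like this is thermal rather than, say, cache exhaustion is to log the controller's reported temperature alongside throughput while the benchmark runs. Below is a minimal sketch, assuming a Linux system with nvme-cli installed, a drive at the hypothetical path /dev/nvme0, and a JSON SMART log that reports the composite temperature in Kelvin under a 'temperature' key (field naming can vary between nvme-cli versions).

```python
#!/usr/bin/env python3
"""Log an NVMe drive's composite temperature while a benchmark runs elsewhere.

Assumes Linux with nvme-cli installed and a drive at the hypothetical path
/dev/nvme0; the 'temperature' field in the JSON SMART log is reported in
Kelvin (field naming may vary between nvme-cli versions).
"""
import json
import subprocess
import time

DEVICE = "/dev/nvme0"     # hypothetical path; point this at the drive under test
INTERVAL_SECONDS = 5

def read_temperature_c(device: str) -> float:
    """Return the drive's composite temperature in degrees Celsius."""
    out = subprocess.run(
        ["nvme", "smart-log", device, "--output-format=json"],
        capture_output=True, check=True,
    ).stdout
    return json.loads(out)["temperature"] - 273.15

if __name__ == "__main__":
    start = time.time()
    while True:
        print("%6.0f s  %5.1f C" % (time.time() - start, read_temperature_c(DEVICE)))
        time.sleep(INTERVAL_SECONDS)
```

Lining these samples up against per-second throughput from the benchmark makes the onset and severity of throttling easy to spot.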
Random Read
The random read test uses the same timing and queue depth scaling as the random write test, but for flash memory reads are inherently faster and less power-hungry than writes, so this is usually a less stressful test even though the drive's throughput is higher. The entire drive is filled before this test, and the reads are not restricted to any portion of it.
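The read test follows the same skeleton as the write sketch above: the drive is filled completely first, and the queue-depth ramp then runs random reads with no span restriction. This is again only a rough sketch under the same assumptions (Linux, fio on the PATH, 4KB reads, a disposable device at the hypothetical /dev/nvme0n1).

```python
#!/usr/bin/env python3
"""Read-test variant: fill the whole drive, then ramp queue depth with random reads.

Same assumptions as the write sketch above: Linux, fio on the PATH, and a
disposable NVMe device at the hypothetical path /dev/nvme0n1.
"""
import json
import subprocess

DEVICE = "/dev/nvme0n1"   # hypothetical scratch device

def fio(job_args: list) -> dict:
    """Run fio with JSON output against DEVICE and return the parsed result."""
    cmd = ["fio", "--output-format=json", "--filename=%s" % DEVICE,
           "--ioengine=libaio", "--direct=1"] + job_args
    return json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)

if __name__ == "__main__":
    # Fill the entire drive sequentially so every LBA holds real data before reading.
    fio(["--name=fill", "--rw=write", "--bs=128k"])
    # Ramp the queue depth; reads span the whole drive rather than a 16GB slice.
    for qd in (1, 2, 4, 8, 16, 32):
        job = fio(["--name=qd%d" % qd, "--rw=randread", "--bs=4k",
                   "--iodepth=%d" % qd, "--time_based", "--runtime=180"])
        print("QD%-2d  %.0f IOPS" % (qd, job["jobs"][0]["read"]["iops"]))
```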
The performance for random reads is unaffected by the heatsink, but the lower operating temperature leads to less transistor leakage and thus slightly improved efficiency.
[Graph: random read performance vs. queue depth for the 256GB and 512GB 950 Pro, with and without the heatsink]
The heatsink produces no meaningful changes to the queue depth scaling behavior of random reads, further demonstrating that heat is not a problem for this test.
69 Comments
Haravikk - Monday, December 21, 2015 - link
I think I could only see this being useful if you were building a system loaded with SSDs in the PCIe slots; in a system with a GPU I'd expect the extra heat from that to easily result in worse performance than keeping the M.2 drive on the motherboard.

In fact, for a single M.2 SSD system my preference is a motherboard with an M.2 slot on the back; this keeps it away from the worst heat-generating components, and even though few cases provide proper airflow on the back of the motherboard, as long as your cooling is adequate it should never get too hot for the drive.
Even if you are building a system with a ton of SSDs, the main benefit is having the PCIe adapter, IMO; it doesn't seem like the heatsink makes a big enough difference that you're ever really going to notice it.
vFunct - Monday, December 21, 2015 - link
This is going to be mostly useful in servers, where sustained (non-burst) read/write is typical.

Ethos Evoss - Saturday, December 26, 2015 - link
New NVMe M.2 SSDs totally NEED a heatsink now, because PCIe 3.0 x4 has very, very high bandwidth and the drives can hit 100 degrees Celsius!

Ethos Evoss - Saturday, December 26, 2015 - link
https://www.youtube.com/watch?v=d3GlInzvHr8

frowertr - Monday, December 21, 2015 - link
Really think M.2 is the future. No cables and small size sounds like a winner to me.

ImSpartacus - Monday, December 21, 2015 - link
It's probably the future, but it'll take a while to get there.

If you need a cheap SSD for a boring boot drive, then 2.5" is the way to go if you have anything resembling a budget.
frowertr - Monday, December 21, 2015 - link
Yeah, I agree. But they will figure out how to get more capacity at lower cost packed into the form factor soon enough. I just built a new Skylake build for my living room HTPC/Xbox One look-alike, and I used the Samsung EVO M.2 drive. What a refreshing piece of hardware. Just clipping it onto the motherboard like RAM and not dealing with any cables whatsoever made me feel like I was living in the future. Can't believe how far storage has come since I started building computers in the mid-90s.

Lonyo - Monday, December 21, 2015 - link
The only reason consumer SSDs are 2.5" is because that's the space available. If you had a 1.8" drive slot and 1.8" drives, then SSDs would be smaller. They are the size they are because 2.5" was around for mechanical drives before SSDs, so it allows drop-in replacement.

The problem with M.2 is that you end up with a space limitation, because you need to free up space on the motherboard to put the thing, which means either you skip something else or you have a larger motherboard, and then you aren't really saving any space anyway.
DanNeely - Monday, December 21, 2015 - link
Using the 1.8" HDD form factor probably would have impacted higher end drives in prior years. It only has 60% of the areal size of a 2.5" model; and until fairly recently most high performance/capacity SSDs; used a full size 2.5" PCB. The only ones that were using cut down boards that would fit into a 1.8" housing without needing shrunk were lower end budget models. While it doesn't matter much now (Samsung's 2tb models use smaller PCBs that look like they'd almost fit in the smaller form factor unchanged); cropping off the largest size from the market a few years ago would've probably hurt adoption.MrSpadge - Monday, December 21, 2015 - link
I fail to see a good reason why SSDs have to become more expensive if you remove their case. Anything on that M.2 card is also in a 2.5" drive, yet it's no problem to fit the components onto that small PCB (as long as you're not trying to make very large drives).