Multiple companies offer high-capacity SSDs, but until recently, only one company offered high-performance 60 TB-class drives with a PCIe interface: Solidigm. As our colleagues from Blocks & Files discovered, Samsung quietly rolled out its BM1743 61.44 TB solid-state drive in mid-June and now envisions 120 TB-class SSDs based on the same platform.

Samsung's BM1743 61.44 TB features a proprietary controller and relies on Samsung's 7th Generation V-NAND (3D NAND) QLC memory. Moreover, Samsung believes that its 7th Gen V-NAND 'has the potential to accommodate up to 122.88 TB,' which would enable the 120 TB-class drives the company envisions.

Samsung plans to offer the BM1743 in two form factors: U.2 with a PCIe 4.0 x4 interface for traditional servers, and E3.S with a PCIe 5.0 x4 interface for machines designed for maximum storage density. The BM1743 can address various applications, including AI training and inference, content delivery networks, and read-intensive workloads. Reflecting that read-intensive focus, its write endurance is rated at 0.26 drive writes per day (DWPD) over five years.
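
For context, a DWPD rating translates into total bytes written over the warranty period via capacity × DWPD × warranty days. Here is a minimal sketch of that arithmetic in Python; the inputs come from the figures above, and the resulting total is an illustration rather than a Samsung-published endurance figure:

```python
# Rough endurance arithmetic for the BM1743 (illustrative only).
# TBW (terabytes written) = capacity in TB * DWPD * warranty period in days.

CAPACITY_TB = 61.44        # drive capacity from the spec above
DWPD = 0.26                # rated drive writes per day
WARRANTY_DAYS = 5 * 365    # five-year rating period

tbw = CAPACITY_TB * DWPD * WARRANTY_DAYS
print(f"Total write endurance: ~{tbw:,.0f} TBW (~{tbw / 1000:.0f} PB written)")
# -> roughly 29,000 TBW, i.e. about 29 PB written over five years
```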

Regarding performance, Samsung's BM1743 is hardly a champion compared to high-end drives for gaming machines and workstations. The drive can sustain sequential read speeds of 7,200 MB/s and sequential write speeds of 2,000 MB/s. For random operations, it can handle up to 1.6 million 4K random read IOPS and 110,000 4K random write IOPS.
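
To put those random-I/O figures in perspective, rated 4K IOPS can be converted into approximate throughput by multiplying by the transfer size. A quick back-of-the-envelope sketch, assuming 4 KiB transfers and decimal megabytes:

```python
# Convert the BM1743's rated 4K random IOPS into approximate throughput.
# Assumes 4 KiB (4,096-byte) transfers; results are in decimal MB/s.

BLOCK_BYTES = 4096

random_read_iops = 1_600_000   # rated 4K random read IOPS
random_write_iops = 110_000    # rated 4K random write IOPS

read_mbps = random_read_iops * BLOCK_BYTES / 1e6
write_mbps = random_write_iops * BLOCK_BYTES / 1e6

print(f"4K random read:  ~{read_mbps:,.0f} MB/s")   # ~6,554 MB/s
print(f"4K random write: ~{write_mbps:,.0f} MB/s")  # ~451 MB/s
# The lopsided read/write ratio matches the drive's read-intensive positioning.
```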

Samsung has not disclosed power consumption figures for the BM1743, though it is expected to be high. Meanwhile, the drive's key selling point is its massive storage density, which likely outweighs concerns over absolute power efficiency for its intended applications, as a single 60 TB SSD still consumes less power than the multiple storage devices needed to offer similar capacity and performance.

As noted above, Samsung's BM1743 61.44 TB faces limited competition in the market, so its price will be quite high. For example, Solidigm's D5-P5336 61.44 TB SSD costs $6,905. Other companies, such as Kioxia, Micron, and SK Hynix, have not yet introduced their 60 TB-class SSDs, which gives Samsung and Solidigm an edge for now.
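
For a rough sense of what drives in this class cost per terabyte, the price can simply be divided by the capacity. A minimal sketch using the Solidigm figure quoted above (Samsung has not announced BM1743 pricing, so none is assumed here):

```python
# Cost-per-TB arithmetic using the quoted Solidigm D5-P5336 price.
# Samsung has not published BM1743 pricing, so no figure is assumed for it.

SOLIDIGM_PRICE_USD = 6905
CAPACITY_TB = 61.44

print(f"Solidigm D5-P5336 61.44 TB: ~${SOLIDIGM_PRICE_USD / CAPACITY_TB:,.0f}/TB")
# -> roughly $112 per TB
```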

UPDATE 7/25: We removed mention of Western Digital's 60 TB-class SSDs, as the company does not currently list any such drives on its website.

Source: Samsung


19 Comments


  • Golgatha777 - Monday, July 8, 2024 - link

    Still using my 16GB Optane drive to install Windows 10/11 from. I'll probably die before it does.
  • sjkpublic@gmail.com - Saturday, July 6, 2024 - link

    A 60 TB SSD is next to worthless if it is only doing 600 MB/s. You can stack as many storage chips as you want in a small space. The issue is the controller/interface. You need a controller that can handle NVMe speeds and ALSO do the trim. Gotta love the trim.
  • sjkpublic@gmail.com - Saturday, July 6, 2024 - link

    The article calls this an SSD. Is this really a PCIe device?
  • Hresna - Sunday, July 7, 2024 - link

    A Solid State Drive with a PCIe interface, yes.
  • Oxford Guy - Saturday, July 6, 2024 - link

    'To that end, its write endurance is 0.26 drive writes per day (DWPD) over five years.'

    And the read endurance? Since there are so many voltage states to manage and Samsung couldn't keep planar TLC under control (840 series), I wonder if latency will also increase.

    How often does this NAND need to be powered on, to prevent data loss?
  • GeoffreyA - Sunday, July 7, 2024 - link

    Good question. A point that's often swept under the carpet.
  • Silver5urfer - Sunday, July 7, 2024 - link

    The consumer M.2 standard is really complete rubbish.

    Not only do we get absolute scraps, but also worthless drives with no longevity or stability factor.

    Samsung dropped the NAND ball a long time back once they started the Pro line with TLC, and with the latest 980 and 990 the firmware issues are insane; on top of that, the endurance rating is literally half, or a bit more, of competing drives. Mind you, SK Hynix and WD ship the same garbage drives with poor endurance.

    Sabrent used to offer 6.8 PBW on a 4 TB PCIe 3.0 x4 NVMe consumer drive; now not a single drive has even half of that. Even their 8 TB $1,000 drive is just 5.6 PBW.

    Seagate built the 530 series on PCIe 4.0 with 5.1 PBW, but the drives were an unstable mess.

    Meanwhile, enterprise TLC NAND SSDs, even SATA 6 Gb/s ones, have over 14 PBW of endurance. Once people realized that, they shot up in price to the $1k+ mark, especially Samsung's.

    Intel's DC series PCIe 2.5" SSDs also dropped to low prices, but once people started to buy them, the pricing went too high. Now Intel has even exited that market and handed it all to SK Hynix, who is making QLC garbage now.

    Intel squandered Optane tech as well, blew a ton of cash on useless BS baggage, and killed the best alternative solid-state storage medium the world has ever seen, with insane longevity, performance, latency, what not.

    Users are also to blame: many people have shifted to streaming only or don't bother archiving, so M.2 SSDs peaked at 4 TB tops, let alone them realizing that M.2 is garbage and U.2 is way superior. Sadly, only EVGA offered a U.2 option; not a single $1,300 mainstream mobo gives that connector, and the adapters are a headache.

    What a messy industry. I still rely on HDDs; at least they have increased in capacity. 24 TB is available now, soon 30 TB.
  • GeoffreyA - Sunday, July 7, 2024 - link

    I tend to agree. SSDs were supposed to replace HDDs, but where has that happened? Instead, it's been greed, and rubbish getting pushed down to us but sold at the price of quality.
  • phoenix_rizzen - Monday, July 8, 2024 - link

    Are there any motherboards that support vertical M.2 slots instead of horizontal?

    I've always wondered why M.2 (and CAMM2) was a horizontal standard. Sure, it makes it easier to screw it down, but the cards are light enough that a locking vertical connector (like locking PCIe slots for GPUs and other, larger 16x cards) should be doable.

    Would take up a lot less motherboard space if the slots were vertical. Be easier to cool, too, as you'd get airflow over both sides of the PCB.

    Currently, the only way to get a vertical M.2 setup is with a PCIe-to-M.2 add-in card, and that relies on either motherboard support for bifurcation or PLX chips on the card.

    All those fancy hot-swap backplanes in servers are vertical, after all.
