International Online Workshop on

Recent Advances in SSD Research and Practice

September 26-27, 2024

Workshop News

9 Sep. 2024 Registration is open.

25 Aug. 2024 Workshop program finalized.

4 Aug. 2024 A special issue of The Journal of Supercomputing has been arranged to publish papers based on the talks presented at the workshop.

4 Aug. 2024 The homepage of the International Online Workshop "Recent Advances in SSD Research and Practice" has been launched.

The Workshop

Storage systems play a pivotal role in the digital landscape, acting as the backbone for data management, retrieval, and protection. The exponential growth of data generated by individuals and enterprises necessitates robust, scalable, and efficient storage solutions. Traditional storage systems, such as hard disk drives (HDDs), have evolved significantly, but the advent of solid-state drives (SSDs) has marked a substantial leap in terms of speed, reliability, and energy efficiency. SSDs, with their faster read/write capabilities and lower latency, have become essential for applications requiring high performance, such as databases, virtual machines, and real-time analytics.

The advances in storage technology extend beyond hardware improvements. Cloud storage has revolutionized data accessibility and management, offering scalable and flexible solutions that can accommodate the dynamic needs of businesses. Cloud providers offer a range of storage options, from object storage for unstructured data to block storage for databases and applications. These services not only provide vast amounts of storage space but also incorporate advanced features like automated backups, disaster recovery, and data encryption, enhancing data security and integrity.

These innovations are crucial for meeting the demands that big data analytics, artificial intelligence, and Internet of Things (IoT) applications place on storage systems, where data processing speed and efficiency are paramount. They not only enhance performance and reliability but also open new possibilities for data-driven innovation and digital transformation.

This online workshop, Recent Advances in SSD Research and Practice, aims to bring together leading researchers, engineers, and graduate students in the field of storage systems. It is jointly supported by the Korean Institute of Information Scientists and Engineers (KIISE) and the Computer Society of Iran (CSI).

Speakers

Zili Shao

Professor

Chinese University of Hong Kong

Patrick P. C. Lee

Professor

Chinese University of Hong Kong

Narasimha Reddy

Professor

Texas A&M University

Sam H. Noh

Professor

Virginia Tech

Li-Pin Chang

Professor

National Yang Ming Chiao Tung University

Hossein Asadi

Professor

Sharif University of Technology

John Kim

Professor

Korea Advanced Institute of Science and Technology (KAIST)

Jisung Park

Assistant Professor

Pohang University of Science and Technology (POSTECH)

Owens Walker

Assistant Professor

US Naval Academy

Reza Salkhordeh

Postdoctoral Researcher

Johannes Gutenberg University of Mainz

Arash Tavakkol

Principal Software Engineer

ApplyBoard Inc.

Workshop Program

Day 1, Thursday, 26 September 2024 (South Korea time)

Abstract: As memory technology matures, reliability becomes a critical concern, especially in the context of modern flash memory. While marching towards higher cell-bit density and advanced 3D architectures, flash memory faces major challenges with various types of errors. Addressing these errors efficiently and effectively is essential to maintaining data integrity. In this talk, I will explore error handling techniques in modern flash memory, focusing on different granularities of data such as bits, layers, and architecture-specific error domains. I will discuss efficient error correction using Low-Density Parity-Check (LDPC) codes, emphasizing strategies like optimal reference voltage placement, bit labeling, and multilevel soft sensing. Additionally, I will investigate how process variation in 3D flash architectures can be leveraged to accelerate LDPC decoding. Finally, I will cover parity protection mechanisms designed to protect against large extents of memory defects that go beyond the bit level.

Abstract: Storage optimization remains a pivotal concern in computer systems. In this talk, I will first summarize our work on optimizing storage systems for embedded and big data applications. Then, I will present our recent work on optimizing the B+-tree by leveraging emerging computational storage drives with built-in transparent compression. Specifically, I will introduce a technique, called per-page logging based B+-tree, that can fundamentally resolve the logging dilemma: B+-tree speed can be improved by equipping it with a larger log, which nevertheless degrades its crash-recovery speed. Our key idea is to divide the large single log into many small (e.g., 4KB), highly compressible per-page logs, each statically bound to a B+-tree page. All per-page logs together form a very large over-provisioned log space that improves the B+-tree's operational speed. Meanwhile, during crash recovery, the B+-tree does not need to scan any per-page logs, leading to a recovery latency independent of the total log size. We have developed and open-sourced a fully functional prototype. Our evaluation results show that our solution can significantly improve B+-tree operational throughput with minimal storage overhead, building on the transparent compression of computational storage drives.
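
To make the per-page logging idea above concrete, here is a minimal in-memory sketch in Python; the page structure, the 4KB log bound, and all names are illustrative assumptions rather than the speaker's actual implementation, and a real system would of course persist pages and logs on the storage device.

# Minimal sketch of per-page logging (illustrative assumptions only):
# each B+-tree page owns a small, statically bounded log of its recent
# updates, so there is no single large log for crash recovery to scan.

PAGE_LOG_CAPACITY = 4096  # e.g., a 4KB per-page log


class Page:
    def __init__(self, page_id):
        self.page_id = page_id
        self.image = {}        # the page's key -> value contents
        self.page_log = []     # updates not yet merged into the image
        self.log_bytes = 0

    def insert(self, key, value):
        # Append the update to this page's own log instead of rewriting the page.
        entry_size = len(str(key)) + len(str(value)) + 16  # rough estimate
        if self.log_bytes + entry_size > PAGE_LOG_CAPACITY:
            self.merge_log()
        self.page_log.append((key, value))
        self.log_bytes += entry_size

    def merge_log(self):
        # Fold the per-page log back into the page image and reset it.
        for key, value in self.page_log:
            self.image[key] = value
        self.page_log.clear()
        self.log_bytes = 0

    def lookup(self, key):
        # The most recent log entry wins over the page image.
        for k, v in reversed(self.page_log):
            if k == key:
                return v
        return self.image.get(key)

Because every log is small and bound to exactly one page, recovery only ever replays the log of the page being reopened, which is why the recovery latency stays independent of the total log size.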

Abstract: In this talk, I will make the case for how to achieve efficient, reliable storage on the emerging ZNS SSDs. ZNS defines a new abstraction for host software to flexibly manage storage in flash-based SSDs as append-only zones. It also provides a Zone Append primitive that further boosts the write performance of ZNS SSDs by exploiting intra-zone parallelism. However, making Zone Append effective for RAID storage across multiple ZNS SSDs is non-trivial, since Zone Append offloads address management to the ZNS SSDs and requires hosts to explicitly manage RAID stripes across multiple drives. We propose ZapRAID, a high-performance log-structured RAID system for ZNS SSDs that carefully exploits Zone Append to achieve high write parallelism and lightweight stripe management. ZapRAID adopts a group-based data layout with coarse-grained ordering across multiple groups of stripes, so that it can use small-size metadata for stripe management on a per-group basis under Zone Append. It further adopts hybrid data management to simultaneously achieve intra-zone and inter-zone parallelism through a careful combination of the Zone Append and Zone Write primitives. Our evaluation shows that ZapRAID achieves high performance across various operations. Finally, I will also discuss future research directions for applying ZNS SSDs to large-scale distributed storage.
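
As a rough illustration of the group-based bookkeeping described above, the following Python sketch computes XOR parity per stripe but records metadata only at group granularity; the stripe width, group size, and class names are assumptions for illustration, not ZapRAID's actual data layout.

# Toy sketch of group-granularity stripe management (illustrative only):
# parity is computed per stripe, but ordering and metadata are tracked per
# group, so per-stripe append offsets chosen by the device need not be stored.

STRIPE_WIDTH = 3         # data chunks per stripe
STRIPES_PER_GROUP = 64   # ordering is enforced only at group boundaries


def xor_parity(chunks):
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)


class GroupLayout:
    def __init__(self):
        self.group_id = 0
        self.stripes_in_group = 0
        self.per_group_meta = {}   # group id -> number of stripes in the group

    def write_stripe(self, chunks):
        assert len(chunks) == STRIPE_WIDTH
        parity = xor_parity(chunks)
        # A real system would append each chunk and the parity to zones on
        # different drives here; appends within a group may complete out of order.
        self.stripes_in_group += 1
        self.per_group_meta[self.group_id] = self.stripes_in_group
        if self.stripes_in_group == STRIPES_PER_GROUP:
            self.group_id += 1
            self.stripes_in_group = 0
        return self.group_id, parity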

Have a tea or coffee and then come back

Abstract: In this session, we'll explore the scalability issues in Solid-State Drives (SSDs), focusing on the challenges posed by the existing SSD micro-architecture. First, we'll briefly touch upon the fairness issue in modern multi-queue SSDs (MQ-SSDs). While new protocols like NVMe have improved performance by providing direct access to application-level I/O request queues, they've inadvertently introduced fairness problems among concurrently executing applications in modern, highly virtualized environments. We'll introduce FLIN (Flash-Level INterference-aware scheduler), a lightweight scheduling mechanism designed to restore fairness in MQ-SSDs without compromising performance. Next, we'll examine network-based SSD communication protocols, which replace the traditional multi-channel bus architecture with an interconnection network, enhancing scalability and performance while reducing signal integrity issues. Finally, we'll present a new mechanism designed to resolve path conflicts in network-based SSDs by employing path reservation techniques and adaptive routing algorithms, further improving parallelism and energy efficiency with minimal overhead.

Abstract: The exponential growth in data-intensive applications necessitates efficient data transfer and memory management strategies in modern GPU computing. This talk explores the potential of GPUDirect Storage (GDS) as a high-throughput solution for direct data transfers between storage and GPU memory, bypassing CPU involvement and reducing data movement overhead. We study the intricacies of memory management within the GPU memory hierarchy, highlighting the challenges and solutions for optimizing data access patterns. A central focus is placed on intelligent caching and data prefetching mechanisms that predict and pre-load data into the appropriate memory tiers, thereby minimizing latency and maximizing computational throughput. By integrating GDS with advanced memory management strategies, we demonstrate significant performance improvements for various data-intensive applications.

Day 2, Friday, 27 September 2024 (South Korea time)

Abstract: The cost and scalability of future systems are often limited by data movement. As a result, a cost-effective approach to interconnecting components is necessary to enable a scalable, high-performance system. This talk will rethink SSD microarchitecture by focusing on the challenges of communication within an SSD system. In particular, I will present NetworkSSD, where interconnection networks replace the traditional flash bus to enable more efficient connectivity between the flash modules. In addition, I will discuss how DecoupledSSD can leverage interconnection networks within the SSD controller to provide efficient communication between on-chip components.

Abstract: I/O-intensive applications are increasingly demanding higher I/O rates in enterprise environments. To meet the rising performance demand of such applications, ultra-fast SSDs are emerging in the industry. Unfortunately, traditional SAN architectures do not scale in performance by just using ultra-fast SSDs in the storage backend. In this talk, I will address major performance bottlenecks of traditional storage architectures and also offer novel techniques that can be used to build a scalable storage architecture for emerging all-flash storage systems.

Abstract: This talk discusses challenges in designing ultra large-capacity NAND flash-based SSDs and introduces two white-box optimization approaches to address them. NAND flash memory is the predominant technology for modern storage systems to meet the high-performance and large-capacity storage requirements of data-intensive applications. As a promising solution to reduce the total cost of ownership (TCO) of storage systems, there is increasing demand for ultra large-capacity SSDs that offer unprecedented single-device storage capacity (e.g., 128 TB). Even though some manufacturers have recently announced the successful development of such large-capacity SSDs, new technical challenges must be addressed to achieve high I/O performance and a long SSD lifetime, and these challenges primarily originate from the reliability issues of high-density NAND flash memory. In this talk, I will present two recent works that improve the performance and lifetime of high-density NAND flash-based SSDs, respectively, based on a deep understanding of the underlying hardware components. Such white-box optimization approaches can effectively overcome the limitations of conventional black-box approaches, unlocking the full potential of NAND flash memory. First, I will introduce RiF (Retry-in-Flash), an in-flash processing technique that identifies a read failure inside a NAND flash chip, avoiding the waste of SSD-internal bandwidth caused by read retries. Second, I will present AERO (Adaptive ERase Operation), which dynamically adjusts the erase latency to be just enough for reliable operation, thereby enhancing SSD lifetime by minimizing erase-induced cell stress.
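
The "just enough" erase-latency idea can be pictured with a short sketch: start from an optimistic erase latency and extend it only when an erase-verify fails, rather than always paying the worst-case latency. The timings, step size, and chip-interface callables below are assumptions for illustration and do not reflect the actual AERO mechanism.

# Conceptual sketch (illustrative assumptions only): adapt erase latency upward
# from an optimistic starting point instead of always using the worst case,
# so cells are stressed only as much as reliable operation requires.

BASE_ERASE_US = 2000   # assumed optimistic erase latency
STEP_US = 500          # assumed increment after a failed verify
MAX_ERASE_US = 5000    # assumed worst-case latency budget


def erase_block(block, apply_erase_pulse, verify_erased):
    # apply_erase_pulse(block, latency_us) and verify_erased(block) stand in
    # for chip-level commands; both are hypothetical callables.
    latency = BASE_ERASE_US
    while latency <= MAX_ERASE_US:
        apply_erase_pulse(block, latency)
        if verify_erased(block):
            return latency   # stop as soon as the block verifies as erased
        latency += STEP_US
    raise RuntimeError("block not erased within the latency budget")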

Have a tea or coffee and then come back

Abstract: ZNS SSDs are emerging storage devices that allow the host to fully control data placement and manage the SSD's internal state. They reduce SSD manufacturing cost and enable OS- and application-level optimizations. However, moving the FTL functionality into the OS is challenging and poses many obstacles. This talk will first present these challenges and the approaches academia is taking to mitigate them, then discuss open problems that can be targets of future research. It will also cover the areas in which ZNS SSDs can be beneficial and are expected to provide improvements. Overall, this talk familiarizes the audience with the state-of-the-art research on ZNS SSDs and provides a view of possible future research topics.

Abstract: Solid state drive (SSD) technologies continue to evolve. Phase change memory, as seen in Intel and Micron’s 3D XPoint, is a good example of an advancement in the field and was commercially available in Intel’s Optane-branded memory from 2017 to 2022. Like their NAND flash SSD counterparts, the firmware on an Optane SSD has the potential to make it difficult for the user to validate whether operations (e.g., read and write) are being performed as expected on the drive itself. Machine learning-based classification has proven to be a useful tool in validating embedded firmware operations and uncovering unanticipated behavior. In this work, we use power-based side-channel analysis to classify among four solid state drives from four different manufacturers employing phase change or 3D NAND memory technologies. We present sample waveforms in both the time domain and the frequency domain for these novel memory technologies and then use these to develop classifiers capable of classifying novel memory samples by operation (read vs. write) as well as by drive model and by drive technology. We achieve classification rates of 96.1% by operation, 98.3% by drive model, and 100% by technology employed. In addition, we demonstrate that the power-based side channel can be used to identify and investigate drive performance issues that impact read and write speeds.
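
The classification pipeline in this abstract (capture power traces, move to the frequency domain, train a supervised classifier) can be sketched as follows; the synthetic traces, the binned-spectrum features, and the random-forest classifier are generic assumptions, not the authors' actual setup.

# Generic sketch of power side-channel classification: turn each captured
# power trace into frequency-domain features and train a supervised classifier
# to label the operation. The synthetic data, feature choice, and classifier
# are illustrative assumptions, not the study's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def trace_features(trace, n_bins=64):
    # Magnitude spectrum of the trace, pooled into a fixed number of bins.
    spectrum = np.abs(np.fft.rfft(trace))
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])


# Stand-in data: random traces labeled 0 = read, 1 = write.
rng = np.random.default_rng(0)
traces = rng.normal(size=(200, 4096))
labels = rng.integers(0, 2, size=200)

X = np.stack([trace_features(t) for t in traces])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))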

Abstract: Log-structured systems are widely used in various applications because of their high write throughput. However, high garbage collection (GC) cost is widely regarded as the primary obstacle to their wider adoption. There have been numerous attempts to alleviate GC overhead, but with ad-hoc designs. This talk introduces MiDAS, which minimizes GC overhead in a systematic and analytic manner. It employs a chain-like structure of multiple groups, automatically segregating data blocks by age. It uses analytical models, the Update Interval Distribution (UID) and the Markov-Chain-based Analytical Model (MCAM), to dynamically adjust the number of groups as well as their sizes according to the workload I/O patterns, thereby minimizing the movement of data blocks. Furthermore, MiDAS isolates hot blocks in a dedicated HOT group, whose size is dynamically adjusted according to the workload to minimize the overall WAF. Our experiments using simulations and a proof-of-concept prototype for flash-based SSDs show that MiDAS outperforms state-of-the-art GC techniques, offering 25% lower WAF and 54% higher throughput while consuming less memory and fewer CPU cycles.
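
A toy sketch of the age-based group chain described above: new writes enter the first group, and blocks that survive garbage collection migrate to the next group, so each group naturally holds progressively older data. The fixed group count and capacity are simplifying assumptions; MiDAS sizes its groups analytically and additionally maintains a dedicated HOT group, which is omitted here.

# Toy sketch of age-based segregation with a chain of groups (illustrative
# assumptions only): overwrites invalidate old copies, and keys that survive
# a group's garbage collection age into the next group in the chain.

NUM_GROUPS = 4
GROUP_CAPACITY = 8   # logical blocks per group


class GroupChain:
    def __init__(self):
        self.groups = [[] for _ in range(NUM_GROUPS)]
        self.where = {}          # key -> group holding its latest copy

    def write(self, key):
        self.where[key] = 0      # a new write lands in the youngest group
        self._place(0, key)

    def _place(self, g, key):
        if len(self.groups[g]) >= GROUP_CAPACITY:
            self._collect(g)
        self.groups[g].append(key)

    def _collect(self, g):
        # Drop stale copies; keys still valid in this group age into the next.
        victims, self.groups[g] = self.groups[g], []
        valid = [k for k in dict.fromkeys(victims) if self.where.get(k) == g]
        if g + 1 < NUM_GROUPS:
            for key in valid:
                self.where[key] = g + 1
                self._place(g + 1, key)
        else:
            self.groups[g] = valid   # the oldest group recycles in place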

Workshop Registration

You can register for the workshop by clicking here.

Organizers

Jeong-A Lee

Professor

Chosun University

[email protected]

Hamid Sarbazi-Azad

Professor

Sharif University of Technology

azad@{sharif.edu, ipm.ir}

Contacts

For any inquiries or to get in touch with the organizers, please use the email addresses listed in the Organizers section above.

(C) HPCAN Lab., Sharif University of Technology.