NVMe IP Core for Gen4 Datasheet

Features

Applications

General Description

Functional Description

NVMe

·      NVMe Host Controller

·      Command Parameter

·      Data Buffer

·      NVMe Data Controller

PCIe

·      PCIe Controller

User Logic

Integrated Block for PCI Express

Core I/O Signals

Timing Diagram

Initialization

Control interface of dgIF typeS

Data interface of dgIF typeS

IdenCtrl/IdenName

Shutdown

SMART

Secure Erase

Flush

Error

Verification Methods

Recommended Design Experience

Ordering Information

Revision History

 

 

 

 

Core Facts

Provided with Core

Documentation: Reference Design Manual, Demo Instruction Manual

Design File Formats: Encrypted File

Instantiation Templates: VHDL

Reference Designs & Application Notes: Vivado Project, see Reference Design Manual

Additional Items: Demo on VCK190, Alveo U50

Support: Provided by Design Gateway Co., Ltd.

 

 

Design Gateway Co., Ltd.

E-mail:    ip-sales@design-gateway.com

URL:       design-gateway.com

Features

·     Direct NVMe Gen4 SSD access without the need for CPU or external memory

·     Two data buffer modes: High speed (1 MB RAM) or Small memory (256 KB RAM), implemented with UltraRAM (URAM)

·     Simple user interface by dgIF typeS

·     Supports seven commands: Identify, Shutdown, Write, Read, SMART, Secure Erase, and Flush

·     Supported NVMe devices

·     Base Class Code: 01h (mass storage), Sub Class Code: 08h (Non-volatile), Programming Interface: 02h (NVMHCI)

·     MPSMIN (Memory Page Size Minimum): 0 (4KB)

·     MDTS (Maximum Data Transfer Size): At least 5 (128 KB) or 0 (no limitation)

·     MQES (Maximum Queue Entries Support): At least 15

·     LBA unit: 512 bytes or 4 KB

·     User clock frequency: At least the PCIe clock frequency (250 MHz for Gen4)

·     PCIe Hard IP interface: 256-bit AXI4 interface, configured by 4-lane PCIe Gen4

·     Available reference design: 1-ch demo and 2-ch RAID0 demo on VCK190 with AB17-M2FMC and AB18-PCIeX16 adapter boards

·     Customized service for the following features

·     Additional NVMe commands

·     RAM type (BRAM) modification

 

 

Table 1: Example Implementation Statistics (Versal)

Family         | Example Device            | Buf Mode | Fmax (MHz) | CLB Regs | CLB LUTs | Slice(1) | IOB | BRAM Tile(1) | URAM | Design Tools
Versal AI Core | XCVC1902-VSVA2197-2MP-E-S | 1 MB     | 400        | 6497     | 4138     | 1173     | -   | 4            | 32   | Vivado 2022.1
Versal AI Core | XCVC1902-VSVA2197-2MP-E-S | 256 KB   | 400        | 6483     | 4150     | 1378     | -   | 4            | 8    | Vivado 2022.1

 

Table 2: Example Implementation Statistics (UltraScale+)

Family    | Example Device     | Buf Mode | Fmax (MHz) | CLB Regs | CLB LUTs | CLB(1) | IOB | BRAM Tile(1) | URAM | Design Tools
Alveo U50 | XCU50-FSVH2104-2-E | 1 MB     | 310        | 6750     | 4018     | 1127   | -   | 4            | 32   | Vivado 2022.1
Alveo U50 | XCU50-FSVH2104-2-E | 256 KB   | 400        | 6736     | 4082     | 1058   | -   | 4            | 8    | Vivado 2022.1

Note: (1) Actual logic resources depend on the percentage of unrelated logic.

 

Applications

 

Figure 1: NVMe IP Application

 

The NVMe IP Core for Gen4, integrated with the Integrated Block for PCI Express (PCIe hard IP) from Xilinx, provides an ideal solution for accessing an NVMe Gen4 SSD without the need for a CPU or external memory (DDR). With its included 1 MB/256 KB internal buffer, it suits applications that require vast storage capacity and high-speed performance. Each NVMe channel achieves a transfer performance of 7500 MB/s. To increase transfer performance further, a RAID0 design can be applied using multiple NVMe IPs and PCIe hard IPs, as shown in Figure 1. A four-channel RAID0 system built from four NVMe IPs and four NVMe SSDs boosts transfer speed up to four times (28 GB/s for writing and 30 GB/s for reading), which can accommodate high-speed data from radar systems or high-resolution video streams.

We also offer alternative IP cores for specific applications such as Multiple users, Random access, and PCIe switch.

Multiple User NVMe IP Core – Enables multiple users to access an NVMe SSD simultaneously for high-performance write and read operations.

https://dgway.com/muNVMe-IP_X_E.html

Random Access by Multiple User NVMe IP Core – Enables two users to write and read to the same NVMe SSD simultaneously, providing high random-access performance for applications with non-contiguous storage requirements.

https://dgway.com/rmNVMe-IP_X_E.html

NVMe IP Core for PCIe Switch – Accesses multiple NVMe SSDs via a PCIe switch to extend storage capacity, enabling high-speed write and read access to shared storage.

https://dgway.com/NVMe-IP_X_E.html

NVMe IP Core with PCIe Gen3/Gen4 Soft IP – Accesses the NVMe SSD through PCIe soft IP, bypassing the need for PCIe hard IP.

https://dgway.com/NVMeG4-IP_X_E.html

 

General Description

 

Figure 2: NVMe IP for Gen4 Block Diagram

 

NVMe IP for Gen4 is a complete host controller solution that enables access to an NVMe SSD using the NVM express standard. The physical interface for the NVMe SSD is PCIe, and the lower layer hardware is implemented using Integrated Block for PCI Express (PCIe hard IP) from Xilinx.

The NVMe IP core implements seven NVMe commands: Identify, Shutdown, Write, Read, SMART, Secure Erase, and Flush. It utilizes two user interface groups to transfer commands and data. The Control interface is used for transferring commands and their parameters, while the Data interface is used for transferring data when required by the command. For Write/Read commands, the Control and Data interfaces use dgIF typeS, which is our standard interface for storage access. The Control interface of dgIF typeS includes start address, transfer length, and request signals, and the Data interface uses a standard FIFO interface.

SMART, Secure Erase, and Flush are Custom commands that use the Ctm Cmd I/F for the control path and the Ctm RAM I/F for the data path. Meanwhile, the Identify command uses its own data interface (Iden RAM I/F) and the same Control interface as the Write and Read commands, as shown in Figure 2.

If abnormal conditions are detected during initialization or command operation, the NVMe IP asserts an error signal, and the error status can be read from the IP for more details. Once the error cause is resolved, both the NVMe IP and the SSD must be reset.

To ensure continuous packet transmission on the user interface of the PCIe hard IP, the user logic clock frequency must be equal to or greater than the PCIe clock frequency (250 MHz for Gen4). This allows data to be valid on every clock cycle between the start and the end of a frame, guaranteeing that the bandwidth of the user interface is equal to or greater than that of the PCIe hard IP.

Overall, the NVMe IP provides a comprehensive solution for accessing NVMe SSDs. The IP core comes with reference designs on FPGA evaluation boards, allowing users to evaluate the product's capabilities before making a purchase.

 

Functional Description

The NVMe IP operation is divided into three phases: IP initialization, Operating command, and Inactive status, as shown in Figure 3. Upon de-assertion of the IP reset, the initialization phase begins, and the user should execute the Identify command to check the device status and capacity. During the Operating command phase, the user can perform write and read operations and execute Custom commands such as SMART and Flush. Finally, before shutting down the system, it is recommended to execute the Shutdown command to ensure safe operation.

 

Figure 3: NVMe IP Operation Flow

 

 

The operation flow of the NVMe IP is described as follows; a minimal user-side VHDL sketch follows the list.

1)     The IP waits for PCIe to be ready by monitoring Linkup status from the PCIe IP core.

2)     The IP begins the initialization process by configuring PCIe and NVMe registers. Upon successful completion of the initialization, the IP transitions to the Idle state, where it awaits a new command request from the user. If any errors are detected during the initialization process, the IP switches to the Inactive state with UserError set to 1b.

3)     The first command from the user must be the Identify command (UserCmd=000b), which updates the LBASize (disk capacity) and LBAMode (LBA unit=512 bytes or 4 KB).

4)     The last command before powering down the system must be the Shutdown command (UserCmd=001b). This command is recommended to guarantee that the SSD is powered down in a proper sequence; without it, the integrity of data written to the SSD cannot be guaranteed. After the Shutdown command finishes, both the NVMe IP and the SSD change to the Inactive state, and no new command can be executed until the IP is reset.

5)     When executing a Write command (UserCmd=010b), the maximum data size for each command is limited to 128 KB. If the total data length from the user exceeds 128 KB, the IP automatically repeats the following steps, 5a) – 5b), until all data has been fully transferred.

a)     The IP waits until the write data, sent by the user, is sufficient for one command. The transfer size of each command in the NVMe IP is 128 KB, except for the last loop, which may be less than 128 KB.

b)     The IP sends the Write command to the SSD and then waits for the status response from the SSD. The IP returns to the Idle state only when all the data has been completely transferred; otherwise, it goes back to step 5a) to send the next Write command.

6)     Similar to the Write command, when executing a Read command (UserCmd=011b) with a transfer size exceeding 128 KB, the IP iterates through the following steps, 6a) – 6c).

a)     If the remaining transfer size is zero, the IP proceeds to step 6c). Otherwise, it waits until there is sufficient free space in the Data buffer of the NVMe IP for one command (either 128 KB or the remaining transfer size for the last loop).

b)     The IP sends the Read command to the SSD and then returns to step 6a).

c)     The IP waits until all the data has been completely transferred from the Data buffer to the user logic and then returns to the Idle state. Therefore, the Data buffer becomes empty after the Read command is completed.

7)     When executing a SMART command (UserCmd=100b and CtmSubmDW0-15=SMART), 512-byte data is returned upon operation completion.

a)     The IP sends the Get Log Page command to retrieve SMART/Health information from the SSD.

b)     The 512-byte data response is received from the SSD, and the IP forwards this data through the Custom command RAM interface (CtmRamAddr=0x00 – 0x0F).

8)     When executing a Secure Erase command (UserCmd=100b and CtmSubmDW0-15=Secure Erase), no data transfer occurs during the operation.

a)     The IP sends the Secure Erase command to the SSD.

b)     The IP waits until the SSD returns a status response to confirm the completion of the operation.

9)     When executing a Flush command (UserCmd=110b), no data transfer occurs during the operation.

a)     The IP sends the Flush command to the SSD.

b)     The IP waits until the SSD returns a status response to confirm the completion of the operation.
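As an illustration of this flow, below is a minimal user-side VHDL sketch that issues Identify as the first command, runs a Write command on request, and finishes with Shutdown. It is only a sketch against the dgIF typeS control signals; the trigger inputs (StartWrite, PwrDownReq) and the fixed address/length values are hypothetical.

library ieee;
use ieee.std_logic_1164.all;

entity nvme_user_ctrl is
  port (
    Clk, RstB  : in  std_logic;
    UserBusy   : in  std_logic;
    StartWrite : in  std_logic;                      -- hypothetical application trigger
    PwrDownReq : in  std_logic;                      -- hypothetical power-down request
    UserCmd    : out std_logic_vector(2 downto 0);
    UserAddr   : out std_logic_vector(47 downto 0);
    UserLen    : out std_logic_vector(47 downto 0);
    UserReq    : out std_logic
  );
end entity;

architecture rtl of nvme_user_ctrl is
  type state_t is (stInit, stReq, stWait, stIdle, stInactive);
  signal state : state_t;
  signal cmd_r : std_logic_vector(2 downto 0);
begin
  UserCmd <= cmd_r;

  process(Clk)
  begin
    if rising_edge(Clk) then
      if RstB = '0' then
        state <= stInit; UserReq <= '0'; cmd_r <= "000";
        UserAddr <= (others => '0'); UserLen <= (others => '0');
      else
        case state is
          when stInit =>                             -- wait for IP initialization to finish
            if UserBusy = '0' then
              cmd_r <= "000"; UserReq <= '1';        -- step 3): Identify must be first
              state <= stReq;
            end if;
          when stReq =>                              -- hold the request until the IP starts
            if UserBusy = '1' then
              UserReq <= '0';
              if cmd_r = "001" then state <= stInactive; else state <= stWait; end if;
            end if;
          when stWait =>                             -- wait for command completion
            if UserBusy = '0' then state <= stIdle; end if;
          when stIdle =>
            if PwrDownReq = '1' then
              cmd_r <= "001"; UserReq <= '1'; state <= stReq;  -- step 4): Shutdown last
            elsif StartWrite = '1' then
              cmd_r    <= "010";                     -- step 5): Write command
              UserAddr <= (others => '0');           -- example: start at address 0
              UserLen  <= x"000000000100";           -- example: 256 x 512 bytes = 128 KB
              UserReq  <= '1'; state <= stReq;
            end if;
          when stInactive => null;                   -- IP accepts no further commands
        end case;
      end if;
    end if;
  end process;
end architecture;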

 

To implement an NVMe host controller, the NVMe IP uses two protocols: NVMe and PCIe. The NVMe protocol is used to interface with the user, while the PCIe protocol is used to interface with the PCIe hard IP. Figure 2 shows the hardware inside the NVMe IP, which is split into two groups: NVMe and PCIe.

 

NVMe

The NVMe group supports seven commands, which are split into two categories - Admin commands and NVM commands. Admin commands include Identify, Shutdown, SMART, and Secure Erase, while NVM commands include Write, Read, and Flush. After executing a command, the status returned from the SSD is latched either to AdmCompStatus (for status returned from Admin commands) or IOCompStatus (for status returned from NVM commands), depending on the command type.

The parameters of Write or Read command are configured through the Control interface of dgIF typeS, while the parameters of SMART, Secure Erase, or Flush command are set by CtmSubmDW0-15 of the Ctm Cmd interface. The Data interface for Write or Read command is transferred using the FIFO interface, a part of dgIF typeS. The data for Write and Read commands is stored in the IP’s Data buffer. For other command types, the Data interface utilizes distinct interfaces - Identify I/F for the Identify command and Custom RAM I/F for the SMART command.

Further details of each submodule are described as follows.

·       NVMe Host Controller

The NVMe host controller serves as the core controller within the NVMe IP. It operates in two phases: the initialization phase and the command operation phase. The initialization phase runs once when the system boots up to configure the NVMe registers within the SSD. Once the initialization phase is completed, the controller enters the command operation phase, during which it controls the sequence of transmitted and received packets for each command.

To initiate the execution of each command, the command parameters are stored in the Command Parameter, facilitating packet creation. Subsequently, the packet is forwarded to the AsyncCtrl for converting NVMe packets into PCIe packets. After each command operation is executed, a status packet is received from the SSD. The controller decodes the status value, verifying whether the operation was completed successfully or an error occurred. In cases where the command involves data transfer, such as Write or Read command, the controller must handle the order of data packets, which are created and decoded by the NVMe Data controller.

·       Command Parameter

The Command Parameter module creates the command packets sent to the SSD and decodes the status packets returned from the SSD. The input and output of this module are controlled by the NVMe host controller. Typically, a command consists of 16 Dwords (1 Dword = 32 bits). When executing Identify, Shutdown, Write, and Read commands, all 16 Dwords are created by the Command Parameter module, initialized from the user inputs on dgIF typeS. When executing SMART, Secure Erase, and Flush commands, all 16 Dwords are loaded directly from CtmSubmDW0-CtmSubmDW15 of the Ctm Cmd interface.

·       Data Buffer

Two data buffer modes are supported: High speed mode, which uses 1 MB RAM, and Small memory mode, which uses 256 KB RAM. The RAM is implemented using UltraRAM. The buffer stores data transferred between the user logic and the SSD while operating Write and Read commands.

·       NVMe Data Controller

The NVMe data controller module is used when a command transfers data, such as the Identify, SMART, Write, and Read commands. This module manages three data interfaces for transferring data with the SSD, listed below.

1)     The FIFO interface is used with the Data buffer during the execution of Write or Read commands.

2)     The Custom RAM interface is used when executing SMART command.

3)     The Identify interface is used when executing Identify command.

The NVMe data controller is responsible for creating and decoding data packets. Similar to the Command Parameter module, the input and output signals of the NVMe data controller are controlled by the NVMe host controller.

 

PCIe

PCIe is a prominent low-layer protocol for high-speed applications, and the NVMe protocol runs over it. Therefore, the NVMe layer can operate only after the PCIe layer completes its initialization. Two modules are designed to support the PCIe protocol: the PCIe controller and AsyncCtrl. Additional details of these modules are provided below.

·       PCIe Controller

During the initialization process, the PCIe controller sets up the PCIe environment of the SSD via the CFG interface. Afterward, PCIe packets are created and decoded through the 256-bit Tx/Rx AXI4-Stream interfaces. The PCIe controller converts command packets and data packets from the NVMe module into PCIe packets, and vice versa.

·       AsyncCtrl

AsyncCtrl incorporates asynchronous registers and buffers designed to facilitate clock domain crossing. The user clock frequency must match or exceed the PCIe clock frequency to ensure sufficient bandwidth for continuous packet data transmission. The majority of the logic within the NVMe IP operates in the user clock domain, while the PCIe hard IP operates in the PCIe clock domain.

 

User Logic

The user logic can be implemented using a small state machine responsible for sending commands along with their corresponding parameters. For instance, simple registers are used to specify parameters for Write or Read command, such as address and transfer size. Two separate FIFOs are connected to manage data transfer for Write and Read commands independently.

When executing the SMART and Identify commands, each data output interface connects to a simple dual port RAM with byte enable capability. Both the FIFO and RAM have a data width of 256 bits, while their memory depth can be configured to different values. Specifically, the data size for the Identify command is 8 KB, while for the SMART command, it is 512 bytes.
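As an example of such a buffer, the sketch below shows a simple dual-port RAM with per-Dword write enables, sized for the 8 KB Identify data (256 words of 256 bits). It is an illustration only, not part of the IP deliverables; the entity and port names are arbitrary, and the intended IdenWr* connections are noted in comments.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity iden_ram is
  port (
    Clk    : in  std_logic;
    WrEn   : in  std_logic;                       -- connect to IdenWrEn
    WrDWEn : in  std_logic_vector(7 downto 0);    -- connect to IdenWrDWEn
    WrAddr : in  std_logic_vector(7 downto 0);    -- connect to IdenWrAddr
    WrData : in  std_logic_vector(255 downto 0);  -- connect to IdenWrData
    RdAddr : in  std_logic_vector(7 downto 0);
    RdData : out std_logic_vector(255 downto 0)
  );
end entity;

architecture rtl of iden_ram is
  type ram_t is array (0 to 255) of std_logic_vector(255 downto 0);
  signal ram : ram_t;
begin
  process(Clk)
  begin
    if rising_edge(Clk) then
      if WrEn = '1' then
        for i in 0 to 7 loop                      -- one enable per 32-bit Dword lane
          if WrDWEn(i) = '1' then
            ram(to_integer(unsigned(WrAddr)))(32*i+31 downto 32*i)
              <= WrData(32*i+31 downto 32*i);
          end if;
        end loop;
      end if;
      RdData <= ram(to_integer(unsigned(RdAddr)));  -- registered read port
    end if;
  end process;
end architecture;

The same structure, reduced to 16 words, would fit the 512-byte SMART data on the CtmRam interface.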

 

Integrated Block for PCI Express

Certain UltraScale+ and Versal devices include Integrated Blocks for PCI Express (PCIe hard IP), known as PCIe4C and PL PCIE4, respectively. These blocks are designed to support the PCIe Gen4 protocol. Configured with a 256-bit data interface, both PCIe4C and PL PCIE4 can operate as 4-lane PCIe Gen4 connections for interfacing with the NVMe IP.

Each NVMe IP connects to a single PCIe hard IP, controlling an NVMe Gen4 SSD. Therefore, the maximum number of SSDs connectable to an FPGA device is limited by the count of PCIe hard IPs within the FPGA.

The process for generating the PCIe hard IP with the Xilinx tool differs between UltraScale+ and Versal devices. For UltraScale+ devices, a single IP wizard generates the PCIe hard IP integrated with the transceiver. For Versal devices, two separate IP wizards are required: one for the PCIe hard IP and another for the transceiver.

Further information on the PCIe hard IP is available at the following links.

PG213: UltraScale+ Devices Integrated Block for PCI Express

https://www.xilinx.com/products/intellectual-property/pcie4-ultrascale-plus.html#documentation

PG343: Versal ACAP Integrated Block for PCI Express

https://www.xilinx.com/products/intellectual-property/pcie-versal.html#documentation

 

 

Figure 4: PCIe4C Integrated Block for PCI Express

 

Core I/O Signals

Table 3 provides detailed descriptions of configurable parameters, while Table 4 and Table 5 outline the I/O signals for NVMe IP.

 

Table 3: Core Parameters

Name    | Value  | Description
BufMode | 0 or 1 | Data buffer mode. 1: High speed mode using a 1 MB buffer. 0: Small memory mode using a 256 KB buffer.

 

Table 4: User Logic I/O Signals (Synchronous to Clk signal)

Control I/F of dgIF typeS

RstB (In) – Synchronous reset. Active low. It should be de-asserted to 1b when the Clk signal is stable.

Clk (In) – User clock for running the NVMe IP. The frequency of this clock must be equal to or greater than the PCIeClk frequency output from the PCIe hard IP (250 MHz for PCIe Gen4).

UserCmd[2:0] (In) – User command. Valid when UserReq=1b. The possible values are 000b: Identify, 001b: Shutdown, 010b: Write SSD, 011b: Read SSD, 100b: SMART/Secure Erase, 110b: Flush, 101b/111b: Reserved.

UserAddr[47:0] (In) – Start address for writing/reading the SSD in 512-byte units. Valid when UserReq=1b. If the LBA unit is 4 KB, UserAddr[2:0] must always be set to 000b to align to the 4 KB unit. If the LBA unit is 512 bytes, it is still recommended to set UserAddr[2:0]=000b to align with the 4 KB SSD page size; 4 KB misalignment reduces write/read performance on most SSDs.

UserLen[47:0] (In) – Total transfer size for writing/reading the SSD in 512-byte units. Valid range is 1 to (LBASize-UserAddr). If the LBA unit is 4 KB, UserLen[2:0] must always be set to 000b to align to the 4 KB unit. Valid when UserReq=1b.

UserReq (In) – Set to 1b to initiate a new command request, and reset to 0b after the IP starts the operation, signaled by UserBusy being set to 1b. This signal can only be asserted when the IP is in the Idle state (UserBusy=0b). Command parameters, including UserCmd, UserAddr, UserLen, and CtmSubmDW0-DW15, must be valid and stable while UserReq=1b. UserAddr and UserLen are inputs for Write/Read commands, while CtmSubmDW0-DW15 are inputs for SMART, Secure Erase, or Flush commands.

UserBusy (Out) – Set to 1b when the IP is busy. A new request must not be sent (UserReq=1b) while the IP is busy.

LBASize[47:0] (Out) – Total capacity of the SSD in 512-byte units. Default value is 0. This value is valid after the Identify command finishes.

LBAMode (Out) – LBA unit size of the SSD (0b: 512 bytes, 1b: 4 KB). Default value is 0b. This value is valid after the Identify command finishes.

UserError (Out) – Error flag. Asserted to 1b when UserErrorType is not equal to 0. The flag is cleared to 0b by asserting RstB to 0b.

UserErrorType[31:0] (Out) – Error status.

[0] – PCIe class code is incorrect.

[1] – Error from the Controller Capabilities (CAP) register, which can occur for several reasons:
- Memory Page Size Minimum (MPSMIN) is not equal to 0.
- NVM command set flag (bit 37 of the CAP register) is not set to 1.
- Doorbell Stride (DSTRD) is not equal to 0.
- Maximum Queue Entries Supported (MQES) is less than 15.
More details of each field can be found in the NVMeCAPReg signal.

[2] – The Admin completion entry is not received within the specified timeout.

[3] – The status register in the Admin completion entry is not 0, or the phase tag/command ID is invalid. More details can be found in the AdmCompStatus signal.

[4] – The IO completion entry is not received within the specified timeout.

[5] – The status register in the IO completion entry is not 0, or the phase tag is invalid. More details can be found in the IOCompStatus signal.

[6] – Unsupported LBA unit (not equal to 512 bytes or 4 KB).

[7] – Reserved.

[8] – The received TLP packet size is incorrect.

[9] – The PCIe hard IP detects an error correction code (ECC) error in its internal buffer.

Bits [15:10] are mapped to the Uncorrectable Error Status Register:
[10] – Unsupported Request Error Status (bit[20]).
[11] – Completer Abort Status (bit[15]).
[12] – Unexpected Completion Status (bit[16]).
[13] – Completion Timeout Status (bit[14]).
[14] – Poisoned TLP Received Status (bit[12]).
[15] – ECRC Error Status (bit[19]).

[23:16] – Reserved.

Bits [30:24] are also mapped to the Uncorrectable Error Status Register:
[24] – Data Link Protocol Error Status (bit[4]).
[25] – Surprise Down Error Status (bit[5]).
[26] – Receiver Overflow Status (bit[17]).
[27] – Flow Control Protocol Error Status (bit[13]).
[28] – Uncorrectable Internal Error Status (bit[22]).
[29] – Malformed TLP Status (bit[18]).
[30] – ACS Violation Status (bit[21]).

[31] – Reserved.

Note: The timeout period for bits [2]/[4] is set by the TimeOutSet input.

Data I/F of dgIF typeS

UserFifoWrCnt[15:0] (In) – Write data counter of the Receive FIFO, used to monitor the FIFO full status. When the FIFO becomes full, data transmission of the Read command temporarily halts. If the FIFO data counter is narrower than 16 bits, the upper bits must be padded with 1b to complete the 16-bit count.

UserFifoWrEn (Out) – Asserted to 1b to write data to the Receive FIFO when executing the Read command.

UserFifoWrData[255:0] (Out) – Write data bus of the Receive FIFO. Valid when UserFifoWrEn=1b.

UserFifoRdCnt[15:0] (In) – Read data counter of the Transmit FIFO, used to monitor the amount of data stored in the FIFO. If the counter indicates an empty status, transmission of data packets for the Write command temporarily pauses. If the FIFO data counter is narrower than 16 bits, the upper bits must be padded with 0b to complete the 16-bit count.

UserFifoEmpty (In) – Unused by this IP.

UserFifoRdEn (Out) – Asserted to 1b to read data from the Transmit FIFO when executing the Write command.

UserFifoRdData[255:0] (In) – Read data returned from the Transmit FIFO. Valid in the next clock cycle after UserFifoRdEn is asserted to 1b.

NVMe IP Interface

IPVersion[31:0] (Out) – IP version number.

TestPin[31:0] (Out) – Reserved as an IP test point.

TimeOutSet[31:0] (In) – Timeout value for waiting for a completion from the SSD. The time unit is 1/(Clk frequency). When TimeOutSet is equal to 0, the timeout function is disabled.
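For example, with Clk running at 250 MHz, a one-second timeout corresponds to TimeOutSet = 250,000,000 (0x0EE6_B280).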

AdmCompStatus[15:0] (Out) – Status output from the Admin completion entry. [0] – Set to 1b when the phase tag or command ID in the Admin completion entry is invalid. [15:1] – Status field value of the Admin completion entry.

IOCompStatus[15:0] (Out) – Status output from the IO completion entry. [0] – Set to 1b when the phase tag in the IO completion entry is invalid. [15:1] – Status field value of the IO completion entry.

NVMeCAPReg[31:0] (Out) – Value of the NVMe Controller Capabilities register, valid when UserErrorType[1] is asserted to 1b. [15:0] – Maximum Queue Entries Supported (MQES). [19:16] – Doorbell Stride (DSTRD). [20] – NVM command set flag. [24:21] – Memory Page Size Minimum (MPSMIN). [31:25] – Undefined.

Identify Interface

IdenWrEn (Out) – Asserted to 1b when sending data output from the Identify command.

IdenWrDWEn[7:0] (Out) – Dword (32-bit) enable of IdenWrData. Valid when IdenWrEn=1b. 1b: the Dword is valid; 0b: the Dword is not available. Bits [0], [1], …, [7] correspond to IdenWrData[31:0], [63:32], …, [255:224], respectively.

IdenWrAddr[7:0] (Out) – Index of IdenWrData in 256-bit units. Valid when IdenWrEn=1b. 0x00-0x7F: 4 KB Identify controller data; 0x80-0xFF: 4 KB Identify namespace data.

IdenWrData[255:0] (Out) – Identify controller data or Identify namespace data. Valid when IdenWrEn=1b.

Custom Interface (Command and RAM)

CtmSubmDW0[31:0] – CtmSubmDW15[31:0] (In) – 16 Dwords of the Submission queue entry for the SMART, Secure Erase, or Flush command. DW0: Command Dword0, DW1: Command Dword1, …, DW15: Command Dword15. These inputs must be valid and stable when UserReq=1b and UserCmd=100b (SMART/Secure Erase) or 110b (Flush).

CtmCompDW0[31:0] – CtmCompDW3[31:0] (Out) – 4 Dwords of the Completion queue entry, output from the SMART, Secure Erase, or Flush command. DW0: Completion Dword0, DW1: Completion Dword1, …, DW3: Completion Dword3.

CtmRamWrEn (Out) – Asserted to 1b when sending data output from a Custom command such as the SMART command.

CtmRamWrDWEn[7:0] (Out) – Dword (32-bit) enable of CtmRamWrData. Valid when CtmRamWrEn=1b. 1b: the Dword is valid; 0b: the Dword is not available. Bits [0], [1], …, [7] correspond to CtmRamWrData[31:0], [63:32], …, [255:224], respectively.

CtmRamAddr[7:0] (Out) – Index of CtmRamWrData when SMART data is received. Valid when CtmRamWrEn=1b. (Optional) Index for requesting data input through CtmRamRdData for customized Custom commands.

CtmRamWrData[255:0] (Out) – 512-byte data output from the SMART command. Valid when CtmRamWrEn=1b.

CtmRamRdData[255:0] (In) – (Optional) Data input for customized Custom commands.

 

Table 5: Physical I/O Signals for PCIe4C/PL PCIE4 (Synchronous to PCIeClk)

PCIe System Signals

PCIeRstB (In) – Synchronous reset. Active low. De-asserted to 1b when the PCIe hard IP is not in the reset state.

PCIeClk (In) – Clock output from the PCIe hard IP (250 MHz for PCIe Gen4).

PCIeLinkup (In) – Set to 1b when the LTSSM state of the PCIe hard IP is the L0 state.

Configuration Management Interface

PCIeCfgDone (In) – Read/write operation complete. Asserted for one cycle when the operation completes.

PCIeCfgRdEn (Out) – Read enable. Asserted to 1b for a read operation.

PCIeCfgRdData[31:0] (In) – Read data. Valid when PCIeCfgDone is asserted to 1b.

PCIeCfgWrEn (Out) – Write enable. Asserted to 1b for a write operation.

PCIeCfgWrData[31:0] (Out) – Write data used to configure the Configuration and Management registers.

PCIeCfgByteEn[3:0] (Out) – Byte enable for write data; bits [0], [1], [2], and [3] correspond to PCIeCfgWrData[7:0], [15:8], [23:16], and [31:24], respectively.

PCIeCfgAddr[9:0] (Out) – Read/write address.

Requester Request Interface

PCIeMtTxData[255:0] (Out) – Requester request data bus.

PCIeMtTxKeep[7:0] (Out) – Bit[i] indicates that Dword[i] of PCIeMtTxData contains valid data.

PCIeMtTxLast (Out) – Asserted in the last cycle of a TLP to indicate the end of the packet.

PCIeMtTxReady[3:0] (In) – Asserted by the PCIe hard IP to accept data. Data is transferred when both PCIeMtTxValid and PCIeMtTxReady are asserted in the same cycle.

PCIeMtTxUser[61:0] (Out) – Requester request user data. Valid when PCIeMtTxValid is high.

PCIeMtTxValid (Out) – Asserted to drive valid data on the PCIeMtTxData bus. The NVMe IP keeps the valid signal asserted during the transfer of a packet.

Completer Request Interface

PCIeMtRxData[255:0] (In) – Received data from the PCIe hard IP.

PCIeMtRxKeep[7:0] (In) – Bit[i] indicates that Dword[i] of PCIeMtRxData contains valid data.

PCIeMtRxLast (In) – Asserted in the last beat of a packet to indicate the end of the packet.

PCIeMtRxReady (Out) – Indicates that the NVMe IP is ready to accept data.

PCIeMtRxUser[74:0] (In) – Sideband information for the TLP being transferred. Valid when PCIeMtRxValid is high.

PCIeMtRxValid (In) – Asserted when the PCIe hard IP drives valid data on the PCIeMtRxData bus. The PCIe hard IP keeps the valid signal asserted during the transfer of a packet.

Completer Completion Interface

PCIeSlTxData[255:0] (Out) – Completion data from the NVMe IP.

PCIeSlTxKeep[7:0] (Out) – Bit[i] indicates that Dword[i] of PCIeSlTxData contains valid data.

PCIeSlTxLast (Out) – Asserted in the last cycle of a packet to indicate the end of the packet.

PCIeSlTxReady[3:0] (In) – Indicates that the PCIe hard IP is ready to accept data.

PCIeSlTxUser[32:0] (Out) – Sideband information for the TLP being transferred. Valid when PCIeSlTxValid is high.

PCIeSlTxValid (Out) – Asserted to drive valid data on the PCIeSlTxData bus. The NVMe IP keeps the valid signal asserted during the transfer of a packet.

Requester Completion Interface

PCIeSlRxData[255:0] (In) – Received data from the PCIe hard IP.

PCIeSlRxKeep[7:0] (In) – Bit[i] indicates that Dword[i] of PCIeSlRxData contains valid data.

PCIeSlRxLast (In) – Asserted in the last beat of a packet to indicate the end of the packet.

PCIeSlRxReady (Out) – Indicates that the NVMe IP is ready to accept data.

PCIeSlRxUser[87:0] (In) – Sideband information for the TLP being transferred. Valid when PCIeSlRxValid is high.

PCIeSlRxValid (In) – Asserted when the PCIe hard IP drives valid data on the PCIeSlRxData bus. The PCIe hard IP keeps the valid signal asserted during the transfer of a packet.

 

 

Timing Diagram

 

Initialization

 

 

Figure 5: Timing diagram during initialization process

 

The initialization process of the NVMe IP follows the steps below, as shown in the timing diagram.

1)     De-assert RstB and PCIeRstB.

a)     De-assert RstB to 1b once the Clk signal is stable.

b)     De-assert PCIeRstB to 1b upon the completion of the PCIe reset sequence. Following this, the PCIe hard IP is ready to transfer data with the application layer.

2)     Once the LTSSM state of the PCIe hard IP reaches the L0 state, PCIeLinkup is asserted to 1b. After this, the NVMe IP initiates its own initialization process.

3)     Upon completion of the NVMe IP initialization process, the NVMe IP de-asserts UserBusy to 0b.

After completing all of the above steps, the NVMe IP is ready to receive commands from the user. A minimal VHDL sketch of this reset and link-up wiring is shown below.
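The sketch below illustrates the wiring described above. The names user_clk, user_reset, and user_lnk_up follow the Xilinx PCIe example design and are assumptions; check the IP wizard output for the exact names in your configuration.

library ieee;
use ieee.std_logic_1164.all;

entity nvme_rst_wiring is
  port (
    ext_rst_n   : in  std_logic;   -- board-level reset (assumed asynchronous)
    Clk         : in  std_logic;   -- user clock, >= 250 MHz
    user_clk    : in  std_logic;   -- from PCIe hard IP
    user_reset  : in  std_logic;   -- from PCIe hard IP
    user_lnk_up : in  std_logic;   -- from PCIe hard IP
    RstB        : out std_logic;
    PCIeClk     : out std_logic;
    PCIeRstB    : out std_logic;
    PCIeLinkup  : out std_logic
  );
end entity;

architecture rtl of nvme_rst_wiring is
  signal rst_sync : std_logic_vector(1 downto 0) := "00";
begin
  PCIeClk    <= user_clk;
  PCIeRstB   <= not user_reset;    -- step 1b): de-asserted after the PCIe reset sequence
  PCIeLinkup <= user_lnk_up;       -- step 2): asserted once the LTSSM reaches L0

  process(Clk)                     -- step 1a): release RstB synchronously to Clk
  begin
    if rising_edge(Clk) then
      rst_sync <= rst_sync(0) & ext_rst_n;   -- 2-flop synchronizer
    end if;
  end process;
  RstB <= rst_sync(1);
end architecture;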

 

Control interface of dgIF typeS

The dgIF typeS signals can be split into two groups: the Control interface for sending commands and monitoring status, and the Data interface for transferring data streams in both directions.

Figure 6 shows an example of how to send a new command to the IP via the Control interface of dgIF typeS.

 

Figure 6: Control Interface of dgIF typeS timing diagram

 

1)     UserBusy must be equal to 0b before sending a new command request to confirm that the IP is Idle.

2)     Command and its parameters such as UserCmd, UserAddr, and UserLen must be valid when asserting UserReq to 1b to send the new command request.

3)     IP asserts UserBusy to 1b after starting the new command operation.

4)     After UserBusy is asserted to 1b, UserReq is de-asserted to 0b to finish the current request. New parameters for the next command could be prepared on the bus. UserReq for the new command must not be asserted to 1b until the current command operation is finished.

5)     UserBusy is de-asserted to 0b after the command operation is completed. Next, a new command request can be initiated by asserting UserReq to 1b.

Note: The number of parameters used in each command is different. More details are described below.

 

Data interface of dgIF typeS

Data interface of dgIF typeS is applied for transferring data stream when operating Write or Read command, and it is compatible with a general FIFO interface. Figure 7 shows the data interface of dgIF typeS when transferring Write data to the IP in the Write command.

 

Figure 7: Transmit FIFO Interface for Write command

 

The 16-bit FIFO read data counter (UserFifoRdCnt) indicates the amount of data stored in the Transmit FIFO. When sufficient data is available, 512 bytes (16 x 256-bit words) are transferred per burst.

In the Write command, data is read from the Transmit FIFO until all data has been transferred completely. The details of the data transfer are described as follows.

1)     Before starting a new burst transfer, the IP waits until at least 512 bytes of data are available in the Transmit FIFO by monitoring UserFifoRdCnt[15:4], which must not be equal to 0.

2)     The IP asserts UserFifoRdEn to 1b for 16 clock cycles to read 512 bytes of data from the Transmit FIFO.

3)     UserFifoRdData is valid in the next clock cycle after UserFifoRdEn is asserted to 1b, and 16 words of data are transferred continuously.

4)     After the 16th word (D15) is read, UserFifoRdEn is de-asserted to 0b.

5)     Steps 1) – 4) are repeated to transfer the next 512 bytes of data until the total data size equals the transfer size specified in the command.

6)     After all data has been transferred, UserBusy is de-asserted to 0b.

 

 

Figure 8: Receive FIFO Interface for Read command

 

When executing the Read command, data is transferred from the SSD to the Receive FIFO until all data has been transferred. The steps for transferring a burst of data are below, followed by a sketch of the FIFO count padding.

1)     Before starting a new burst transmission, UserFifoWrCnt[15:5] is checked to verify that there is enough free space in the Receive FIFO, indicated by UserFifoWrCnt[15:5] not being all 1s (2047). Also, the IP waits until the amount of data received from the SSD reaches at least 512 bytes. Once both conditions are satisfied, the new burst transmission begins.

2)     The IP asserts UserFifoWrEn to 1b for 16 clock cycles to transfer 512 bytes of data from the Data buffer to the user logic.

3)     Once the transfer of the 512-byte data is completed, UserFifoWrEn is de-asserted to 0b for one clock cycle. If additional data remains to be transferred, steps 1) – 3) are repeated until the total data size matches the transfer size specified in the command.

4)     After all data has been transferred, UserBusy is de-asserted to 0b.
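The count padding described in Table 4 reduces to simple concatenation. The sketch below assumes, purely for illustration, FIFOs whose data counts are 12 bits wide; the signal names are arbitrary.

-- Pad the Transmit FIFO read count with 0b and the Receive FIFO write
-- count with 1b in the upper bits, forming the 16-bit counts expected
-- by the IP. The 12-bit counter widths are an assumption.
UserFifoRdCnt <= "0000" & tx_fifo_rd_count;  -- tx_fifo_rd_count : std_logic_vector(11 downto 0)
UserFifoWrCnt <= "1111" & rx_fifo_wr_count;  -- rx_fifo_wr_count : std_logic_vector(11 downto 0)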

 

IdenCtrl/IdenName

To ensure proper operation of the system, it is recommended to send the Identify command to the IP as the first command after the system boots up. This command updates important information about the SSD, such as its total capacity (LBASize) and LBA unit size (LBAMode), which the Write and Read commands need in order to operate correctly. The following rules apply to the input parameters of those commands; a one-line VHDL check follows the list.

1)     The sum of the address (UserAddr) and transfer length (UserLen), the inputs of the Write and Read commands, must not exceed the total capacity (LBASize) of the SSD.

2)     If LBAMode is 1b (LBA unit size is 4 KB), the three lower bits (bits[2:0]) of UserAddr and UserLen must be set to 000b to align to the 4 KB unit.
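Rule 2) can be checked with a single concurrent statement; the flag name ParamAlignOk is hypothetical.

-- '1' when the 4 KB alignment rule is met (always met for 512-byte LBA units)
ParamAlignOk <= '1' when (LBAMode = '0') or
                         (UserAddr(2 downto 0) = "000" and UserLen(2 downto 0) = "000")
                else '0';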

 

 

Figure 9: Identify command timing diagram

 

When executing the Identify command, the following steps are taken.

1)     Send the Identify command to the IP (UserCmd=000b and UserReq=1b).

2)     The IP asserts UserBusy to 1b after receiving the Identify command.

3)     The IP returns the 4 KB Identify controller data to the user with IdenWrAddr equal to 0-127 and asserts IdenWrEn. IdenWrData and IdenWrDWEn are valid in the same clock cycle as IdenWrEn=1b.

4)     The IP returns 4KB Identify namespace data to the user with IdenWrAddr equal to 128-255. IdenWrAddr[7] can be used to determine the data type as Identify controller data or Identify namespace data.

5)     UserBusy is de-asserted to 0b after finishing the Identify command.

6)     The LBASize and LBAMode of the SSD are simultaneously updated with the values obtained from the Identify command.

 

 

Figure 10: IdenWrDWEn timing diagram

 

IdenWrDWEn is an 8-bit signal used to validate each 32-bit Dword of IdenWrData. Some SSDs return the 4 KB Identify controller data and Identify namespace data one Dword (32 bits) at a time instead of continuously. When a single 32-bit Dword is forwarded, only one bit of IdenWrDWEn is asserted to 1b in the write cycle, as illustrated in Figure 10. Each bit of IdenWrDWEn (IdenWrDWEn[0], [1], …, [7]) corresponds to 32 bits of IdenWrData (IdenWrData[31:0], [63:32], …, [255:224]).

 

Shutdown

The Shutdown command should be sent as the last command before the system is powered down. The SSD ensures that the data in its internal cache is written to the flash memory before the shutdown process finishes. After the shutdown operation is completed, the NVMe IP and the SSD enter the Inactive state. If the SSD is powered down without executing the Shutdown command, the total count of unsafe shutdowns, as reported by the SMART command, is increased.

 

 

Figure 11: Shutdown command timing diagram

 

The process for executing the Shutdown command is described below.

1)     Ensure that the IP is in an Idle state (UserBusy=0b) before sending the Shutdown command. The user must set UserReq=1b and UserCmd=001b to request the Shutdown command.

2)     Once the NVMe IP runs the Shutdown command, UserBusy is asserted to 1b.

3)     To clear the current request, UserReq is de-asserted to 0b after UserBusy is asserted to 1b.

4)     UserBusy is de-asserted to 0b when the SSD is completely shut down. After the shutdown process is completed, the IP will not receive any further user commands.

 

SMART

The SMART command checks the health of the SSD. When this command is sent, the SSD returns 512 bytes of health information. The SMART command parameters are loaded from the CtmSubmDW0-DW15 signals of the Custom command interface. The user must set these 16 Dwords, which are constant values, before asserting UserReq. Once the SMART data is returned, it can be accessed via the CtmRAM port, as shown in Figure 12.

 

 

Figure 12: SMART command timing diagram

 

Below are the details of how to run the SMART command.

1)     The NVMe IP must be in the Idle state (UserBusy=0b) before sending the command request. All input parameters must be stable while UserReq is asserted to 1b to send the request. CtmSubmDW0-DW15 are set to the following constant values for the SMART command (also shown as VHDL constants after the steps).

CtmSubmDW0 = 0x0000_0002
CtmSubmDW1 = 0xFFFF_FFFF
CtmSubmDW2 – CtmSubmDW5 = 0x0000_0000
CtmSubmDW6 = 0x2000_0000
CtmSubmDW7 – CtmSubmDW9 = 0x0000_0000
CtmSubmDW10 = 0x007F_0002
CtmSubmDW11 – CtmSubmDW15 = 0x0000_0000

2)     UserBusy is asserted to 1b after the NVMe IP executes the SMART command.

3)     UserReq is de-asserted to 0b to clear the current request. Next, user logic can change the input parameters for the next command request.

4)     512-byte SMART data is returned on the CtmRamWrData signal with CtmRamWrEn asserted to 1b. CtmRamAddr runs from 0 to 15 as the index into the 512-byte data; when CtmRamAddr=0, bytes 0-31 of the SMART data are valid on CtmRamWrData. CtmRamWrDWEn is the Dword enable for each 32 bits of CtmRamWrData; if CtmRamWrDWEn=FFh, all 256 bits of CtmRamWrData are valid.

5)     UserBusy is de-asserted to 0b after the SMART command is finished.
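The SMART constants above can be written directly in VHDL, as in the sketch below. The array type and constant name are illustrative, and the comments reflect the Get Log Page encoding implied by these values.

type dw_array_t is array (0 to 15) of std_logic_vector(31 downto 0);
constant SMART_SUBM : dw_array_t := (
  0      => x"00000002",   -- DW0: Get Log Page opcode (02h)
  1      => x"FFFFFFFF",   -- DW1: namespace ID
  6      => x"20000000",   -- DW6: PRP1 (lower address used by the IP)
  10     => x"007F0002",   -- DW10: log page 02h (SMART/Health), 128 Dwords = 512 bytes
  others => x"00000000"
);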

 

 

Figure 13: CtmRamWrDWEn timing diagram

 

Similar to the Identify command, some SSDs return only one Dword (32 bits) of data at a time instead of the 512-byte data continuously. In such cases, one bit of CtmRamWrDWEn is asserted to 1b in the write cycle as the valid signal for the corresponding 32 bits of CtmRamWrData. Each bit of CtmRamWrDWEn (bit[0], [1], …, [7]) corresponds to 32 bits of CtmRamWrData (bit[31:0], [63:32], …, [255:224]).

 

Secure Erase

The Secure Erase command erases all user data in the SSD. After the Secure Erase command is executed, the contents of the user data are indeterminate. Since this command may take a long time to complete, the user must disable the IP's timeout function by setting the TimeOutSet signal to zero.

 

Figure 14: Secure Erase command timing diagram

 

Below are the details of how to run the Secure Erase command.

1)     The IP must be in the Idle state (UserBusy=0b) before sending the command request. All input parameters must be stable while UserReq is asserted to 1b to send the request. TimeOutSet and CtmSubmDW0-DW15 are set to the following constant values for the Secure Erase command (also shown as VHDL constants after this section's note).

TimeOutSet = 0x0000_0000 (disable timeout)
CtmSubmDW0 = 0x0000_0080
CtmSubmDW1 = 0x0000_0001
CtmSubmDW2 – CtmSubmDW9 = 0x0000_0000
CtmSubmDW10 = 0x0000_0200
CtmSubmDW11 – CtmSubmDW15 = 0x0000_0000

2)     After the NVMe IP executes the Secure Erase command, UserBusy is asserted to 1b.

3)     UserReq is then de-asserted to 0b to clear the current request, and the user logic can change the input parameters for the next command request.

4)     UserBusy is de-asserted to 0b when the Secure Erase command is completed. After the operation finishes, TimeOutSet can be changed to a non-zero value to re-enable the timeout function of the IP.

Note: Some SSDs may experience a decrease in performance after long data transfers. For such SSDs, the Secure Erase command can help restore performance.
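As with the SMART command, these values can be captured as VHDL constants. This is a sketch using the dw_array_t type declared in the SMART example; the names are illustrative, and the comments reflect the Format NVM encoding implied by the values above.

constant TIMEOUT_OFF : std_logic_vector(31 downto 0) := x"00000000";  -- disable the timeout
constant ERASE_SUBM  : dw_array_t := (
  0      => x"00000080",   -- DW0: Format NVM opcode (80h)
  1      => x"00000001",   -- DW1: namespace ID 1
  10     => x"00000200",   -- DW10: Secure Erase Settings = 001b (user data erase)
  others => x"00000000"
);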

 

Flush

SSDs typically enhance write performance by caching write data before writing it to the flash memory. However, an unexpected power loss can result in data loss, as cached data may not yet be stored in flash memory. To avoid such loss, the Flush command can be used to force the SSD controller to write all cached data to the flash memory.

 

Figure 15: Flush command timing diagram

 

To execute the Flush command, follow the steps below.

1)     The IP must be in the Idle state (UserBusy=0b) before sending the command request, and all input parameters must be stable while UserReq is asserted to 1b to send the request. CtmSubmDW0-DW15 are set to the following constant values for the Flush command.

CtmSubmDW0 = 0x0000_0000
CtmSubmDW1 = 0x0000_0001
CtmSubmDW2 – CtmSubmDW15 = 0x0000_0000

2)     UserBusy is asserted to 1b after the NVMe IP executes the Flush command.

3)     UserReq is de-asserted to 0b to clear the current request, and the user logic can change the input parameters for the next command request.

4)     UserBusy is de-asserted to 0b when the Flush command is completed.

Using the Flush command ensures that all data from the previous Write command is guaranteed to be stored in flash memory, thus preventing data loss in the event of unexpected power loss.

 

Error

 

Figure 16: Error flag timing diagram

 

If an error occurs during the initialization process or while running a command, the UserError flag is set to 1b. To check the type of error, UserErrorType should be read. The NVMeCAPReg, AdmCompStatus, and IOCompStatus signals can be used to monitor the error details after UserError is set to 1b.

If an error occurs during the initialization process, it is recommended to read the NVMeCAPReg signal to check the capabilities of the NVMe SSD. If an error occurs while operating a command, it is recommended to read the AdmCompStatus and IOCompStatus signals.

The UserError flag is cleared only by the RstB signal. After the failure is resolved, RstB is asserted to 0b to clear the error flag.
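For debugging, the detail signals can simply be latched when the error flag is set. A minimal process sketch follows, with arbitrary register names.

process(Clk)
begin
  if rising_edge(Clk) then
    if UserError = '1' then            -- hold the detail registers for readout
      err_type_lat <= UserErrorType;   -- see the bit map in Table 4
      adm_stat_lat <= AdmCompStatus;
      io_stat_lat  <= IOCompStatus;
      nvme_cap_lat <= NVMeCAPReg;
    end if;
  end if;
end process;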

 

Verification Methods

The NVMe IP Core functionality was verified by simulation and proven on real hardware using the VCK190 evaluation board and the Alveo U50 accelerator card.

 

Recommended Design Experience

Experienced design engineers with knowledge of the Vivado tools should easily integrate this IP into their designs.

 

Ordering Information

This product is available directly from Design Gateway Co., Ltd. Please contact Design Gateway Co., Ltd. for pricing and additional information about this product using the contact information on the front page of this datasheet.

 

Revision History

Revision | Date        | Description
2.0      | 22-Dec-2023 | Added data buffer mode, updated Core I/O signals, and added Secure Erase command support.
1.1      | 21-Jul-2022 | Support VCK190 board.
1.0      | 13-Sep-2021 | Initial release.