NVMe IP Core for PCIe Gen4 Data Sheet

Features
Applications
General Description
Functional Description
NVMe
·      NVMe Host Controller
·      Command Parameter
·      Data Buffer
·      Data Controller
PCIe
·      PCIe Controller
·      AsyncCtrl
User Logic
PCIe Hard IP (P-Tile/F-Tile Avalon-ST Intel FPGA for PCIe)
Core I/O Signals
Initialization
Control interface of dgIF typeS
Data interface of dgIF typeS
IdenCtrl/IdenName
Shutdown
SMART
Flush
Error
Verification Methods
Recommended Design Experience
Ordering Information
Revision History

Core Facts

Provided with Core
  Documentation:                          Reference Design Manual, Demo Instruction Manual
  Design File Formats:                    Encrypted File
  Instantiation Templates:                VHDL
  Reference Designs & Application Notes:  Quartus Project, see Reference Design Manual
  Additional Items:                       Demo on Agilex F-Series development kit

Support
  Support provided by Design Gateway Co., Ltd.

 

 

Design Gateway Co., Ltd.

E-mail:    ip-sales@design-gateway.com

URL:       design-gateway.com

 

Features

·     Access one NVMe Gen4 SSD without CPU and external memory

·     Two data buffer modes: High speed (1 MB RAM) or Small memory (256 KB RAM)

·     Simple user interface by dgIF typeS

·     Support six commands, i.e., Identify, Shutdown, Write, Read, SMART, and Flush

·     Supported NVMe device

·     Base Class Code: 01h (mass storage), Sub Class Code: 08h (Non-volatile), Programming Interface: 02h (NVMHCI)

·     MPSMIN (Memory Page Size Minimum): 0 (4Kbyte)

·     MDTS (Maximum Data Transfer Size): At least 5 (128 Kbyte) or 0 (no limitation)

·     LBA unit: 512 bytes or 4096 bytes

·     User clock frequency: at least half of the PCIe clock frequency (250 MHz for Gen4)

·     PCIe Gen4 Hard IP: 128-bit interface (P-Tile/F-Tile Avalon-ST Intel FPGA for PCIe)

·     Available reference design: 1-ch demo and 4-ch RAID0 demo

·     Agilex F-Series development board with AB18-PCIeX16 adapter board

·     Customized service for the following features

·     Additional NVMe commands such as Format, Write Zeroes, and Sanitize

 

 

Table 1: Example Implementation Statistics

Family:         Agilex F-Series
Example Device: AGFB014R24A2E3VR0
Design Tools:   Quartus 22.3

Buf Mode   Fmax (MHz)   Logic utilization (ALMs)   Registers   Pin   Block Memory bits
1 MB       375          4,939                      10,485      -     8,496,896
256 KB     375          3,970                      9,191       -     2,205,440

Note: Actual logic resources depend on the percentage of unrelated logic.

 

 

Applications

 

Figure 1: NVMe IP for Gen4 Application

 

NVMe IP Core for Gen4, integrated with PCIe Gen4 hard IP (P-Tile/F-Tile Avalon-ST Intel FPGA for PCIe), is ideal for accessing an NVMe Gen4 SSD without a CPU or external memory such as DDR. The data buffer, implemented with memory blocks, is included in NVMe IP Core to store the data transferred between the user logic and the NVMe Gen4 SSD. One PCIe hard IP can be configured to connect four NVMe IPs to four NVMe Gen4 SSDs through 4-lane PCIe interfaces. Therefore, a 4-ch RAID0 system can be designed to increase the transfer speed, as shown in Figure 1, and the total storage capacity of the RAID0 system is also increased to four times that of one SSD. Currently, one Gen4 SSD achieves about 6,000 Mbyte/sec write/read performance, so four Gen4 SSDs can achieve up to 24,000 Mbyte/sec.

We also provide alternative IP cores for more specific applications such as Multiple users, Random access, PCIe switch, and Soft IP core.

Multiple User NVMe IP Core: accesses one NVMe SSD from multiple users. It is the solution for applications that need to write and read the SSD at high performance simultaneously.

https://dgway.com/muNVMe-IP_A_E.html

Random Access NVMe IP Core: accesses an NVMe SSD with multiple commands, each with its own address. Recommended for applications that require non-contiguous storage areas.

https://dgway.com/raNVMe-IP_A_E.html

NVMe IP Core for PCIe Switch: accesses one or multiple NVMe SSDs via a PCIe switch.

https://www.dgway.com/NVMe-IP_A_E.html

NVMe IP Core with PCIe Soft IP: for use when the selected FPGA does not have enough PCIe hard IP for the application.

https://www.dgway.com/NVMeG4-IP_A_E.html

 

General Description

 

Figure 2: NVMe IP for Gen4 block diagram

 

NVMe IP for Gen4 implements a host controller to access an NVMe SSD following the NVM Express standard. The physical interface of the NVMe SSD is PCIe, whose lower layers are implemented by the PCIe Gen4 hard IP (P-Tile/F-Tile Avalon-ST Intel FPGA IP for PCIe).

NVMe IP supports six NVMe commands, i.e., Identify, Shutdown, Write, Read, SMART, and Flush, through two user interface groups. The first is the Control interface for transferring a command and its parameters. The other is the Data interface for transferring data when the command requires a data transfer. The Control and Data interfaces for Write/Read commands use the dgIF typeS format. The Control interface of dgIF typeS consists of a start address and transfer length qualified by a request signal, while the Data interface of dgIF typeS is a FIFO interface.

SMART and Flush commands are Custom commands, which use the Ctm I/F for the control path and the Ctm RAM I/F for the data path. Identify command uses the same Control interface as Write or Read command, but uses its own data interface, the Iden I/F, as shown in Figure 2.

While running the initialization process or operating a command, the error signal may be asserted by NVMe IP if an abnormal condition is found. The IP provides an error status for checking the details of the error condition. To recover from an error, NVMe IP and the SSD must be reset.

There is one limitation on the clock frequency of the user logic. A transmit packet to the PCIe hard IP must be sent continuously until the end of the packet, so data must be valid on every clock cycle between the start of frame and the end of frame. To support this, the user logic clock frequency must be greater than or equal to half of the PCIe clock frequency, which makes the bandwidth on the user side greater than or equal to the PCIe hard IP bandwidth. When running at PCIe Gen4 speed, the user logic clock must be at least 250 MHz (the PCIe clock frequency is set to 500 MHz).
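The clock-frequency requirement above follows from balancing the two bus bandwidths. A small arithmetic sketch (Python, illustrative only, not part of the IP deliverables; bus widths follow Figure 2: 256-bit on the user side, 128-bit on the PCIe side):

```python
# User-side bus: 256 bits wide, running at the user clock.
# PCIe-side bus: 128 bits wide, running at the PCIe clock (500 MHz for Gen4).
def user_side_ok(user_clk_mhz, pcie_clk_mhz=500.0):
    user_bw = 256 * user_clk_mhz   # user-side bandwidth in Mbit/s
    pcie_bw = 128 * pcie_clk_mhz   # PCIe-side bandwidth in Mbit/s
    # The user clock must be at least half the PCIe clock so the user
    # side can always feed a transmit packet without gaps.
    return user_bw >= pcie_bw

print(user_side_ok(250.0))   # 250 MHz user clock exactly balances Gen4
print(user_side_ok(200.0))   # too slow: transmit packets could underrun
```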

The reference designs on FPGA evaluation boards are available for evaluation before purchasing.

 

Functional Description

Figure 3 shows the operation flow of NVMe IP after IP reset is de-asserted. There are three phases, i.e., IP initialization, Operating command, and Inactive status.

After finishing IP initialization, the first command that the user runs must be Identify command to check device status and capacity. After finishing all operations, the last command before shutting down the system should be Shutdown command for safe operation.

 

 

Figure 3: NVMe IP operation flow

 

 

The operation flow of NVMe IP is described as follows.

1)     IP waits until PCIe is ready by monitoring Linkup status from PCIe IP core.

2)     IP begins the initialization process by configuring PCIe and NVMe registers. After that, the IP enters the Idle state to wait for a new command request from the user. If any error is detected during the initialization process, the IP enters the Inactive state and asserts UserError to 1.

3)     The first command from user must be Identify command (UserCmd=000b) to update LBASize (disk capacity) and LBAMode (LBA unit=512 byte or 4 Kbyte).

4)     The last command before powering down the system must be Shutdown command (UserCmd=001b). This command is recommended to guarantee the SSD powers down in a good sequence. Without Shutdown command, the write data in the SSD cannot be guaranteed. After finishing Shutdown command, NVMe IP and the SSD change to the Inactive state. A new command cannot be operated until the IP is reset.

5)     For Write command (UserCmd=010b), the maximum data size of one command is 128 Kbyte. If the total length from the user is more than 128 Kbyte, the IP repeats steps 5a) - 5b) automatically until all data are completely transferred.

a)     The IP waits until the Write data sent by the user is enough for one command (the transfer size of one command in NVMe IP is 128 Kbyte, except the last loop, which could be less than 128 Kbyte).

b)     The IP sends Write command to SSD and then waits until the status is returned from SSD. The IP returns to the Idle state when total data are completely transferred. Otherwise, the IP goes back to step 5a) to send the next Write command.

6)     Similar to Write command, when running Read command (UserCmd=011b) with a transfer size of more than 128 Kbyte, the IP must repeat steps 6a) - 6b) several times.

a)     If remaining transfer size is equal to zero, the IP skips to step 6c). Otherwise, the IP waits until free space of data buffer in NVMe IP is enough for one command (128 Kbyte or remaining transfer size for the last loop).

b)     The IP sends Read command to SSD and then returns to step 6a).

c)     IP waits until all data are completely transferred from data buffer to user logic and then returns to the Idle state. Therefore, data buffer is empty after finishing Read command.

7)     For SMART command (UserCmd=100b), 512-byte data is returned after finishing the operation.

a)     IP sends Get Log Page command to read SMART/Health information from the SSD.

b)     512-byte data is returned from the SSD. The IP forwards the data through Custom command RAM interface (CtmRamAddr=0x000-0x01F).

8)     For Flush command (UserCmd=110b), there is no data transferring during the operation.

a)     IP sends Flush command to the SSD.

b)     IP waits until SSD returns status to complete the operation.
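The 128-Kbyte command splitting in steps 5) and 6) can be modeled with a short Python sketch (illustrative only, not part of the IP deliverables):

```python
MAX_XFER = 128 * 1024          # maximum data size of one NVMe command in the IP

def split_into_commands(total_bytes):
    """Return the per-command transfer sizes the IP issues for one user request."""
    sizes = []
    remaining = total_bytes
    while remaining > 0:
        chunk = min(remaining, MAX_XFER)   # last loop may be less than 128 Kbyte
        sizes.append(chunk)
        remaining -= chunk
    return sizes

# A 300-Kbyte user request becomes three NVMe commands: 128K + 128K + 44K.
print(split_into_commands(300 * 1024))
```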

 

To design the NVMe host controller, NVMe IP implements two protocols, i.e., the NVMe protocol for interfacing with the user and the PCIe protocol for interfacing with the PCIe hard IP. Figure 2 shows the hardware inside NVMe IP, which is split into two groups: NVMe and PCIe. More details of each module are described as follows.

 

NVMe

The six commands that NVMe IP supports can be split into two command types: Admin commands and NVM commands. Admin commands consist of Identify, Shutdown, and SMART, while NVM commands consist of Write, Read, and Flush. After a command finishes, the status returned from the SSD is latched to AdmCompStatus (status returned from an Admin command) or IOCompStatus (status returned from an NVM command), depending on the command type.

The parameters of Write or Read command are set by Control interface of dgIF typeS while the parameters of SMART or Flush command are set by CtmSubmDW0-15 of Ctm interface. Data interface for Write or Read command is transferred by FIFO interface, a part of dgIF typeS. The data of Write command and Read command are stored to the Data buffer inside the IP. The data interface of other commands has its own interface - Identify I/F for Identify command and Custom RAM I/F for SMART command.
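As a quick reference, the UserCmd encodings and command classes described above can be tabulated in a small Python sketch (illustrative only; the helper name is hypothetical):

```python
# UserCmd[2:0] encodings from the Core I/O signal description.
COMMANDS = {
    0b000: ("Identify", "Admin"),
    0b001: ("Shutdown", "Admin"),
    0b010: ("Write",    "NVM"),
    0b011: ("Read",     "NVM"),
    0b100: ("SMART",    "Admin"),
    0b110: ("Flush",    "NVM"),
}

def completion_status_register(user_cmd):
    """Admin commands latch status to AdmCompStatus; NVM commands to IOCompStatus."""
    name, cls = COMMANDS[user_cmd]
    return "AdmCompStatus" if cls == "Admin" else "IOCompStatus"

print(completion_status_register(0b100))  # SMART is an Admin command
```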

The details of each submodule are described as follows.

·       NVMe Host Controller

NVMe host controller is the core controller in NVMe IP. Matching the operation flow, the controller has two operating phases. The first is the initialization phase, which runs once after the system boots up to set the NVMe registers inside the SSD. After the initialization phase finishes, the next phase is command operation. The controller manages the order of transmitted and received packets for each command.

To start operating each command, the command parameters are latched to Command Parameter for creating the packet. After that, the packet is forwarded to AsyncCtrl to convert the NVMe packet to a PCIe packet. After each command operation is done, a status packet is returned from the SSD. The controller decodes the status value and checks whether the operation completed or failed. If the command transfers data, such as Write and Read commands, the controller also handles the order of the data packets that are created and decoded by Data controller.

·       Command Parameter

This module creates the command packet sent to the SSD. The status packet returned from the SSD is also decoded by this module. The input/output of this module are controlled by the NVMe host controller. Typically, a command consists of 16 Dwords (1 Dword = 32 bits). When running Identify, Shutdown, Write, and Read commands, all 16 Dwords are created from command parameters loaded from the user inputs on dgIF typeS. When running SMART and Flush commands, all 16 Dwords are directly loaded via CtmSubmDW0-CtmSubmDW15 of the Ctm interface.

·       Data Buffer

Two data buffer modes are supported, i.e., High speed mode which uses 1 Mbyte RAM and Small memory mode which uses 256 Kbyte RAM. The RAM is implemented by using memory block. The buffer stores data for transferring with the SSD while operating Write and Read command.

·       Data Controller

This module is operated when the command must transfer the data, i.e., Identify, SMART, Write, and Read command. There are three data interfaces for transferring with the SSD - FIFO interface with the Data buffer when running Write or Read command, Custom command RAM interface when running SMART command, and Identify interface when running Identify command. The data packet is created and decoded by this module. Similar to Command Parameter module, Data controller input and output signals are controlled by the NVMe host controller.

 

PCIe

The PCIe standard is an outstanding low-layer protocol for very high-speed applications. The NVMe standard is a protocol that runs over the PCIe protocol. In the initialization process, the NVMe layer is set up after the PCIe layer setup finishes. Two modules are designed to support the PCIe protocol: PCIe controller and AsyncCtrl. More details of each module are described as follows.

·       PCIe Controller

In the initialization process, PCIe controller sets up the PCIe environment of the SSD via the Reconfig port, which is an Avalon-MM interface. After that, PCIe packets are created and decoded via the 128-bit Tx/Rx Avalon-ST interfaces. The command packets and data packets from the NVMe module are converted to PCIe packets by PCIe controller, and vice versa.

·       AsyncCtrl

AsyncCtrl includes asynchronous registers and asynchronous buffers with asymmetric data widths to support clock domain crossing and the differing data widths. The data width on the user interface is twice the data width on the PCIe interface. Therefore, the user clock frequency can be equal to or more than half of the PCIe clock frequency while still balancing the data bandwidth. Most logic in NVMe IP runs on the user clock domain, while PCIe hard IP runs on the PCIe clock domain.

 

User Logic

This module can be designed as a small state machine that sends the commands and the parameters for each command. For example, the address and transfer size, which are the parameters for Write or Read commands, can be held in simple registers, while a FIFO is connected for transferring data in Write and Read commands. The data output interface for SMART and Identify commands connects to a simple dual-port RAM with byte enables. The data width of the FIFO and RAM is 256 bits, while the memory depth can be set to different values. The data size of Identify command is 8 Kbytes, while the data size of SMART command is 512 bytes.

 

PCIe Hard IP (P-Tile/F-Tile Avalon-ST Intel FPGA for PCIe)

To connect with NVMe IP, three interfaces of PCIe Hard IP (P-Tile/F-Tile Avalon-ST Intel FPGA for PCIe) are applied, i.e., Avalon-ST RX Port (rx_st_*) for received packet, Avalon-ST Tx Port (tx_st_*) for transmitted packet, and Hard IP Reconfiguration (hip_reconfig_*) for PCIe configuration.

PCIe Hard IP implements the Transaction layer, Data Link layer, and Physical layer of the PCIe protocol. The maximum number of SSDs connecting to one FPGA device is limited by the number of PCIe Hard IP blocks. One PCIe Hard IP supports up to a 16-lane PCIe Gen4 interface, which can be configured as four 4-lane PCIe Gen4 interfaces. Therefore, one PCIe Hard IP can connect with up to four NVMe IPs and four NVMe Gen4 SSDs.

More details of PCIe Hard IP are described in the following documents.

P-Tile Avalon-ST Intel FPGA for PCIe

https://www.intel.com/content/www/us/en/docs/programmable/683059/

F-Tile Avalon-ST Intel FPGA for PCIe

https://www.intel.com/content/www/us/en/docs/programmable/683140/

 

Core I/O Signals

Descriptions of Core parameter and the signal I/Os are provided in Table 2 - Table 5.

 

Table 2: Core Parameters

Name

Value

Description

BufMode

0 or 1

Data buffer mode.

1-High speed mode by using 1 MB buffer, 0-Small memory mode by using 256 KB buffer

 

Table 3: User logic I/O Signals (Synchronous to Clk signal)

Signal

Dir

Description

Control I/F of dgIF typeS

RstB

In

Synchronous reset signal. Active low. De-assert to 1 when Clk signal is stable.

Clk

In

System clock for running NVMe IP. The frequency must be more than or equal to a half of PCIeClk frequency, clock output from PCIe Hard IP. Therefore, when using 500 MHz PCIe hard IP for Gen4 speed, Clk frequency must be more than or equal to 250MHz.

UserCmd[2:0]

In

User Command. Valid when UserReq=1.

(000b: Identify, 001b: Shutdown, 010b: Write SSD, 011b: Read SSD,

 100b: SMART, 110b: Flush, 101b/111b: Reserved)

UserAddr[47:0]

In

Start address to write/read SSD in 512-byte unit. Valid when UserReq=1.

In case LBA unit = 4 Kbyte, UserAddr[2:0] must be always set to 000b to align 4 Kbyte unit.

In case LBA unit = 512 byte, it is recommended to set UserAddr[2:0]=000b to align 4 Kbyte size (SSD page size). Write/Read performance of most SSDs is reduced when start address is not aligned to page size.

UserLen[47:0]

In

Total transfer size to write/read SSD in 512-byte unit. Valid from 1 to (LBASize-UserAddr).

In case LBA unit = 4 Kbyte, UserLen[2:0] must be always set to 000b to align 4 Kbyte unit.

Valid when UserReq=1.

UserReq

In

Assert to 1 to send the new command request and de-assert to 0 after IP starts the operation by asserting UserBusy to 1. This signal can be asserted when the IP is Idle (UserBusy=0). Command parameters (UserCmd, UserAddr, UserLen, and CtmSubmDW0-DW15) must be valid and stable when UserReq=1. UserAddr and UserLen are inputs for Write/Read command while CtmSubmDW0-DW15 are inputs for SMART/Flush command.

UserBusy

Out

Asserted to 1 when IP is busy.

New request must not be sent (UserReq to 1) when IP is busy.

LBASize[47:0]

Out

Total capacity of SSD in 512-byte unit. Default value is 0.

This value is valid after finishing Identify command.

LBAMode

Out

LBA unit size (0: 512byte, 1: 4 Kbyte). Default value is 0.

This value is valid after finishing Identify command.

UserError

Out

Error flag. Asserted to 1 when UserErrorType is not equal to 0.

The flag is de-asserted to 0 by asserting RstB to 0.

 


UserErrorType[31:0]

Out

Error status.

[0] Error when PCIe class code is not correct.

[1] Error from CAP (Controller capabilities) register which may be caused from

- MPSMIN (Memory Page Size Minimum) is not equal to 0.

- NVM command set flag (bit 37 of CAP register) is not set to 1.

- DSTRD (Doorbell Stride) is not 0.

- MQES (Maximum Queue Entries Supported) is less than 7.

More details of each register can be checked from NVMeCAPReg signal.

[2] Error when Admin completion entry is not received until timeout.

[3] Error when status register in Admin completion entry is not 0 or phase tag/command ID is invalid. Please see more details from AdmCompStatus signal.

[4] Error when IO completion entry is not received until timeout.

[5] Error when status register in IO completion entry is not 0 or phase tag is invalid. Please see more details from IOCompStatus signal.

[6] Error when Completion TLP packet size is not correct.

[7] Error when PCIe Hard IP detects PCIe uncorrectable fatal error.

[8] Error from Unsupported Request (UR) flag in Completion TLP packet.

[9] Error from Completer Abort (CA) flag in Completion TLP packet.

[15:10] Reserved

[16] Error from unsupported LBA unit (LBA unit is not equal to 512 bytes or 4 Kbytes)

[31:17] Reserved

Note: Timeout period of bit[2]/[4] is set from TimeOutSet input.

Data I/F of dgIF typeS

UserFifoWrCnt[15:0]

In

Write data counter of Receive FIFO. Used to check full status. When full status is detected, the returned data transmission from Read command may be paused.

If the FIFO data counter is smaller than 16 bits, fill the upper bits with 1.

UserFifoWrEn

Out

Asserted to 1 to write data to Receive FIFO when running Read command.

UserFifoWrData[255:0]

Out

Write data bus of Receive FIFO. Valid when UserFifoWrEn=1.

UserFifoRdCnt[15:0]

In

Read data counter of Transmit FIFO. Used to check data size stored in FIFO. The transmitted data packet for Write command may be paused when the counter shows empty status.

If the FIFO data counter is smaller than 16 bits, fill the upper bits with 0.

UserFifoEmpty

In

The signal is unused for this IP.

UserFifoRdEn

Out

Asserted to 1 to read data from Transmit FIFO when running Write command.

UserFifoRdData[255:0]

In

Read data returned from Transmit FIFO.

Valid in the next clock after UserFifoRdEn is asserted to 1.

 


NVMe IP Interface

IPVesion[31:0]

Out

IP version number

TestPin[31:0]

Out

Reserved to be IP Test point.

TimeOutSet[31:0]

In

Timeout value to wait completion from SSD. Time unit is equal to 1/(Clk frequency).

When TimeOutSet is equal to 0, Timeout function is disabled.

AdmCompStatus[15:0]

Out

Status output from Admin Completion Entry

[0] Set to 1 when Phase tag or Command ID in Admin Completion Entry is invalid.

[15:1] Status field value of Admin Completion Entry

IOCompStatus[15:0]

Out

Status output from IO Completion Entry

[0] Set to 1 when Phase tag in IO Completion Entry is invalid.

[15:1] Status field value of IO Completion Entry

NVMeCAPReg[31:0]

Out

The parameter value of the NVMe capability register when UserErrorType[1] is asserted to 1.

[15:0] MQES (Maximum Queue Entries Supported)

[19:16] DSTRD (Doorbell Stride)

[20] NVM command set flag

[24:21] MPSMIN (Memory Page Size Minimum)

[31:25] Undefined

Identify Interface

IdenWrEn

Out

Asserted to 1 for sending data output from Identify command.

IdenWrDWEn[7:0]

Out

Dword (32-bit) enable of IdenWrData. Valid when IdenWrEn=1.

1: This dword data is valid, 0: This dword data is not available.

Bit[0], [1], ..., [7] corresponds to IdenWrData[31:0], [63:32], ..., [255:224], respectively.

IdenWrAddr[7:0]

Out

Index of IdenWrData in 256-bit unit. Valid when IdenWrEn=1.

0x00-0x7F is 4Kbyte Identify controller data,

0x80-0xFF is 4Kbyte Identify namespace data.

IdenWrData[255:0]

Out

4Kbyte Identify controller data or Identify namespace data. Valid when IdenWrEn=1.

Custom interface

CtmSubmDW0[31:0] - CtmSubmDW15[31:0]

In

16 Dwords of Submission queue entry for SMART/Flush command.

DW0: Command Dword0, DW1: Command Dword1, ..., DW15: Command Dword15.

These inputs must be valid and stable when UserReq=1 and UserCmd=100b (SMART) or 110b (Flush).

CtmCompDW0[31:0] - CtmCompDW3[31:0]

Out

4 Dwords of Completion queue entry, output from SMART/Flush command.

DW0: Completion Dword0, DW1: Completion Dword1, ..., DW3: Completion Dword3.

CtmRamWrEn

Out

Asserted to 1 for sending data output from custom command such as SMART command.

CtmRamWrDWEn[7:0]

Out

Dword (32-bit) enable of CtmRamWrData. Valid when CtmRamWrEn=1.

1: This Dword data is valid, 0: This Dword data is not available.

Bit[0], [1], ..., [7] corresponds to CtmRamWrData[31:0], [63:32], ..., [255:224], respectively.

CtmRamAddr[7:0]

Out

Index of CtmRamWrData when SMART data is received. Valid when CtmRamWrEn=1.

(Optional) Index to request data input through CtmRamRdData for customized custom commands.

CtmRamWrData[255:0]

Out

512-byte data output from SMART command. Valid when CtmRamWrEn=1.

CtmRamRdData[255:0]

In

(Optional) Data input for customized custom commands.
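The NVMeCAPReg bit fields in Table 3 lend themselves to a simple decode sketch (Python, illustrative only; field positions follow the table above):

```python
def decode_cap(cap):
    """Decode NVMeCAPReg[31:0] into its named fields (Table 3 layout)."""
    return {
        "MQES":      cap & 0xFFFF,      # [15:0]  Maximum Queue Entries Supported
        "DSTRD":    (cap >> 16) & 0xF,  # [19:16] Doorbell Stride
        "NVMCmdSet": (cap >> 20) & 0x1, # [20]    NVM command set flag
        "MPSMIN":   (cap >> 21) & 0xF,  # [24:21] Memory Page Size Minimum
    }

# Example: MQES=127, DSTRD=0, NVM command set supported, MPSMIN=0
cap = (0 << 21) | (1 << 20) | (0 << 16) | 127
print(decode_cap(cap))
```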

 

Table 4: Physical I/O Signals for PCIe Hard IP (Synchronous to PCIeClk signal)

Signal

Dir

Description

Clock and Reset

PCIeRstB

In

Synchronous reset signal. Active low.

De-assert to 1 when PCIe Hard IP is not in reset state.

PCIeClk

In

Clock output from PCIe Hard IP.

It is recommended to configure as 500 MHz to achieve the best performance. Using lower clock speed reduces the PCIe interface bandwidth which may show the lower performance.

PCIe Hard IP Rx Interface

PCIeRxValid

In

Assert to 1 to indicate that PCIeRxData is valid.

De-assert to 0 within 27 clock cycles after PCIeRxReady is de-asserted to 0.

PCIeRxSOP

In

Assert to 1 to indicate that this is the first cycle of the TLP.

Valid when PCIeRxValid is asserted to 1.

PCIeRxEOP

In

Assert to 1 to indicate that this is the last cycle of the TLP.

Valid when PCIeRxValid is asserted to 1.

PCIeRxReady

Out

Asserted to 1 to indicate that NVMe IP is ready to accept data.

If it is de-asserted, PCIe Hard IP continues to send data up to 27 clock cycles.

PCIeRxHd[127:0]

In

Received header data. Valid when PCIeRxValid and PCIeRxSOP are asserted to 1.

PCIeRxData[127:0]

In

Received data. Valid when PCIeRxValid is asserted to 1.

PCIe Hard IP Tx Interface

PCIeTxValid 

Out

Asserted to 1 to indicate that PCIeTxData is valid.

When PCIeTxReady is de-asserted to 0, PCIeTxValid must be de-asserted to 0 within 3 clock cycles. When PCIeTxReady is re-asserted to 1, PCIeTxValid may be re-asserted to 1 after 3 clock cycles to continue packet transmission.

PCIeTxSOP

Out

Asserted to 1 to indicate first cycle of a TLP. Valid when PCIeTxValid is asserted to 1.

PCIeTxEOP

Out

Asserted to 1 to indicate last cycle of a TLP. Valid when PCIeTxValid is asserted to 1.

PCIeTxReady

In

Asserted to 1 to indicate that PCIe Hard IP is ready to accept data.

If it is de-asserted, PCIeTxValid must be de-asserted within 3 clock cycles.

If it is asserted, PCIeTxValid may be asserted after 3 clock cycles.

PCIeTxHd[127:0]

Out

Transmitted header data. Valid when PCIeTxValid and PCIeTxSOP are asserted to 1.

PCIeTxData[127:0]

Out

Transmitted data. Valid when PCIeTxValid is asserted to 1.

 

Table 5: Hard IP Reconfiguration Signals (Synchronous to CfgClk signal)

Signal

Dir

Description

System signal

CfgRstB

In

Synchronous reset signal. Active low. De-assert to 1 when Clk signal is stable.

CfgClk

In

Reconfiguration clock. The frequency range is 100-125 MHz.

PCIeLinkup

In

Asserted to 1 when LTSSM state of PCIe hard IP is in L0 State.

Reconfiguration Interface

CfgAddr[20:0]

Out

Reconfiguration address for writing or reading.

CfgWr

Out

Asserted to 1 to send Write request.

CfgWrData[7:0]

Out

Write data. Valid when CfgWr=1.

CfgRd

Out

Asserted to 1 to send Read request.

CfgRdData[7:0]

In

Read data. Valid when CfgRdValid=1.

CfgRdValid

In

Asserted to 1 when CfgRdData is valid.

CfgWaitRequest

In

Asserted to 1 to indicate that the IP core is not ready to respond to a request.

 

 

Timing Diagram

 

Initialization

 

 

Figure 4: Timing diagram during initialization process

 

The steps of the initialization process are as follows.

1)     Wait until Clk is stable and then de-assert RstB to 1.

2)     Wait until CfgClk is stable and then de-assert CfgRstB to 1.

3)     PCIe hard IP de-asserts PCIeRstB to 1 after PCIe reset sequence is done. PCIe hard IP is ready to transfer data with the application layer.

4)     Assert PCIeLinkup to 1 after the LTSSM state of PCIe hard IP reaches the L0 state. Although the LTSSM state runs on PCIeClk, PCIeLinkup must be generated on the CfgClk domain, so an asynchronous register must be applied. After that, NVMe IP starts its initialization process.

5)     UserBusy is de-asserted to 0 after NVMe IP completes initialization process.

After finishing all above steps, NVMe IP is ready to receive the command from user.

 

 

Control interface of dgIF typeS

dgIF typeS signals are split into two groups. The first group is the Control interface for sending a command with its parameters and monitoring the status. The second group is the Data interface for transferring data streams in both directions. Figure 5 shows the Control interface of dgIF typeS.

 

 

Figure 5: Control interface of dgIF typeS timing diagram

 

1)     Before sending new command request to the IP, UserBusy must be equal to 0 to confirm that IP is Idle.

2)     Command and the parameters such as UserCmd, UserAddr, and UserLen must be valid when asserting UserReq to 1 for sending the new command request.

3)     IP asserts UserBusy to 1 after starting the new command operation.

4)     After UserBusy is asserted to 1, UserReq is de-asserted to 0 to finish the current request. New parameters for the next command could be prepared on the bus. UserReq for the new command must not be asserted to 1 until the current command operation is done.

5)     UserBusy is de-asserted to 0 after the command operation is completed. New command request can be asserted.

Note: The parameters used by each command differ, as described in the following sections.

 

 

Data interface of dgIF typeS

Data interface of dgIF typeS is applied for transferring data streams when operating Write command or Read command. The interface is compatible with a general FIFO interface. Figure 6 shows the data interface of dgIF typeS when transferring write data to the IP during Write command. The 16-bit FIFO read data counter (UserFifoRdCnt) shows the total amount of data stored. When enough data is available, 512 bytes (16 x 256-bit words) are transferred.

 

 

Figure 6: Transmit FIFO interface for Write command

 

In Write command, data is read from Transmit FIFO until total data are transferred completely. The details to transfer data are described as follows.

1)     Before starting a new burst transfer, UserFifoRdCnt[15:4] is monitored. The IP waits until at least 512-byte data is available in Transmit FIFO (UserFifoRdCnt[15:4] is not equal to 0).

2)     The IP asserts UserFifoRdEn to 1 for 16 clock cycles to read 512-byte data from Transmit FIFO.

3)     UserFifoRdData is valid in the next clock cycle after UserFifoRdEn is asserted to 1. Sixteen 256-bit data words are transferred continuously.

4)     UserFifoRdEn is de-asserted to 0 after reading the 16th data word (D15). Steps 1) - 4) are repeated to transfer the next 512-byte data until the total data size equals the transfer length set by the user.

5)     After total data is completely transferred, UserBusy is de-asserted to 0.
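Steps 1) - 4) above can be modeled as a simple software sketch (Python, illustrative only, not part of the IP deliverables) that mimics how the IP drains the Transmit FIFO in 512-byte bursts:

```python
def drain_transmit_fifo(fifo_words):
    """Model the IP reading 512-byte bursts (16 x 256-bit words) from the FIFO.
    fifo_words: list of 256-bit data words queued by the user logic."""
    bursts = []
    while True:
        rd_cnt = len(fifo_words)
        # Step 1: wait until UserFifoRdCnt[15:4] != 0, i.e. at least 16 words.
        if rd_cnt >> 4 == 0:
            break
        # Steps 2-4: assert UserFifoRdEn for 16 cycles to read one burst.
        bursts.append([fifo_words.pop(0) for _ in range(16)])
    return bursts

# 40 queued words yield two complete 512-byte bursts; 8 words remain queued.
words = list(range(40))
print(len(drain_transmit_fifo(words)), len(words))
```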

 

 

Figure 7: Receive FIFO interface for Read command

 

In Read command, data is transferred from SSD to Receive FIFO until total data are completely transferred. The details to transfer a burst of data are described as follows.

1)     Before starting a new burst transmission, UserFifoWrCnt[15:5] is monitored. The IP waits until the free space in the Receive FIFO is sufficient (UserFifoWrCnt[15:5] is not equal to all 1s, i.e., 2047). After the data received from the SSD is greater than or equal to 512 bytes, the new burst transmission begins.

2)     The IP asserts UserFifoWrEn to 1 for 16 clock cycles to transfer 512-byte data from the data buffer to user logic.

3)     After finishing transferring 512-byte data, UserFifoWrEn is de-asserted to 0. Steps 1) - 3) are repeated to transfer the next 512-byte data until the total data size equals the transfer length set by the user.

4)     After total data is completely transferred, UserBusy is de-asserted to 0.
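The free-space check in step 1) compares UserFifoWrCnt[15:5] against its all-ones value. A minimal sketch (Python, illustrative only):

```python
def receive_fifo_has_room(user_fifo_wr_cnt):
    """True when UserFifoWrCnt[15:5] != 2047, i.e. the Receive FIFO still has
    room for at least one 512-byte burst (16 x 256-bit words)."""
    return ((user_fifo_wr_cnt >> 5) & 0x7FF) != 0x7FF

print(receive_fifo_has_room(0x0000))  # empty FIFO: room available
print(receive_fifo_has_room(0xFFE0))  # near full: burst transmission is paused
```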

 

 

IdenCtrl/IdenName

It is recommended to send Identify command to the IP as the first command after the system boots up. This command updates the necessary information of the SSD, i.e., the total capacity (LBASize) and LBA unit size (LBAMode). The SSD information constrains the input parameters when operating Write and Read commands, as described below.

1)     The sum of the address (UserAddr) and transfer length (UserLen), inputs of Write and Read command, must not be more than total capacity (LBASize) of the SSD.

2)     If LBAMode of the SSD is equal to 1 (LBA unit size is 4 Kbyte), the three lower bits (bit[2:0]) of UserAddr and UserLen must always be equal to 0 to align to the 4-Kbyte unit.
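These two constraints can be expressed as a small validation sketch (Python, illustrative only) that the user logic could mirror before asserting UserReq:

```python
def params_valid(user_addr, user_len, lba_size, lba_mode):
    """Check Write/Read parameters against the Identify results.
    All values are in 512-byte units, as on the dgIF typeS interface."""
    if user_len == 0 or user_addr + user_len > lba_size:
        return False            # request exceeds the SSD capacity
    if lba_mode == 1:           # 4-Kbyte LBA unit: bit[2:0] must be 0
        if (user_addr & 0x7) or (user_len & 0x7):
            return False
    return True

print(params_valid(0, 8, 1 << 30, 1))    # aligned 4-Kbyte request: True
print(params_valid(3, 8, 1 << 30, 1))    # misaligned start address: False
```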

 

 

Figure 8: Identify command timing diagram

 

The details when running Identify command are shown as follows.

1)     Send Identify command to the IP (UserCmd=000b and UserReq=1).

2)     The IP asserts UserBusy to 1 after it starts running the Identify command.

3)     4096-byte Identify controller data is returned to the user. IdenWrAddr runs from 0 to 127 while IdenWrEn is asserted. IdenWrData and IdenWrDWEn are valid in the same clock cycle as IdenWrEn=1.

4)     4096-byte Identify namespace data is then returned with IdenWrAddr equal to 128-255. IdenWrAddr[7] can be used to distinguish the data type: Identify controller data or Identify namespace data.

5)     UserBusy is de-asserted to 0 after finishing the Identify command.

6)     LBASize and LBAMode of the SSD are simultaneously updated.

 

 

Figure 9: IdenWrDWEn timing diagram

 

IdenWrDWEn is an 8-bit signal that serves as the valid signal for each 32-bit word of IdenWrData. Some SSDs do not return the 4-Kbyte Identify controller data and Identify namespace data continuously, but return only one dword (32 bits) at a time. Therefore, one bit of IdenWrDWEn is asserted to 1 in a write cycle to write 32-bit data, as shown in Figure 9. IdenWrDWEn[0], [1], ..., [7] correspond to IdenWrData[31:0], [63:32], ..., [255:224], respectively.
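The dword-enable behavior can be sketched as a software model. In the Python sketch below, the signal names follow the data sheet, while the buffer layout (one list entry per dword) and the function name are assumptions for illustration.

```python
# Software model of one IdenWrEn write cycle: each set bit of the 8-bit
# IdenWrDWEn validates one 32-bit dword lane of the 256-bit IdenWrData.
def write_iden_row(buffer, iden_wr_addr, iden_wr_data, iden_wr_dwen):
    """Commit only the dwords whose enable bit is set; other lanes are untouched."""
    for lane in range(8):
        if iden_wr_dwen & (1 << lane):
            dword = (iden_wr_data >> (32 * lane)) & 0xFFFF_FFFF
            buffer[iden_wr_addr * 8 + lane] = dword
```

An SSD that returns one dword at a time asserts a single enable bit per cycle, so eight write cycles at the same IdenWrAddr complete one 256-bit row.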

 

 

Shutdown

It is recommended to send the Shutdown command as the last command before the system is powered down. When the Shutdown command is issued, the SSD flushes the data from its internal cache to the flash memory. After the Shutdown command is done, the NVMe IP and the SSD are inactive until the system is powered down. If the SSD is powered down without a Shutdown command, the total count of unsafe shutdowns (returned data of the SMART command) is increased.

 

 

Figure 10: Shutdown command timing diagram

 

The details when running Shutdown command are shown as follows.

1)     Before sending the command request, the IP must be Idle (UserBusy=0). To send Shutdown command, user asserts UserReq to 1 with UserCmd=001b.

2)     UserBusy is asserted to 1 after NVMe IP runs Shutdown command.

3)     UserReq is de-asserted to 0 to clear the current request after UserBusy is asserted to 1.

4)     UserBusy is de-asserted to 0 when the SSD is completely shut down. After that, the IP does not accept any further command requests from the user.

 

 

SMART

The SMART command checks the SSD health. After sending the SMART command, 512-byte health information is returned from the SSD. The SMART command loads its parameters from the CtmSubmDW0-DW15 signals on the Custom command interface. The user sets the 16-dword data to constant values for the SMART command before asserting UserReq. After that, the SMART data is returned via the CtmRAM port, as shown in Figure 11.

 

 

Figure 11: SMART command timing diagram

 

The details when running SMART command are shown as follows.

1)     Before sending the command request, the IP must be Idle (UserBusy=0).

All input parameters must be stable when UserReq is asserted to 1 for sending the request.

CtmSubmDW0-DW15 are set to the following constant values for the SMART command.

CtmSubmDW0                                         = 0x0000_0002

CtmSubmDW1                                         = 0xFFFF_FFFF

CtmSubmDW2 - CtmSubmDW5            = 0x0000_0000

CtmSubmDW6                                         = 0x2000_0000

CtmSubmDW7 - CtmSubmDW9            = 0x0000_0000

CtmSubmDW10                                       = 0x007F_0002

CtmSubmDW11 - CtmSubmDW15          = 0x0000_0000

2)     UserBusy is asserted to 1 after the NVMe IP runs the SMART command.

3)     UserReq is de-asserted to 0 to clear the current request. Next, user logic can change the input parameters for the next command request.

4)     512-byte SMART data is returned on the CtmRamWrData signal with CtmRamWrEn asserted to 1. CtmRamAddr runs from 0 to 15 as the data index of the 512-byte data. When CtmRamAddr=0, bytes 0-31 of the SMART data are valid on CtmRamWrData. CtmRamWrDWEn is the dword enable for each 32-bit word of CtmRamWrData. If CtmRamWrDWEn=FFh, all 256 bits of CtmRamWrData are valid.

5)     UserBusy is de-asserted to 0 when the SMART command is finished.
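The fixed submission values in step 1) can be written out as a 16-dword table. In the Python sketch below, the values are copied from the data sheet; the per-field comments are an interpretation based on the NVMe Get Log Page command layout and are not stated in the data sheet itself.

```python
# Fixed CtmSubmDW0-DW15 values for the SMART command (from the data sheet).
SMART_SUBM = [
    0x0000_0002,  # DW0 : opcode 02h = Get Log Page (admin command)
    0xFFFF_FFFF,  # DW1 : NSID = FFFFFFFFh (all namespaces)
    0x0000_0000, 0x0000_0000, 0x0000_0000, 0x0000_0000,  # DW2 - DW5
    0x2000_0000,  # DW6 : PRP1 low dword (buffer address used by the IP)
    0x0000_0000, 0x0000_0000, 0x0000_0000,               # DW7 - DW9
    0x007F_0002,  # DW10: NUMDL=07Fh (128 dwords = 512 bytes), LID=02h (SMART)
    0x0000_0000, 0x0000_0000, 0x0000_0000, 0x0000_0000, 0x0000_0000,  # DW11 - DW15
]
```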

 

 

Figure 12: CtmRamWrDWEn timing diagram

 

Similar to the Identify command, some SSDs do not return the 512-byte data continuously but return only one dword (32 bits) at a time. Therefore, one bit of CtmRamWrDWEn is asserted to 1 in the write cycle as the valid signal of the corresponding 32-bit word of CtmRamWrData. CtmRamWrDWEn[0], [1], ..., [7] correspond to CtmRamWrData[31:0], [63:32], ..., [255:224], respectively.

 

 

Flush

Most SSDs accelerate write performance by storing write data in a cache before the SSD controller flushes it to the flash memory. If power is lost unexpectedly, the data in the cache may be lost and never stored in the flash memory. The Flush command forces the SSD controller to flush data from the cache. After sending the Flush command, all data from previous Write commands is guaranteed to be stored in the flash memory.

 

 

Figure 13: Flush command timing diagram

 

The details for running Flush command are shown as follows.

1)     Before sending the command request, the IP must be Idle (UserBusy=0).

All input parameters must be stable when UserReq is asserted to 1 for sending the request.

CtmSubmDW0-DW15 are set to the following constant values for the Flush command.

CtmSubmDW0                             = 0x0000_0000

CtmSubmDW1                             = 0x0000_0001

CtmSubmDW2 - CtmSubmDW15   = 0x0000_0000

2)     UserBusy is asserted to 1 after NVMe IP runs Flush command.

3)     UserReq is de-asserted to 0 to clear the current request. Next, user logic can change the input parameters for the next command request.

4)     UserBusy is de-asserted to 0 when Flush command is done.
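The Flush submission values in step 1) are simpler than the SMART ones. In the sketch below, the values come from the data sheet; the comments interpreting DW0 as the NVM Flush opcode (00h) and DW1 as the namespace ID follow the NVMe command layout and are not stated in the data sheet.

```python
# Fixed CtmSubmDW0-DW15 values for the Flush command (from the data sheet):
# DW0 = 0 (NVM Flush opcode 00h), DW1 = 1 (namespace ID 1), DW2-DW15 = 0.
FLUSH_SUBM = [0x0000_0000, 0x0000_0001] + [0x0000_0000] * 14
```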

 

 

Error

 

Figure 14: Error flag timing diagram

 

When an error is found while running the initialization process or operating a command, the UserError flag is asserted to 1. UserErrorType can be read to check the error type. NVMeCAPReg, AdmCompStatus, and IOCompStatus are valid for monitoring error details after UserError is asserted to 1.

When an error is found during the initialization process, it is recommended to read NVMeCAPReg to check the capability of the NVMe SSD. When an error is found while operating a command, it is recommended to read AdmCompStatus and IOCompStatus.

The UserError flag is cleared only by the RstB signal. After the failure is resolved, RstB is asserted to 0 to clear the error flag.

 

Verification Methods

The NVMe IP Core for Gen4 functionality was verified by simulation and also proven on a real board design using the Agilex F-Series FPGA development board.

 

Recommended Design Experience

Experienced design engineers with knowledge of the Quartus tools should easily integrate this IP into their design.

 

Ordering Information

This product is available directly from Design Gateway Co., Ltd. Please contact Design Gateway Co., Ltd. for pricing and additional information about this product using the contact information on the front page of this datasheet.

 

Revision History

Revision   Date        Description

2.0        29-Sep-22   Support two buffer modes

1.0        2-Jun-21    New release