High-performance computing requires ever-faster semiconductors as more servers come online to meet the growing demand for big data analytics and autonomous mobility.
But the disparity in transfer speed between processors and memory chips has long been an obstacle to further gains in computing system performance, often creating a data bottleneck within a chipset.
This is why memory chip engineers should pay close attention to the next-generation interconnect standard blueprints, even though they are far from commercialization.
"Member companies know the path I'm on will allow me to grow, and then we try to find the specifications ahead of time," said Richard Solomon, vice president of Peripheral Component Interconnect Special Interest Group, a chip industry association that advocates PCI Express standard, in an interview with The Korea Herald.
South Korean powerhouses such as Samsung Electronics and SK hynix -- both of which are PCI-SIG members -- are no exception, he added.
PCI Express is one of the world's semiconductor standards under which different types of chips, such as graphics cards and solid-state drive storage, can be connected with one another to ensure seamless inter-chip data transmission. Its technology blueprint is "three to four years ahead of where the industry needs that much bandwidth," according to Solomon.
Storage products like solid-state drives have historically had less bandwidth scalability than processors.
For storage products like solid-state drive chips, the current generation of PCI Express cannot satisfy such cutting-edge requirements because the number of available lanes is capped at four, regardless of form factor.
This stands in contrast with graphics cards, which can increase transfer speed simply by adding more lanes of the information highway -- usually eight, 16 or more -- so that bandwidth goes up while staying on the current generation of PCI Express.
To reach a data speed of 256 gigabytes per second in a data center, for example, a chip component can use 16 lanes of 16-gigabyte-per-second highways under the existing PCI Express standard. But because SSDs are capped at four lanes, matching that bandwidth would require at least 64 gigabytes per second per lane -- a speed that exists only in theory, urging memory chip engineers to speed up work on commercialization.
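The lane arithmetic described above can be sketched in a few lines. This is an illustration using the article's own figures, not official PCI-SIG specifications for any particular PCI Express generation:

```python
def total_bandwidth_gbps(lanes: int, per_lane_gbps: float) -> float:
    """Aggregate link bandwidth is simply lane count times per-lane speed."""
    return lanes * per_lane_gbps

# A 16-lane link at 16 GB/s per lane reaches the 256 GB/s target:
gpu_bw = total_bandwidth_gbps(lanes=16, per_lane_gbps=16)   # 256 GB/s

# An SSD capped at 4 lanes must make up the difference per lane:
ssd_per_lane = gpu_bw / 4                                    # 64 GB/s per lane
print(gpu_bw, ssd_per_lane)
```

This makes the trade-off concrete: processors and graphics cards scale out by adding lanes, while four-lane storage devices can only scale up each lane's speed.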
"As the capacity grows they naturally have more chips, which means they can naturally get more bandwidth. But (memory and storage products) are stuck with that 4-lane defined standard form factor," Solomon said.
The memory chip bottleneck has been a cause for concern for both Samsung Electronics and SK hynix, which together hold about 75 percent of the global market for SSDs used in internet servers.
Over the past few years, the two companies have been working on a groundbreaking interconnect standard called CXL, designed to expand memory capacity and bandwidth. In May, Samsung unveiled memory prototypes boasting 512-gigabyte-per-second speeds built on the CXL interconnect standard.
Solomon said the two chip interconnect standards, PCI Express and CXL, can complement each other on the same physical layer, as CXL focuses on cache coherence while PCI Express aims to optimize the signaling mechanism.
Solomon visited Korea as one of the speakers at the PCI-SIG Developers Conference Asia-Pacific 2022 held Monday in Seoul. It was the first time PCI-SIG hosted such a conference in Korea. Previously, the Asia-Pacific conference had been held in Taiwan and Japan.
Korea is home to 17 of PCI-SIG's more than 900 members worldwide as of September.