Technology is constantly reshaping the data center. What is driving this change? In 2017, nearly 250 million people went online for the first time, and that number grew by another 7% in 2018. Eleven new social media users come online every second, and the average person spends about six hours a day on the Internet.
The reason data centers keep changing is simple: profit. Almost every company now has its own website, and in 2017 e-commerce generated nearly 1.5 trillion US dollars for businesses. But if your site takes more than three seconds to load, you may lose nearly a quarter of your visitors, and a one-second delay can cost 11% of page views and 7% of sales opportunities.
As a result, server port speeds have grown rapidly in recent years and will continue to grow, which in turn drives the sales and development of optical transceivers. As Figure 1 shows, 1G connections are already a thing of the past, and 10G will soon follow. 25G transceivers are on the market today but will be displaced by 50G within the next few years, and many hyperscale and cloud data centers are expected to adopt 100G server ports in the coming years. These higher server speeds are delivered by duplex (2-core) or parallel (8-core) optical transceivers at channel rates of 40G, 100G, 200G, and 400G.
Figure 1. Global server shipments (source: Dell’Oro Group)
Higher transfer rates through different technologies
Transceiver manufacturers use several different technologies to increase transmission rates.
The first is to increase the baud rate (symbol rate). This works well at lower data rates, but at higher rates the signal-to-noise ratio becomes much harder to manage.
The second method is to increase the number of fibers, extending the link from 2 cores to 8 cores.
A third method is to combine multiple light sources onto one fiber, commonly known as wavelength division multiplexing (WDM) and demultiplexing.
The fourth method is to change the modulation format, using four-level pulse amplitude modulation (PAM4) to carry two bits per symbol and so achieve higher data rates. Whichever method is used, the final fiber link (Figure 2) is either 2-core or 8-core.
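The four levers above all multiply a base lane rate. A minimal sketch of how they combine, using illustrative numbers (not vendor specifications):

```python
# Illustrative sketch: how aggregate transceiver rate scales with each lever.
# The example rates below are common industry figures, used for illustration.

def aggregate_rate_gbps(baud_gbd, bits_per_symbol, lanes):
    """Aggregate data rate = symbol rate x bits per symbol x parallel lanes.

    'lanes' covers both extra fibers (parallel optics) and extra
    wavelengths (WDM); NRZ carries 1 bit/symbol, PAM4 carries 2.
    """
    return baud_gbd * bits_per_symbol * lanes

# 100G over 4 parallel fiber pairs: 25 GBd NRZ x 4 lanes
print(aggregate_rate_gbps(25, 1, 4))   # 100
# 400G style: 50 GBd PAM4 x 4 lanes
print(aggregate_rate_gbps(50, 2, 4))   # 400
```

Raising the baud rate, adding lanes, and switching NRZ to PAM4 are independent multipliers, which is why transceiver designers can mix them to reach each new rate step.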
Figure 2: Migration path
2 cores or 8 cores?
So should we choose a duplex (2-core) or a parallel (8-core) solution? Let's compare them in terms of price, power, density, and flexibility.
First, 2-core duplex transceivers require newly developed components to reach higher data rates, whereas parallel optical transceivers can be built into next-generation products using existing technology. A parallel optical transceiver can use either four uncooled lasers or a single laser with waveguides and modulators. As a result, the 8-core parallel link is not only cheaper but also has lower overall power consumption.
Figure 3: Parallel transceivers have lower power and cost
Second, power is the largest operating expense in the data center, so low-power products help reduce operating costs. A 10G transceiver consumes roughly 1 W, while a 40G parallel optical transceiver consumes 1.5 W. One 40G transceiver replaces four 10G transceivers, cutting power consumption by about 60%. The cooling system also consumes power, so energy saved in the electronics translates into energy saved in cooling, compounding the overall saving.
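The roughly 60% figure follows from simple arithmetic; a quick check, assuming about 1 W per 10G transceiver as the text suggests:

```python
# Power comparison: four discrete 10G transceivers vs one 40G parallel
# transceiver. The ~1 W per-10G figure is an approximation from the text.
power_10g_w = 1.0           # per 10G transceiver (approximate)
power_40g_parallel_w = 1.5  # one 40G parallel optic

four_by_10g = 4 * power_10g_w
saving = 1 - power_40g_parallel_w / four_by_10g
print(f"{saving:.0%}")  # ~62%, consistent with the ~60% claim
```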
Finally, in high-density deployments, parallel optical links help reduce total cost of ownership. On a 36-port high-density QSFP line card, each port can be broken out into four 10G ports, so a single card supports up to 144 10G links. This reduces the number of line cards required, and with them the power supplies, cooling, monitoring equipment, controllers, and software licenses!
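The density claim is easy to verify. A sketch using the 36-port card described above, with a hypothetical 48-port fixed 10G card added purely for comparison:

```python
# Breakout density on a high-density QSFP line card.
qsfp_ports_per_card = 36
breakout = 4            # each QSFP port splits into four 10G links

links_per_card = qsfp_ports_per_card * breakout
print(links_per_card)   # 144 10G links from a single card

# Cards needed for the same link count on a hypothetical fixed
# 48-port 10G card (illustrative comparison only):
cards_10g = -(-links_per_card // 48)  # ceiling division
print(cards_10g)        # 3 cards instead of 1
```

Fewer cards is what drives the downstream savings in power supplies, cooling, and licensing.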
To capture these cost savings, the structured cabling system must support 8-core connectivity! A base-8 structured cabling system makes the cabling more flexible and smooths the migration to higher data rates, since most of the existing fiber accessories and conversion modules can continue to be used.
Cabling Deployment
Structured cabling is not a new concept. Data centers have steadily moved from ad hoc point-to-point connections to pre-terminated multi-fiber assemblies such as trunk cables. Data center fiber cabling systems typically use 12- to 144-core MTP/MPO pre-terminated cables as the backbone. However, ever-growing data center size and evolving network architectures call for higher-core-count cables, such as 288, 432, or even 576 cores. High core count cables greatly increase the fiber density that can be deployed in limited tray space, and because fewer cables are pulled, deployment time and installation cost both drop.
Figure 4 compares three deployment scenarios using cables of different core counts in the same tray space:
- 370 x 12-core MTP cables, for a total of 4,440 cores
- 95 x 144-core MTP cables, for a total of 13,680 cores
- 56 x 288-core MTP cables, for a total of 16,128 cores
Figure 4: Fill rate of different core-count cables in the same straight tray section (12" x 6")
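The totals in the three scenarios above are straightforward products of cable count and core count; a quick recomputation:

```python
# Cable-count x core-count totals for the three Figure 4 scenarios.
scenarios = [
    (370, 12),   # 370 cables x 12 cores
    (95, 144),   # 95 cables x 144 cores
    (56, 288),   # 56 cables x 288 cores
]
for cables, cores in scenarios:
    print(f"{cables} x {cores}-core = {cables * cores} cores")
# 4440, 13680, 16128 -- a higher core count per cable packs far
# more fiber into the same tray cross-section.
```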
Data centers keep growing in scale, and a single building can no longer meet the needs of a hyperscale data center. Hyperscale facilities often span multiple buildings, and the campus network requires a cabling infrastructure built on high-core-count pre-terminated or conventional fiber cables as the backbone. These trunk cables sometimes exceed 864 cores, reaching 1,728 or even 3,456 fibers.
Structured cabling solutions
To meet high-core-count deployment requirements, several solutions build on the multi-fiber MTP/MPO connector. These connectors shorten installation time and provide an evolution path from 2-core to 8-core transceivers. Deploying structured cabling with multi-fiber connectors reduces total cost of ownership.
- High-core-count MTP/MPO trunk cable
When deploying backbone cable within the same equipment room, for example from the MDA to the HDA or EDA, the MTP pre-terminated trunk cable is the key component for high-core-count deployments and the most cost-effective solution, allowing smooth future migration to 40/100/200/400GbE transmission systems. Once installed, the MTP pre-terminated trunk can be terminated at either a straight MTP port or an MTP-to-LC module.
Figure 5: High-core pre-terminated cable (432-core MTP-to-MTP)
- High-core-count MTP/MPO pigtail trunk cable
There are two application scenarios for the pigtail backbone cable:
1) When the cable route must pass through a duct that is too small for the MTP connector to pass through safely.
2) When the exact length and routing of the pre-terminated cable are uncertain at deployment time, or when branching is required.
When installing a pigtail trunk cable, take care to protect the exposed bare-fiber section. The bare fiber ends can be terminated with field-installable (quick) connectors or fusion-spliced pigtails.
- High-core-count optical cable
Some applications and deployment scenarios may require ultra-high-core-count cable. Deploying 864-, 1,728-, or 3,456-core cables, for example, poses duct-routing challenges; ribbon cable has a small outer diameter (OD) and is well suited to congested ducts.
The ends of such cables can be terminated with a variety of fiber connectors, pigtail assemblies, splice boxes, and the like. Compared with MTP pre-terminated cable, this type of cable increases deployment time, because field termination is labor-intensive, and its optical performance may not match that of factory pre-terminated cable.