The demand for higher bandwidth, greater data throughput, and lower latency in digital systems never ceases. New performance requirements emerge whenever researchers, scientists, and engineers push the boundaries of their fields with innovative techniques. In this post, we explore the data throughput challenge in modern digital systems by looking at some specific applications.

In wireless communications, while most people are amazed by the speed of the Internet on their 4G-LTE mobile phones, researchers are working on tomorrow’s 5G technologies that will deliver even higher data rates and quality of service to users. A concept called massive multiple-input/multiple-output (MIMO) is actively being explored. In massive MIMO, signals from hundreds of antennas must be aggregated in a central processing unit, where algorithms leverage the special characteristics of the large antenna array. In such systems, traditional switched data buses (e.g. PCI Express) become a bottleneck when increasing the number of antennas while preserving full bandwidth. We explored this issue and proposed solutions in a previous blog post.

Medical imaging is another field where data throughput is a concern, especially for scientists working on imaging algorithms and systems with growing numbers of channels. As with telecom’s massive MIMO, the algorithms rely on the system’s capacity to aggregate data from a growing number of channels into one central processing unit. Again, the limited capacity of traditional switched data buses becomes an issue. Some solutions were presented in a previous blog post.

In some specific cases, such as high-energy physics instrumentation, latency matters more than throughput. When experiments involving astronomical amounts of energy are conducted (e.g. in particle accelerators), the reaction time of the instrumentation and control systems is critical. This drives demand for systems capable of carrying digital signals with ever-lower latency. Switched data buses are not an option; we discussed some solutions in a previous blog post.

As a solution to these challenges, Nutaq introduced support for the Xilinx Aurora interface and data transfer parallelization. Aurora is a link-layer protocol used to move data across point-to-point serial links. Nutaq’s Aurora Board Support Development Kit (BSDK) and Model-Based Design Kit (MBDK) cores provide ready-to-use implementations of the Xilinx Aurora 8b/10b communication protocol for the Perseus601x and Perseus611x AMC carrier boards.
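One practical consequence of the 8b/10b encoding used by Aurora is that every 8 payload bits are transmitted as 10 line bits, so the usable bandwidth of a link is 80% of its raw line rate. The sketch below illustrates this arithmetic; the lane rate and lane count are illustrative assumptions, not Perseus board specifications, and the calculation ignores Aurora’s small additional framing and clock-compensation overhead.

```python
def aurora_8b10b_payload_gbps(line_rate_gbps: float, lanes: int) -> float:
    """Approximate payload throughput of an Aurora 8b/10b link.

    8b/10b encoding sends 10 line bits per 8 payload bits, so the
    usable bandwidth is 8/10 (80%) of the aggregate raw line rate.
    Framing and clock-compensation overhead are not modeled here.
    """
    return line_rate_gbps * lanes * 8 / 10

# Example: a hypothetical 4-lane link at 3.125 Gbps per lane
# raw aggregate = 12.5 Gbps; usable payload = 10.0 Gbps
print(aurora_8b10b_payload_gbps(3.125, 4))  # -> 10.0
```

This 20% encoding overhead is one reason later serial protocols moved to more efficient line codes such as 64b/66b.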

In this blog series, we’ll explore the throughput and latency performance of Nutaq’s Aurora BSDK/MBDK cores in various scenarios.