My application requires four DVI or SDI framegrabbers. Because of USB bandwidth limitations I’ve had to work around in the past, I purchased a HighPoint RocketU 1244A 4-port USB 3.2 card, which has a separate controller for each port. The card is installed in a Dell PowerEdge R740 running Windows 10. There is an 8-lane PCIe 3.0 connection to the card and a 2-lane connection to each controller. This provides 1.969 GB/s of PCIe bandwidth to each controller, with each controller itself capable of 1.212 GB/s. Testing with USB 3.0 test hardware demonstrates a simultaneous one-direction transfer rate of at least 430 MB/s on each port.

My video streams are 1920x1080@60. Capturing them at 30 fps as RGB 24, the data rate would be ~187 MB/s per stream, so the hardware is more than capable of supporting 30 fps on all four framegrabbers at the same time.

However, I am only able to get the full frame rate in the Capture tool when one or two framegrabbers are attached. When a third framegrabber is attached, the frame rate falls drastically. Looking at the reported frame rates in the Capture tool windows, they add up to between 55 and 74 fps total. No software other than the Capture tool (Version 3.32.4.2) is accessing the framegrabbers. It would seem that the drivers and/or Capture tool are somehow limiting the total frame rate to far less than what the hardware can support. Is there an explanation for this? Is there a fix for this?
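As a quick sanity check of the bandwidth math above (just a rough sketch, assuming 24 bits per pixel and decimal MB/s, using the figures quoted in this post):

    # Rough bandwidth sanity check: 1920x1080 @ 30 fps, RGB 24 (3 bytes/pixel).
    # All figures are decimal MB/s; the PCIe/USB numbers are the ones quoted above.
    per_stream = 1920 * 1080 * 3 * 30 / 1e6       # ~187 MB/s per framegrabber
    all_four   = 4 * per_stream                   # ~747 MB/s aggregate
    pcie_per_controller   = 1969                  # MB/s, PCIe 3.0 x2 to each controller
    usb_measured_per_port = 430                   # MB/s, measured bulk transfer rate
    print(per_stream, all_four)
    print(usb_measured_per_port - per_stream)     # per-port headroom, ~243 MB/s

Since each port only needs to carry a single stream, even the measured 430 MB/s leaves plenty of headroom per framegrabber.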
Hello Brawls - I’m sorry you’re running into this issue. Theoretically, there should be nothing in our software or capture cards that would cause the issues you’re seeing. The fact that each additional DVI2USB 3.0 capture card added to the system results in lower frame rates suggests there is a bottleneck somewhere in the system limiting actual data throughput.
I’m afraid it’s hard for me to say exactly where this bottleneck may exist, as we have not tested multiple DVI2USB 3.0 capture cards connected through a HighPoint RocketU 1244A 4-port USB 3.2 card or on a Dell PowerEdge R740. One possible explanation is that the PCIe slot being used is not connected directly to the CPU but routed through the motherboard chipset, which could create a bottleneck.
Out of curiosity: if you connect all four of the DVI2USB 3.0 and/or SDI2USB 3.0 devices to the system and capture on all of them, what colour are the LEDs? Do they begin to turn red, or not turn on at all?
I’d be interested to know if Dell has any advice regarding what you’re facing. Have you reached out to Dell?
One other thing we could try, assuming all capture cards can be connected to the system at the same time and remain powered on, is to capture multiple feeds into OBS or an equivalent program - for example, four 1080p feeds inside a 4K frame. This would use DirectShow, resulting in a chroma-subsampled output from the capture cards, which means less data per second. Is it possible to receive higher frame rates from the capture cards outside of the Epiphan Capture tool?
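For a rough sense of how much the chroma-subsampled output reduces the data rate (just a sketch - the exact format the card negotiates over DirectShow may vary, but assuming a 4:2:2 format such as YUY2 at 2 bytes per pixel versus RGB 24 at 3 bytes per pixel):

    # Per-stream data rates at 1920x1080, 30 fps, in decimal MB/s.
    # Assumes RGB 24 = 3 bytes/pixel and a 4:2:2 format (e.g. YUY2) = 2 bytes/pixel.
    pixels_per_second = 1920 * 1080 * 30
    rgb24 = pixels_per_second * 3 / 1e6    # ~187 MB/s
    yuy2  = pixels_per_second * 2 / 1e6    # ~124 MB/s
    print(rgb24, yuy2)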
Looking forward to hearing from you,
Sorry for the delay, but here is some additional information I’ve gathered.
The Dell PowerEdge R740 is a server with two Intel Xeon Silver 4210 CPUs running at 2.2 GHz and 64 GB of RAM. Testing of the HighPoint RocketU 1244A with USB 3.0 hardware has shown that the limiting hardware factor in the system is USB 3.0 speed, which is well below the actual PCIe bandwidth available. (Roughly 8 GB/s is available to the card, and there is no problem sustaining USB 3.0 bulk transfers at 430 MB/s x 4 = 1720 MB/s.)
The tested configuration contains all DVI2USB 3.0 devices. The LEDs are flashing blue before capturing video, and solid green while capturing video.
We have not reached out to Dell at this point since the system is performing as expected, except in this one aspect of frame rate performance.
The Dell has two CPUs, each with 10 cores / 20 threads, for a total of 20 cores & 40 threads. Within Task Manager, the 40 threads show up as 40 CPUs. On one occasion I observed that most of the processing was being done with only one CPU (out of 40). With one Capture window actively capturing video, the frame rate was 30fps and the one CPU being used was about half loaded. When a second window started capturing, the one CPU was about fully loaded and the frame rate for both windows was 30fps. When a third window started capturing, the one CPU was completely saturated, and no other CPU appeared to pick up any load. The frame rate for all three windows had dropped to around 19fps.
Based on that behavior, it appeared at first that some critical thread was likely CPU bound and limiting overall frame rate. However, when I repeated the exercise to find out what thread was being limited, I didn’t see any threads consuming more than a tiny amount of any CPU’s resources. In addition, the one CPU that seemed to have been saturated before was now showing only about half as much load.
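For anyone who wants to double-check this kind of thing, here is a rough sketch of how per-thread CPU time could be sampled for the Capture tool processes (v2ugui2.exe); it assumes Python with the psutil package installed and an elevated prompt, and 100% corresponds to one full core:

    # Rough sketch: sample per-thread CPU time for the Capture tool processes
    # to see whether any single thread approaches one full core (100%).
    # Assumes the psutil package is installed; run from an elevated prompt.
    import time
    import psutil

    PROC_NAME = "v2ugui2.exe"   # the Capture tool process
    INTERVAL = 2.0              # seconds between samples

    def thread_times(proc):
        # total CPU seconds consumed so far, per thread id
        return {t.id: t.user_time + t.system_time for t in proc.threads()}

    procs = [p for p in psutil.process_iter(["name"]) if p.info["name"] == PROC_NAME]
    before = {p.pid: thread_times(p) for p in procs}
    time.sleep(INTERVAL)

    for p in procs:
        for tid, t1 in thread_times(p).items():
            t0 = before[p.pid].get(tid, 0.0)
            core_pct = 100.0 * (t1 - t0) / INTERVAL   # 100% == one core fully busy
            if core_pct > 10:
                print(f"pid {p.pid}  thread {tid}: {core_pct:.0f}% of one core")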
I installed OBS to see if it would behave similarly to the Capture windows, and I observed the same frame rate slowdown. I couldn’t see how to make OBS display the frame rate of incoming video, but the reduced frame rate was obvious visually. (As a side note, I could see that the OBS workload was evenly spread among the many CPUs.)
So, I am back to square one for now. None of the threads of the v2ugui2.exe processes appear to be CPU-limited in any way, and my hardware has been demonstrated to be capable of much more bandwidth than the framegrabbers could require. I am now wondering if there is a problem with the USB or DirectShow protocols and isochronous transfers that reserve bandwidth. Could the USB/DirectShow stack be limiting throughput based on some faulty calculation of available bandwidth? Does anyone know how to view or debug the bandwidth that the USB/DirectShow stack is allocating to each device? Any other ideas would be appreciated as well.
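I haven’t found a way to inspect DirectShow/USB bandwidth reservations directly, but one related thing that can be checked from Windows is which USB host controller each framegrabber is actually attached to, to confirm they really are spread across the four controllers. Here is a rough sketch using the Python wmi package (it assumes the devices report a name containing "DVI2USB", which may need adjusting):

    # Rough sketch: map USB devices to their host controllers via WMI, to check
    # that each framegrabber sits on its own controller on the RocketU card.
    # Assumes the "wmi" package (pip install wmi) and that the device names
    # contain "DVI2USB" -- adjust the filter if they show up differently.
    import wmi

    c = wmi.WMI()
    for assoc in c.Win32_USBControllerDevice():
        controller = assoc.Antecedent    # Win32_USBController
        device = assoc.Dependent         # Win32_PnPEntity
        name = device.Name or ""
        if "DVI2USB" in name:
            print(f"{name}  ->  {controller.Name}")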
I just checked with our product managers - there should not be anything inside the DVI2USB 3.0 software/hardware that would limit the data throughput when multiple devices are connected and capturing at the same time. The fact that the same issue is seen when using DirectShow further indicates this.
At this point, I do believe there is a bottleneck somewhere in the system that is limiting data throughput; I would suggest reaching out to Dell to find out where that might be.
I hope they can offer more insight into this!