FPS drop when Video Output is enabled

Hi all,

I am experiencing a sharp drop in FPS when I enable Video Outputs. My file setup:

  • 7x mp4 video files, 60 minutes each, 24 fps, 2160x3840 portrait format, ~4 GB each file
  • 7x Video Reader --> Render to Canvas nodes in one layer
  • 7x canvases (2160 x 3840)
  • 7x Video Outputs, one canvas mapped to each
  • 7x physical screens

As soon as I enable one of the video outputs, my FPS drops from 60 to about 47, even when the timeline is not playing. If I enable a second output, it drops to 36, then to 26 with a third, and so on from there.

The GPU and other system resources seem fine.

Questions / possible causes

  • Is the file size of each video too big?
  • Should I consolidate my videos into a 2D layer / single texture, rather than using 7x Render to Canvas nodes?
  • Is this just too many pixels for LightAct to push at a good frame rate?
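
On the "too many pixels" question, here is a back-of-the-envelope estimate of the raw pixel throughput this setup implies. It assumes 4 bytes per pixel (an uncompressed RGBA texture); the formats LightAct actually uses internally may differ, so treat this as a rough upper bound, not a measurement:

```python
# Rough pixel-throughput estimate for 7x portrait-UHD streams at 24 fps.
# Assumes an uncompressed RGBA texture (4 bytes/pixel); LightAct's actual
# internal formats and decode path may differ.
width, height = 2160, 3840   # portrait UHD
fps = 24
streams = 7
bytes_per_pixel = 4          # RGBA assumption

pixels_per_second = width * height * fps * streams
bandwidth_gb_s = pixels_per_second * bytes_per_pixel / 1e9

print(f"{pixels_per_second:,} px/s")                     # ~1.39 billion px/s
print(f"~{bandwidth_gb_s:.1f} GB/s of raw texture data")  # ~5.6 GB/s
```

That is a lot of data per second, but well within what a modern workstation GPU can composite, which is part of why the outputs themselves being the trigger is surprising.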

Video here of this test.



I would recommend using a structured approach to pinpoint the cause of this issue. For example, you mentioned that the FPS drops when you enable one of the video outputs. If I'm reading this correctly, all 7 video files are being read, played and rendered to their respective canvases perfectly fine at a high framerate as long as you don't enable a video output? If that's true, then I'm guessing that your issue is not related to the video files at all, but to the GPUs and the hardware side of things, right?

Also, since you’ve been testing this setup for quite a while, did you experience this on your other computer too?

Last but not least, does the issue happen even if you render, say, 7 generative content nodes (such as Color Palette) instead of the 7x video files?


Hi Mitja,

Appreciate these ideas! Yes, I am able to read, play and render the video files to their canvases, and map the video outputs, at a consistent 60 fps, as long as the video outputs are disabled. But as soon as I enable a Video Output, the frame rate drops.

You're right – I have been testing this for a while, and yes, I have seen the same FPS issue on my testing PC. However, I have not had access to the production PC, with its 7x outputs, until this week. My testing PC has only 2x video outs and nowhere near the same GPU / CPU specs as the production PC – I'm running a Quadro M2200, believe it or not, while our production PC has 2x A6000 – so I had been hoping the dramatic increase in GPU speed / VRAM would solve the FPS issue.

I tested with a Color Sine node passing its texture to each video screen, and the result was the same: 60 FPS until I enabled the video outputs. This tells me it is not an issue with our mp4s being too heavy – does that sound right to you?

Video here of the test.
Video here showing the hardware performance / GPU usage.

Are there any other LightAct parameters I should be considering, or a different render pipeline / workflow I should test? I am going to look into possible issues on the GPU side.


Hi Scott,

We can definitely rule out the video files as a cause, because if they were the problem, we wouldn't see an FPS drop when using a Color Sine node.

I would bet that the issue is somehow connected to the fact that you are using 2 GPUs. Judging from the fact that the FPS drops when you enable the outputs, I would say that the texture has to be streamed from one GPU to the other, and this slows down performance. Are you using NVLink at all?

What I would also be interested in is this:

  • Try to find out which GPU LightAct is running on, and see if the FPS still drops when you enable only the outputs on that same GPU.
  • Just for testing purposes, remove one GPU and see how the FPS behaves.
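
For the first point, one way to see which GPU is doing the work is to poll nvidia-smi while you toggle the outputs on and off, and watch which GPU's utilization and memory move. The query flags below are standard nvidia-smi options; the small wrapper around them is just a sketch of mine:

```python
import shutil
import subprocess

# Per-GPU load snapshot: one CSV row per GPU with index, name,
# utilization and memory used. Run it repeatedly while enabling
# outputs to see which GPU LightAct's windows are rendering on.
QUERY_FIELDS = "index,name,utilization.gpu,memory.used"

def gpu_load_command():
    """Build the nvidia-smi invocation (flags are standard nvidia-smi options)."""
    return ["nvidia-smi", f"--query-gpu={QUERY_FIELDS}", "--format=csv,noheader"]

if shutil.which("nvidia-smi"):
    result = subprocess.run(gpu_load_command(), capture_output=True, text=True)
    print(result.stdout)  # e.g. one line per GPU: "0, RTX A6000, 37 %, 4123 MiB"
```

You could also just open two instances of Task Manager's GPU view, but the CSV output is easier to log over time.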

I'm sorry I can't be of more help here, but we never use multiple GPUs, so my experience is a bit limited. @John might be able to help you more, though, as it seems he has successfully set up a multi-GPU system.

All best,


Hi smsmith,

I can't see your video. Can you reshare it with "anyone with the link" access enabled?
We use a computer with 3 GPUs:
1x RTX 4000 for monitoring
2x RTX 5000 with NVLink (though in our case NVLink only slightly increases performance), driving 2x 4 outputs.
In NVIDIA Mosaic, I set up a desktop spanning the 8 outputs and set it as the "primary screen" (very important – I think this is your problem).
In the NVIDIA driver, you have to set LightAct to run on the GPU your outputs are connected to (the RTX 5000s in our case).
When you launch LightAct, the app opens on the primary screen. I use the Shift + Windows + left/right arrow shortcut to move the LightAct window to the monitoring screen. Sometimes I still get an FPS drop, depending on what I am running. Then I need to click once anywhere in the output display to bring LightAct's output window into focus.


Hi Jonathan, Mitja,

I’ve reshared the links – apologies for the access issue.

Video here of the original test, with 7x 4K videos rendered to 7x canvases.

Video here of the Color Sine test.
Video here showing the hardware performance / GPU usage during the Color Sine test.

Thanks for taking the time. @John I’m going to take some time to think through your setup and will get back with the results!


Thanks again for the reply and for sharing your experience.

I am not using NVIDIA Mosaic currently. Instead, I have 8 individual displays set up in the Windows "Display" settings. The setup is like this:

Display 1 is the GUI (1080p), and 2-8 are the exhibition displays (4K UHD).

I'm not against Mosaic; I'm just inexperienced with this kind of system. Should I be using NVIDIA Mosaic instead? If I understand correctly, with Mosaic I would only have 2x displays visible, and I would use Mosaic to spread Display 2 over outputs 2-8. Is that correct? Something like this:

(I am making these diagrams myself, since I don’t have access to the production PC at the moment, sorry if they look strange!)


Yep, that's it. And don't forget to focus the output window. With the two-desktop system, you will have one LightAct window, and the second one is the big full-screen window covering the entire second desktop (which needs to be set as primary). It's this LightAct window that needs to be in focus. With TeamViewer, I think you need to switch to the second desktop and just click anywhere.


@John great – thanks so much. I will have access to the PC again in a couple of days to test Mosaic.

@meetya I did some testing yesterday and learned a couple of things –

  • The FPS drop still happens even when one of the GPUs is disabled. I did a test with 3x outputs from LightAct (the 4th GPU output was used for the LightAct GUI). The FPS dropped from 60 to 30 when I enabled the video outputs.
  • I changed the Power Management Mode for LightAct in the NVIDIA Control Panel to “Prefer Maximum Performance”, and this did not change the FPS issue.
  • The GPUs are linked by an SLI bridge, not NVLink. I realize LightAct might not be optimized for multiple GPUs anyway, but is there any chance the SLI bridge is a problem? Should we get an NVLink bridge?

Will get back to you guys once I’ve tested Mosaic. Fingers crossed.


The bridge between the cards is the same for SLI or NVLink. You have to select which mode you want in the NVIDIA settings:
SLI (GPU share) -> all the outputs of one of the two cards are disabled.
NVLink (VRAM share) -> all the outputs work.

You should try SLI mode to force LightAct to manage only one card and see what happens.
But you definitely need NVLink mode. For our part, we only see slightly better performance with it.


Ah – thanks @John. I just learned from our AV integrator that there is actually no bridge between the two graphics cards. So I guess I can't do the VRAM share.

Still, I appreciate you sharing your experience!