
Adaptive sync (gsync or freesync) mega thread


TyGuy

I just found the best and most comprehensive gsync study out there and wanted to put it here for anybody who currently has an adaptive sync display or plans on getting one. This technology will be coming to televisions sometime soon too, so it could apply to anybody eventually. The new Xbox already has support for FreeSync televisions, which don't even exist yet.

@gourimoko
@Jack Brickman
@Cassity14
@Cratylus






G-SYNC 101: Input Lag & Optimal Settings
Published June 19, 2017 by jorimt

blur-busters-gsync-101-header.png


Test Setup
High Speed Camera: Casio Exilim EX-ZR200 w/1000 FPS 224x64px video capture
Display: Acer Predator XB252Q 240Hz w/G-SYNC (1920×1080)
Mouse: Razer Deathadder Chroma modified w/external LED
Nvidia Driver: 381.78
Nvidia Control Panel: Default settings (“Prefer maximum performance” enabled)
OS: Windows 10 Home 64-bit (Creators Update)
Motherboard: ASRock Z87 Extreme4
Power Supply: EVGA SuperNOVA 750 W G2
Heatsink: Hyper 212 Evo w/2x Noctua NF-F12 fans
CPU: i7-4770k @4.2GHz w/Hyper-Threading enabled (8 cores, unparked: 4 physical/4 virtual)
GPU: EVGA GTX 1080 FTW GAMING ACX 3.0 w/8GB VRAM & 1975MHz Boost Clock
Sound Card: Creative Sound Blaster Z (optical audio)
RAM: 16GB G.SKILL Sniper DDR3 @1866 MHz (dual-channel: 9-10-9-28, 2T)
SSD (OS): 256GB Samsung 850 Pro
HDD (Games): 5TB Western Digital Black 7200 RPM w/128 MB cache
Test Game #1: Overwatch w/lowest settings, “Reduced Buffering” enabled
Test Game #2: Counter-Strike: Global Offensive w/lowest settings, “Multicore Rendering” disabled

Introduction
The input lag testing method used in this article was pioneered by Blur Busters’ Mark Rejhon, and originally featured in his 2014 Preview of NVIDIA G-SYNC, Part #2 (Input Lag) article. It has become the standard among testers since, and is used by a variety of sources across the web.

mouse-led-animated.png


Middle Screen vs. First On-screen Reaction
In my original input lag tests featured in this thread on the Blur Busters Forums, I measured middle screen (crosshair-level) reactions at a single refresh rate (144Hz), and found that both V-SYNC OFF and G-SYNC, at the same framerate within the refresh rate, delivered frames to the middle of the screen at virtually the same time. This still holds true.



However, while middle screen measurements are a common and fully valid input lag testing method, they are limited in what they can reveal, and do not account for the first on-screen reaction, which can mask the subtle and not-so-subtle differences in frame delivery between V-SYNC OFF and various syncing solutions; this is why I opted to capture the entire screen this time around.

Due to the differences between the two test methods, V-SYNC OFF results generated from first on-screen measurements, especially at lower refresh rates (for reasons that will later be explained), can appear to have up to twice the input lag reduction of middle screen readings:



As the diagram shows, this is because the measurement of the first on-screen reaction is begun at the start of the frame scan, whereas the measurement of the middle screen reaction is begun at crosshair-level, where, with G-SYNC, the in-progress frame scan is already half completed, and with V-SYNC OFF, can be at various percentages of completion, depending on the given refresh rate/framerate offset.

When V-SYNC OFF is directly compared to FPS-limited G-SYNC at crosshair-level, even with V-SYNC OFF’s framerate at up to 3x the refresh rate, middle screen readings are virtually a wash (the results in this article included). But, as will be detailed further in, V-SYNC OFF can, for lack of a better term, “defeat” the scanout by beginning the next frame scan in the previous scanout.

With V-SYNC OFF at -2 FPS below the refresh rate, for instance (the scenario used to compare V-SYNC OFF directly against G-SYNC in this article), the tearline will continuously roll upward, which means, when measured by first on-screen reactions, its advantage over G-SYNC can be anywhere from 0 to 1/2 frame, depending on the ever-fluctuating position of the tearline between samples. With middle screen readings, the initial position of the tearline(s), and thus, its advantage, is effectively ignored.

These differences should be kept in mind when inspecting the upcoming results, with the method featured in this article being the best case scenario for V-SYNC OFF, and the worst case scenario for synced when directly compared to V-SYNC OFF, G-SYNC included.

Test Methodology
To further facilitate the first on-screen reaction method, I’ve changed sample capture from muzzle flash to strafe for Overwatch (credit goes to Battle(non)sense for the initial suggestion) and look for CS:GO, which triggers horizontal updates across the entire screen. The strafe/look mechanics are also more consistent from click to click, and less prone to the built-in variable delay experienced from shot to shot with the previous method.

To ensure a proper control environment for testing, and rule out as many variables as possible, the Nvidia Control Panel settings (except for “Power management mode,” which was set to “Prefer maximum performance”) were left at defaults, all background programs were closed, and all overlays were disabled, as were the Creators Update’s newly introduced “Game Mode” and the .exe Compatibility option “fullscreen optimizations,” along with the existing “Game bar” and “Game DVR” options.

To guarantee extraneous mouse movements didn’t throw off input reads during rapid clicks, masking tape was placed over the sensor of the modified test mouse (Deathadder Chroma), and a second mouse (Deathadder Elite) was used to navigate the game menus and get into place for sample capture.

To emulate lower maximum refresh rates on the native 240Hz Acer Predator XB252Q, “Preferred refresh rate” was set to “Application-controlled” when G-SYNC was enabled, and the refresh rate was manually adjusted as needed in the game options (Overwatch), or on the desktop (CS:GO) before launch.

And, finally, to validate and track the refresh rate, framerate, and the syncing solution in use for each scenario, the in-game FPS counter, Nvidia Control Panel’s G-SYNC Indicator, and the display’s built-in refresh rate meter were active at all times.

Testing was performed with a Casio Exilim EX-ZR200 capable of 1000 FPS high speed video capture (accurate within 1ms), and a Razer Deathadder Chroma modified with an external LED (credit goes to Chief Blur Buster for the mod), which lights up on left click, and has a reactive variance of <1ms.

blur-busters-gsync-101-overwatch-method.png


To compensate for the camera’s low 224×64 pixel video resolution, a bright image with stark contrast between foreground and background, and thin vertical elements that could easily betray horizontal movement across the screen, were needed for reliable discernment of first reactions after click.

For Overwatch, Genji was used due to his smaller viewmodel and ability to scale walls, and an optimal spot on the game’s Practice Range was found that met the aforementioned criteria. Left click was mapped to strafe left, in-game settings were at the lowest available, and “Reduced Buffering” was enabled to ensure the lowest input latency possible.

blur-busters-gsync-101-csgo-method.png


For CS:GO, a custom map provided by the Blur Busters Forum’s lexlazootin was used, which strips all unnecessary elements (time limits, objectives, assets, viewmodel, etc), and contains a lone white square suspended in a black void, that when positioned just right, allows the slightest reactions to be accurately spotted via the singular vertical black and white separation. Left click was mapped to look left, in-game settings were at the lowest available, and “Multicore Rendering” was disabled to ensure the lowest input latency possible.

For capture, the Acer Predator XB252Q (LED fixed to its left side) was recorded as the mouse was clicked a total of ten times. To average out differences between runs, this process was repeated four times per scenario, and each game was restarted after each run.

Once all scenarios were recorded, the .mov format videos, containing ten samples each, were inspected in QuickTime using its built-in frame counter and single frame stepping function via the arrow keys. The video was jogged through until the LED lit up, at which point the frame number was input into an Excel spreadsheet. Frames (which, thanks to 1000 FPS video capture, represent a literal 1ms each) were then stepped through until the first reaction was spotted on-screen, where, again, the frame number was input into the spreadsheet. This generated the total delay in milliseconds from left click to first on-screen reaction, and the process was repeated per video, ad nauseam.
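For anyone curious about the spreadsheet step, here is a minimal sketch (in Python) of the arithmetic described above; the frame numbers are hypothetical placeholders, not measurements from the article:

```python
# With 1000 FPS capture, one video frame = 1 ms, so the delay per click is
# simply the reaction frame number minus the LED frame number.
led_frames      = [102, 1184, 2290, 3411]   # frame where the LED lights up (left click)
reaction_frames = [123, 1203, 2313, 3436]   # frame of first on-screen reaction

delays_ms = [r - l for l, r in zip(led_frames, reaction_frames)]
print("per-sample delay (ms):", delays_ms)
print("min/avg/max (ms):", min(delays_ms),
      round(sum(delays_ms) / len(delays_ms), 1), max(delays_ms))
```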

All told, 508 videos weighing in at 17.5GB, with an aggregated (albeit slow-motion) 45 hour and 40 minute runtime, were recorded across 2 games and 6 refresh rates, containing a total of 42 scenarios, 508 runs, and 5080 individual samples. My original Excel spreadsheet is available to inspect and/or download from the embed below:

To preface, the following results and explanations assume that the native resolution w/default timings are in use on a single monitor in exclusive fullscreen mode, paired with a single-GPU desktop system that can sustain the framerate above the refresh rate at all times.

This article does not seek to measure the impact of input lag differences incurred by display, input device, CPU or GPU overclocks, RAM timings, disk drives, drivers, BIOS, OS, or in-game graphical settings. And the baseline numbers represented in the results are not indicative of, and should not be expected to be replicable on, other systems, which will vary in configuration, specs, and the games being run.

This article seeks only to measure the impact that V-SYNC OFF, G-SYNC, V-SYNC, and Fast Sync, paired with various framerate limiters, have on frame delivery and input lag, and the differences between them; the results of which are replicable across setups.

+/- 1ms differences between identical scenarios in the following charts are usually within margin of error, while +/- 1ms differences between separate scenarios are usually measurable, and the error margin may not apply. And finally, all mentions of “V-SYNC (NVCP)” in the denoted scenarios signify that the Nvidia Control Panel’s “Vertical sync” entry was set to “On,” and “V-SYNC OFF” or “G-SYNC + V-SYNC ‘Off'” signify that “Use the 3D application setting” was applied w/V-SYNC disabled in-game.

So, without further ado, onto the results…

Input Lag: Not All Frames Are Created Equal
When it is said that there is “1 frame” or “2 frames” of delay, what does that actually mean? In this context, a “frame” signifies the total time a rendered frame takes to be displayed completely on-screen. The worth of a single frame is dependent on the display’s maximum native refresh rate. At 60Hz, a frame is worth 16.6ms, at 100Hz: 10ms, 120Hz: 8.3ms, 144Hz: 6.9ms, 200Hz: 5ms, and 240Hz: 4.2ms, continuing to decrease in worth as the refresh rate increases.
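To make the arithmetic explicit, here is a minimal Python sketch that reproduces these figures (to within rounding) and converts any whole or fractional frame count into milliseconds of delay at a given refresh rate:

```python
# Frame "worth" (duration) at a given refresh rate, and the millisecond cost
# of N frames of delay at that rate. Values match the figures above to within rounding.
def frame_worth_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

def delay_ms(n_frames: float, refresh_hz: float) -> float:
    return n_frames * frame_worth_ms(refresh_hz)

for hz in (60, 100, 120, 144, 200, 240):
    print(f"{hz}Hz: 1 frame = {frame_worth_ms(hz):.1f} ms")

print(f"{delay_ms(3.5, 60):.1f} ms")   # ~58 ms: 3.5 frames of delay at 60Hz
print(f"{delay_ms(3.5, 240):.1f} ms")  # ~15 ms: the same 3.5 frames at 240Hz
```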

With double buffer V-SYNC, there is typically a 2 frame delay when the framerate exceeds the refresh rate, but this isn’t always the case. Overwatch, even with “Reduced Buffering” enabled, can have up to 4 frames of delay with double buffer V-SYNC engaged.

blur-busters-gsync-101-vsync-all-hz.png


The chart above depicts anywhere from 3 to 3 1/2 frames of added delay. At 60Hz, this is significant, at up to 58.1ms of additional input lag. At 240Hz, where a single frame is worth far less (4.2ms), a 3 1/2 frame delay is comparatively insignificant, at up to 14.7ms.

In other words, a “frame” of delay is relative to the refresh rate, which dictates how much or how little delay each frame adds, a point that should be kept in mind going forward.

G-SYNC Ceiling vs. V-SYNC: Identical or Fraternal?
As described in G-SYNC 101: Range, G-SYNC doesn’t actually become double buffer V-SYNC above its range (nor does V-SYNC take over), but instead, G-SYNC mimics V-SYNC behavior when it can no longer adjust the refresh rate to the framerate. So, when G-SYNC hits or exceeds its ceiling, how close is it to behaving like standalone V-SYNC?


Pretty close. However, the G-SYNC numbers do show a reduction, mainly in the minimum and averages across refresh rates. Why? It boils down to how G-SYNC and V-SYNC behavior differ whenever the framerate falls (even for a moment) below the maximum refresh rate. With double buffer V-SYNC, a fixed frame delivery window is missed and the framerate is locked to half the refresh rate by a repeated frame, maintaining extra latency, whereas G-SYNC adjusts the refresh rate to the framerate in the same instance, eliminating latency.

As for triple buffer V-SYNC, it typically introduces an additional frame of delay over double buffer V-SYNC when the framerate exceeds the maximum refresh rate. It won’t be included in the results, both because G-SYNC is based on a double buffer, and because the rarely supported triple buffer option, when available, actually covers two entirely separate methods, dependent in part on whether the game engine is based on OpenGL or DirectX.

Suffice to say, even at its worst, G-SYNC beats V-SYNC.

G-SYNC Ceiling vs. FPS Limit: How Low Should You Go?
Blur Busters was the world’s first site to test G-SYNC in Preview of NVIDIA G-SYNC, Part #1 (Fluidity) using an ASUS VG248QE pre-installed with a G-SYNC upgrade kit. At the time, the consensus was that limiting the FPS to between 135 and 138 at 144Hz was enough to avoid V-SYNC-level input lag.

However, much has changed since the first G-SYNC upgrade kit was released; the Minimum Refresh Range wasn’t in place, the V-SYNC toggle had yet to be exposed, G-SYNC did not support borderless or windowed mode, and there was even a small performance penalty on the Kepler architecture at the time (Maxwell and later corrected this).

My own testing in my Blur Busters Forum thread found that just 2 FPS below the refresh rate was enough to avoid the G-SYNC ceiling. However, now armed with improved testing methods and equipment, is this still the case, and does the required FPS limit change depending on the refresh rate?


As the results show, just 2 FPS below the refresh rate is indeed still enough to avoid the G-SYNC ceiling and prevent V-SYNC-level input lag, and this number does not change, regardless of the maximum refresh rate in use.

To leave no stone unturned, an “at” FPS, -1 FPS, -2 FPS, and finally -10 FPS limit was tested to prove that even far below -2 FPS, no real improvements can be had. In fact, limiting the FPS lower than needed can actually slightly increase input lag, especially at lower refresh rates, since frametimes quickly become higher, and thus frame delivery becomes slower due to the decrease in sustained framerates.

As for the “perfect” number, going by the results, and taking into consideration variances in accuracy from FPS limiter to FPS limiter, along with differences in performance from system to system, a -3 FPS limit is the safest bet, and is my new recommendation. A lower FPS limit, at least for the purpose of avoiding the G-SYNC ceiling, will simply rob frames.
 
Great post, but wish they'd also talk about FreeSync....

I'm a big supporter and booster of FreeSync; and yet I use a G-Sync monitor because it's just the better technology at the moment (but adds a $200-$250 premium to your monitor).

Here's to hoping that FreeSync can catch up...
 
Monitor will be the next point of emphasis for me, so I've been trying to learn about this technology. Thanks for posting! Need to get my 1080 at a resolution it deserves.
 

I'm currently running this bad boy:
asus-rog-swift-pg348q-2.jpg


ASUS RoG Swift PG348Q ... 3440x1440 UltraWide 100hz G-Sync IPS panel...

Best monitor I've ever used... hands down.

It's actually very good for productivity not only because it's ultrawide but because the 100hz can be used on the desktop, with a button press.. just "click" and the fucker is 100hz..

IPS is a must for me; and the color reproduction is fantastic.

If you're a hardcore gamer, you'd probably want to look at something else with higher refresh rates. I think there's also a 200hz version of this monitor coming out and if it's around the $1,200 price point I'll sell mine and upgrade to that one.

IMHO, the LG UC97 and UC98's are great, but FreeSync only, and only 75hz.. but they have way more features like PiP, splitscreen, and a gang of inputs.

Nonetheless, I've managed to do everything I've wanted to do on the ASUS without any real issues.
 

I see this monitor used as a standard quite a bit. IPS is a must for me as well. I would have to get used to using a single ultra wide screen as opposed to the two monitor setup I have now. I'd love to add a third, but my desk space simply doesn't have room. I am definitely looking to get into 1440p though.
 

I made the transition from two monitors to just the ultrawide. It took some getting used to, but I have to tell you -- I would NEVER go back.. I LOVE the ultrawide.

You know what's crazy? I love the curve.. I hate curved TVs, and thought I'd hate a curved monitor; but I was wrong, it's really a useful feature, as it keeps the screen equidistant from you regardless of viewing angle.

If IPS is a must, you might wanna look into the latest version of that monitor that's coming out, not sure if it's coming in 2017 or not? But I SWEAR by this monitor; seriously..
 
G-SYNC vs. V-SYNC OFF: At the Mercy of the Scanout
Now that the FPS limit required for G-SYNC to avoid V-SYNC-level input lag has been established, how does G-SYNC + V-SYNC and G-SYNC + V-SYNC “Off” compare to V-SYNC OFF at the same framerate?


The results show a consistent difference between the three methods across most refresh rates (240Hz is nearly equalized in any scenario), with V-SYNC OFF (G-SYNC + V-SYNC “Off,” to a lesser degree) appearing to have a slight edge over G-SYNC + V-SYNC. Why? The answer is tearing…

With any vertical synchronization method, the delivery speed of a single, tear-free frame (barring unrelated frame delay caused by many other factors) is ultimately limited by the scanout. As mentioned in G-SYNC 101: Range, the “scanout” is the total time it takes a single frame to be physically drawn, pixel by pixel, left to right, top to bottom on-screen.

With a fixed refresh rate display, both the refresh rate and scanout remain fixed at their maximum, regardless of framerate. With G-SYNC, the refresh rate is matched to the framerate, and while the scanout speed remains fixed, the refresh rate controls how many times the scanout is repeated per second (60 times at 60 FPS/60Hz, 45 times at 45 fps/45Hz, etc), along with the duration of the vertical blanking interval (the span between the previous and next frame scan) from frame to frame.

The scanout speed itself, both on a fixed refresh rate and variable refresh rate display, is dictated by the current maximum refresh rate of the display:

blur-busters-gsync-101-scanout-speed-diagram.png
As the diagram shows, the higher the refresh rate of the display, the faster the scanout speed becomes. This also explains why V-SYNC OFF’s input lag advantage, especially at the same framerate as G-SYNC, is reduced as the refresh rate increases; single frame delivery becomes faster, and V-SYNC OFF has less of an opportunity to defeat the scanout.
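To put numbers on that, here is a rough Python sketch under the simplifying assumption (per the description above) that the scanout always runs at the panel's maximum rate and G-SYNC stretches the vertical blanking interval to fill the rest of each frametime; real signal timings also include a small fixed blanking period, ignored here:

```python
# Fixed scanout time vs. G-SYNC's variable blanking interval (simplified model).
def scanout_ms(max_refresh_hz: float) -> float:
    return 1000.0 / max_refresh_hz

def vblank_ms(max_refresh_hz: float, framerate: float) -> float:
    return 1000.0 / framerate - scanout_ms(max_refresh_hz)

print(f"{scanout_ms(240):.1f} ms scanout on a 240Hz panel")              # ~4.2 ms
print(f"{vblank_ms(240, 60):.1f} ms of blanking between 60 FPS frames")  # ~12.5 ms
```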

V-SYNC OFF can defeat the scanout by starting the scan of the next frame(s) within the previous frame’s scanout anywhere on screen, and at any given time:



This results in simultaneous delivery of more than one frame scan in a single scanout (tearing), but also a reduction in input lag; the amount of which is dictated by the positioning and number of tearline(s), which is further dictated by the refresh rate/sustained framerate ratio (more on this later).

As noted in G-SYNC 101: Range, G-SYNC + VSYNC “Off” (a.k.a. Adaptive G-SYNC) can have a slight input lag reduction over G-SYNC + V-SYNC as well, since it will opt for tearing instead of aligning the next frame scan to the next scanout when sudden frametime variances occur.

To eliminate tearing, G-SYNC + VSYNC is limited to completing a single frame scan per scanout, and it must follow the scanout from top to bottom, without exception. On paper, this can give the impression that G-SYNC + V-SYNC has an increase in latency over the other two methods. However, the delivery of a single, complete frame with G-SYNC + V-SYNC is actually the lowest possible, or neutral speed, and the advantage seen with V-SYNC OFF is the negative reduction in delivery speed, due to its ability to defeat the scanout.

Bottom-line, within its range, G-SYNC + V-SYNC delivers single, tear-free frames to the display the fastest the scanout allows; any faster, and tearing would be introduced.
 
G-SYNC vs. V-SYNC w/FPS Limit: So Close, Yet So Far Apart
On the subject of single, tear-free frame delivery, how does standalone double buffer V-SYNC compare to G-SYNC with the same framerate limit?


As the results show, except at 60Hz (remember, a “frame” of delay is relative to the refresh rate), the numbers are relatively close. So what’s so great about G-SYNC’s ability to adjust the refresh rate to the framerate, if the majority of added input latency with V-SYNC can be eliminated with a simple FPS limit? Well, as the title of this section hints, it’s not quite that cut and dried…

While it’s common knowledge that limiting the FPS below the refresh rate with V-SYNC prevents the over-queuing of frames, and thus the majority of added input latency, it isn’t without its downsides.

Unlike G-SYNC, V-SYNC must attempt to time frame delivery to the fixed refresh rate of the display. If it misses a single one of these delivery windows below the maximum refresh rate, the current frame must repeat once until the next frame can be displayed, locking the framerate to half the refresh rate, causing stutter. If the framerate exceeds the maximum refresh rate, the display can’t keep up with frame output, as rendered frames over-queue in both buffers, and appearance of frames is delayed yet again, which is why an FPS limit is needed to prevent this in the first place.

When an FPS limit is set with V-SYNC, the number of frames that can be delivered per second shrinks. If, for instance, the FPS limiter is set to 59 FPS on a 60Hz display, instead of 60 frames being delivered per second, only 59 will be delivered, which means roughly once every second a frame will repeat.
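A rough illustration of that repeat rate, ignoring timing jitter:

```python
# Approximate number of repeated frames per second when V-SYNC's FPS limit
# sits below the fixed refresh rate (illustrative only).
def repeats_per_second(refresh_hz: float, fps_limit: float) -> float:
    return max(refresh_hz - fps_limit, 0.0)

print(repeats_per_second(60, 59))      # 1.0   -> roughly one repeated frame per second
print(repeats_per_second(60, 59.993))  # 0.007 -> roughly one repeat every ~2.4 minutes
```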

As the numbers show, while G-SYNC and V-SYNC averages are close over a period of frames, as evidenced by the maximums, the difference eventually adds up, causing 1/2 to 1 frame of accumulative delay, as well as recurring stutter due to repeated frames. This is why it is recommended to set a V-SYNC FPS limit mere decimals below the refresh rate via external programs such as RTSS.

That said, an FPS limit is superior to no FPS limit with double buffer V-SYNC, so long as the framerate can be sustained above the refresh rate at all times. However, G-SYNC’s ability to adjust the refresh rate to the framerate eliminates this issue entirely, and, yet again, beats V-SYNC hands down.

G-SYNC vs. Fast Sync: The Limits of Single Frame Delivery
Okay, so what about Fast Sync? Unlike G-SYNC, it works with any display, and while it’s still a fixed refresh rate syncing solution, its third buffer allows the framerate to exceed the refresh rate, and it utilizes the excess frames to deliver them to the display as fast as possible. This avoids double buffer behavior both above and below the refresh rate, and eliminates the majority of V-SYNC input latency.

Sounds ideal, but how does it compare to G-SYNC?


As evidenced by the results, Fast Sync only begins to reduce input lag over FPS-limited double buffer V-SYNC when the framerate far exceeds the display’s refresh rate. Like G-SYNC and V-SYNC, it is limited to completing a single frame scan per scanout to prevent tearing, and as the 60Hz scenarios show, 300 FPS Fast Sync at 60Hz (5x ratio) is as low latency as G-SYNC is with a 58 FPS limit at 60Hz.

However, the fewer excess frames available for the third buffer to sample from, the more the latency levels of Fast Sync begin to resemble double buffer V-SYNC with an FPS limit. And if the third buffer is completely starved, as evident in the Fast Sync + FPS limit scenarios, it effectively reverts to FPS-limited V-SYNC latency, with an additional 1/2 to 1 frame of delay.

Unlike double buffer V-SYNC, however, Fast Sync won’t lock the framerate to half the maximum refresh rate if it falls below it, but like double buffer V-SYNC, Fast Sync will periodically repeat frames if the FPS is limited below the refresh rate, causing stutter. As such, an FPS limit below the refresh rate should be avoided when possible, and Fast Sync is best used when the framerate can exceed the refresh rate by at least 2x to 3x, or ideally 5x.

So, what about pairing Fast Sync with G-SYNC? Even Nvidia suggests it can be done, but doesn’t go so far as to recommend it. But while it can be paired, it shouldn’t be…

Say the system can maintain an average framerate just above the maximum refresh rate, and instead of an FPS limit being applied to avoid V-SYNC-level input lag, Fast Sync is enabled on top of G-SYNC. In this scenario, G-SYNC is disabled 99% of the time, and Fast Sync, with very few excess frames to work with, not only has more input lag than G-SYNC would at a lower framerate, but it can also introduce uneven frame pacing (due to dropped frames), causing recurring microstutter. Further, even if the framerate could be sustained 5x above the refresh rate, Fast Sync would (at best) only match G-SYNC latency levels, and the uneven frame pacing (while reduced) would still occur.

That’s not to say there aren’t any benefits to Fast Sync over V-SYNC on a standard display (60Hz at 300 FPS, for instance), but pairing Fast Sync with uncapped G-SYNC is effectively a waste of a G-SYNC monitor, and an appropriate FPS limit should always be opted for instead.

Which poses the next question: if uncapped G-SYNC shouldn’t be used with Fast Sync, is there any benefit to using G-SYNC + Fast Sync + FPS limit over G-SYNC + V-SYNC (NVCP) + FPS limit?

blur-busters-gsync-101-gsyncvsync-vs-gsyncfastsync-w-fps-limit.png


The answer is no. In fact, unlike G-SYNC + V-SYNC, Fast Sync remains active near the maximum refresh rate, even inside the G-SYNC range, reserving more frames for itself the higher the native refresh rate is. At 60Hz, it limits the framerate to 59, at 100Hz: 97 FPS, 120Hz: 116 FPS, 144Hz: 138 FPS, 200Hz: 189 FPS, and 240Hz: 224 FPS. This effectively means with G-SYNC + Fast Sync, Fast Sync remains active until it is limited at or below the aforementioned framerates, otherwise, it introduces up to a frame of delay, and causes recurring microstutter. And while G-SYNC + Fast Sync does appear to behave identically to G-SYNC + V-SYNC inside the Minimum Refresh Range (<36 FPS), it’s safe to say that, under regular usage, G-SYNC should not be paired with Fast Sync.
 
V-SYNC OFF: Beyond the Limits of the Scanout
It’s already been established that single, tear-free frame delivery is limited by the scanout, and V-SYNC OFF can defeat it by allowing more than one frame scan per scanout. That said, how much of an input lag advantage can be had over G-SYNC, and how high must the framerate be sustained above the refresh rate to diminish tearing artifacts and justify the difference?


Quite high. Counting first on-screen reactions, V-SYNC OFF already has a slight input lag advantage (up to a 1/2 frame) over G-SYNC at the same framerate, especially at lower refresh rates, but it actually takes a considerable increase in framerate above the given refresh rate to widen the gap to significant levels. And while the reductions may look significant in bar chart form, even with framerates in excess of 3x the refresh rate, and when measured at middle screen (crosshair-level) only, V-SYNC OFF actually has a limited advantage over G-SYNC in practice, and most of it is in areas that one could argue, for the average player, are comparatively useless, such as when a viewmodel’s wrist is updated 1-3ms faster with V-SYNC OFF.

This is where the refresh rate/sustained framerate ratio factors in:


As shown in the above diagrams, the true advantage comes when V-SYNC OFF can allow not just two, but multiple frame scans in a single scanout. Unlike syncing solutions, with V-SYNC OFF, the frametime is not paced to the scanout, and a frame will begin scanning in as soon as it’s rendered, regardless whether the previous frame scan is still in progress. At 144Hz with 1000 FPS, for instance, this means with a sustained frametime of 1ms, the display updates nearly 7 times in a single scanout.
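A rough way to picture this, assuming a steady frametime and ignoring render-time variance:

```python
# How many frame scans V-SYNC OFF can begin within a single scanout,
# given a sustained framerate (illustrative only).
def scans_per_scanout(refresh_hz: float, framerate: float) -> float:
    scanout_ms = 1000.0 / refresh_hz
    frametime_ms = 1000.0 / framerate
    return scanout_ms / frametime_ms

print(f"{scans_per_scanout(144, 1000):.1f}")  # ~6.9 -> nearly 7 updates per 144Hz scanout
print(f"{scans_per_scanout(240, 1000):.1f}")  # ~4.2 at 240Hz
```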

In fact, at 240Hz, first on-screen reactions became so fast at 1000 FPS and 0 FPS, that the inherent delay in my mouse and display became the bottleneck for minimum measurements.

So, for competitive players, V-SYNC OFF still reigns supreme in the input lag realm, especially if sustained framerates can exceed the refresh rate by 5x or more. However, while at higher refresh rates visible tearing artifacts are all but eliminated at these ratios, they can instead manifest as microstutter, and thus, even at its best, V-SYNC OFF still can’t match the consistency of G-SYNC frame delivery.
 
In-game vs. External FPS Limiters: Closer to the Source
Up until this point, an in-game framerate limiter has been used exclusively to test FPS-limited scenarios. However, in-game framerate limiters aren’t available in every game, and while they aren’t required for games where the framerate can’t meet or exceed the maximum refresh rate, if the system can sustain the framerate above the refresh rate and such an option isn’t present, an external framerate limiter must be used to prevent V-SYNC-level input lag instead.

In-game framerate limiters, being at the game’s engine-level, are almost always free of additional latency, as they can regulate frames at the source. External framerate limiters, on the other hand, must intercept frames further down the rendering chain, which can result in delayed frame delivery and additional input latency; how much depends on the limiter and its implementation.

RTSS is a CPU-level FPS limiter, which is the closest an external method can get to the engine-level of an in-game limiter. In my initial input lag tests on my original thread, RTSS appeared to introduce no additional delay when used with G-SYNC. However, it was later discovered disabling CS:GO’s “Multicore Rendering” setting, which runs the game on a single CPU-core, caused the discrepancy, and once enabled, RTSS introduced the expected 1 frame of delay.

Seeing as CS:GO still uses DX9, and is a native single-core performer, I opted to test the more modern Overwatch this time around, which uses DX11, and features native multi-threaded/multi-core support. Will RTSS behave the same way in a native multi-core game?


Yes, RTSS still introduces up to 1 frame of delay, regardless of the syncing method, or lack thereof, used. To prove that a -2 FPS limit was enough to avoid the G-SYNC ceiling, a -10 FPS limit was tested with no improvement. The V-SYNC scenario also shows RTSS delay stacks with other types of delay, retaining the FPS-limited V-SYNC’s 1/2 to 1 frame of accumulative delay.

Next up is Nvidia’s FPS limiter, which can be accessed via the third-party “Nvidia Inspector.” Unlike RTSS, it is a driver-level limiter, one further step removed from engine-level. My original tests showed the Nvidia limiter introduced 2 frames of delay across V-SYNC OFF, V-SYNC, and G-SYNC scenarios.


Yet again, the results for V-SYNC and V-SYNC OFF (“Use the 3D application setting” + in-game V-SYNC disabled) show that standard, out-of-the-box usage of both Nvidia’s v1 and v2 FPS limiter introduces the expected 2 frames of delay. The limiter’s impact on G-SYNC appears to be particularly unforgiving, with a 2 to 3 1/2 frame delay due to an increase in maximums at -2 FPS compared to -10 FPS, meaning -2 FPS with this limiter may not be enough to keep it below the G-SYNC ceiling at all times, and the effect of the Nvidia limiter’s own frame pacing behavior on G-SYNC functionality may make matters worse.

Needless to say, even if an in-game framerate limiter isn’t available, RTSS only introduces up to 1 frame of delay, which is still preferable to the 2+ frame delay added by Nvidia’s limiter with G-SYNC enabled, and a far superior alternative to the 2-6 frame delay added by uncapped G-SYNC.
 
G-SYNC Fullscreen vs. Borderless/Windowed: DWM Woes?
Requested by swarna in the Blur Busters Forums is a scenario that investigates the effects of the DWM (Desktop Window Manager, “Aero” in Windows 7) on G-SYNC in borderless and windowed mode.

Unlike exclusive fullscreen, which bypasses DWM composition entirely, borderless and windowed mode rely on the DWM, which, due to its framebuffer, adds 1 frame of delay. The DWM can’t be disabled in Windows 10, and uses its own form of triple buffer V-SYNC (very similar to Fast Sync) that overrides all standard syncing solutions when borderless or windowed mode are in use.

To make sure this was the case, all combinations of NVCP and in-game V-SYNC, as well as the Windows 10 “Game Mode” and “fullscreen optimization” settings were tested to see if DWM could be disabled, and tearing could be introduced; it could not be, so Game Mode and fullscreen optimizations were disabled once again, and NVCP V-SYNC was re-enabled across scenarios for consistency’s sake.

The question is, does DWM add 1 frame of delay with G-SYNC using borderless and windowed mode?


Overwatch shows that, no, with G-SYNC enabled, both borderless and windowed mode do not add 1 frame of delay over exclusive fullscreen. Standalone “V-SYNC,” however, does show the expected 1 frame of delay. CS:GO was also tested for corroboration, and ought to have the same results, as DWM behavior is at the OS-level and should remain unchanged, regardless of the game…


Sure enough, again, G-SYNC sees no added delay, and V-SYNC sees the expected 1 frame of delay.

Further testing may be required, but it appears that on the latest public build of Windows 10 with out-of-the-box settings (sans “Game Mode”), G-SYNC somehow bypasses the 1 frame of delay added by the DWM. That said, I still don’t suggest borderless or windowed mode over exclusive fullscreen due to the 3-5% decrease in performance, but if these findings hold true across configurations, it’s great news for games that only offer a borderless windowed option, or for multitaskers with secondary monitors.

Bonus Points: Hidden Benefits of High Refresh Rate G-SYNC
Often overlooked is G-SYNC’s ability to adjust the refresh rate to lower fixed framerates. This is particularly useful for games and emulators hard-locked to 60 FPS, or, for example, emulated NES games, where the native 60.1Hz signal would be otherwise impossible to reproduce. And due to the scanout speed increase at 100Hz+ refresh rates, an input lag reduction can be had as well…

blur-busters-gsync-101-60hz-60fps-vs-144hz-60fps.png


The results show a considerable input lag reduction on a 144Hz G-SYNC display @60 FPS vs. a 60Hz G-SYNC display @58 FPS with first on-screen reactions measured (middle screen would show about half this reduction). And while each frame is still rendered in 16.6ms, and delivered in intervals of 60 per second on the higher refresh rate display, each is scanned in at a much faster 6.9ms.
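A quick back-of-the-envelope comparison of frame interval versus physical scan-in time, using the same simplified scanout model as earlier:

```python
# Frame interval vs. physical scan-in time for 60 FPS content
# on a 60Hz panel versus a 144Hz G-SYNC panel.
for panel_hz in (60, 144):
    print(f"{panel_hz}Hz panel @60 FPS: interval = {1000/60:.1f} ms, "
          f"scan-in = {1000/panel_hz:.1f} ms")
```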
 
Optimal G-SYNC Settings*
*Settings tested with a single G-SYNC display on a single desktop GPU system; specific DSR, SLI, and multi-monitor behaviors, as well as laptop G-SYNC implementation, may vary.

Nvidia Control Panel Settings:

  • Set up G-SYNC > Enable G-SYNC > Enable G-SYNC for full screen mode.
  • Manage 3D settings > Vertical sync > On.
In-game Settings:

  • Use “Fullscreen” or “Exclusive Fullscreen” mode (some games do not offer this option, or label borderless windowed as fullscreen).
  • Disable all available “Vertical Sync,” “V-SYNC” and “Triple Buffering” options.
  • If an in-game or config file FPS limiter is available, and framerate exceeds refresh rate:
    Set 3 FPS limit below display’s maximum refresh rate (57 FPS @60Hz, 97 FPS @100Hz, 117 FPS @120Hz, 141 FPS @144Hz, etc).
RTSS Settings:

  • If an in-game or config file FPS limiter is not available and framerate exceeds refresh rate:
    Set 3 FPS limit below display’s maximum refresh rate (see G-SYNC 101: External FPS Limiters HOWTO).
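For convenience, the cap from the bullet points above can be computed with a trivial helper; this is only a sketch of the “-3 FPS” rule, and the refresh rates listed are just examples:

```python
# Helper for the "-3 FPS" rule; the margin is the article's recommendation.
def gsync_fps_cap(max_refresh_hz: int, margin: int = 3) -> int:
    return max_refresh_hz - margin

for hz in (60, 100, 120, 144, 200, 240):
    print(f"{hz}Hz -> cap at {gsync_fps_cap(hz)} FPS")
```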
Windows “Power Options” Settings:

Windows-managed core parking can put CPU cores to sleep too often, which may increase frametime variances and spikes. For a quick fix, use the “High performance” power plan, which disables OS-managed core parking and CPU frequency scaling. If a “Balanced” power plan is needed for a system implementing adaptive core frequency and voltage settings, then a free program called ParkControl by Bitsum can be used to disable core parking, while leaving all other power saving and scaling settings intact.

blur-busters-gsync-101-bitsum-parkcontrol-program.png


Mouse Settings:

If available, set the mouse’s polling rate to 1000Hz, which is the setting recommended by Nvidia for high refresh rate G-SYNC, and will decrease the mouse-induced input lag and microstutter experienced with the lower 500Hz and 125Hz settings at higher refresh rates.

mouse-125vs500vs1000-1024x570.jpg


Refer to The Blur Busters Mouse Guide for complete information.
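For context, the polling interval behind each rate (which is also the worst-case age of the most recent mouse report) works out as follows:

```python
# Polling interval per USB polling rate.
for rate_hz in (125, 500, 1000):
    print(f"{rate_hz}Hz polling -> up to {1000/rate_hz:.0f} ms between reports")
```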

Nvidia Control Panel V-SYNC vs. In-game V-SYNC
While NVCP V-SYNC has no input lag reduction over in-game V-SYNC, and when used with G-SYNC + FPS limit, it will never engage, some in-game V-SYNC solutions may introduce their own frame buffer or frame pacing behaviors, enable triple buffer V-SYNC automatically (not optimal for the native double buffer of G-SYNC), or simply not function at all, and, thus, NVCP V-SYNC is the safest bet.

There are rare occasions, however, where V-SYNC will only function with the in-game option enabled, so if tearing or other anomalous behavior is observed with NVCP V-SYNC (or vice versa), each solution should be tried until said behavior is resolved.
 
Maximum Pre-rendered Frames: Depends
A somewhat contentious setting whose effects are elusive and hard to document consistently, Nvidia Control Panel’s “Maximum pre-rendered frames” dictates how many frames the CPU can prepare before they are sent to the GPU. At best, setting it to the lowest available value of “1” can reduce input lag by 1 frame (and only in certain scenarios); at worst, depending on the power and configuration of the system, the CPU may not be able to keep up, and more frametime spikes will occur.

The effects of this setting are entirely dependent on the given system and game, and many games already have an equivalent internal value of “1” at default. As such, any input latency tests I could have attempted would have only applied to my system, and only to the test game, which is why I ultimately decided to forgo them. All that I can recommend is to try a value of “1” per game, and if the performance doesn’t appear to be impacted and frametime spikes do not increase in frequency, then either, one, the game already has an internal value of “1,” or, two, the setting has done its job and input lag has decreased; user experimentation is required.

Conclusion
Much like strobing methods such as LightBoost & ULMB permit “1000Hz-like” motion clarity at attainable framerates in the here and now, G-SYNC provides input response that rivals high framerate V-SYNC OFF, with no tearing, and at any framerate within its range.

As for its shortcomings, G-SYNC is only as effective as the system it runs on. If the road is the system, G-SYNC is the suspension; the bumpier the road, the less it can compensate. But if set up properly, and run on a capable system, G-SYNC is the best, most flexible syncing solution available on Nvidia hardware, with no peer (V-SYNC OFF among them) in the sheer consistency of its frame delivery.
 
Great post, but wish they'd also talk about FreeSync....

I'm a big supporter and booster of FreeSync; and yet I use a G-Sync monitor because it's just the better technology at the moment (but adds a $200-$250 premium to your monitor).

Here's to hoping that FreeSync can catch up...
If you don't already, follow battlenonsense on youtube. He posts pretty unique content doing a lot of what blur busters does. He does in depth lag analysis for network and local. A lot of these tests and the optimal setup should be the same on freesync.


According to battlenonsense, you need to cap your framerate a bit lower with FreeSync than with gsync for optimal input lag
View: https://www.youtube.com/watch?v=mVNRNOcLUuA
 
