Some rendering settings actually tax the CPU more than the GPU (graphics card). Here's a simple rule of thumb: if you increase the setting (and restart X-Plane) and your frame-rate does not go down, a new graphics card isn't going to make it go up!
For example, if you have one of those new-fangled GeForce 8800s, you may have noticed that when you turn on FSAA the framerate doesn't dip at all. That's because the 8800 is insanely overpowered for X-Plane (at normal monitor resolutions) and has plenty of extra capacity that will be sitting idle on an older PC. When you turn up FSAA, you are simply wasting less of the card's excess capacity. It goes without saying that if there were a card faster than the 8800, it wouldn't improve your fps any more than the 8800, it would simply be even more bored.
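The rule of thumb above can be written down as a tiny decision helper. The fps numbers are ones you would measure yourself before and after raising a GPU-side setting (the function name and tolerance are mine, just for illustration):

```python
# Sketch of the rule of thumb: if raising a GPU-heavy setting (e.g. FSAA)
# doesn't lower your framerate, the GPU had idle capacity - so a faster
# card won't raise your framerate either.
def gpu_upgrade_likely_to_help(fps_before, fps_after_raising_setting, tolerance=1.0):
    # Only a drop clearly larger than measurement noise counts as GPU-bound.
    return (fps_before - fps_after_raising_setting) > tolerance

# Hypothetical measurements on an 8800-class card:
print(gpu_upgrade_likely_to_help(60.0, 59.5))  # False - GPU is not the bottleneck
```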
Here's a rough guide to which features tax the CPU vs GPU:
CPU-Intensive
- World Level of Detail
- Number of Objects
- Draw Cars On Roads
- Draw Birds (not that expensive for modern machines)
- Draw Hi Detail World
- World Field Of View (wider view means more CPU work!)
GPU-Intensive
- Texture Resolution (requires more VRAM)
- Screen Resolution
- Full Screen Anti-Aliasing (FSAA)
- Anisotropic Filtering (most cards can do at least 4x)
- Draw Hi-Res Planet Textures From Orbit
- Cloud Shadows and Reflections (not that expensive)
- Draw Hi Detail World
A few specific framerate-optimization warnings:
- FSAA is equivalent to a higher screen resolution - that is, running at 2048x2048 with no FSAA is similar to running at 1024x1024 with 4x FSAA. Both of these tax the video card with virtually no CPU increase. This is probably the only setting where a video-card upgrade is the only thing that will help.
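The equivalence above is just sample arithmetic - the GPU has to shade one sample per pixel per FSAA pass, so the two configurations work out to the same load:

```python
# Total samples the GPU must fill per frame = pixels * FSAA samples per pixel.
def samples_per_frame(width, height, fsaa_samples):
    return width * height * fsaa_samples

no_fsaa = samples_per_frame(2048, 2048, 1)    # 4,194,304 samples
with_fsaa = samples_per_frame(1024, 1024, 4)  # 4,194,304 samples
print(no_fsaa == with_fsaa)  # True - comparable GPU workloads
```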
- Texture resolution: do not worry if the total size of all textures loaded is larger than the VRAM of your card. To find out whether more VRAM would help, measure frame-rate with your normal settings, then with texture resolution down a notch, then with anisotropic filtering down a notch. If turning texture resolution down increases fps more than turning down anisotropic filtering, more VRAM may help. Machines with faster graphics buses (like PCIe x16) will be less sensitive to VRAM.
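The diagnostic above can be sketched as a simple comparison. The framerates here are placeholders for numbers you would measure yourself; the function name is mine:

```python
# Sketch of the VRAM diagnostic: compare the fps gained by lowering texture
# resolution against the fps gained by lowering anisotropic filtering.
def more_vram_likely_to_help(fps_normal, fps_lower_texture_res, fps_lower_aniso):
    texture_gain = fps_lower_texture_res - fps_normal
    aniso_gain = fps_lower_aniso - fps_normal
    # If dropping texture resolution buys more fps than dropping anisotropic
    # filtering, the card is probably VRAM-starved.
    return texture_gain > aniso_gain

# Hypothetical measurements:
print(more_vram_likely_to_help(30.0, 45.0, 33.0))  # True - texture res helped most
```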
- Most Important: do not ever turn "World Detail Distance" beyond the default setting - you will simply destroy your fps and chew up your CPU for no benefit. I strongly recommend trying "low" for this setting - if you like a lot of objects, this setting can make a big difference in performance.
- The number of objects you can draw is virtually always a function of how fast your CPU is, not your GPU -- that is, most GPUs could draw about a gajillion objects if the CPU could only get through them fast enough. If you are unhappy with the number of objects you can draw, do not expect a new graphics card to help - it probably won't.
- Cars on roads hurt fps on machines that don't have the fastest CPU.
- Draw Hi Detail World is doubly dangerous - it uses both the CPU and GPU. Quite literally, this is where we stash "luxurious" options. Everything this checkbox does chews up framerate. (If these options didn't, we'd leave them on all the time!) So you should not use this option if you aren't happy with fps, if you don't have a fast CPU, or if your graphics card isn't that modern. (HINT: if your nVidia card has "FX" in the title, don't use this!)
EDIT: one user correctly determined (by observing CPU utilization relative to settings) that puff-style 3-d clouds bottleneck the GPU, not the CPU! This was not what I expected - when Austin originally wrote that code, our measurements indicated that sorting the puffs from far to near taxed the CPU a lot, making this feature CPU-intensive. At the time, the old Rage 128s would also get bogged down filling in translucent puffs as you flew right into the thick of the cloud.
Times have changed, and neither the sorting nor the alpha-drawing is even remotely expensive on a modern machine. So I was surprised to see the CPU not being used. After some investigation, it turns out that while the CPU and GPU have gotten a lot faster over time, the communications channel between them has not. The result is that they both do their jobs really quickly and, as a result, clog up the communications channel...the CPU simply can't talk to the GPU fast enough to get the clouds out.
This is a great find by this user, as this is something that didn't matter on old machines, but can be optimized in the future for new ones.
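For readers curious what the far-to-near sort looks like: translucent geometry has to be drawn back-to-front for alpha blending to composite correctly. This is a generic sketch of that step - the names and data layout are my own, not X-Plane's:

```python
# Hypothetical cloud puffs and camera, each an (x, y, z) tuple.
def sort_puffs_far_to_near(puffs, camera):
    # Translucent puffs must be drawn back-to-front so alpha blending
    # composites correctly; sort by squared distance, farthest first.
    def dist_sq(p):
        return sum((a - b) ** 2 for a, b in zip(p, camera))
    return sorted(puffs, key=dist_sq, reverse=True)

puffs = [(0, 0, 10), (0, 0, 50), (0, 0, 25)]
print(sort_puffs_far_to_near(puffs, (0, 0, 0)))
# Farthest puff first: [(0, 0, 50), (0, 0, 25), (0, 0, 10)]
```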