In the December 2017 System Guide, we discovered the unexpected. Given the bevy of pre-built computing devices now available, there’s a lot of debate and confusion about building one yourself these days. What’s the goal behind a custom PC build in 2018? What makes a certain hardware choice “right” to support that?
So rather than starting 2018 with a traditional guide—where Ars presents three build ideas and a set of specific hardware to accomplish each—we’re going to take a step back. This will be more of a meta-guide than an actual guide; we’re going to share the methods and mechanics behind putting together your favorite long-running PC building guide. So while this guide will build from a set of three major system design goals as always, this edition will go through each major PC hardware component one by one, focusing more on ideology than instruction and discussing how a specific part does (or doesn’t) contribute to a specific construction goal.
Standard system design goals
It’s not enough to say a system should be “fast” just like it’s not enough to sum up a sports car with its 0-60 time on a track. A gaming-focused system that impressively renders the most demanding scenes in Crysis can still be frustratingly slow to boot… and may handle the same gaming scenes abysmally if you forgot to (or didn’t want to) close your email client or your 30-tab Web browser first. A system with server intentions may effortlessly run five or 10 entire virtual machines but similarly stumble on a single demanding application. Meanwhile, a five-year-old system that doesn’t have very impressive specs might feel surprisingly fast. You know you’re not going to play the latest AAA games in 4K on it, and you don’t expect it to handle 200 tabs in Chrome, but somehow, despite how old it is, such a build can feel comfortable.
Generally speaking, the three machines described above are archetypical of three high-level system characteristics we’ll be outlining: general performance, multitasking, and frames per second (or FPS for short).
General performance

This category may be the most ambiguous, and it’s therefore the most contentious when it comes to determining which parts make sense. When we talk about “general performance” in the scope of the system guide, we’re not talking about crushing any individual benchmark with super-high numbers. What we’re actually talking about is bottleneck elimination. A system with high general performance doesn’t feel sluggish—even if it doesn’t have the biggest, beefiest parts—because its performance is as consistent as possible. When you double-click an icon on the desktop, you have an expectation of how long it will take to open the application or document. A machine with high general performance reliably satisfies that expectation; a machine more heavily focused on application-specific performance might frequently flail.
General performance machines likely cover the largest number of common use cases. It’s entirely possible to build a machine that gets really great framerates in all your games, but such a machine can be massively frustrating if it takes three or four times longer than others to start an app (including a game). At the same time, an older general performance machine that benchmarks slower may feel better if it pushes two-thirds of the framerate but doesn’t have those irritating, immersion-breaking “lurches” once or twice in a 15-minute session of play. (This isn’t just a gaming thing, either. The same is true of a machine that usually opens Office documents in 300ms but occasionally takes 1500ms for no immediately obvious reason, versus one that takes 500ms to open the same document but takes that 500ms every time.)
An ideal general performance machine has as much to do with human psychology as it does with actual hardware. I’ve always been fond of a psychology study I read in the ’90s (which I’ve sadly been unable to find again) that outlines something called a 33-percent “expectation threshold.” As I recall it, most people won’t notice a change in how long it takes a task to complete if it changes by less than a third in either direction. It doesn’t really matter how long you expect a task to take in the first place—a one-second task seems “a little quicker than usual” at 667ms and “slower than usual” at 1333ms. A one-hour task doesn’t start seeming “slow” until 80 minutes, and (without measuring it deliberately) it won’t really seem “fast” if it’s not done in 40 minutes.
This 33-percent approach has become a great rule of thumb for me when approaching performance over the years. My other favorite rule of thumb is that people remember unpleasant surprises far longer and more vividly than they remember pleasant ones. If you give someone an unexpected win in one hand and an unexpected loss in the other, they will more than likely complain about how much better things used to be. So if you build someone an expensive gaming machine that feels slow and clunky sometimes, they generally won’t speak kindly of the experience, either.
Multitasking

Multitasking, like general performance, is more important than it might first appear. It’s obvious that a system expected to run a few virtual machines will need to multitask well; ditto for a graphics workstation where you might want Blender and Photoshop open and running simultaneously. But the ability to handle lots of tasks at once is relevant for any modern general-purpose computer. The operating system itself places frequent multitasking demands on a system—and modern users do, too—more than we realize.
If you want to keep your email client up and running all the time so you’ll get a notification when that thing from your boss (or the message about your date tonight) arrives as you play your game… well, that’s multitasking. That email client isn’t “free” in terms of system resources. Want to keep a game Wikia up in your browser for reference while you play? Again, multitasking.
There is a lot of overlap between multitasking and general performance; you’re unlikely to have a machine that exhibits good, consistent performance without several good multitasking characteristics. But it’s worth considering these characteristics separately—just as it’s worth breaking FPS out next—because you can (and may want to) push multitasking to extremes that no longer really affect general performance. Think VM servers, or battle stations that you want to use for both hosting and playing games.
FPS

Thus far, we’ve opened each category by defining what it is; for FPS, let’s first discuss what it is not. When you’re chasing FPS, you’re looking for a high overall number of rendered frames per second in games. This does not cover how long it takes to load a game, how long it takes to “zone” in massive open-world games like WoW, or even how frequently you hit irritating 50-500ms “lurches” where your framerate stumbles for just long enough to break immersion. FPS is only about how many frames per second you can expect to render in a given scene of a given game when everything is working right.
(Editor’s note: If you think it sounds like I don’t think as highly of FPS as I do of the other characteristics, I won’t argue with you.)
FPS is important. Most gamers will want base framerates above 60 FPS in even the most demanding scenes of a game. With that said, I will caution anyone who’ll listen that there is a big difference between being unhappy with the way a game looks and plays and being unhappy about the number in the FPS counter you’ve chosen to keep in one corner of the screen. If you want game immersion, you’re usually better off killing the FPS counter and addressing the things that make the game occasionally immersion-breakingly slow. Obsessing about whether an icon says “130FPS,” “110FPS,” or “90FPS” while a game is running fine does nothing.
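To make that distinction concrete, here is a minimal sketch (with made-up frame times, not real benchmark data) of how two runs with nearly identical average FPS can feel completely different once you look at the worst individual frame—exactly the “lurch” the FPS counter hides:

```python
# Hypothetical frame times in milliseconds for two five-second runs.
steady = [16.7] * 300                    # ~60 FPS, every single frame
lurchy = [15.0] * 297 + [200.0] * 3      # mostly faster, but three 200ms stalls

def avg_fps(frame_times_ms: list[float]) -> float:
    """Average FPS over the whole run: frames rendered / total seconds."""
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def worst_frame(frame_times_ms: list[float]) -> float:
    """The single longest frame time, i.e. the worst visible hitch."""
    return max(frame_times_ms)

for name, run in (("steady", steady), ("lurchy", lurchy)):
    print(f"{name}: {avg_fps(run):.0f} FPS average, "
          f"worst frame {worst_frame(run):.0f}ms")
```

The averages land within one frame per second of each other, but the “lurchy” run stalls for 200ms three times—long enough to break immersion—while the “steady” run never exceeds 17ms. An FPS counter in the corner would call them equals.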