Just as when building a house, you need a good foundation to build a powerful PC. In the tech world, every rig starts with the processor.
AMD’s 2nd Gen Ryzen processors offer something for everyone. Built on an advanced 12nm process with improved SenseMI technology, StoreMI, and reduced memory latency, they deliver premium performance that can take your gaming experience to the next level. We’ve compiled a quick guide to help you make your choice a little easier.
AMD Ryzen 7: Do it all
The AMD Ryzen 7 2700X and Ryzen 7 2700 are the best choices for those who work hard and play hard. Rocking a whopping 8 cores and 16 threads, the Ryzen 7 series boasts phenomenal multitasking capabilities. Professional content producers who work with image compression, video transcoding, and encryption would also enjoy an accelerated workflow.
The Ryzen 7 processors are also highly recommended for gamers who want to share their epic moments through streaming. Due to the need for heavy encoding, streaming can be a very taxing process. Software encoding, or encoding with the CPU, is generally preferred if you want the best quality. Doing so on a processor without adequate resources, however, can lead to tons of dropped frames, stutters, lag, and massively reduced gaming performance. That’s not a problem for AMD’s Ryzen 7 series, as their high core count can readily handle games at breakneck speeds while maintaining smooth, uncompromising stream quality.
AMD Ryzen 5: High-performance for everyone
If you’re an avid gamer who also occasionally dabbles in video and photography, then the Ryzen 5 series of processors is right for you. With 6 cores and 12 threads, the Ryzen 5 2600X and Ryzen 5 2600 sit at the sweet spot of performance and value. Their six cores are more than capable of handling today’s most demanding games as well as light to medium image and video processing.
The Ryzen 5 2400G is a good candidate for those who need the processing power of 4 cores and 8 threads as well as integrated graphics. It features 11 powerful Vega graphics cores built in. For users who need a powerful machine that sips power, be sure to grab the Ryzen 5 2400GE. The Ryzen 5 2400GE has the same specifications as the Ryzen 5 2400G, but comes in at a lower 35W TDP (Thermal Design Power).
AMD Ryzen 3: Best bang for your buck
In addition to boasting 4 cores and 8 threads to tackle all your work and multimedia needs, the Ryzen 3 2200G also comes with blazing fast Radeon RX Vega graphics built-in. Featuring 8 of the same graphics cores as AMD’s flagship Vega 64 and Vega 56 graphics cards, the integrated graphics can drive excellent frame rates in competitive eSports titles including Dota 2, League of Legends, Fortnite, and Counter-Strike. Its premium performance in both graphics and processing power makes it an easy recommend for gamers on a budget.
Just like the Ryzen 5 series, the Ryzen 3 2200G has a counterpart with a lower TDP. The Ryzen 3 2200GE also sports 4 CPU cores and Vega graphics, but lowers the TDP to just 35W. If efficiency is a priority, then the Ryzen 3 2200GE stands as a great choice.
How to upgrade?
In addition to elite performance, the nice thing about upgrading to the 2nd Gen Ryzen processors is how easy it is. If you’re the owner of a first-generation Ryzen motherboard, then upgrading is as simple as swapping out the CPU and performing a quick BIOS update. Both 1st gen and 2nd Gen Ryzen processors use the same AM4 CPU socket and offer backward/forward compatibility. AMD plans to support the AM4 socket until at least 2020, so you can even upgrade to newer chips down the road as they arrive.
Gamers looking to join the Ryzen family for the first time can find an abundance of excellent AMD motherboards, each with the right features you need. We recommend the Gigabyte Aorus X470 for the high-end and the Gigabyte Aorus AX370 for our budget pick. Of course, there are plenty more selections from reputable brands such as ASUS, MSI, and ASRock.
Whatever your choice may be, all 2nd Gen Ryzen processors (with the exception of the Ryzen 5 2400GE and Ryzen 3 2200GE) come with a chilly Wraith cooler. The cooler included with the high-end Ryzen 7 2700X is also decked out with user-controllable RGB lighting, helping you show off all your hardware in full display.
To learn more about 2nd Gen Ryzen and its benefits, read our full review or pay a visit to AMD.
Which Apple products were killed off in order to make space for its three new iPhones and Apple Watch Series 4? We look at the discontinued Apple devices that didn't make the cut for the September 12 event.
Every time Apple announces new hardware some of its older products have to move out of the way to make space for its new babies. The September 12 2018 event was no different. We outline the Apple devices that didn't make the cut.
iPhone X
The iPhone X was announced in September 2017 with a completely refreshed, screen-centric design to celebrate the iPhone's 10th birthday. Now that the iPhone itself has reached the grand old age of 11, there's no need to point out that the new iPhone XS is basically the same phone but with a faster processor, slightly longer battery life and IP68 waterproofing. It made sense for Apple to ditch the iPhone X, and so it has.
iPhone 6s
The last flagship iPhone to feature a 3.5mm headphone jack is now officially off sale. All iPhones in Apple's line-up are now waterproof, too (rated IP67 or, for the XS and XS Max, IP68).
Apple's iPhone 6s was announced in September 2015, and though Apple has discontinued the once-loved smartphone just three years later, you can still pick it up from various retailers at the attractive price of £439 (see Currys, John Lewis or Tesco).
Despite the 6s' demise, the Rose Gold iPhone is not yet dead: you can still pick up an iPhone 7 or 7 Plus direct from Apple from £449.
iPhone SE
The 4in iPhone is dead. For all the talk of an iPhone SE 2 what we actually got was an iPhone XR, a phone that is neither as small nor as cheap as the original 'cheap' iPhone. We doubt we'll see an iPhone SE 2 materialise any time soon, and for now the cheapest phone still officially sold by Apple is the £449 iPhone 7.
Apple Watch Series 1 & Watch Edition
When Apple first announced its Apple Watch back in 2014, there was a special 'Watch Edition' that cost upwards of £8k. It was insanely expensive. Now with the introduction of Series 4 Apple has canned its original Apple Watch and the pricey Watch Edition. Even so, you can still pay up to £1,449 for the Hermes Edition.
AirPower
No-one mention AirPower ever again. Announced in September last year, this wireless charging mat was supposed to power up three Apple devices at once - a feat that no other product on the market was capable of at the time. Well, we waited, and we waited, and we waited. Now Apple has removed all traces of the AirPower from its website with no official comment. We guess this one will never see the light of day.
...And this little guy
With no current iPhones including a 3.5mm headphone jack, it's time for Apple to stop bundling its 3.5mm-to-Lightning adaptor in the box. This matters only if you want to use your existing 3.5mm earphones, since Apple supplies Lightning EarPods with all new iPhones.
The adaptor itself isn't discontinued, but if you want one you'll need to buy it separately. It costs £9 from the Apple Store.
FORTNITE'S creators have apologised after adding "embarrassing" jiggly breast animations into the game.
Developers at Epic Games are now removing the so-called "boob physics" in a bid to appease furious gamers – but not everyone is happy about the decision.
Some gamers complained about how the new Calamity skin's breasts moved
Yesterday, Epic Games released its highly anticipated Season 6 update for Fortnite to players around the world.
The update included a brand new character skin (a type of virtual appearance and outfit) called Calamity.
The character is dressed in a cowboy hat, denim shorts and a white tank top – but gamers quickly noticed something strange.
Calamity's breasts were moving, which doesn't happen on other Fortnite skins.
The breasts were deemed 'careless' and 'embarrassing' by Fortnite's creators
But some angry gamers say the breasts were jiggling excessively, moving around in an unrealistic way.
In response to the outrage, Epic Games vowed to roll back the change.
"This is unintended, embarrassing, and it was careless for us to let this ship," an Epic Games spokesperson said.
"We are working now to fix this as soon as possible."
When game designers create virtual characters, they can choose how much (or how little) of the character can move.
Joints and the head will almost always be animated, but for games with simple graphics, it's simply not worth the time to animate clothing or hair.
Breasts will often be skipped over too, because all animation requires extra development effort – and puts more strain on your computer's processor.
But someone on the Fortnite team decided it was worth the effort to animate the Calamity character's breasts in Unreal Engine 4, the engine used to build the game.
Many gamers simply didn't see it as an issue, and wondered why everyone was kicking up a fuss.
One Fortnite fan wrote on Twitter: "Oh no, Fortnite is massively criticised for...having a realistic body response of jiggling breasts on a skin...? OH NO WHAT DEEP DEVASTATION.
"Seriously...it's a bit much to be this offended by it."
Another said: "So I just caught wind of this Fortnite controversy and it makes me wonder – does the Western audience and Epic itself consider women with breasts over B-cups repulsive?
"It's seen the gif, it's not Dead or Alive, they're breasts moving how breasts move in movement. What's up?"
Fans on Twitter were divided over Fortnite's bouncing busts
What is Fortnite Battle Royale?
If you're new to the game, here's what you need to know
Fortnite's Battle Royale is free-to-play
It's available on the Xbox One, PlayStation 4, PC, Mac, iPhone and most recently Nintendo Switch
In the game, up to 100 players are dropped onto a fictional island
Players are then forced to battle it out in a last-person-standing deathmatch
To help, players can collect a range of weapons hidden all over the island
You can also collect resources – like wood, bricks and metal – to build defensive structures
The area you can play in on the island is constantly shrinking thanks to an encroaching storm
This means players are forced together over time, until just one person survives
The game also has a paid-for co-op story campaign
But female gamers hit back – with one Twitter user writing: "Look if you think the Fortnite boob physics bug is 'just how real breasts move' then I strongly suspect that you have never seen an actual breast in your entire life."
Another complained about how the change even made it to the live version of the game: "Those Fortnite boob physics went through assumedly several quality and management checks, and still got released to millions of people."
Fortnite isn't the only game with breast physics, of course.
Probably the most popular example of breast physics being over-used in a game is the Dead Or Alive franchise.
It's a series of fighting (and sometimes volleyball) games featuring scantily clad women.
Developers have typically paid special attention to how breasts look and move in the game, which has given it a cult following.
But in June this year, developers announced that the sexualisation would be toned down for the upcoming Dead Or Alive 6 game.
They say they're aiming for a more "natural movement", and want to "make sure this is a fighting game first".
Do you think Epic Games is right to back down on the boob physics? Let us know in the comments!
Ever since Microsoft launched the Xbox One, it’s been making noises about bringing the console and PC closer together—including adding mouse and keyboard support to supplement the Xbox One controller. Now Microsoft's Xbox chief has confirmed that it's happening in the near future.
Mike Ybarra, the corporate vice president of Xbox, tweeted Tuesday that support for mouse and keyboard is very close to being deployed on the Xbox console, bringing (at least conceptually) control parity between it and the PC. Larry "Major Nelson" Hryb, who handles Xbox communications for the company, added that it would be tested on the game Warframe.
"Mouse and keyboard very close to coming to Xbox console," Ybarra tweeted. "Lots of developer options to ensure fairness and a great experience. Choice is in the hands of the developer to do multiplayer pools (controller only, kb/mouse only, any, etc.). Soon! #Xbox"
That's a public confirmation from Microsoft, which had previously told developers privately that mouse and keyboard support was coming. It will be released to a select group of Windows Insiders in the coming weeks, Microsoft said.
Microsoft also said that it is partnering with Razer to bring the "best possible mouse and keyboard experience" to the console. Windows Central reported that Microsoft and Razer jointly presented a partnership on the Razer Turret earlier this year, touted as the "ultimate keyboard and mouse solution for Xbox One." Notably, the USB mouse support uses the same APIs as those used within Windows, so you'd be able to unplug a mouse used with your PC and attach it to the Xbox.
As simple as that sounds, however, you might wonder why it’s taken so long for “true” mouse and keyboard support to arrive on the Xbox. After all, Microsoft began talking about it as far back as two years ago. Both Windows and the Xbox have gone through multiple updates to their operating systems since then.
Simply remapping controller functions to different forms of input may be taking a backseat to competitive issues. According to Windows Central, Microsoft also published advisories on how mouse and keyboard support—considered to be a more precise form of input than a controller—could affect game play.
Windows lets PC gamers tweak mouse input settings for nearly instantaneous leaps across the screen, while controllers are generally considered to be slower and less precise. Competitive gameplay in, say, Player Unknown's Battlegrounds would be dramatically affected if Xbox players using a controller could challenge Xbox players using a mouse and keyboard, or take on PC players as well.
Microsoft also said that because the presence of a mouse and keyboard could be detected, gamers could be restricted to special modes or queues segregated by input mode, a position that Ybarra publicly reiterated. It would be up to the developers to decide whether and how to implement it, and whether to offer special queues for players using different input methods.
It’s worth noting that you already can play with mouse and keyboard on the Xbox version of Minecraft, though that’s much more of an exploration game than anything else.
What this means to you: The Verge reported that Microsoft and Razer made their presentation at the Xfest developer conference last year. But mouse and keyboard support is certainly a minefield for gamers and developers alike: Should devs include it, and create special queues for PC gamers and console gamers to play together? Or would those same devs see it as fragmenting a base group of players? Mouse and keyboard support is virtually required for any sort of real-time-strategy game, but for first-person, competitive shooters—where tensions run high already—it might be tricky.
On June 11th, the Federal Communications Commission repealed rules protecting net neutrality. These rules prevented private internet service providers from engaging in such practices as speeding up or slowing down traffic to specific websites, blocking access to law-abiding websites, or offering faster speeds to sites or users who pay them for the privilege. For example, an ISP like Comcast or Time Warner couldn't force you to pay an extra fee to stream Netflix in full HD like some cellular carriers already do. They also couldn't block entire sites for basic plans and force you to pay for a higher-priced package to access them. If you had a 100 megabit per second plan, the ISPs couldn’t have a say in how you used those 100 megabits so long as you weren’t breaking the law.
It’s been more than three months since those protections were repealed, but has anything changed? There are no longer rules in place that could hold ISPs legally responsible for violating net neutrality, but that doesn’t mean all of them are guaranteed to do so—any more than making public nudity legal would result in everyone going out to shop naked immediately. Comcast proclaims on its Xfinity website that it “[does] not block, slow down or discriminate against lawful content,” and is “for sustainable and legally enforceable net neutrality protections for our customers.” But that's an awfully odd thing to say from a company that recently helped repeal legal enforcement of net neutrality.
Post-neutrality America so far
The honest answer is that we don’t know what, if anything, has changed yet. It’s very difficult to reliably determine if content is being prioritized or throttled as ISPs currently aren’t required to disclose such practices. Here in the US, we haven’t seen a rollout of tiered pricing for internet plans, but Portugal has. Over there, you can be charged an extra monthly fee for video streaming, social media, and even email access. And if it could happen there, it could happen here.
FCC chairman Ajit Pai and critics of net neutrality claim that it stifles innovation, reduces investment in the internet, and disproportionately harms smaller ISPs. In the world they claim to want, everyone would have many different ISP choices to shop around among, with each having full freedom over what kind of service they sell. In theory, this would discourage ISPs from nickel-and-diming us or blocking and throttling content because they would be in danger of losing customers.
What they’re actually proposing, though, is giving anyone who owns or wants to start an ISP the ability to engage in consumer-unfriendly practices like throttling and tiered packages—getting more money out of you for the same or worse access than you have right now—to make more money for their shareholders. It’s a case of being allowed to pick your poison and hoping competing poisoners will try to woo you with less potent poisons, rather than saying you can’t poison people at all. And as for whether or not we should take Comcast at its word that it won't throttle or block sites: protesting loudly against a rule you say you don’t plan to break is kind of an odd look, as I mentioned. If I claim I never want to steal cherry oat bars from Starbucks, but spend millions of dollars lobbying to remove the legal consequences of doing so, what would that make you assume about my intentions?
The idea that looser rules will mean more options for consumers is dubious, too, as many ISPs already have local monopolies or oligopolies. Where I live in the Denver area, I have a choice between two ISPs and only one that offers the speeds I need to stream and upload video for my job. So if that one ISP decided to start charging me extra for tiered service, I would have little choice but to pay up. For most reading this, the internet is a necessity of everyday professional and personal life. Imagine streamers, YouTubers, or anyone who runs an internet business having no choice but to cough up extra cash every month to Comcast—it's just less money for them, more for a huge corporation that can exploit their needs.
Is net neutrality dead?
So the rules are gone. Some big ISPs are at least still paying lip service to the idea of net neutrality, but they are under no legal compulsion to continue to support this stance. We haven’t seen anything wild in terms of tiered pricing plans yet, but it’s very difficult to tell how faithfully any ISP is keeping its promises about prioritization and throttling, especially if they’re going about it in a way meant to minimize public suspicion. And all of this could change at any time unless net neutrality protections are restored. So what do we do now?
Well, the FCC is a part of the executive branch of the US government. If you’re rusty on your Schoolhouse Rock, they’re in charge of enforcing the law. But they don’t make laws—that’s the legislative branch, aka Congress. And while they have some leeway to interpret laws, their interpretations are always open to being overturned by the judicial branch—the courts, including the Supreme Court. So while we suffered a major loss in the fight for net neutrality, there are still battles ongoing in the legislature and the courts that could overturn the decision.
At the state level, the governors of Oregon, Vermont, and Washington have signed open internet laws, and California is one signature away from enacting legislation that would require ISPs operating there to comply with net neutrality regulations even stricter than the ones that were repealed at the federal level. Pai claims this would be illegal, as it conflicts with the federal order, but a dispute between California and the federal government would have to go to the courts to be resolved. This could turn out in Pai’s favor, nullifying the California law. But it could also lead to a ruling that the federal order itself was illegal, as many net neutrality advocates are trying to prove, causing the FCC’s intervention to backfire. Slate’s April Glaser pointed out last year that Pai’s rules “aren’t on solid legal footing”, and many, including New York Attorney General Barbara Underwood, have already filed a suit—supported by 22 other US attorneys general representing over 160 million Americans—to reverse the FCC order.
On the legislative side, overturning the FCC order would only require a simple majority in Congress. That means more votes in favor than against in both the Senate and the House of Representatives, disregarding any abstentions. This is easier to achieve than an absolute majority—half the total number of seats, plus one—normally required to pass legislation. A vote in the Senate of 48 yea to 47 nay with five abstaining would pass a simple majority, even though an absolute majority of 51 was not achieved. This is significant in that swing votes only need to be convinced not to vote against net neutrality, rather than to actively vote for it.
This was already achieved in May, but the resolution hasn't made it through the House.
Governor Jay Inslee signs a Washington State bill to "protect an open internet in Washington." Credit: Gov. Jay Inslee / Legislative Support Service
What you can do
This is where anyone who can vote in the US can help out. All 435 US Representatives and 35 out of 100 Senators are up for re-election in this fall’s midterms. The Verge collected this list of every Congressperson who voted against net neutrality protections in 2017, as well as how much money they received in campaign donations from big telecom lobbies. If you live in a state or district represented by one of them, you can send a message leading up to the election by contacting them directly and, if they hold firm to their stance in a contested seat, vote them out in November. Attending town halls and other public meetings to ask questions about net neutrality throughout the campaign season is also a helpful way of determining and publicizing a candidate’s stance on the issue. You can find out if you’re registered to vote (and register if you’re not) very easily on vote.org.
Contacting state legislators and voting in local- and state-level elections can also be a big help. California is about to pass its own net neutrality rules, which will force a court case on the issue if the FCC wants to interfere. We can look to examples like use of cannabis, which remains federally illegal as a Schedule I substance but has been legalized or decriminalized in a majority of states to the point that the federal government has stepped back (though not consistently) and allowed those states to enforce their own rules. The same could happen with net neutrality at the state level, regardless of the federal order, especially if most state legislatures end up supporting it.
Net neutrality protections are beneficial to just about everyone who isn’t looking to make money as a shareholder or corporate officer in a big ISP. It’s hard to tell if or when ISPs will act on their new freedoms, but they have a strong financial incentive to do so and most face little meaningful competition that would rein them in. For more on net neutrality and why it matters to us, see our article from last year about what net neutrality means for PC gaming.
Solid state drives are great, but nothing can beat hard drives when it comes down to raw capacity. If you want to store your entire Steam library locally, Toshiba's 5TB performance hard drive is just $105.59 on Amazon for today only. That's $36 cheaper than the MSRP, and $30 lower than the previous price.
This is a 7200 RPM drive with a 128MB cache, so it's fast enough to play games from or to use as a primary OS drive (if you can't get an SSD). The form factor is 3.5 inches, not 2.5 inches, so it won't fit in most laptops.
What is ray tracing? That question just became far more relevant for PC gamers, as the new Nvidia GeForce RTX 2080, GeForce RTX 2080 Ti, and GeForce RTX 2070 are adding dedicated hardware to accelerate ray tracing. All of those graphics cards use Nvidia's new Turing architecture, which promises to be the most revolutionary jump in GPUs that we've seen in a long time—perhaps ever. Will these be the new best graphics cards when they become available, or are they priced too high?
That's a difficult question to answer, because even though we've reviewed the GeForce RTX 2080 Founders Edition and GeForce RTX 2080 Ti Founders Edition, we're still waiting for games that use the new hardware features. Nvidia has provided in-depth information on all the technologies going into the cards, and a few demos of the tech, but actual use in games will vary. While we wait for the ray tracing games to arrive, we've created this overview of ray tracing, rasterization, hybrid graphics, and how Nvidia's GeForce RTX cards are set to change what we can expect from our GPUs.
A short primer on computer graphics and rasterization
Creating a virtual simulation of the world around you that looks and behaves properly is an incredibly complex task—so complex, in fact, that we've never really attempted to do so. Forget about things like gravity and physics for a moment and just think about how we see the world. An effectively infinite number of photons (particles of light) zip around, reflecting off surfaces and passing through objects, all based on the molecular properties of each object. Trying to simulate 'infinity' with a finite resource like a computer's computational power is a recipe for disaster. We need clever approximations, and that's how modern graphics currently works.
We call this process rasterization, and instead of looking at infinite objects, surfaces, and photons, it starts with polygons. Early attempts might have only used hundreds of polygons at a time, but that number has been steadily increasing as our graphics cards and processors have become faster. Now, games have millions of polygons, but how do you turn all these triangles into an image? Rasterization.
It involves a lot of mathematics, but the short version is that a viewport (the screen) is defined and then a 2D rendition of the 3D world gets created. Converting a polygon into a 2D image on a screen involves determining what portion of the display the object covers. Up close, a single triangle might cover the entire screen, while if it's further away and viewed at an angle it might only cover a few pixels. Once the pixels are determined, things like textures and lighting need to be applied as well.
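To make that projection step a little more concrete, here is a minimal sketch in Python of turning a camera-space vertex into pixel coordinates. The simple pinhole-style maths and the function names are illustrative assumptions, not how any particular engine implements its viewport transform.

```python
# Hypothetical sketch: projecting a 3D camera-space vertex to 2D pixel coordinates.
def project_vertex(v, fov_scale, width, height):
    """Project a camera-space point (x, y, z) onto a width x height screen.

    z grows with distance from the camera; fov_scale is a focal-length style
    factor controlling the field of view. Returns (px, py), or None if the
    point is behind the camera.
    """
    x, y, z = v
    if z <= 0:
        return None  # behind the camera, nothing to rasterize
    # Perspective divide: farther points shrink toward the screen centre.
    sx = (x / z) * fov_scale
    sy = (y / z) * fov_scale
    # Map from normalized coordinates (-1..1) to pixel coordinates.
    px = int((sx + 1) * 0.5 * width)
    py = int((1 - (sy + 1) * 0.5) * height)  # flip y: screen y grows downward
    return px, py

# The same triangle covers far more of the screen up close than far away,
# which is the coverage question rasterization has to answer per polygon.
near = [project_vertex(p, 1.0, 1920, 1080) for p in [(-1, -1, 2), (1, -1, 2), (0, 1, 2)]]
far = [project_vertex(p, 1.0, 1920, 1080) for p in [(-1, -1, 20), (1, -1, 20), (0, 1, 20)]]
print(near)
print(far)
```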
Doing this for every polygon for every frame ends up being wasteful, as many polygons might not be visible. Various techniques like the Z-buffer (a secondary buffer that keeps track of the depth of each pixel) and Z-culling (discarding objects that are blocked from view) help speed up the process. In the end, a game engine will take the millions of potentially visible polygons, sort them, and then attempt to process them as efficiently as possible.
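As a rough illustration of the Z-buffer idea, here is a tiny, hypothetical Python sketch of the per-pixel depth test; real GPUs do this in fixed-function hardware, so the buffer sizes and names below are purely for demonstration.

```python
# Hypothetical sketch of a Z-buffer: keep only the closest fragment per pixel.
WIDTH, HEIGHT = 64, 64

# One depth value per pixel, initialised to "infinitely far away".
z_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
frame = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]  # RGB framebuffer

def write_fragment(x, y, depth, color):
    """Write a fragment only if it is closer than whatever is already there."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        frame[y][x] = color
    # Otherwise the fragment is hidden behind earlier geometry and is discarded,
    # the per-pixel cousin of the Z-culling idea described above.

# A red fragment at depth 5 is later overdrawn by a closer green one at depth 2.
write_fragment(10, 10, 5.0, (255, 0, 0))
write_fragment(10, 10, 2.0, (0, 255, 0))
print(frame[10][10])  # (0, 255, 0)
```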
That's no small task, and over the past couple of decades we've gone from primitive polygons with 'faked' light sources (eg, the original Quake), to more complex environments with shadow maps, soft shadows, ambient occlusion, tessellation, screen space reflections, and other techniques attempting to create a better approximation of the way things should look. This can require millions or even billions of calculations for each frame, but with modern GPUs able to process teraflops of work (trillions of calculations per second), it's a tractable problem.
What is ray tracing?
Ray tracing is a different approach, one that has theoretically been around for nearly 50 years now, though it's more like 40 years of practical application. Turner Whitted wrote a paper in 1979 titled "An Improved Illumination Model for Shaded Display" (Online PDF version), which outlined how to recursively calculate ray tracing to end up with an impressive image that includes shadows, reflections, and more. (Not coincidentally, Turner Whitted now works for Nvidia's research division.) The problem is that doing this requires even more complex calculations than rasterization.
Ray tracing involves tracing the path of a ray (a beam of light) backward into a 3D world. The simplest implementation would trace one ray per pixel. Figure out what polygon that ray hits first, then calculate light sources that could reach that spot on the polygon (more rays), plus calculate additional rays based on the properties of the polygon (is it highly reflective or partially reflective, what color is the material, is it a flat or curved surface, etc.).
To determine the amount of light falling on a single pixel, the ray tracing formula needs to know how far away the light is, how bright it is, and the angle of the reflecting surface relative to the angle of the light source, before calculating how bright the reflected ray should be. The process is then repeated for any other light source, including indirect illumination from light bouncing off other objects in the scene. Calculations must be applied to the materials, determined by their level of diffuse or specular reflectivity—or both. Transparent or semi-transparent surfaces, such as glass or water, refract rays, adding further rendering headaches, and everything necessarily has an artificial reflection limit, because without one, rays could be traced to infinity.
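Here's a deliberately tiny Python sketch of that per-ray shading loop, assuming a single hard-coded sphere, a directional light, and a mirror bounce capped at two levels. The scene and material model are invented for illustration and are nothing like a production ray tracer.

```python
# Hypothetical sketch: one primary ray, a diffuse term from the light angle,
# and a capped number of mirror bounces (the "artificial reflection limit").
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalize(a):
    n = math.sqrt(dot(a, a))
    return scale(a, 1.0 / n)

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None if it misses."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

SPHERE = {"center": (0.0, 0.0, -3.0), "radius": 1.0, "reflectivity": 0.3}
LIGHT_DIR = normalize((1.0, 1.0, 0.5))  # direction from surfaces toward the light

def trace(origin, direction, depth=0):
    """Shade one ray: diffuse lighting plus at most two mirror bounces."""
    t = hit_sphere(origin, direction, SPHERE["center"], SPHERE["radius"])
    if t is None:
        return 0.2  # background brightness
    point = add(origin, scale(direction, t))
    normal = normalize(sub(point, SPHERE["center"]))
    # Diffuse term: brightness depends on the angle between surface and light.
    brightness = max(0.0, dot(normal, LIGHT_DIR))
    # Reflection limit: stop recursing after a couple of bounces.
    if depth < 2 and SPHERE["reflectivity"] > 0:
        reflected = sub(direction, scale(normal, 2 * dot(direction, normal)))
        brightness += SPHERE["reflectivity"] * trace(point, reflected, depth + 1)
    return min(brightness, 1.0)

# One ray through the middle of the screen, pointing straight ahead.
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))
```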
The most commonly used ray tracing algorithm, according to Nvidia, is BVH Traversal: Bounding Volume Hierarchy Traversal. That's a big name for a complex process, but the idea is to optimize the ray/triangle intersection computations. Take a scene with hundreds of objects, each with potentially millions of polygons, and then try to figure out which polygons a ray intersects. It's a search problem and would take a very long time to brute force. BVH speeds this up by creating a tree of objects, where each object is enclosed by a box.
Nvidia presented an example of a ray intersecting a bunny model. At the top level, a BVH (box) contains the entire bunny, and a calculation determines that the ray intersects this box—if it didn't, no more work would be required on that box/object/BVH. Next, the BVH algorithm gets a collection of smaller boxes for the intersected object—in this case, it determines the ray in question has hit the bunny object in the head. Additional BVH traversals occur until eventually the algorithm gets a short list of actual polygons, which it can then check to determine how the ray interacts with the bunny.
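The bunny example can be sketched in a few lines of Python. The two-level tree, the node layout, and the slab test below are generic illustrations of BVH traversal, not the exact data structure Nvidia's RT cores consume.

```python
# Hypothetical sketch of BVH traversal: cull whole boxes before touching triangles.
def ray_hits_box(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray pass through the axis-aligned bounding box?"""
    t_near, t_far = -float("inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far > 0

def traverse(node, origin, inv_dir, candidates):
    """Collect the short list of triangles whose boxes the ray actually enters."""
    if not ray_hits_box(origin, inv_dir, node["min"], node["max"]):
        return  # whole subtree culled: no more work for this box/object
    if "triangles" in node:  # leaf node: hand back actual geometry to test
        candidates.extend(node["triangles"])
    else:  # inner node: descend into the smaller child boxes
        for child in node["children"]:
            traverse(child, origin, inv_dir, candidates)

# Tiny two-level hierarchy: the "bunny" box contains a head box and a body box.
bvh = {
    "min": (-1, -1, -1), "max": (1, 1, 1),
    "children": [
        {"min": (-1, 0, -1), "max": (1, 1, 1), "triangles": ["head_tri_0", "head_tri_1"]},
        {"min": (-1, -1, -1), "max": (1, 0, 1), "triangles": ["body_tri_0"]},
    ],
}
origin, direction = (0.0, 0.5, -5.0), (0.0, 0.0, 1.0)
inv_dir = tuple(1.0 / d if d != 0 else float("inf") for d in direction)
hits = []
traverse(bvh, origin, inv_dir, hits)
print(hits)  # only the head triangles survive the box tests
```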
All of this can be done using software running on either a CPU or GPU, but it can take thousands of instruction slots per ray. The RT cores are presented as a black box that takes the BVH structure and a ray, and cycles through all the dirty work, spitting out the desired result. It's important to note that this is a non-deterministic operation, meaning it's not possible to say precisely how many rays the RT cores can compute per second—that depends on the BVH structure. The Giga Rays per second figure in that sense is more of an approximation, but in practice the RT cores can run the BVH algorithm about ten times faster than CUDA cores.
Using a single ray per pixel can result in dozens or even hundreds of ray calculations, and better results are achieved by starting with more rays, with an aggregate of where each ray ends up used to determine a final color for the pixel. How many rays per pixel are 'enough'? The best answer is that it varies—if the first surface is completely non-reflective, a few rays might suffice. If the rays bounce around between highly reflective surfaces (eg, a hall of mirrors effect), hundreds or even thousands of rays might be necessary.
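In code, "more rays per pixel" just means averaging more samples. The sketch below stubs out the actual per-ray shading with random noise so it runs on its own; shade_ray is a stand-in, not a real renderer.

```python
# Hypothetical sketch: averaging several jittered rays to get one pixel colour.
import random

def shade_ray(px, py, jitter_x, jitter_y):
    # Stand-in for a real per-ray shading routine; here it just returns noise.
    return random.random()

def shade_pixel(px, py, samples):
    """Average the results of several jittered rays through one pixel."""
    total = 0.0
    for _ in range(samples):
        total += shade_ray(px, py, random.random(), random.random())
    return total / samples

# More samples per pixel gives a smoother, less noisy estimate of the true value.
print(shade_pixel(100, 100, samples=1))
print(shade_pixel(100, 100, samples=64))
```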
Companies like Pixar—and really, just about every major film these days—use ray tracing (or path tracing, which is similar except it tends to use even more rays per pixel) to generate highly detailed computer images. In the case of Pixar, a 90-minute movie at 60fps would require 324,000 images, with each image potentially taking hours of computational time. How is Nvidia hoping to do that in real-time on a single GPU? The answer is that Nvidia isn't planning to do that. At least not at the resolution and quality you might see in a Hollywood film.
Enter hybrid rendering
Computer graphics hardware has been focused on doing rasterization faster for more than 20 years, and game designers and artists are very good at producing impressive results. But certain things still present problems, like proper lighting, shadows, and reflections.
Screen space reflections use the results of what's visible on the screen to fake reflections—but what if you're looking into a mirror? You could do a second projection from the mirror into the game world, but there are limits to how many projections you can do in a single frame (since each projection requires a lot of rasterization work from a new angle). Shadow maps are commonly used in games, but they require lots of memory to get high quality results, plus time spent by artists trying to place lights in just the right spot to create the desired effect, and they're still not entirely accurate.
Another lighting problem is ambient occlusion, the shadows that form in areas where walls intersect. SSAO (screen space ambient occlusion) is an approximation that helps, but again it's quite inaccurate. EA's SEED group created the Pica Pica demo using DXR (DirectX Ray Tracing), and at one point it shows the difference between SSAO and RTAO (ray traced ambient occlusion). It's not that SSAO looks bad, but RTAO looks better.
Hybrid rendering uses traditional rasterization technologies to render all the polygons in a frame, and then combines the result with ray-traced shadows, reflections, and/or refractions. The ray tracing ends up being less complex, allowing for higher framerates, though there's still a balancing act between quality and performance. Casting more rays for a scene can improve the overall result at the cost of framerates, and vice versa.
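A hybrid pipeline can be thought of as "rasterize first, then ray trace only the tricky bits." The toy sketch below darkens an already-rasterized colour when a (stubbed-out) shadow ray toward the light is blocked; occluded_toward_light is a placeholder, not a real intersection test.

```python
# Hypothetical sketch of hybrid shading: rasterized colour + ray-traced shadow term.
def occluded_toward_light(world_pos):
    # Placeholder: a real renderer would trace a ray from world_pos toward the
    # light and check for any hit along the way (using BVH traversal as above).
    return world_pos[1] < 0.0  # pretend everything below y = 0 sits in shadow

def shade_hybrid(raster_color, world_pos, shadow_strength=0.6):
    """Combine a rasterized colour with a single ray-traced shadow test."""
    if occluded_toward_light(world_pos):
        return tuple(c * (1.0 - shadow_strength) for c in raster_color)
    return raster_color

print(shade_hybrid((0.8, 0.7, 0.6), (2.0, -1.0, 5.0)))  # shadowed
print(shade_hybrid((0.8, 0.7, 0.6), (2.0, 3.0, 5.0)))   # fully lit
```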
Nvidia had various game developers show their ray tracing efforts at Gamescom, but everything so far is a work in progress. More importantly, we haven't had a chance to do any performance testing or adjust the settings in any way. And all the demonstrations ran on RTX 2080 Ti cards, which can do >10 Giga Rays per second (GR/s)—but what happens if you 'only' have an RTX 2080 with 8 GR/s, or the RTX 2070 and 6 GR/s? Either games that use ray tracing effects will run 20 percent and 40 percent slower on those cards, respectively, or the games will offer settings that can be adjusted to strike a balance between quality and performance—just like any other graphics setting.
Taking the 2080 Ti and its 10 GR/s as a baseline, if we're rendering a game at 1080p, that's about 2 million pixels, and 60fps means 120 million pixels. Doing the math, a game could do 80 rays per pixel at 1080p60, if the GPU is doing nothing else—and at 4k60 it would be limited to 20 rays per pixel. But games aren't doing pure ray tracing, as they still use rasterization for a lot of the environment. This brings us to an interesting dilemma: how many rays per frame are enough?
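That back-of-the-envelope maths is simple enough to write out, assuming the entire ray budget went to primary rays and the GPU did nothing else:

```python
# Rays-per-pixel budget for a given ray rate, resolution, and framerate.
def rays_per_pixel(giga_rays_per_second, width, height, fps):
    pixels_per_second = width * height * fps
    return giga_rays_per_second * 1e9 / pixels_per_second

print(round(rays_per_pixel(10, 1920, 1080, 60)))  # ~80 rays per pixel at 1080p60
print(round(rays_per_pixel(10, 3840, 2160, 60)))  # ~20 rays per pixel at 4K60
```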
Nvidia's OptiX denoising algorithm at work
Denoising and AI to the rescue
Here's where Nvidia's Turing architecture really gets clever. As if the RT cores and enhanced CUDA cores aren't enough, Turing has Tensor cores that can dramatically accelerate machine learning calculations. In FP16 workloads, the RTX 2080 Ti FE's Tensor cores work at 114 TFLOPS, compared to just 14.2 TFLOPS of FP32 on the CUDA cores. That's basically like ten GTX 1080 Ti cards waiting to crunch numbers.
But why do the Tensor cores even matter for ray tracing? The answer is that AI and machine learning are becoming increasingly powerful, and quite a few algorithms have been developed and trained on deep learning networks to improve graphics. Nvidia's DLSS (Deep Learning Super Sampling) allows games to render at lower resolutions without AA, and then the Tensor cores can run the trained network to change each frame into a higher resolution anti-aliased image. Denoising can be a similarly potent tool for ray tracing work.
Pixar has been at the forefront of using computer generated graphics to create movies, and its earlier efforts largely relied on hybrid rendering models—more complex models perhaps than what RTX / DXR games are planning to run, but they weren't fully ray traced or path traced. The reason: it simply took too long. This is where denoising comes into play.
Many path tracing applications can provide a coarse level of detail very fast—a quick and dirty view of the rendered output—and then once the viewport stops moving around, additional passes can enhance the preview to deliver something that's closer to the final intended output. The initial coarse renderings are 'noisy,' and Pixar and other companies have researched ways to denoise such scenes.
Pixar did research into using a deep learning convolutional neural network (CNN), training it with millions of frames from Finding Dory. Once trained, Pixar was able to use the same network to denoise other scenes. Denoising allowed Pixar to reportedly achieve an order of magnitude speedup in rendering time. This allowed Pixar to do fully path traced rendering for its latest movies, without requiring potentially years of render farm time, and both Cars 3 and Coco made extensive use of denoising.
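Real denoisers like the ones Pixar and Nvidia describe are trained neural networks, but even a crude box filter shows the underlying trade-off: a noisy, few-rays-per-pixel estimate gets much closer to the true value at the cost of some sharpness. The sketch below is purely illustrative and is not the actual algorithm used by either company.

```python
# Illustrative only: smooth a noisy "1 ray per pixel" image with a box filter.
import random

WIDTH, HEIGHT = 8, 8
# Each pixel is the true value 0.5 plus heavy per-pixel noise.
noisy = [[0.5 + random.uniform(-0.4, 0.4) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def box_denoise(img, radius=1):
    """Replace each pixel with the average of its neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            samples = [img[ny][nx]
                       for ny in range(max(0, y - radius), min(h, y + radius + 1))
                       for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(samples) / len(samples)
    return out

smooth = box_denoise(noisy)
print(noisy[4][4], smooth[4][4])  # the filtered value sits much closer to 0.5
```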
If the algorithms are good enough for Pixar's latest movies, what about using them in games? And more importantly, what about using denoising algorithms on just the lighting, shadows, and reflections in a hybrid rendering model? If you look at the quality of shadows generated using current shadow mapping techniques, lower resolution textures can look extremely blocky, but it's often necessary to reach acceptable performance on slower GPUs—and most gamers are okay with the compromise.
Take those same concepts and apply them to RTX ray tracing. All the demonstrations we've seen so far have used some form of denoising, but as with all deep learning algorithms, additional training of the model can improve the results. We don't know if Battlefield V, Metro Exodus, and Shadow of the Tomb Raider are casting the maximum number of rays possible right now, but further tuning is certainly possible.
Imagine using just 1-2 GR/s instead of the GeForce RTX 2080 Ti's full 10 GR/s and letting denoising make up the difference. There would be a loss in quality, but it should make it viable to implement real-time ray tracing effects even on lower tier hardware.
If you look at the above image of the goblets, the approximated result on the right still looks pretty blocky, but if that only impacted the quality of shadows, reflections, and refractions, how much detail and accuracy do we really need? And since the RT cores in Turing are apparently able to run in parallel with the CUDA cores, it's not unreasonable to think we can get a clear improvement in visual fidelity without killing framerates.
Big names in rendering have jumped on board the ray tracing bandwagon, including Epic and its Unreal Engine, Unity 3D, and EA’s Frostbite. Microsoft has created an entirely new DirectX Ray Tracing API as well. Ray tracing of some form has always been a desired goal of real-time computer graphics. The RTX 20-series GPUs are the first implementation of ray tracing acceleration in consumer hardware, and future Nvidia GPUs could easily double or quadruple the number of RT cores per SM. With increasing core counts, today's 10 GR/s performance might end up looking incredibly pathetic. But look at where GPUs have come from in the past decade.
The first Nvidia GPUs with CUDA cores were the 8800 GTX cards, which topped out at 128 CUDA cores back in late 2006. 12 years later, we have GPUs with up to 40 times as many CUDA cores (Titan V), and modest hardware like the GTX 1070 still has 15 times as many cores—plus higher clockspeeds. Full real-time ray tracing for every pixel might not be possible on the RTX 2080 Ti today, but we've clearly embarked on that journey. If it takes another five or ten years before it becomes practical on mainstream hardware, I can wait. And by then we'll be looking toward the next jump in computer graphics.
It's been a long time since memory kits were cheap. Who knows if things will ever return to the way they were, though if you're looking for some positive news on the subject, a new report predicts that DRAM prices will drop in the fourth quarter.
The report comes from DRAMeXchange, a division of TrendForce, which keeps its finger on the pulse of the memory market. You may recall that DRAMeXchange noted a couple of weeks ago that lower DRAM pricing was imminent. That's still true, the market research firm says, only now it's forecasting a steeper decline.
"DRAMeXchange expects that the quotations of DRAM products to decline by 5 percent quarter-overquater, higher than the previous forecast of 1~3 percent. The weak quotations are mainly due to increasing bit supply yet fairly limited growth in demand, despite the coming of holiday sales season," DRAMeXchange said.
This is welcome news for consumers after nine consecutive quarters of price growth. Lower contract prices should translate to cheaper memory kits, at least in theory. Further good news is that the price forecast applies not just to server and mobile memory products, but consumer DRAM, and DDR4 memory kits specifically.
Memory pricing has a tendency to ebb and flow, though two years ago, RAM kits were comparatively cheap. For example, this 16GB Corsair Vengeance LPX DDR4-3000 memory kit hovered around $70 in 2016, but now lists for $144.99 on Amazon, and that's a sale price. Likewise, you could have found this 8GB Kingston HyperX Fury Black DDR4-2133 for just over $30 for a period of time in 2016, but now it sells for $97.34.
So, the best days are probably behind us, as it pertains to memory pricing, but we'll take any price drops we can get.
Several major technology firms are attending a quantum computing summit hosted by the White House today, as the US looks to make some major headway in the field over the next decade. Google and Intel are among those attending, as are executives from AT&T, Honeywell, IBM, Lockheed Martin, and several others.
The White House Office of Science and Technology Policy organized the meeting, Reuters reports. The meeting is aimed at coming up with and publishing a strategy on how to advance quantum computing and "really develop a plan" for making it a reality.
This is not something that will affect PC gaming in the next several years, but it could certainly have an impact sometime down the road.
Quantum computing is much more complex and far faster at certain kinds of calculations than today's computing methods. There's a detailed write-up at Wired that provides a pretty good overview and is worth reading if you've never heard of quantum computing, or just want to understand it a bit better. In short, whereas conventional computers are based on bits (0s and 1s of binary code), quantum computing uses quantum bits, or qubits, which can exist in superpositions of 1 and 0.
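To make the qubit idea slightly less abstract, here is a tiny Python sketch that simulates the maths of a single qubit on an ordinary computer: its state is a pair of amplitudes rather than a plain 0 or 1, and a Hadamard gate turns a definite 0 into an equal superposition. This simulates the arithmetic only; it says nothing about how real quantum hardware is built.

```python
# Hypothetical sketch: simulating one qubit's amplitudes classically.
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (amp0, amp1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def measure_probabilities(state):
    """The chance of reading 0 or 1 is the squared magnitude of each amplitude."""
    a0, a1 = state
    return abs(a0) ** 2, abs(a1) ** 2

qubit = (1.0, 0.0)            # starts as a definite 0
qubit = hadamard(qubit)       # now an equal superposition of 0 and 1
print(measure_probabilities(qubit))  # roughly (0.5, 0.5): either outcome is equally likely
```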
It's a lot to wrap one's head around. The takeaway is that quantum computing could have a major impact on all areas of science and technology.
The US is eager to usher in the era of quantum computing. Lawmakers are trying to approve $1.3 billion in funding over the next five years to help "create a unified national quantum strategy." According to ABC News, the US is partially motivated by fears of growing competition from China.
There are several challenges that lie in the way of quantum computing being used on a mass scale. Everything from cooling to even the programming language presents significant hurdles. As it pertains to the latter, Microsoft last year announced a new programming language that integrates into Visual Studio and is designed to work on both a quantum simulator and a quantum computer.
As it pertains to gaming, it's tough to predict the precise impact quantum computing will have. Generally speaking, it will probably help deliver better physics and AI scenarios, as opposed to providing the horsepower for, say, a completely ray-traced experience in real time (versus the current hybrid solution that Nvidia's RTX technology uses). Think bigger and more complex environments, deeper interactions with smarter NPCs, and that sort of thing.
"[Quantum machine learning] will give game developers an opportunity to create experiences that adapt to human input over time. In massively multiplayer scenarios, quantum-powered machine learning will be able to analyze the behaviors of legions of gamers, and create experiences that challenge us better collectively, while adapting to each player’s unique style of play," Jeff Henshaw, founding member of Microsoft’s Xbox team and current group project manager of Microsoft’s Quantum Architecture and Computing Group (QuArC), recently told Gizmodo.
It's all speculation right now, but here's hoping we find out sooner rather than later.
Facebook has issued a statement warning its users of a "security issue" that affects nearly 50 million user accounts. An investigation is underway, but Facebook says "it's clear that attackers exploited a vulnerability in Facebook's code."
That vulnerability arose from the "View As" feature that enables users to see what their profiles look like to other people. Exploiting a change made in July 2017 to the video uploading system, hackers could take control of Facebook access tokens—"the equivalent of digital keys that keep people logged in to Facebook so they don’t need to re-enter their password every time they use the app"—and use them to take over connected accounts.
Facebook has fixed the problem and informed law enforcement, and the access tokens of the accounts affected by the breach have been reset, as have tokens belonging to 40 million more accounts that have been the subject of a "View As" lookup over the past year. "As a result, around 90 million people will now have to log back in to Facebook, or any of their apps that use Facebook Login," the statement says.
The "View As" feature has also been suspended while Facebook investigates.
"Since we’ve only just started our investigation, we have yet to determine whether these accounts were misused or any information accessed. We also don’t know who’s behind these attacks or where they’re based," Facebook wrote.
"We’re working hard to better understand these details—and we will update this post when we have more information, or if the facts change. In addition, if we find more affected accounts, we will immediately reset their access tokens."
Facebook noted that users do not need to change their passwords. Users who want to log out of Facebook just in case should hit up the "Security and Login" section of the Facebook settings menu.