
With the Fall Creators Update, the Windows 10 Task Manager gained the ability to monitor graphics processing unit (GPU) performance data. Users can analyze this information to understand how graphics card resources, which are increasingly used for general-purpose computing, are being consumed.

This means that all GPUs installed in the PC will be shown in the Performance tab. Additionally, in the Processes tab, you can see which processes are accessing the GPU, and GPU memory usage data is located in the Details tab.

How to check if GPU performance viewer is supported

Although Task Manager has no special requirements for monitoring CPU, memory, disk, or network adapters, the situation with GPUs looks a little different.

In Windows 10, GPU information is available in Task Manager only when using the Windows Display Driver Model (WDDM) architecture. WDDM is a graphics driver architecture for a video card that enables desktop and application rendering on the screen.

WDDM provides a graphics kernel, which includes a scheduler (VidSch) and a video memory manager (VidMm). These modules are responsible for deciding how GPU resources are used.

Task Manager receives information about GPU resource usage directly from the graphics kernel's scheduler and video memory manager, and this is true for both integrated and dedicated GPUs. For this feature to work correctly, WDDM version 2.0 or higher is required.

To check if your devices support viewing GPU data in Task Manager, follow these steps:

  1. Press the Windows key + R to open the Run dialog.
  2. Type dxdiag.exe and press Enter to open the DirectX Diagnostic Tool.
  3. Go to the "Display" tab.
  4. In the "Drivers" section on the right, check the "Driver Model" value.

If the driver model is WDDM 2.0 or higher, Task Manager will display GPU usage data on the Performance tab.
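
If you prefer to script this check, the same information can be pulled from a saved dxdiag report. A minimal sketch, assuming dxdiag's /t switch and the English "Driver Model: WDDM x.y" wording in the report (localized Windows builds may word this line differently):

```python
import os
import re
import subprocess
import tempfile

# Minimal sketch: save a dxdiag report to a text file and look for the
# "Driver Model" lines. Assumes an English-language report.
def get_wddm_versions():
    report = os.path.join(tempfile.gettempdir(), "dxdiag_report.txt")
    # dxdiag /t writes a plain-text report and exits when it is finished
    subprocess.run(["dxdiag", "/t", report], check=True)
    with open(report, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    return re.findall(r"Driver Model:\s*(WDDM [\d.]+)", text)

if __name__ == "__main__":
    for model in get_wddm_versions():
        print(model)  # e.g. "WDDM 2.6" means GPU monitoring is supported
```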

How to Monitor GPU Performance Using Task Manager

To monitor GPU performance data using Task Manager, simply right-click the taskbar and select "Task Manager". If the compact view is active, click the "More details" button, then go to the Performance tab.

Tip: to launch Task Manager quickly, use the keyboard shortcut Ctrl + Shift + Esc.

Performance tab

If your computer supports WDDM version 2.0 or later, your GPU will be listed in the left pane of the Performance tab. If multiple GPUs are installed, each is shown with a number corresponding to its physical location: GPU 0, GPU 1, GPU 2, and so on.

Windows 10 supports multi-GPU setups using Nvidia SLI and AMD CrossFire. When one of these configurations is detected, the Performance tab identifies each link with a number (for example, Link 0, Link 1), and you can view and inspect each GPU within the link.

On the specific GPU page, you'll find aggregated performance data, which is generally divided into two sections.

The first section contains current information about the GPU's engines rather than its individual cores.

By default, Task Manager displays the four busiest GPU engines, typically 3D, Copy, Video Decode, and Video Processing, but you can change the view by clicking an engine name and selecting a different engine.

The user can even change the graph view to a single engine by right-clicking anywhere in the section and selecting the "Change Graph > Single Engine" option.

Below the engine graphs is a block of data on video memory consumption.

Task Manager shows two types of video memory: shared and dedicated.

Dedicated memory is memory used exclusively by the graphics card. On discrete cards this is the VRAM on the card itself; for integrated graphics it is the portion of system memory explicitly reserved for the GPU.

In the lower right corner, the "Hardware reserved memory" value shows how much memory is reserved for the video driver.

The dedicated memory figure in this section represents the memory actively used by processes, while the shared memory figure represents the amount of system memory consumed for graphics.

In addition, in the left panel under the GPU name, you will see the current GPU resource utilization as a percentage. It's important to note that Task Manager uses the percentage of the busiest engine to represent overall usage.
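
Those same per-engine figures are exposed as Windows performance counters, which is convenient for logging. A minimal sketch using the built-in typeperf tool; the "GPU Engine" counter path is an assumption about your build (it ships with Windows 10 1709+ and WDDM 2.x drivers):

```python
import subprocess

# Minimal sketch: sample the per-engine utilization counters that back
# Task Manager's GPU graphs.
COUNTER = r"\GPU Engine(*engtype_3D)\Utilization Percentage"

def sample_3d_engine_utilization():
    # -sc 1 takes a single sample; typeperf prints CSV, one value per engine instance
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", "1"],
        capture_output=True, text=True, check=True
    ).stdout
    rows = [line for line in out.splitlines() if line.startswith('"')]
    values = rows[1].split('","')[1:]  # rows[0] is the header; drop the timestamp column
    return [float(v.strip('" ')) for v in values if v.strip('" ')]

if __name__ == "__main__":
    samples = sample_3d_engine_utilization()
    # Task Manager reports the busiest engine, so take the maximum
    print(f"3D engine utilization: {max(samples, default=0.0):.1f}%")
```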

To see performance data over time, run a GPU-intensive application, such as a video game.

Processes tab

You can also monitor GPU performance on the Processes tab. Here you will find a summarized view for each process.

The GPU column shows the most active engine usage to represent the overall GPU resource usage of a particular process.

However, if multiple engines report 100 percent utilization, this can be confusing. The additional "GPU Engine" column provides details about which engine a given process is loading.

The column header on the Processes tab shows the total resource consumption of all GPUs available on the system.

If you don't see these columns, right-click any column header and check the appropriate boxes.

Details tab

By default, the tab does not display GPU information, but you can always right-click on the column header, select the “Select Columns” option and enable the following options:

  • GPU Engine
  • Dedicated GPU memory
  • Shared GPU Memory

The memory columns show, respectively, the dedicated and shared memory used by a particular process. The GPU and GPU Engine columns show the same information as on the Processes tab.

When using the Details tab, keep in mind that summing the memory used by each process can exceed the total available memory, because shared memory gets counted multiple times. This information is useful for understanding a process's memory usage, but use the Performance tab for a more detailed picture of graphics usage.

Conclusion

Microsoft is committed to giving users a more precise graphics performance tool than third-party applications offer. Note that work on this functionality is ongoing and further improvements are likely in the near future.

The Windows 10 Task Manager includes detailed GPU monitoring tools. You can view per-application and system-wide GPU usage, and Microsoft promises that Task Manager's figures will be more accurate than those from third-party utilities.

How it works

These GPU features were added in the Fall Creators Update for Windows 10, also known as Windows 10 version 1709. If you are using Windows 7, 8, or an older build of Windows 10, you won't see these tools in Task Manager.

Windows uses newer features of the Windows Display Driver Model to extract information directly from the GPU scheduler (VidSch) and the video memory manager (VidMm) in the WDDM graphics kernel, the components responsible for the actual allocation of resources. The data is accurate no matter which API applications use to access the GPU: Microsoft DirectX, OpenGL, Vulkan, OpenCL, NVIDIA CUDA, AMD Mantle, or anything else.

That is why Task Manager shows this data only for GPUs on WDDM 2.0 compliant systems. If you don't see it, your system's GPU is probably using an older driver type.

You can check which WDDM version your GPU driver uses by pressing the Windows key + R, typing "dxdiag", and pressing Enter to open the DirectX Diagnostic Tool. Go to the "Display" tab and look at "Driver Model" in the "Drivers" section. If you see a WDDM 2.x driver there, your system is compatible; if you see a WDDM 1.x driver, your GPU is not.

How to View GPU Performance

This information is available in Task Manager, although it is hidden by default. To see it, open Task Manager by right-clicking any empty space on the taskbar and selecting "Task Manager", or by pressing Ctrl+Shift+Esc.

Click the "More details" button at the bottom of the window " Task Manager" if you see the standard simple view.

If the GPU column is not shown on the Processes tab of the full view, right-click any column header and enable the "GPU" option. This adds a GPU column that shows the percentage of GPU resources used by each application.

You can also enable the "GPU Engine" option to see which GPU and engine each application is using.

The total GPU usage of all applications on your system appears at the top of the GPU column. Click the GPU column header to sort the list and see which apps are using your GPU the most at the moment.

The number in the GPU column is the highest usage the application shows across all engines. For example, if an application uses 50% of the GPU's 3D engine and 2% of its video decode engine, the GPU column will simply display 50%.
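
A minimal sketch of that aggregation rule, with purely illustrative numbers: the per-process figure is the maximum across engines, not their sum.

```python
# Illustrative only: Task Manager's GPU column takes the busiest engine,
# not the sum of all engines.
def process_gpu_usage(per_engine_usage: dict[str, float]) -> float:
    return max(per_engine_usage.values(), default=0.0)

usage = {"3D": 50.0, "Video Decode": 2.0, "Copy": 0.0}
print(process_gpu_usage(usage))  # -> 50.0, the value shown in the GPU column
```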

The "GPU Engine" column shows, for each application, which physical GPU and which engine it is using, for example the 3D engine or the video decode engine. You can determine which GPU a given number corresponds to by checking the Performance tab, covered in the next section.

How to view an application's video memory usage

If you are wondering how much video memory an application is using, go to the Details tab in Task Manager. There, right-click any column header, select "Select Columns", scroll down, and enable the "GPU", "GPU Engine", "Dedicated GPU memory", and "Shared GPU memory" columns. The first two are also available on the Processes tab, but the two memory columns are only available on the Details tab.

Column " Dedicated GPU memory » shows how much memory the application is using on your GPU. If your PC has a discrete graphics card from NVIDIA or AMD, then this is part of its VRAM, that is, how much physical memory on your video card is using the application. If you have integrated graphics processor , a portion of your regular system memory is reserved exclusively for your graphics hardware. This shows how much of the reserved memory is being used by the application.

Windows also allows applications to store some data in regular system DRAM. The "Shared GPU memory" column shows how much of the computer's normal system RAM the application is currently using for graphics.

You can click any of these columns to sort by them and see which application is using the most resources. For example, to see the applications using the most video memory on your GPU, click the "Dedicated GPU memory" column.
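
The same per-process memory figures are also exposed as performance counters, which makes them easy to log from a script. A minimal sketch using typeperf; the "GPU Process Memory" counter set and its "pid_..." instance naming are assumptions about your build (they appear on Windows 10 1709+ alongside these Task Manager columns):

```python
import subprocess

# Minimal sketch: read the per-process dedicated VRAM counters that back
# the Details tab's "Dedicated GPU memory" column.
COUNTER = r"\GPU Process Memory(*)\Dedicated Usage"

def dedicated_usage_by_pid():
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", "1"],
        capture_output=True, text=True, check=True
    ).stdout
    rows = [line.strip().strip('"').split('","')
            for line in out.splitlines() if line.startswith('"')]
    header, data = rows[0], rows[1]
    usage = {}
    for name, value in zip(header[1:], data[1:]):  # skip the timestamp column
        # instance names look like "...(pid_1234_luid_...)\Dedicated Usage"
        if "pid_" not in name or not value.strip():
            continue
        pid = name.split("pid_")[1].split("_")[0]
        usage[pid] = int(float(value))             # bytes of dedicated memory
    return usage

if __name__ == "__main__":
    top = sorted(dedicated_usage_by_pid().items(), key=lambda kv: kv[1], reverse=True)
    for pid, used in top[:5]:
        print(f"PID {pid}: {used / 2**20:.1f} MiB dedicated GPU memory")
```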

How to track overall GPU usage

To track overall GPU resource usage statistics, go to the Performance tab and look for "GPU" at the bottom of the sidebar. If your computer has multiple GPUs, you'll see several GPU entries here.

If you have multiple GPUs linked with a feature like NVIDIA SLI or AMD CrossFire, you will see them identified by a "Link #" in their name.

Windows displays GPU usage in real time. By default, Task Manager tries to show the four most interesting engines based on what is happening in your system; for example, you'll see different graphs depending on whether you're playing 3D games or encoding video. However, you can click any of the names above the charts and select any other available engine.

The name of your GPU also appears in the sidebar and at the top of this window, making it easy to check which graphics hardware is installed in your PC.

You will also see graphs of dedicated and shared GPU memory usage. Shared GPU memory usage refers to how much of the system's total memory is used for GPU tasks; this memory can serve both normal system tasks and graphics tasks.

At the bottom of the window you will see information such as the version number of the installed video driver, its release date, and the physical location of the GPU in your system.

If you'd like to view this information in a smaller window that is easier to keep on screen, double-click anywhere inside the GPU view, or right-click inside it and select the "Graph summary view" option. You can restore the full window by double-clicking in the panel again, or by right-clicking in it and unchecking "Graph summary view".

You can also right-click the graph and select "Change graph to" > "Single engine" to view the graph for just one GPU engine.

To keep this window on top of other windows on your screen, click "Options" > "Always on top".

Double-click inside the GPU panel again and you'll get a minimal window that you can position anywhere on the screen.

In 2016, hopes for a full-fledged generational change in GPUs finally came true; it had previously been held back by the lack of manufacturing capacity needed to produce chips with significantly higher transistor density and clock speeds than the proven 28 nm process allowed. The 20 nm technology we had hoped for two years earlier proved commercially unviable for chips as large as discrete GPUs. Since TSMC and Samsung, which could act as foundries for AMD and NVIDIA, did not use FinFETs at 20 nm, the potential gain in performance per watt over 28 nm was modest enough that both companies chose to wait for the mass adoption of the 14/16 nm nodes, which do use FinFETs.

However, the years of anxious waiting have passed, and now we can evaluate how GPU manufacturers have used the capabilities of the updated technical process. As practice has once again shown, “nanometers” by themselves do not guarantee high energy efficiency of a chip, so the new architectures of NVIDIA and AMD turned out to be very different in this parameter. And additional intrigue was added by the fact that companies no longer use the services of one factory (TSMC), as was the case in past years. AMD chose GlobalFoundries to produce Polaris GPUs based on 14 nm FinFET technology. NVIDIA, on the other hand, is still collaborating with TSMC, which has a 16nm FinFET process, on all Pascal chips except the low-end GP107 (which is made by Samsung). It was Samsung's 14nm FinFET line that was once licensed by GlobalFoundries, so the GP107 and its rival Polaris 11 give us a convenient opportunity to compare the engineering achievements of AMD and NVIDIA on a similar manufacturing base.

However, let's not dive into technical details prematurely. Broadly, the two companies' new-generation GPU offerings look like this. NVIDIA has created a full line of Pascal accelerators based on three consumer-grade GPUs: GP107, GP106 and GP104. However, the flagship slot, which will surely be filled by a card named GeForce GTX 1080 Ti, is currently vacant. A candidate for that position is a card with the GP102 processor, which so far is used only in the "prosumer" accelerator NVIDIA TITAN X. And finally, NVIDIA's main pride is the GP100 chip, which the company apparently does not even plan to bring to gaming products, reserving it for Tesla compute accelerators.

AMD's successes are more modest so far. Two processors of the Polaris family were released, products based on which belong to the lower and middle categories of gaming video cards. The upper echelons will be occupied by the upcoming Vega family of GPUs, which are expected to feature a comprehensively upgraded GCN architecture (while Polaris is not that different from the 28nm Fiji and Tonga chips in this regard).

NVIDIA Tesla P100 and new TITAN X

Thanks to the efforts of Jensen Huang, NVIDIA's permanent head, the company now positions itself as a maker of general-purpose computing processors no less than as a maker of gaming GPUs. The split of the Pascal GPU line into gaming chips on one side and computing chips on the other signals that NVIDIA is taking the supercomputing business more seriously than ever.

Once the 16nm FinFET process came online at TSMC, NVIDIA put its first efforts into releasing the GP100 supercomputer chip, which debuted ahead of the Pascal line of consumer products.

The distinctive features of the GP100 are an unprecedented number of transistors (15.3 billion) and shader ALUs (3840 CUDA cores). It is also the first accelerator equipped with HBM2 memory (16 GB) placed together with the GPU on a silicon substrate. The GP100 is used in the Tesla P100 accelerators, which were initially limited to supercomputers because of a special form factor with the NVLINK bus, but NVIDIA later released the Tesla P100 in a standard PCI Express expansion card format.

Initially, experts assumed that the P100 might appear in gaming video cards, and NVIDIA apparently did not rule this out, since the chip has a full-fledged 3D graphics rendering pipeline. But it is now clear that it is unlikely ever to leave the computing niche. For graphics, NVIDIA has a related product, the GP102, which has the same set of shader ALUs, texture mapping units and ROPs as the GP100 but lacks the ballast of a large number of 64-bit (FP64) CUDA cores, not to mention other architectural changes (fewer schedulers, a smaller L2 cache, and so on). The result is a more compact core (12 billion transistors) which, together with the switch from HBM2 memory to GDDR5X, allowed NVIDIA to bring the GP102 to a wider market.

For now the GP102 is reserved for the prosumer accelerator TITAN X (not to be confused with the GeForce GTX TITAN X based on the Maxwell-architecture GM200 chip), which is positioned as a board for reduced-precision calculations (in the range from 8 to 32 bits, with 8 and 16 bits being NVIDIA's favorites for deep learning) even more than for games, although wealthy gamers can buy the card for $1,200. Indeed, in our gaming tests the TITAN X does not justify its cost with its 15-20 percent advantage over the GeForce GTX 1080, but overclocking comes to the rescue: if we compare an overclocked GTX 1080 and an overclocked TITAN X, the latter is 34% faster. Still, the future gaming flagship based on the GP102 will most likely have fewer active computing units or lose support for some computing functions (or both).

Overall, releasing massive GPUs like the GP100 and GP102 early in the 16nm FinFET process is a major achievement for NVIDIA, especially considering the challenges the company faced in the 40nm and 28nm phases.

NVIDIA GeForce GTX 1070 and 1080

NVIDIA deployed its line of GeForce 10 series gaming accelerators in its usual sequence - from the most powerful models to more budget ones. The GeForce GTX 1080 and other Pascal architecture gaming cards released subsequently most clearly showed that NVIDIA fully realized the capabilities of the 14/16 nm FinFET process to make chips denser and more energy efficient.

In addition, by creating Pascal, NVIDIA not only increased performance in various computational tasks (as shown by the example of the GP100 and GP102), but also supplemented the Maxwell chip architecture with functions that optimize graphics rendering.

Let us briefly note the main innovations:

  • improved color compression with ratios up to 8:1;
  • the Simultaneous Multi-Projection function of the PolyMorph Engine geometric engine, which allows you to create up to 16 projections of scene geometry in one pass (for VR and systems with multiple displays in the NVIDIA Surround configuration);
  • the ability to interrupt (preempt) the execution of a draw call (during rendering) or a command stream (during computation), which, together with dynamic distribution of GPU computing resources, provides full support for asynchronous computing (Async Compute), an additional source of performance in games using the DirectX 12 API and a way to reduce latency in VR.

The last point is especially interesting, since Maxwell chips were technically compatible with asynchronous computing (working simultaneously with a compute and a graphics command queue), but performance in this mode left much to be desired. Pascal's asynchronous computing works as intended, allowing games to load the GPU more efficiently with a separate thread for physics calculations (though admittedly, on NVIDIA chips the problem of fully loading the shader ALUs is not as acute as it is on AMD GPUs).

The GP104 processor used in the GTX 1070 and GTX 1080 is the successor to the GM204 (the second-tier chip in the Maxwell family), but NVIDIA achieved such high clock speeds that the GTX 1080 outperforms the GTX TITAN X (based on a larger GPU) by 29% on average, and does so within a more conservative thermal envelope (180 versus 250 W). Even the GTX 1070, cut down far more than the GTX 970 was relative to the GTX 980 (and using GDDR5 memory instead of the GTX 1080's GDDR5X), is still 5% faster than the GTX TITAN X.

NVIDIA updated the display controller in Pascal, which is now compatible with DisplayPort 1.3/1.4 and HDMI 2.0b, meaning it can output an image at higher resolution or refresh rate over a single cable: up to 5K at 60 Hz or 4K at 120 Hz. 10/12-bit color support enables high dynamic range (HDR) on the few screens that have this capability so far. Pascal's dedicated hardware unit can encode and decode HEVC (H.265) video at resolutions up to 4K, 10-bit color (12-bit for decoding) and 60 Hz.

Finally, Pascal moves past the limitations of the previous SLI bus. The developers raised the interface frequency and released a new dual-channel bridge.

You can read more about these features of the Pascal architecture in our GeForce GTX 1080 review. However, before moving on to the other new products of the past year, it is worth mentioning that with the GeForce 10 series NVIDIA will, for the first time, sell reference-design cards throughout the life of the respective models. They are now called Founders Edition and are sold above the recommended retail price of partner graphics cards. For example, the GTX 1070 and GTX 1080 have recommended prices of $379 and $599 (already higher than the GTX 970 and GTX 980 at launch), while the Founders Edition versions are priced at $449 and $699.

GeForce GTX 1050 and 1060

The GP106 chip brought the Pascal architecture to the mainstream segment of gaming accelerators. Functionally it is no different from the older models, and in terms of the number of computing units it is half a GP104. Unlike the GM206 (which was half of the GM204), however, the GP106 uses a 192-bit memory bus. In addition, NVIDIA removed SLI connectors from the GTX 1060 board, upsetting fans of gradual video subsystem upgrades: when this accelerator runs out of steam, you can't add a second card to it (except in those DirectX 12 games that allow the load to be distributed between GPUs, bypassing the driver).

The GTX 1060 originally featured 6 GB of GDDR5, a fully enabled GP106 chip, and a retail price of $249/$299 (partner cards and Founders Edition, respectively). But NVIDIA then released a version with 3 GB of memory and a recommended price of $199, which also has a reduced number of computing units. Both video cards have an attractive 120 W TDP and are comparable in performance to the GeForce GTX 970 and GTX 980.

The GeForce GTX 1050 and GTX 1050 Ti belong to the lowest category mastered by the Pascal architecture. But no matter how modest they may look compared to their older brothers, NVIDIA has made the greatest step forward in the budget niche. The GTX 750/750 Ti, which occupied it before, belong to the first iteration of the Maxwell architecture, so the GTX 1050/1050 Ti, unlike other accelerators of the Pascal family, have advanced not one, but one and a half generations. With a significantly larger GPU and higher-clocked memory, the GTX 1050/1050 Ti improves performance over its predecessors more than any other member of the Pascal series (90% difference between the GTX 750 Ti and GTX 1050 Ti).

And although the GTX 1050/1050 Ti consume a little more power (75 versus 60 W), they still fit within the power limits for PCI Express cards without an auxiliary power connector. NVIDIA did not release the junior accelerators in Founders Edition format, and their recommended retail prices are $109 and $139.

AMD Polaris: Radeon RX 460/470/480

AMD's response to Pascal was the Polaris family of chips. The Polaris line now includes only two chips, on the basis of which AMD produces three video cards (Radeon RX 460, RX 470 and RX 480), in which the amount of on-board RAM additionally varies. As you can easily see even from the model numbers, the upper echelon of performance in the Radeon 400 series remains unoccupied. AMD will have to fill it with products based on Vega silicon. Back in the 28 nm era, AMD acquired this habit of testing innovations on relatively small chips and only then introducing them into flagship GPUs.

It should be noted right away that in AMD's case the new family of graphics processors is not the same thing as a new version of the underlying GCN (Graphics Core Next) architecture; rather, it reflects a combination of the architecture and other product features. For GPUs built on the new process technology, AMD has abandoned the various "islands" code names (Northern Islands, Southern Islands, and so on) and now designates them with the names of stars.

Nevertheless, the GCN architecture in Polaris received another, third update, thanks to which (along with the transition to the 14 nm FinFET process technology) AMD significantly increased performance per watt.

  • The Compute Unit, the elementary form of organizing shader ALUs in GCN, has undergone a number of changes related to instruction prefetching and caching, and access to the L2 cache, which together increased the specific performance of the CU by 15%.
  • There is now support for half-precision calculations (FP16), which are used in computer vision and machine learning programs.
  • GCN 1.3 provides direct access to the internal instruction set (ISA) of the stream processors, through which developers can write very low-level, fast code, as opposed to the DirectX and OpenGL shader languages, which are abstracted from the hardware.
  • Geometry processors are now capable of eliminating zero-size polygons or polygons that have no pixels in the projection early in the pipeline, and have an index cache that reduces resource consumption when rendering small, duplicate geometry.
  • Doubled L2 cache.

In addition, AMD engineers have worked hard to get Polaris to run at as high a frequency as possible. The GPU frequency is now controlled with minimal latency (latency less than 1 ns), and the voltage curve of the card is adjusted every time the PC is booted in order to take into account the variation in parameters between individual chips and the aging of silicon during operation.

However, the transition to the 14 nm FinFET process has not been entirely smooth sailing for AMD. Indeed, the company was able to increase performance per watt by 62% (judging by the results of the Radeon RX 480 and Radeon R9 380X in gaming tests and the cards' TDP). However, Polaris' maximum frequencies do not exceed 1266 MHz, and only a few of its manufacturing partners have achieved more with additional work on the cooling and power systems. On the other hand, GeForce video cards still retain the leadership in performance-per-watt that NVIDIA achieved back in the Maxwell generation. It seems that at this first stage AMD was not able to unlock all the capabilities of the new process node, or the GCN architecture itself already requires deep modernization; the latter task has been left to the Vega chips.

Polaris-based accelerators occupy the price range from $109 to $239 (see table), although in response to the appearance of the GeForce GTX 1050/1050 Ti, AMD reduced the prices of the two lower cards to $100 and $170, respectively. At the moment, in each price/performance category there is an equal balance of power between competing products: the GeForce GTX 1050 Ti is faster than the Radeon RX 460 with 4GB of RAM, the GTX 1060 with 3GB of memory is faster than the RX 470, and the full-fledged GTX 1060 is ahead of the RX 480. At the same time, AMD video cards are cheaper, which means they are popular.

AMD Radeon Pro Duo

The report on the past year in the field of discrete GPUs will not be complete if we ignore one more of the “red” video cards. While AMD had not yet released a flagship single-processor video adapter to replace the Radeon R9 Fury X, the company had one proven move left to continue conquering new frontiers - installing two Fiji chips on one board. This card, the release of which AMD repeatedly postponed, nevertheless went on sale shortly before the GeForce GTX 1080, but fell into the category of professional Radeon Pro accelerators and was positioned as a platform for creating games in the VR environment.

For gamers, at $1,499 (more expensive than a pair of Radeon R9 Fury X cards at launch), the Radeon Pro Duo is of no interest, and we didn't even have the opportunity to test it. That's a pity, because from a technical point of view the Radeon Pro Duo looks intriguing: the card's rated TDP is only 27% higher than the Fury X's, even though AMD lowered the GPUs' peak frequencies by just 50 MHz. AMD has already managed to release a successful dual-processor video card, the Radeon R9 295X2, so the specifications declared by the manufacturer do not raise much skepticism.

What to expect in 2017

The main expectations for the coming year are related to AMD. NVIDIA will most likely limit itself to releasing a flagship gaming card based on the GP102 under the name GeForce GTX 1080 Ti and, perhaps, fill another vacancy in the 10th GeForce series - GTX 1060 Ti. Otherwise, the Pascal line of accelerators has already been formed, and the debut of the next architecture, Volta, is planned only for 2018.

As in the CPU space, AMD has put all its efforts into developing a truly breakthrough GPU microarchitecture, while Polaris has become just a staging post on the way there. Presumably, already in the first quarter of 2017 the company will release its best silicon, Vega 10, to the mass market for the first time (along with it, or later, one or more lower-end chips in the line). The most reliable evidence of its capabilities was the announcement of the MI25 computing card in the Radeon Instinct line, positioned as an accelerator for deep learning tasks. Judging by the specifications, it is based on none other than Vega 10. The card delivers 12.5 TFLOPS of single-precision (FP32) processing power, more than the TITAN X on the GP102, and is equipped with 16 GB of HBM2 memory. The video card's TDP is within 300 W. The processor's real performance can only be guessed at, but it is known that Vega will bring the largest update to the GPU microarchitecture since the release of the first GCN-based chips five years ago. It should significantly improve performance per watt and allow the processing power of the shader ALUs (which games on AMD chips have traditionally struggled to use fully) to be exploited more efficiently in gaming applications.

There are also rumors that AMD engineers have now mastered the 14 nm FinFET process technology and the company is ready to release the second version of Polaris video cards with a significantly lower TDP. It seems to us that if this is true, then the updated chips would rather go into the Radeon RX 500 line than receive increased indexes in the existing 400 series.

Appendix. Current lines of discrete video adapters from AMD and NVIDIA

Manufacturer AMD
Model Radeon RX 460 Radeon RX 470 Radeon RX 480 Radeon R9 Nano Radeon R9 Fury Radeon R9 Fury X
GPU
Name Polaris 11 Polaris 10 Polaris 10 Fiji XT Fiji PRO Fiji XT
Microarchitecture GCN 1.3 GCN 1.3 GCN 1.3 GCN 1.2 GCN 1.2 GCN 1.2
Technical process, nm 14 nm FinFET 14 nm FinFET 14 nm FinFET 28 28 28
Number of transistors, million 3 000 5 700 5 700 8900 8900 8900
Clock frequency, MHz: Base Clock / Boost Clock 1 090 / 1 200 926 / 1 206 1 120 / 1 266 — / 1 000 — / 1 000 — / 1 050
Number of shader ALUs 896 2 048 2 304 4096 3584 4096
Number of texture mapping units 56 128 144 256 224 256
ROP number 16 32 32 64 64 64
RAM
Bus width, bits 128 256 256 4096 4096 4096
Chip type GDDR5 SDRAM GDDR5 SDRAM GDDR5 SDRAM HBM HBM HBM
Clock frequency, MHz (bandwidth per contact, Mbit/s) 1 750 (7 000) 1 650 (6 600) 1 750 (7 000) / 2 000 (8 000) 500 (1 000) 500 (1 000) 500 (1 000)
Volume, MB 2 048 / 4 096 4 096 4 096 / 8 192 4096 4096 4096
I/O bus PCI Express 3.0 x8 PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16
Performance
Peak performance FP32, GFLOPS (based on maximum specified frequency) 2 150 4 940 5 834 8 192 7 168 8 602
Performance FP32/FP64 1/16 1/16 1/16 1/16 1/16 1/16
RAM bandwidth, GB/s 112 211 196/224 512 512 512
Image output
Image output interfaces DL DVI-D, HDMI 2.0b, DisplayPort 1.3/1.4 DL DVI-D, HDMI 2.0b, DisplayPort 1.3/1.4 HDMI 1.4a, DisplayPort 1.2 HDMI 1.4a, DisplayPort 1.2 HDMI 1.4a, DisplayPort 1.2
TDP, W <75 120 150 175 275 275
Suggested retail price at time of release (USA, excluding tax), $ 109/139 179 199/229 649 549 649
Recommended retail price at the time of release (Russia), rub. 8 299 / 10 299 15 999 16 310 / 18 970 ND ND ND
Manufacturer NVIDIA
Model GeForce GTX 1050 GeForce GTX 1050 Ti GeForce GTX 1060 3 GB GeForce GTX 1060 GeForce GTX 1070 GeForce GTX 1080 TITAN X
GPU
Name GP107 GP107 GP106 GP106 GP104 GP104 GP102
Microarchitecture Pascal Pascal Pascal Pascal Pascal Pascal Pascal
Technical process, nm 14 nm FinFET 14 nm FinFET 16 nm FinFET 16 nm FinFET 16 nm FinFET 16 nm FinFET 16 nm FinFET
Number of transistors, million 3 300 3 300 4 400 4 400 7 200 7 200 12 000
Clock frequency, MHz: Base Clock / Boost Clock 1 354 / 1 455 1 290 / 1 392 1506/1708 1506/1708 1 506 / 1 683 1 607 / 1 733 1 417 / 1531
Number of shader ALUs 640 768 1 152 1 280 1 920 2 560 3 584
Number of texture mapping units 40 48 72 80 120 160 224
ROP number 32 32 48 48 64 64 96
RAM
Bus width, bits 128 128 192 192 256 256 384
Chip type GDDR5 SDRAM GDDR5 SDRAM GDDR5 SDRAM GDDR5 SDRAM GDDR5 SDRAM GDDR5X SDRAM GDDR5X SDRAM
Clock frequency, MHz (bandwidth per contact, Mbit/s) 1 750 (7 000) 1 750 (7 000) 2000 (8000) 2000 (8000) 2000 (8000) 1 250 (10 000) 1 250 (10 000)
Volume, MB 2 048 4 096 6 144 6 144 8 192 8 192 12 288
I/O bus PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16 PCI Express 3.0 x16
Performance
Peak performance FP32, GFLOPS (based on maximum specified frequency) 1 862 2 138 3 935 4 373 6 463 8 873 10 974
Performance FP32/FP64 1/32 1/32 1/32 1/32 1/32 1/32 1/32
RAM bandwidth, GB/s 112 112 192 192 256 320 480
Image output
Image output interfaces DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b
TDP, W 75 75 120 120 150 180 250
Suggested retail price at time of release (USA, excluding tax), $ 109 139 199 249/299 (Founders Edition / affiliate cards) 379/449 (Founders Edition / affiliate cards) 599/699 (Founders Edition / affiliate cards) 1 200
Recommended retail price at the time of release (Russia), rub. 8 490 10 490 ND 18 999 / ND (Founders Edition / partner cards) ND / 34 990 (Founders Edition / partner cards) ND / 54 990 (Founders Edition / partner cards)
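
The peak FP32 figures in both tables follow directly from the shader ALU count and the boost clock, since each ALU can execute one fused multiply-add (two floating-point operations) per cycle. A minimal sketch of that arithmetic, using values taken from the tables above:

```python
# Peak FP32 throughput = shader ALUs x boost clock x 2 operations (one FMA) per cycle.
# Clock and ALU figures come from the tables above; results are theoretical peaks.
cards = {
    "Radeon RX 480":    (2304, 1266e6),
    "GeForce GTX 1080": (2560, 1733e6),
    "TITAN X (GP102)":  (3584, 1531e6),
}

for name, (alus, boost_hz) in cards.items():
    gflops = alus * boost_hz * 2 / 1e9
    print(f"{name}: {gflops:.0f} GFLOPS")  # ~5834, ~8873 and ~10974 GFLOPS
```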

The integrated graphics processor plays an important role for both gamers and undemanding users.

The quality of games, movies, online video, and images depends on it.

Principle of operation

An integrated graphics processor is built into the computer's motherboard; that is what integrated graphics means.

As a rule, it is used to avoid the need to install a separate graphics adapter.

This technology helps reduce the cost of the finished product. In addition, due to the compactness and low power consumption of such processors, they are often installed in laptops and low-power desktop computers.

Thus, integrated graphics processors have filled this niche so much that 90% of laptops on US store shelves have such a processor.

Instead of having dedicated memory like a regular video card, integrated graphics use the computer's own RAM.

True, this solution somewhat limits the performance of the device. Still, the computer itself and the graphics processor use the same memory bus.

So this “neighborhood” affects the performance of tasks, especially when working with complex graphics and during gameplay.

Types

Integrated graphics fall into three groups:

  1. Shared memory graphics: a device that shares memory management with the main processor. This significantly reduces cost and improves the system's energy efficiency, but degrades performance. Accordingly, integrated graphics of this type are most likely not suitable for those who work with complex programs.
  2. Discrete graphics - a video chip and one or two video memory modules are soldered onto the motherboard. Thanks to this technology, image quality is significantly improved, and it also becomes possible to work with 3D graphics with the best results. True, you will have to pay a lot for this, and if you are looking for a high-power processor in all respects, the cost can be incredibly high. In addition, your electricity bill will increase slightly - the power consumption of discrete GPUs is higher than usual.
  3. Hybrid discrete graphics: a combination of the two previous types, made possible by the PCI Express bus. Memory is accessed both through the soldered video memory and through the system RAM. Manufacturers intended this as a compromise solution, but it still does not eliminate all of the shortcomings.

Manufacturers

As a rule, large companies such as Intel, AMD and Nvidia handle the manufacture and development of integrated graphics processors, but many smaller companies are also involved in this area.

Enable

Enabling the integrated video card is done in the BIOS, and it is not difficult. Look for a Primary Display or Init Display First setting. If you don't see anything like that, look for Onboard, PCI, AGP or PCI-E (it all depends on the buses present on the motherboard).

By choosing PCI-E, for example, you enable the PCI-Express video card and disable the built-in integrated one.

Thus, to enable the integrated video card, you need to find the appropriate parameters in the BIOS. Often the activation process is automatic.

Disable

It is better to disable integrated graphics in the BIOS. This is the simplest and most reliable option, suitable for almost all PCs. The only exceptions are some laptops.

Again, search for Peripherals or Integrated Peripherals in the BIOS if you are working on a desktop.

For laptops the name of the setting differs and is not the same everywhere, so just look for something related to graphics. For example, the necessary options may be located in the Advanced or Config sections.

Disabling is also carried out in different ways. Sometimes it’s enough to just click “Disabled” and put the PCI-E video card first in the list.

If you are a laptop user, do not be alarmed if you cannot find a suitable option; you may simply not have such a function. For all other devices the rules are simple: no matter what the BIOS itself looks like, the underlying settings are the same.

If you have two video cards and both are shown in Device Manager, the matter is quite simple: right-click one of them and select "Disable". Keep in mind, however, that the display may go dark; this will most likely happen.
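
Before disabling an adapter this way, it helps to know exactly which adapters Windows sees. A minimal sketch that lists them via the standard Win32_VideoController WMI class (queried here through PowerShell); the exact names and values reported depend on your drivers:

```python
import csv
import io
import subprocess

# Minimal sketch: list the display adapters Windows knows about, the same
# devices shown under "Display adapters" in Device Manager.
def list_video_controllers():
    command = (
        "Get-CimInstance Win32_VideoController | "
        "Select-Object Name, DriverVersion, AdapterRAM | "
        "ConvertTo-Csv -NoTypeInformation"
    )
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True
    ).stdout
    return list(csv.DictReader(io.StringIO(out)))

if __name__ == "__main__":
    for adapter in list_video_controllers():
        print(f'{adapter["Name"]}: driver {adapter["DriverVersion"]}, '
              f'{adapter["AdapterRAM"] or "unknown"} bytes of video memory')
```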

However, this too is a solvable problem: it is usually enough to restart the computer or the graphics driver software.

Make all subsequent changes on the video card that remains active. If this method does not work, roll back your changes using Safe Mode. You can also fall back on the previous method, through the BIOS.

Two programs, the NVIDIA Control Panel and the Catalyst Control Center, let you configure which video adapter is used.

This is the most forgiving approach compared to the other two methods: the screen is unlikely to go dark, and you won't accidentally break any BIOS settings either.

For NVIDIA all settings are in the 3D section.

You can select your preferred video adapter for the entire operating system and for specific programs and games.

In Catalyst software, an identical function is located in the “Power” option in the “Switchable Graphics” sub-item.

So switching between GPUs is a breeze.

There are different methods, in particular through software and through the BIOS. Turning integrated graphics on or off may be accompanied by some glitches, mainly related to the image.

The image may go dark or become distorted. The files on the computer themselves should not be affected, unless you changed something in the BIOS.

Conclusion

As a result, integrated graphics processors are in demand due to their low cost and compactness.

You will have to pay for this with the level of performance of the computer itself.

In some cases integrated graphics are all that is needed, while discrete GPUs are better suited to working with three-dimensional graphics.

In addition, the industry leaders are Intel, AMD and Nvidia. Each of them offers its own graphics accelerators, processors and other components.

The latest popular models are Intel HD Graphics 530 and AMD A10-7850K. They are quite functional, but have some flaws. In particular, this applies to power, performance and cost of the finished product.

You can enable or disable a processor's built-in graphics yourself through the BIOS, utilities and various programs, or the computer can handle it for you automatically. It all depends on which video card the monitor is connected to.
