Best price/gaming performance ratio? Sparkle Arc A580 Orc


AMD and Nvidia have let the budget graphics card segment “deteriorate” so much that Intel has taken advantage of it quite elegantly. The latest Arc A580 in Sparkle’s Orc design has plenty of imperfections that you might not get over, but at the same time it has claimed the top spot in price/performance ratio for gaming. With this primacy, it positions itself as one of the leading candidates for low-budget builds.

Integrated GPUs weren’t enough for Intel, which already has several graphics cards from the Arc 3, 5 and 7 series in its portfolio. The latest is the Arc A580, which sits between the A380 and A750 in performance. Among competing solutions, the A580 is priced below both the GeForce RTX 3050 and the Radeon RX 6600, somewhere around the RX 6500 XT. But that is already a significantly slower graphics card.

Like the RTX 3050 or RX 6600 (or, for that matter, the RX 7600), the A580 has 8 GB of GDDR6 memory. The difference is that Intel connects it via a 256-bit bus instead of the 128-bit bus of the RTX 3050/RX 6600. The result is practically double the memory bandwidth. This is not that noticeable in games, although the A580 scales better at higher resolutions and its performance drop is smaller than on graphics cards with a 128-bit bus, as you will see when studying our tests. Unlike comparable graphics cards from competitors, the Arc also connects over a full PCI Express 4.0 ×16 interface, i.e. sixteen lanes (instead of eight), which is mainly an advantage on motherboards with a PCIe 3.0 interface.
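The “practically double” bandwidth claim follows directly from the bus widths. A minimal sketch of the arithmetic, assuming illustrative effective data rates (16 Gbps for the A580, 14 Gbps for the RTX 3050; these figures are assumptions for the example, not taken from this review):

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
# The data rates below are illustrative assumptions, not measured values.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "Arc A580 (256-bit, 16 Gbps)": bandwidth_gbs(256, 16.0),  # 512 GB/s
    "RTX 3050 (128-bit, 14 Gbps)": bandwidth_gbs(128, 14.0),  # 224 GB/s
}
for name, bw in cards.items():
    print(f"{name}: {bw:.0f} GB/s")
```

With these assumptions the 256-bit card ends up at a bit more than double the bandwidth of the 128-bit one, which is consistent with the gap described above.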

Following on from the previous statement, however, it should be added in the same breath that Intel Arc graphics cards, at least in their current form, are not very suitable as an upgrade for older builds. The reason is their dependence on Resizable BAR technology, which is supported from the AMD AM4 platform or Intel Kaby Lake processors (LGA 1151) onwards, and even this may not apply to all boards, or rather their BIOSes. On older platforms (e.g. AMD AM3+), performance may be lower than expected.
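On Linux, whether Resizable BAR is actually active can be read from the `lspci -vvv` output for the GPU, where the current BAR size is listed. A minimal sketch of parsing that output; the sample text is an illustrative fragment of the usual lspci format, not captured from real hardware:

```python
# Minimal sketch: detect Resizable BAR state from `lspci -vvv` output.
# Run e.g. `sudo lspci -vvv -s <gpu-address>` and feed the text in.

def rebar_enabled(lspci_output: str) -> bool:
    """True if any BAR's current size is in the GB range (above the legacy 256 MB window)."""
    for line in lspci_output.splitlines():
        line = line.strip()
        if line.startswith("BAR ") and "current size:" in line:
            # Take only the size token before the "supported:" list.
            size = line.split("current size:")[1].split(",")[0].strip()
            if size.endswith("GB"):
                return True
    return False

sample = """\
Capabilities: [200 v1] Physical Resizable BAR
        BAR 2: current size: 8GB, supported: 256MB 512MB 1GB 2GB 4GB 8GB
"""
print(rebar_enabled(sample))  # True: the BAR spans the full 8 GB of VRAM
```

If the current size is stuck at 256 MB despite the larger supported sizes, Resizable BAR is off and an Arc card will underperform, as described above.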

The Arc GPU architecture includes XMX units, which are analogous to Nvidia’s Tensor cores (AMD uses only shaders for AI acceleration). While you can’t use Nvidia DLSS on them, Intel has its own XeSS upscaling, which can take advantage of the XMX units for potentially higher image quality, but only on Arc graphics cards. Unlike DLSS, however, XeSS also has a shader-only mode, similar to AMD FSR, which is supported across all graphics cards. In other words, XeSS can also run on a GeForce, while the reverse is not true: DLSS does not work on Intel GPUs. This is one of the things by which Intel wanted to differentiate itself from existing solutions, although it should be noted that the implementation of XeSS in games is still rather sparse.

However, the fact that the Intel Arc architecture is different also means that some games may be more poorly optimized for these graphics cards and compatibility issues may be more common. In short, situations where a game doesn’t work well on release day and has to wait for an update are more likely. A well-known incident of this kind occurred with Starfield. From a development perspective, debugging a game for GeForce and Radeon tends to be the priority; Arc is only third in line.

There is also a possible disadvantage in support for older games, from the days when Intel was not yet active in the field of gaming GPUs. This applies, for example, to games using DirectX 9. Intel addresses these with emulation, in some cases with special optimizations for selected titles. With DX9 games, however, there is a decent chance you won’t mind: they are undemanding enough for the Arc A580 that even with some performance losses, there will still be a surplus of performance. This concludes a short prologue highlighting the basic technological differences and potential pitfalls compared to the more traditional graphics card manufacturers. Later in the article we will also get to some gaps in the support, or rather non-support, of GPU acceleration for computational tasks. Some of these things may not concern you at all because you won’t encounter them, but they are good to know about.

Sparkle Arc A580 Orc in detail

Sparkle? An unfamiliar name to younger readers, but this brand has a history. It used to produce GeForce graphics cards, which it stopped doing sometime around the Kepler/GTX 600 generation. Now Sparkle is back as what appears to be an exclusive producer of Intel Arc graphics cards. However, it’s not as if they are restarting production after a decade practically from scratch. Sparkle is said to be a brand belonging to the Taiwanese firm TUL, which is also behind the PowerColor brand under which Radeons are made. So in reality, these graphics cards were probably designed by engineers who never left the industry. TUL merely resurrected the brand for them, while PowerColor will remain exclusive to Radeons.

The Arc A580 Orc variant is a factory-overclocked graphics card with an official GPU boost clock of up to 2000 MHz.

The A580 Orc’s cooler uses two 85 mm fans, and although it looks really solid, it will ultimately be the thing most likely to put you off this graphics card, for instance because of its gentle whirring noise.

The fans’ impellers have a modern design, also with regard to suppressing vibration at the tips of the blades. To achieve this, a hoop connects the tips of all the blades, ensuring greater stability (less vibration) and reducing the likelihood of resonant frequencies. The nine blades are packed quite tightly, with no significant gaps, so you don’t have to worry about the amount of static pressure (and airflow through the heatsink fins).

The A580 Orc also benefits from a relatively small footprint. At 222 mm it is fairly short, and with a height of 41 mm it fits into two slots. This means better-than-average compatibility with other PCIe expansion cards on the one hand and with smaller cases on the other.


Better compatibility with cases than many similarly long (or rather, short) graphics cards also comes from the smaller width. From the PCIe connector towards the side panel, it measures about 115 mm. Among other things, this makes installation possible in cases with vertically split interiors, where the width of the graphics card decides whether it fits or not. For comparison, the RTX 3050 Ventus 2X 8G OC is roughly a centimeter wider.

Despite belonging firmly to a lower class, the Sparkle Arc A580 Orc has a metal backplate. It doesn’t contribute much to cooling, as there are no thermal pads between it and the PCB, but in any case it reinforces the graphics card and protects it from mechanical damage.

What is notable is the grille at the rear. Graphics card manufacturers do use such grilles, but usually in places behind the PCB (for better air circulation through the heatsink). Here it’s a bit different, but still useful: the grille sits behind the power delivery circuitry, so less heat accumulates there than if the perforation were absent.

External power is supplied via two 6+2-pin PCIe connectors. The A580 Orc’s power draw exceeds 210 W, although you will find a TDP of 185 W mentioned in the specifications of other A580 cards. That figure may not cover the total graphics power (TGP), or this variant (Sparkle Orc) is simply more power-hungry; after all, it is the “OC edition”. TDP (Thermal Design Power) also doesn’t always scale exactly with power draw, so there are several possibilities.


The backlit Sparkle logo changes color as the GPU heats up. It glows blue when idle or under light load, and yellow under load. If the temperature becomes excessive (for example, due to non-working fans), it turns orange.
