Photos 2/2: Affinity Photo, Topaz Labs AI Apps, ZPS X, ...
It’s the fastest Core i3 yet, but it’s also the hungriest. The 14100F’s (Raptor Lake Refresh) biggest competition within its own ranks comes from the older models (13100F and 12100F), which are a bit slower but also lower-power. Which is the “better” choice depends on what carries more weight on your scales; maybe it will be that record-breaking speed. In this class (Core i3), power consumption is always relatively low anyway.
Affinity Photo (benchmark)
Test environment: built-in benchmark.
Topaz Labs AI apps
Topaz DeNoise AI, Gigapixel AI and Sharpen AI. These single-purpose applications are used to restore low-quality photos, whether the problem is high noise (caused by a higher ISO), low resolution (typically after cropping) or a lack of sharpness. AI-accelerated processing is used in all of them.
Test environment: As part of a batch edit, 42 photos at a reduced resolution of 1920 × 1280 px are processed, with the settings shown in the images above. DeNoise AI is used in version 3.1.2, Gigapixel in 5.5.2 and Sharpen AI in 3.1.2.
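The Topaz applications are driven through their GUI in our tests, but if you want to time a comparable batch job of your own, a minimal harness could look like the sketch below. The `my_denoiser` command and its flags are purely hypothetical placeholders (no Topaz command-line tool is implied); only the timing logic is the point.

```python
import subprocess
import time
from pathlib import Path

# Time a batch job over 42 test photos with an arbitrary command-line tool.
# "my_denoiser" is a hypothetical placeholder, not a real Topaz CLI.
photos = sorted(Path("photos_1920x1280").glob("*.jpg"))[:42]

start = time.perf_counter()
for photo in photos:
    out = photo.parent / f"{photo.stem}_denoised.jpg"
    subprocess.run(["my_denoiser", "--input", str(photo), "--output", str(out)],
                   check=True)
elapsed = time.perf_counter() - start
print(f"Processed {len(photos)} photos in {elapsed:.1f} s")
```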
XnViewMP
Test environment: XnViewMP is, for once, a photo editor you don’t have to pay for, and at the same time it uses the hardware very efficiently. To get comparison times long enough to be meaningful, we had to build a batch of as many as 1024 photos, whose original resolution of 5472 × 3648 px is reduced to 1980 × 1280 px, with filters for automatic contrast enhancement and noise reduction also applied during the conversion. We use the 64-bit portable version 0.98.4.
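For illustration, here is a minimal Python sketch (assuming the Pillow library is installed) that approximates the same kind of workload: batch-downscaling photos and applying automatic contrast and light noise reduction. It is not XnViewMP’s actual pipeline, and the folder names are hypothetical; it is just an analogous job for readers who want to reproduce a similar load.

```python
from pathlib import Path
from PIL import Image, ImageOps, ImageFilter

SRC = Path("photos_in")    # hypothetical folder with the source JPEGs
DST = Path("photos_out")   # output folder
DST.mkdir(exist_ok=True)

for photo in sorted(SRC.glob("*.jpg")):
    with Image.open(photo) as img:
        # downscale from the original 5472 x 3648 px to the target resolution
        small = img.resize((1980, 1280), Image.LANCZOS)
        # automatic contrast enhancement, standing in for XnViewMP's filter
        small = ImageOps.autocontrast(small)
        # simple noise reduction (median filter) as a rough denoise substitute
        small = small.filter(ImageFilter.MedianFilter(size=3))
        small.save(DST / photo.name, quality=90)
```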
Zoner Photo Studio X
Test environment: In Zoner Photo Studio X, we convert 42 .CR2 (Canon RAW) photos to JPEG while keeping the original resolution (5472 × 3648 px), at the lowest possible compression, using the ZPS X “high quality for archival” profile.
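A rough Python equivalent of this RAW-to-JPEG conversion is sketched below, assuming the rawpy and imageio packages are installed. The default rawpy development settings only approximate what ZPS X does, and quality=100 merely stands in for “lowest possible compression”; this is an analogous workload, not the ZPS X engine.

```python
from pathlib import Path
import rawpy
import imageio.v3 as iio

SRC = Path("raw_in")     # hypothetical folder with the 42 .CR2 files
DST = Path("jpeg_out")
DST.mkdir(exist_ok=True)

for cr2 in sorted(SRC.glob("*.CR2")):
    with rawpy.imread(str(cr2)) as raw:
        # demosaic/develop the RAW data; resolution stays at the original 5472 x 3648 px
        rgb = raw.postprocess()
    # quality=100 approximates the lowest possible JPEG compression
    iio.imwrite(DST / f"{cr2.stem}.jpg", rgb, quality=100)
```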
Thank you for the article. I have been looking for 14th-gen non-K CPU reviews.
Do you have an explanation for why the 14100 draws so much more power at idle compared to the 13100? In the other power graphs too, the 13100 seems to be an outlier… I expected the 14100 to be basically the same CPU, just produced on a slightly more refined process with the clocks pushed up a bit.
More aggressive clock speed management. The Core i3-14100F does not drop to 400 MHz the way the Core i3-13100F does (although the working range of the multiplier should be the same). I’m not saying it never does, but not at the level of load that corresponds to our “idle” state. And it isn’t a borderline case either; nothing changed after terminating some background processes (launchers, for example) so that less load was put on the processor. Under the same conditions, the Ci3-14100F sometimes simply doesn’t underclock as aggressively as the Ci3-13100F. I can’t say for certain what this is related to, but it might have something to do with more aggressive Turbo Boost (2.0) behaviour, which keeps the processor at higher clock speeds even under very low load.
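If you want to check this behaviour on your own machine, a simple sketch like the one below (using the psutil package; the sampling interval and duration are arbitrary) can log the reported core clock while the system sits idle. How precisely the current frequency is reported depends on the OS, and on Windows in particular it can be fairly coarse.

```python
import time
import psutil

# Sample the reported CPU frequency once per second for a minute at idle.
# The discussion above suggests a Ci3-13100F dips toward ~400 MHz here,
# while a Ci3-14100F tends not to drop that low.
samples = []
for _ in range(60):
    freq = psutil.cpu_freq()  # namedtuple(current, min, max) in MHz; may be None on some systems
    if freq:
        samples.append(freq.current)
    time.sleep(1)

if samples:
    print(f"min {min(samples):.0f} MHz, avg {sum(samples)/len(samples):.0f} MHz, "
          f"max {max(samples):.0f} MHz")
```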
Sounds plausible.
Did you re-test the 12100 and 13100 using the exact same OS version? I am also wondering whether the silicon lottery may play a role; I have seen a test of multiple CPUs of the exact same model, and the results were somewhat divergent. (I don’t remember the source, I think it was Der8auer.)
No, we haven’t re-tested the Ci3-12100F and Ci3-13100F, but we are still using Windows 10 (22H2). The reason is that it does not change as dynamically as Windows 11 and is therefore better suited for building a massive results database; with the kind of tests we do here, that would otherwise never be possible.
I can assure you that it is definitely not about the “silicon lottery”. I find Der8auer’s tests very irresponsible and unreliable. To accurately analyze such things, a controlled, consistent testing environment is essential, which he does not have. This is extremely important for processors such as the Ryzen 5 7600, which are particularly sensitive to temperature changes. Rather than real differences between processors, his results are more a reflection of how ambient conditions change between individual tests. Among these conditions is the mounting of the cooler on the processor, which would have to guarantee exactly the same heat transfer from the processor to the cooler every time, and that is very difficult to achieve.
We have also dealt in the past with how cooling performance depends on different techniques of applying thermal paste, on its quantity, and on the mounting pressure. All of this has to be controlled in order to analyze the properties of different samples of the same processor. These things do not seem to be any particular concern of an author who tests with the motherboard “installed” on its box and components spread randomly across the table. 🙂