Instead, we need to use a proxy metric: a certain current threshold or a certain temperature threshold. For our lowest frequency, we choose our manual overclock from Strategy 2B of SkatterBencher 26, which is an average frequency across all cores of … MHz.
Then we start the Cinebench R23 multi-threaded benchmark workload and set its affinity to a single core in Task Manager. Monitor the effective clock frequency; it will be higher than our target of … MHz. You can then gradually increase the Cinebench R23 thread count.

Next, boot into the operating system and run a stress test. AI Overclocking is better able to determine the limits of your machine with longer and more intensive workloads, but that doesn't mean you need to settle in for a full day of stress testing.
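If you prefer to script the affinity step rather than clicking through Task Manager, a small sketch along these lines can pin the benchmark to one core and then widen the affinity mask one logical CPU at a time while you watch the effective clocks. The process name, the 60-second step delay, and the use of the third-party psutil package are assumptions; adjust them for your own setup.

```python
# Sketch: pin Cinebench R23 to one core, then gradually widen its affinity.
# Assumes the process is named "Cinebench.exe" and that psutil is installed
# (pip install psutil); both are assumptions, not details from the article.
import time
import psutil

def find_process(name="Cinebench.exe"):
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == name.lower():
            return proc
    raise RuntimeError(f"{name} not found; start the benchmark first")

def widen_affinity(step_delay=60):
    proc = find_process()
    all_cpus = list(range(psutil.cpu_count(logical=True)))
    # Start with a single logical CPU, then add one more each step.
    for n in range(1, len(all_cpus) + 1):
        proc.cpu_affinity(all_cpus[:n])
        print(f"Affinity set to {n} logical CPU(s); check the effective clock now")
        time.sleep(step_delay)

if __name__ == "__main__":
    widen_affinity()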
I chose to use Intel XTU and set the test for 30 minutes, which is long enough to get the system's blood pumping. The automated intelligence gave our system cooling a score of …, and it predicted an optimal all-core clock speed of 5.….
AI Overclocking applies its optimized settings automatically, so all you have to do is hit F10 to save and reboot. After saving the AI Overclock config, I headed back to the desktop to check stability with another stress test. The system passed with flying colors. The process is really easy, and predictions are generated almost instantly. The entire procedure barely took longer than the hour we spent on stress tests. Longer stress tests can provide a more accurate read on your system's limits, but you can also use a shorter test to get a quick assessment and then rely on continuous training to hone the prediction.
It's worth noting that AI Overclocking does not adjust the AVX instruction offset due to limitations associated with this parameter. AVX workloads generate more heat and typically require lower clock speeds, so if you run software that uses those instructions, we recommend manually adjusting the offset value to compensate. The same algorithm that sets the all-core frequency also predicts the maximum stable AVX frequency, so it's easy to check the prediction and modify the offset appropriately.
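As a quick sanity check on that arithmetic, the offset you would dial in is simply the gap between the non-AVX all-core target and the predicted stable AVX frequency, expressed in 100 MHz ratio bins. The frequencies in the sketch below are placeholder values for illustration, not measurements from this system.

```python
# Sketch: derive an AVX ratio offset from two predicted frequencies.
# The example values are placeholders, not results from the article.
BIN_MHZ = 100  # one ratio bin on current Intel desktop CPUs

def avx_offset(all_core_mhz: int, avx_stable_mhz: int) -> int:
    """Return the AVX offset (in bins) needed to drop from the all-core
    target to the predicted stable AVX frequency."""
    if avx_stable_mhz >= all_core_mhz:
        return 0  # no offset needed if AVX is already stable at the target
    return (all_core_mhz - avx_stable_mhz) // BIN_MHZ

# Example: 5100 MHz all-core target, 4900 MHz predicted stable under AVX
print(avx_offset(5100, 4900))  # -> 2, i.e. set an AVX offset of 2
```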
This is also covered in the step-by-step guide included in the UEFI.

I feel that Tom's should have done some stability testing on their manually and automatically overclocked processors. They might have and just not posted their results. I am in the camp that feels if you can't take the hour or two to figure it all out, you probably shouldn't be overclocking. And without a larger sample of processors, we have no idea how many would turn out badly.
It looks like a good tool to start off your own OC because it's probably going to be in the ballpark, but on its own it leaves much to be desired. Was the same CPU used in all tests? If so, it seems untrue to say that CPUs shouldn't get more than 'n' voltage when the mobos are presenting different internal loads, right?
You stated that manually you can get 4.…. If the CPU is consistent in all tests, 1.…. My point being that different mobos require more "push," thereby making it harder for me to fault auto-OC programs for cranking the voltage past "comfy" limits when, for all we know, they are taking higher internal loads into account.
There is a variable missing someplace. Like 1.…. I don't know.