
Memory stress/burn-in programs

pauldriver said:
Hot CPU Tester Pro 3

http://www.7byte.com/

Doesn't do anything for the I/O subsystems (HD, video card, etc.), but there are better tests for those parts. Hot CPU Tester Pro 3
really beats the heck out of the CPU, memory controller, and RAM.

If you can last 6 hours with this, your box is good to go.

I went to the website and liked the screenshots and info; it looks like a good all-around burn-in test program. However, you can only download the Lite version, correct? You would have to pay for the Pro version. The FAQ on their website says the Lite version is missing some of the testing modules, such as the L1/L2 cache and chipset tests. Without those modules it seems this program doesn't test all the components, right? Is the Pro version available for download as a trial or something? Thanks.


TO ALL:

I have a question regarding all burn-in programs. Now that the new P4s have come out with Hyper-Threading Technology (which is what I have), wouldn't a burn-in program essentially only be testing half the CPU's load? Windows XP shows it as essentially a multi-processor system, so when you load any of the burn-in programs, don't they just max out only one (i.e., half) of the CPUs? What can you do to really test the CPU at 100% load? Thanks.
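(For illustration only, not something posted in the thread: a single-threaded burn-in does only saturate one logical CPU, so on a Hyper-Threaded chip you either run two instances or use a tool that spawns one worker per logical processor. A minimal sketch of that idea, assuming a POSIX/Linux box with pthreads; the file name, constants, and loop body are made up.)

```c
/* Illustrative sketch: spawn one busy-loop worker per logical CPU so a
 * Hyper-Threaded chip shows 100% load. Compile with: gcc -O2 -pthread burn.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *burn(void *arg)
{
    (void)arg;
    volatile double x = 1.0001;
    for (;;)                    /* spin forever doing floating-point work */
        x *= 1.0000001;
    return NULL;                /* never reached */
}

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs, HT included */
    pthread_t tid;

    printf("spawning %ld burn threads\n", ncpus);
    for (long i = 0; i < ncpus; i++)
        pthread_create(&tid, NULL, burn, NULL);

    pause();                    /* workers run until the process is killed */
    return 0;
}
```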

SuperG
 
I see a lot of you only run burn-in for 6-24 hours, but one reason to run it for 72 hours is that the newer thermal greases (Arctic Silver 3) don't fully set until about 72 hours of use. So your best chance for an OC is after 72 hours of use.
 
Hey, I was reading your posts and they were pretty funny. I too have recently blown the dust off Civ 3 and understand your pain! I think I'm going with the 12-hour burn-in that you suggested. I'll let you know!!! Laters!
 
New version of Prime95 out

Improvements for Athlon, Duron, Pentium 3, and Celeron 2 owners! You can expect 3-8% faster iteration times compared to version 23.4!

Windows: ftp://mersenne.org/gimps/p95v235.zip
or: ftp://mersenne.org/gimps/p95v235.exe
Linux: ftp://mersenne.org/gimps/mprime235.tar.gz
or: ftp://mersenne.org/gimps/sprime235.tar.gz
NT service: ftp://mersenne.org/gimps/winnt235.zip
FreeBSD: ftp://mersenne.org/gimps/mprime235-freebsd.tar.gz
or: ftp://mersenne.org/gimps/sprime235-freebsd.tar.gz


There is a new test menu that lets you pick between short, long, or a mix.

It's explained better in the provided documentation.
 
The difference is that a work unit is run to get new results, while the torture test runs and checks calculations whose answers are already known.

A work unit may appear to complete, but since the results are (by definition) unknown, errors won't be uncovered until the result is verified. All work unit results are verified. An unstable machine will not produce useful work units, but you won't know it.

So, in short, if the torture test fails, a work unit will probably fail as well, producing nothing of value for GIMPS.
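(A rough sketch of the idea, not GIMPS code - the work loop and numbers are invented: a torture-style test recomputes something whose correct answer is already known, so any mismatch flags unstable hardware on the spot instead of at verification time.)

```c
/* Illustrative "known answer" torture loop (not GIMPS code). */
#include <stdio.h>

int main(void)
{
    /* Reference result computed once; a stable machine must reproduce it. */
    double expected = 0.0;
    for (int i = 1; i <= 1000000; i++)
        expected += 1.0 / i;

    for (unsigned long pass = 1; ; pass++) {
        volatile double sum = 0.0;           /* volatile: force the work to be redone */
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / i;

        if (sum != expected) {               /* any difference means a hardware error */
            printf("HARDWARE ERROR on pass %lu\n", pass);
            return 1;
        }
        if (pass % 100 == 0)
            printf("pass %lu OK\n", pass);
    }
}
```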
 
You are correct.

However, Prime95 also has some built-in error checking:
it runs the number twice, and if it gets two different results, it goes back to the last save point.
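(Again, only a rough sketch and not Prime95's actual code - the work function and save-point interval are invented - but the "run it twice, compare, roll back to the last save point" logic looks roughly like this:)

```c
/* Illustrative "compute twice, compare, roll back" error handling. */
#include <stdio.h>

static double step(double x)
{
    return x * 1.000001 + 1.0;         /* stand-in for one iteration of real work */
}

int main(void)
{
    double state = 1.0, saved_state = 1.0;
    long   saved_iter = 0;

    for (long iter = 0; iter < 1000000; iter++) {
        double a = step(state);
        double b = step(state);         /* the same work, done a second time */

        if (a != b) {                   /* two different answers: hardware glitch */
            printf("mismatch at iteration %ld, rolling back\n", iter);
            state = saved_state;        /* resume from the last save point */
            iter  = saved_iter;
            continue;
        }
        state = a;

        if (iter % 10000 == 0) {        /* periodic save point */
            saved_state = state;
            saved_iter  = iter;
        }
    }
    printf("final result: %f\n", state);
    return 0;
}
```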
 
Cowboy X said:
Though not marketed as a burn-in program, I find that Folding at 100% is fairly good as well. But it's more of a CPU burn-in than anything else; it will also let you see load CPU temps while helping humanity and, of course, Team 32 :)


^^^^^^^^ I'm with this guy ^^^^^^^^^^^^^^

Except the team number... you're 32... I'm sorry... lol, j/k... 734 representin'... amdmb.com Folding Frogs, werd!!
 
Quoted from AnandTech:

There is burn-in and then there is burn-in. In semiconductor manufacturing terminology "burn-in" is a stage of the production flow after packaging in which the CPU is placed in an elevated temperature environment and is stressed at atypical operating conditions. The end goal of this is to dramatically reduce the statistical probability of "infant mortality" failures of product on the street. "Infant mortality" is a characteristic of any form of complex manufacturing in that if you were to plot device failures in the y-axis and time in the x-axis, the graph should look like a "U". As the device is used, initially quite a few fail but as time goes on this number drops off (you are in the bottom of the "U" in the graph). As the designed life of the product is reached and exceeded, the failure count rises back up again. Burn-in is designed to catch the initial failures before the product is shipped to customers and to put the product solidly in the bottom section of the "U" graph in which few failures occur. During this process there is a noticeable and measurable circuitry slow-down on the chip that is an unfortunate by-product of the process of running at the burn-in operating point. You put a fast chip into the burn-in ovens and it will always come out of the ovens slower than when it went in - but the ones that were likely to fail early on are dead and not shipped to customers.

There are two mechanisms that cause the circuitry in CMOS - particularly modern sub-micron CMOS - to slow down when undergoing the burn-in process: PMOS bias-temperature instability (PMOS BTI) and NMOS hot-electron gate-impact ionization (known as "NMOS hot-e"). Both of these effects are complex quantum-electrical effects that result in circuitry slowing down over time. You should be able to type either of these two terms into Google to read more about what is actually happening. The end-result is, as mentioned, that the chips will start to fail at a lower frequency than they did before going into burn-in due to the transistor current drive strength being reduced.

There is another use for the term "burn-in" with regard to chips that is used by system builders and that is as a test for reliability and to reduce customer returns due to component failure. This usually consists of putting the system together, plugging it in and running computational software on the system for a period of 24-48 hours. At the larger OEM companies, this is often done at a higher than typical operating temperature.

Some time ago someone on the internet wrote a very factual-sounding article on the benefits of running a CPU at a higher than typical voltage for a day or two to improve its "overclockability". This author wrote some scientific-sounding verbiage about how NMOS hot-e actually improves the drive strength of PMOS devices as a supposed explanation for why this method works. Reading this particular article and, even worse, seeing people commenting that this was a wonderful article that everyone should follow was the reason why I started posting on AnandTech way back when. The author was wrong on several key points - primarily that NMOS hot-e can occur in electron-minority (hole-majority) carrier devices that are biased such as to repel electrons - and I contacted the author with a wide assortment of technical journals showing that he was wrong. He was not particularly open to the fact that he might be mistaken and never removed the article from the website, as far as I'm aware. Suffice it to say, however, that he did not understand basic semiconductor electronics and was wrong.

There is no practical physical method that could cause a CPU to speed up after being run at an elevated voltage for an extended period of time. There may be some effect that people are seeing at the system level, but I'm not aware of what it could be. Several years ago when this issue was at its height on the Internet, I walked around and talked to quite a few senior engineers at Intel asking if they had heard of this and what they thought might be occurring. All I got were strange looks followed by reiterations of the same facts as to why this couldn't work that I had already figured out by myself. Finally, I was motivated enough to ask for and receive the burn-in reports for frequency degradation for products that I was working on at the time. I looked at approximately 25,000 200MHz Pentium CPUs and approximately 18,000 Pentium II (Deschutes) CPUs and found that, with practically no exceptions at all, they all got slower after coming out of burn-in by a substantial percentage.

There is no doubt in my mind that suggesting that users overvolt their CPUs to "burn them in" is a bad thing. I'd liken it to an electrical form of homeopathy - except that ingesting water when you are sick is not going to harm you, and overvolting a CPU for prolonged periods of time definitely does harm the chip. People can do what they want with the machines they have bought - as long as they are aware that what they are doing is not helping and is probably harming their systems. I have seen people - even people who know computers well - saying that they have seen their systems run faster after "burning it in", but whatever effect they may or may not be seeing, it's not caused by the CPU running faster.


Patrick Mahoney is a Senior Design Engineer in the Enterprise Processor Division at Intel.
He is not speaking for Intel Corp.
 
Well, you have like 10,000,000 overclocking nuts saying burning in is a good thing and one Patrick Mahoney saying it's not... hrmmm.
 
Why should "burning in" help

What is the theoretical basis for thinking that "burning in" a semiconductor device (beyond the time required to bring it to normal operating temperature) would serve any useful purpose in terms of allowing the device to operate more reliably under overclocked conditions, or to require less added voltage to operate reliably overclocked?

I exclude such unlikely, but possible, factors as heat-sink compound spreading between the semiconductor case and the heat sink in response to pressure and heat, giving better heat transfer as a result.

This sounds a lot like the technologically baseless mysticism that runs rampant in high-end audio and allows golden-ears to hear things that others cannot (as long as it is not under double-blind test conditions, in which case they can't hear it either), but I would be happy to hear the reasoning, nevertheless.

- nopcbs
 
"Stress" burn-in is more of an anecdotal than a scientific concept.

The "theory" is... umm, I'm not gonna geek anyone out with PNP/NPN junction theory and other [expletive deleted] stuff today; the simple version is that a "stress burn-in" sort of "clears" the pathways within the semiconductor materials.

It's vaguely possible, according to the physics, for "stress burn-in" to work, but it's not highly repeatable, as it depends on vague variations within the construction of the device, and the success of the procedure would vary with differing manufacturing processes.

It's also possible for a "stress burn-in" to break down a junction and damage the device just as easily as it is possible for the procedure to "enhance" it.

It's sort of like starting my '54 Studebaker pickup. I know that checking the battery and ignition cables makes NO DIFFERENCE, but if I don't get out, open the hood, check the cables, and then try cranking it over again, it will not start!! (O.K., what's REALLY happening is that the procedure gives the gas dumped into the header time to vaporize, and the process of getting out, opening the hood, checking the connections, getting back in, and THEN cranking it over is just enough time to keep me from flooding it.)

[Edit] The point of that rambling analogy was that checking the cables makes me feel better while I wait for the fuel to vaporize, and "stress burning" a CPU makes others feel better about their overclock. The likelihood of stress burning for a few days killing a CPU is low (on non-modified motherboards), as manufacturers tend to limit the maximum overvoltage to about 20%.

Paul.
 
WAY THE [EXPLETIVE DELETED] OFF TOPIC

I was getting pretty [expletive deleted] off at the Stude for not starting, and I spent A LOT of time trying to figure out what was wrong. (A blast from a can of cold start fires it up every time.)

The world of carbureted engines is a deep and dark mystery to me; fuel-injected, OBD I/II-based engines are so simple, they tell you what is wrong!!!
 
BTW, I discounted the stress burn-in theory a long time ago, EVEN though I have experienced it, but only with P4 processors.

My example is a 1.6 Northwood CPU that wouldn't overclock much at all, even with extreme voltages. Disappointed, I set the FSB back to normal, but forgot to change the voltage. A few days later the CPU was running HOT, and I shut down to check my fans and heatsink interfaces. Everything was good, so I rebooted, checked the BIOS, and found my voltage was still high. Now, I wasn't trying to get any work done, so I decided to muck about again, and bang, I got a nice stable 800 MHz overclock to 2.4 GHz.

Perhaps it's something to do with the thermal compound Intel uses between the P4 cover and the core. Hell if I know, but that CPU and board are quite happy in my "server", still overclocked.

I consider it an EXTREME hobbyist activity, like nitrous injectors in a car.

Paul.
 
I found this program when I googled memtest. It's Windows-based, so you don't have to run memtest in DOS.

I'm running the Prime95 torture test "Blend" right now. I guess it's working well, but not only does it take up my 1 GB of memory, it's also taking up 46% of my swap space and 61% of my virtual memory, so all my programs take forever to load, and once they are loaded, they run sluggishly.
 
How good is that Microsoft memory diagnostic? What interests me is that the documentation says that some of its tests are optimized for certain brands of memory chips, and the binary file lists several brands, but none are displayed when the program is run.

Is this program a lot closer in quality to MemTest86 or Gold Memory, or is it a dud, like DocMem from www.simmtester.com?
 