Do Computers Eventually Just Wear Out?

Keith Henry

DO COMPUTERS EVENTUALLY JUST WEAR OUT?

Here's my basic setup:
two prehistoric beige G3s (one 333MHz & one 300MHz),
each running OS 9.1 and MOTU DP 3.11.

the 333MHz G3 houses my MOTU 324 PCI card supporting a 2408 and a 1224, running DP 3.11.
the 300MHz G3 houses a KORG Oasys 24-bit audio card, running DP 3.11.

My 333 supporting the 2408/1224 is essentially my multitracking rig, and I monitor through a Mackie 32x8 console to do most of my EQing & verbs externally. I then use the 300 G3 with the 24-bit Oasys card as my 2-track mixdown deck for my multitrack sessions from the Mackie 32x8.

In the days before G4s this was a decent recording rig, and was relatively stable all the time. However, for several months (almost a year now), I've been experiencing problems with both rigs. I might add here that I also have an old Performa 6400 on OS 9.1 which I use as a session backup machine; I have all these machines networked together so I can transfer files to any given machine.

It seems as though when problems begin on any one of the systems, the others soon start having problems too. I have determined that having the MOTU Sound Manager driver extension installed on the 333 always causes it to crash during bootup. I can disable the MOTU Sound Manager driver extension, and the 333 will boot up. On the 300, I recently started having digital glitches in my audio; it sounds similar to a digital clocking glitch, only much worse. At one time I thought this might be caused by having a PCI FireWire card installed and running audio from an external FireWire HD. However, I've now determined that the glitches are occurring even on the internal IDE HD.

Over all these months of periodic problems, I have completely reinitialized the audio HDs, reinitialized the system OS HDs, performed clean reinstalls of the 9.1 OS, and reinstalled DP 3.11 numerous times.

Why is it that, even after these HD reinitializations and reinstalls of the OS & DP 3.11, the problems eventually come back? It's almost like it's deeper in the system than the OS. I zap PRAM, rebuild the desktop, etc., etc., and I can always bet the problems will eventually come back. Does RAM eventually go bad? Does the CPU eventually go faulty?

I'm to the point where it's time to break down and get a dual G4.?.? I'm wasting too much downtime having to move 80+ GB of files to back up & reinitialize/reinstall.

Anybody else experienced weirdness like this with older systems?

Thanks
 
It is possible for components to die: your hard drive can fail, and your CPU can fry if it overheats, but it is rare for a CPU to die. I know plenty of 486s with 66MHz CPUs that are still going strong.

I put my bet on your hard drives.
 
"shit" accumulates in your computer. Periodically you should back up everything you need to save, and completely wipe your hard drive and start from scratch...reload OS, drivers, and everything you need.

My roommate is a systems architect for News Corp. He starts fresh on his big computer setups every few months for the same reason. I dont know the computer lingo for "shit accumulating" in the computer, but that's the long story made short.

Start fresh on your computer and it should be like new.
 
There are two terms for that: "bloatware" and "bit rot". Hope that helps!

Processors (and memory, for that matter) can die over time due to a number of solid-state woes. The worst for memory is deterioration of the gate oxide in the transistors that actually store the data, driven by heat and possibly by process problems (surface conditions on the silicon wafer in the fab). This results in increasing error rates over time, and the faster/denser the memory, the harder it is to make them live over the long haul. Processors can experience this sort of problem as well, but it is more rare- memories use a preponderance of absolute-minimum-size devices in order to achieve high density, and the smaller the transistor, the more likely that a latent crystal-lattice or oxide defect will kill it.

Processors use bigger transistors, by and large, and dissipate *much* more power- so they are more prone to metal-system-related problems. With the very small geometries we're looking at these days, the interconnect wires are running right up at the critical level for current density: you get into a quantum effect called "electromigration", where the metal is actually swept along by the current flow itself, and starts turning a straight wire into something that meanders like the Mississippi. Cooling is key, and keeping the power dissipation down to design levels is _really_ key. That's one reason that the overclockers are running into shorter and shorter processor life: the designers can no longer afford to leave as much safety margin in the designs. Safety margin is performance left on the table- so pushing the envelope on your 2.4GHz whatever is going to run into problems at a *much* smaller percentage gain than pushing your old 66MHz beast... No such thing as a free lunch.
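
If you want a ballpark feel for how that trades off, Black's equation is the usual back-of-the-envelope model for electromigration lifetime. Here's a minimal sketch; the exponent and activation energy are generic illustrative values I'm assuming, not numbers from any real process:

```python
# Rough electromigration lifetime scaling per Black's equation:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# n and Ea below are illustrative assumptions, not process data.
import math

K_BOLTZMANN = 8.617e-5  # Boltzmann constant in eV/K

def mttf_ratio(j_ratio, t_old_c, t_new_c, n=2.0, ea_ev=0.7):
    """Relative interconnect lifetime when current density scales by
    j_ratio and the die temperature moves from t_old_c to t_new_c (C)."""
    t_old = t_old_c + 273.15
    t_new = t_new_c + 273.15
    return (j_ratio ** -n) * math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_new - 1.0 / t_old))

# e.g. ~20% more current density and a die at 60C instead of 45C:
print(mttf_ratio(1.2, 45, 60))  # ~0.22, i.e. lifetime down to roughly a fifth
```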

Disks run into electromechanical issues- wear occurs in the head positioning hardware, and all that sort of stuff. But it sounds to me as if your problem is either bloatware or bit rot. (;-)
 
skippy said:
That's one reason that the overclockers are running into shorter and shorter processor life: the designers can no longer afford to leave as much safety margin in the designs.


Skippy, that was all really interesting info, but it just doesn't make practical sense to me. Since when are overclockers running into shorter processor life? I hang around the enthusiast overclocking crowd quite a bit, and I have never, EVER once heard of a processor "wearing out." The only two kinds of processor deaths I've ever heard of are melting and cracking. Heat-related problems that do not cause a total meltdown can cause instability, but shorter processor life? When have you ever seen a processor die of old age?

Same goes for memory - never seen stuff just wear out before.
 
Well, for the last 20 years or so I've designed microprocessors for a living. Every design I've done has gone through accelerated life testing as part of its production qualification cycle: a process where you deliberately induce wearout to statistically verify that the product will meet its life specs in the field, and identify and correct any weak spots in the design that need to be addressed in order to make that happen. The way you do this is simple: run the parts at high temperature and high-limit supply voltage, and lately clients of mine have gone to over-high-limit clock speeds to increase the stress on the part (and shorten the time it takes to get through the life test cycle, since that affects time-to-market and cuts into the revenue from the part).
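
For those curious about the math behind "shorten the time it takes": the standard first-order tool for the temperature part of that stress is an Arrhenius acceleration factor. A minimal sketch, assuming a generic 0.7eV activation energy (real qualification plans use measured values for the specific failure mechanisms involved):

```python
# Arrhenius acceleration factor: how much faster wearout accumulates at the
# burn-in temperature compared to the normal use temperature.
# The 0.7 eV activation energy is an assumed, generic value.
import math

K_BOLTZMANN = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_use - 1.0 / t_stress))

# e.g. burn-in at 125C versus a 55C use condition:
af = acceleration_factor(55, 125)
print(af)                # roughly 80x
print(1000 * af / 8766)  # ~1000 oven-hours stands in for roughly 9 years in the field
```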

The burn-in process that you do for life testing is very educational, and can be pretty humbling. You take the failures and look at them to see what failed: was it a process-related issue, was it a metal system issue, was it the phase of the moon, was it die cracking- what _was_ it? As a result I _have_ held a pretty sizable number of downright worn-out parts in my hand (and looked at them under the microscope, both optical and SEM).

It used to be that a designer could sandbag and leave sizable safety margins in the design. Can't do that anymore: the marketing requirement now is _max out the clock rate at all costs_. So, you used to have room to push it quite a bit without running into issues. Now, you have significantly less margin for that: you run into critical path setup/hold issues (flakey failures- what you'd call instability), and you run into edge-of-the-envelope process issues much sooner, percentage-wise.

I'm perfectly happy that the overclockers say that they can do whatever they want with the parts with no problems: it sells more processors, once they die. But the prudent approach is to run the part within its design parameters, and that's what I'll always advocate. I have some insight into how those specs come to be, you see. Otherwise, you're just voluntarily participating in more accelerated life testing.... (;-) Make sense?
 
Most overclocking sites that I have seen state that a hard overclock will reduce the lifespan of your CPU by 50%. Intel claims that its CPUs will last for an average of 15 years. Now, if you're interested enough in performance computing that you are willing to overclock your computer's processor, are you REALLY going to want that PC around 7 YEARS later? (Imagine the PC that you had back in 1996 running Cubase SX with a full rack of plugins and VST instruments.)

Yes, there are risks involved with OC'ing... but I don't see any problem with a mild overclock. I have an Athlon XP 2100+ (rated 1.7GHz) that I am running at 2GHz, and it is extremely stable. It's a great system for me right now, but if this chip kicks the bucket in 2010 I don't think I'll be crying too hard.

just my two cents.
 
I'm totally unfamiliar with how Macs are engineered, although I was once all too familiar with the Apple II.

I wonder if it has an equivalent of the PC CMOS memory - where basic configuration stuff, vital on boot-up, is stored. This is maintained by some kind of battery, either soldered on the main board or buried in the real-time clock chip. At any rate, after 4 or 5 years the battery gives up, even if it's the rechargeable Ni-Cad type. This latter type has the nasty habit of leaking its corrosive guts onto circuit boards, leading to intermittent failures or death. More recent designs use button cells mounted in a clip frame and can be replaced. My pet hate is the ones built into chips, as you often find they are obsolete and can't be replaced.

Temperature problems can also cause faults, but this might not be terminal. Old computers gather a coating of fine dust which prevents proper cooling of components. A good blow-out with an Aeroduster aerosol can solve that.

Finally, I suggest you give any socketed chips a push to make sure they haven't worked loose.

Consider the Atari ST. Many are still in use as MIDI sequencers. They had no fans to pull in dust, no real-time clock battery, and (apart from early builds) most chips were soldered in.
 
skippy said:
Otherwise, you're just voluntarily participating in more accelerated life testing.... (;-) Make sense?


What about a low-end model made through a mature process that's putting out much higher speeds as well? I can understand that sometimes chips that cannot run at fast clock speeds for whatever reason (errors, etc.) will be downgraded in their spec speed, but I thought that a lot of the time, spec speed is downgraded simply to fill market demand. I have a Barton 2500+ (1.8GHz) that overclocks to 2.4GHz without breaking a sweat or increasing voltage. I don't normally run it overclocked just because I have no need of that extra couple percent of speed (can't notice it anyway), but the fact is, there are a lot of chips like that out there - downgraded simply to fill the market, when they're just as capable as higher-spec'd chips.
 
I think that happens a lot less often than you think, especially now that the clock rates are so far up. When you do speed binning at production sort, you do it under moderately stressful conditions- and even with a mature and in-control process, you'll have significant yield that, while functional, has to be binned at a lower speed for reliability reasons. That's the fat part of the bell curve. You simply aren't going to see a lot of manufacturers taking their 3GHz parts and selling them spec'd as 2GHz parts (for a lot less money) if they can _possibly_ sell them as 3GHz parts. That's not a very economically sound strategy: the yields of top-speed parts just aren't that high. I'm not going to say that it never happens: I'm sure that it does, although I personally have never seen it on any of my designs. I just wouldn't bet on it routinely being the case for every design out there! There's no way I can convey to you how incredibly difficult it is to make modern microprocessors work at all at these astronomical clock rates...

Now, having said all that: you have to realize that this is all a statistical process. Some of those 3GHz-binned parts would probably run quite happily at 4GHz, and some fraction of _those_ might live essentially forever. On the other end of the bell curve, some will exhibit infant mortality, and crap out after only 3 hours running even if operated within their design specs. There is a distribution of ultimate speed capability, and there is a distribution of ultimate lifespans. The farther up the speed/power curve you go, the further down the lifespan curve you go. Statistically, it's an absolutely unavoidable relationship over the total population of parts. However, on a one-chip-at-a-time basis, you might very well never see that. You might have just randomly obtained a golden chip that can go fast and live long. The luck of the draw plays an undeniable role there.

On the other other hand, you might also have just paid two hundred dollars for a _fuse_ that will blow after 3 hours if you twist its tail too hard. Who knows?

My point is, and always will be, why take that chance for a few percent of performance gain? If you're using your computer to enable your _art_, it would seem to me that going for reliability is far more important than going balls-to-the-wall and relying on luck and statistics... Recording ain't playing Doom! I personally don't feel that it is wise to put the machine into a mode where lockup or failure is any more likely than it should be. But your mileage may certainly vary.

The other thing is that the statistics hide a lot of detail. When Intel specs a 15-year life, and somebody says "doing this will reduce the life by 50%", that does not mean that the chips will all last 7 years. You've moved yourself onto a different part of the bell curve, and the relationship of average life to projected life isn't linear. You also know that they didn't get those statistical numbers on lifetime reduction from Intel, and they didn't perform a life test to find out what the impact really is in any statistically meaningful way. So basically, all they can say is "I got away with it a few times, and you probably can too".
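
To make that concrete with a toy example (made-up numbers, not real reliability data): treat lifetimes as a lognormal distribution, halve the median, and look at what happens to the early failures rather than the average:

```python
# Toy illustration: halving the median lifetime doesn't mean every part
# lasts half as long -- it shifts the whole curve and multiplies the
# early-failure rate. The sigma value here is an arbitrary assumption.
import math

def fraction_failed_by(t_years, median_years, sigma=0.6):
    """Lognormal CDF: fraction of parts dead by t_years."""
    z = (math.log(t_years) - math.log(median_years)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for median in (15.0, 7.5):
    print(median, fraction_failed_by(3.0, median))
# median 15 years: well under 1% dead by year 3
# median 7.5 years: several percent dead by year 3
```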

Cool! If you want to go there as well, knock yourself out. Some chips will do it. Some just flat won't, and you won't know which you have until you either experience a lockup or the thing goes poof. That's not a risk I'm willing to take with my recording machine, and it's not one that I would advise anyone else to take either.

Anyway, this is getting away from the "do computers wear out" question. The answer is "yes, they do", and the other answer is "and oftentimes you can affect how quickly". Have fun!
 
reply to all

Firstly, thanks to everyone; there are so many in-depth responses that I'm printing them all out to read on paper.

Jim Y from Wales, do you know Laura Sutton in Anglesey? She's a dear friend of mine, and an incredible singer - check her out! Also, you brought up a good point with the battery idea; I've never replaced the batteries, and these machines date to umm.... '97-'98??

Hi all,

To answer a few of the questions: way back when... (a good year ago now) things were running great on all machines. Yes, I was running 9.2.2 on the 333 G3, and at that time I was still running 8.6 on the 300 G3. The reason I was still running 8.6 was that I had Rebirth on that rig and it couldn't run on any OS higher than 8.6. The 333 eventually decided it wanted to freak out on me with the 9.2.2 OS. I have never been able to get 9.2.2 to reinstall; the 333 will always lock up during reboot on 9.2.2. So that's why I went back down to 9.1.

These issues seem to be deeply embedded in the core of the machine. It's like it is deeper than RAM and OS. It's literally like somewhere deep within the 333, it remembers... something weird happened a long time ago with 9.2.2, I (the 333 speaking here) didn't like it, therefore I will never run on 9.2.2 ever again. Trust me, I have tried everything under the sun to correct these problems... I mean everything. In fact some of you may remember me whining about these problems in the past. I've reset Cuda switches, pulled RAM, moved RAM from one machine to the other, zapped PRAM, rebuilt the desktop, blah, blah, blah. Again I say: it's almost like something deep within the core of the machine.

Regarding RAM & HDs: the 333 is maxed out at (I think) 640MB, and the 300 has 512MB. I really don't think the internal IDE HDs are the problem; they're pretty new - in fact the 80GB IDE in the 300 is only 3 months old.

Extensions: the MOTU MAS extension version is 2.4, which should be the latest required for DP 3.11. My 333 is dedicated mostly to nothing but recording audio (and the 300 even more so is audio-only), therefore my extensions are disabled to the point that only the extensions needed to run DP are enabled. Yes, there may be the possibility that AppleTalk could be affecting the systems, but it never did in the good old days when all was working great. Relating to my clean installs... I've done it all!!!! Completely wiped everything, reinitialized the HD, clean OS install, clean DP install, etc., etc., and eventually things come back to screwed-up status. I had a friend present the idea/suggestion that maybe this is happening because I had partitioned my 80GB HDs (the newer HDs) - the 333's partitioned 80GB drive had the system OS on it; the 300's did not hold the system OS, it was just 80GB split into one 40GB storage partition & two 20GB partitions.

Audio track count: when the 333 is running correctly I have run 22-24 mono audio tracks. Now obviously when I start adding several plugins on this type of session, I eventually run out of CPU.

I know for sure the 333 doesn't like the MOTU Sound Manager driver, which I liked to use to be able to audition samples from Peak through my 2408 or 1224. I don't use the MOTU Sound Manager driver or MAS on the 300, because it houses the KORG Oasys card, which requires ASIO drivers.

Anyhoo, I'm backing up everything off the 333's internal IDE 80GB HD right now, so I can completely reinitialize that 80GB drive and unpartition it into a single 80GB volume. I had just recently done that on the 300 (which also has an internal IDE 80GB HD). To explain the systems a little further: the 333 has an internal 4GB SCSI HD on which I recently clean installed 9.1, and which is currently working, but I haven't had time to clean install DP yet to get DP up & running. The 333 also contains a SCSI PCI card with two 4GB drives (I think they're 10,000rpm Cheetahs?) which were factory Apple installed - these two SCSI HDs are for nothing but recording audio to. The 300 has a single 17GB internal SCSI drive which houses the system OS, and the internal 80GB IDE HD is for recording audio. And of course the 300 has the recently added external 120GB FireWire drive (on a PCI FireWire card), which is mostly used to carry working DP sessions or audio files to other facilities.

So, yeah, thanks for all the suggestions. I'm definitely open & listening to all ideas.

THANKS
 
Update

Thanks to everyone for all the superb recommendations.

so... here's the recent update:

On the beige 333, I have backed up everything off all HDs, completely reinitialized all HDs (zeroed all data), reset the Cuda switch, even removed the RAM that was in the machine and put different RAM in, and removed all PCI cards (audio, dual-monitor card, and SCSI accelerator card). I used the original 8.5 restore disk image function from the CD-ROM that originally came with the 333. Then I clean installed OS 9.1. Then I attempted the 9.2.1 OS update.

Like I predicted, after the 9.2.1 update the 333 crashes on bootup. I held Shift down to disable extensions - same thing, CRASH on reboot. I then moved all extensions to the Extensions (Disabled) folder, and same thing, crash on reboot. This is truly bizarre. Over a year ago I had 9.2.2 running on this machine, and then something went corrupt. Since then, I have never been able to get the machine further than OS 9.1.

This makes me think that, since I've done all of the above procedures (many, many times over the past year), there is something deeper in the system that no longer likes 9.2. This appears to be deeper than RAM/HD issues.

Truly bizarre!!!!!

I give up! I don't know of anything else that can be attempted.

thanks again
 
I'm thinking damage to either the CPU, motherboard, or power supply. Heck, maybe the floppy drive's even shorting out or something. Or it could be a heat issue.
 
enuff's - enuff

I've wasted too much downtime over the past year. I broke down today and ordered a dual 1.25GHz G4. Now I plan to get back to some real work.

thanks all
 
Heat is the enemy of electronic components. Overclocking generates more heat. If you can remove it and keep the die at the normal operating temp, the life should be the same. Running a higher voltage level (and more current) will subject the circuits to higher-than-design power levels.
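
As a rough feel for how fast that adds up (the voltage and clock numbers here are made-up examples, not specs for any real chip), dynamic power scales roughly with C x V^2 x f:

```python
# Back-of-the-envelope dynamic power scaling: P is roughly proportional
# to C * V^2 * f, so overvolting hurts much faster than overclocking alone.
# The example voltages/clocks are assumptions for illustration only.
def power_ratio(v_old, v_new, f_old, f_new):
    return (v_new / v_old) ** 2 * (f_new / f_old)

# e.g. a bump from 1.50V to 1.65V plus a 2.0 GHz -> 2.4 GHz overclock:
print(power_ratio(1.50, 1.65, 2.0, 2.4))  # ~1.45, i.e. ~45% more heat to get rid of
```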

Heat cycling is what breaks electronics, also. The material flexes when it transitions from cold to warm, same as bending a coat hanger back and forth. Eventually it fractures. I'm from the leave-it-on school of thought.

Hard disks are mechanical, and subject to mechanical wear from hours of spinning unused in the middle of the night. I let my secondary hard disks power down after 2 hours.
 