Hard Disk Upgrade
The following notes recount my March 2001 experience with upgrading the hard drive on my cheapo Linux box, which was then two years old. This turned into a major project, because it also involved repartitioning the disk and doing a memory upgrade while I was at it. It also turned out to be somewhat harrowing, because, despite precautions, I screwed up badly enough so that the system wouldn't boot up. Twice.
Nevertheless, it was well worth the trouble. The system seems much faster because it pages much less, and seems to page faster when it needs to (though some of that is undoubtedly due to the simultaneous memory upgrade). I also have tons more disk space on which to put stuff; before, I was unable to keep the source files for new packages online for very long. And last but not least, my file space is more usefully divided for the way I do backups.
This document is divided roughly equally into three major sections. The first section gives an overview of what I intended to do, the second consists of the detailed step-by-step instructions I wrote out for myself (in order to ensure that I didn't goof it up), plus a little hindsight, and the final section recounts the lessons I learned the hard way, and includes a postscript of sorts.
Virtually all files lived together in the 2G root partition, which had
several disadvantages:

Disk /dev/hda: 128 heads, 63 sectors, 787 cylinders
Units = cylinders of 8064 * 512 bytes

   Device Boot   Start     End    Blocks  Id  System
/dev/hda1    *       1     271  1092640+   b  Win95 FAT32
/dev/hda2          272     787   2080512   5  Extended
/dev/hda5          272     276    20128+  83  Linux
/dev/hda6          277     770  1991776+  83  Linux
/dev/hda7          771     787    68512+  82  Linux swap

On this disk, /dev/hda5 is /boot, and /dev/hda6 is the root partition.
Another disadvantage of this setup is that the swap partition is
/dev/hda7, at the very end of the drive. As I understand it,
this means that it is close to the spindle, where the disk surface moves
relatively slowly past the heads. This is the slowest part of the disk
to read and write, not only because it takes longer for a sector to be
read, but because there are fewer sectors per cylinder, so the space
must be spread out over more cylinders, leading to more seeks.
Accordingly, I wanted to move the swap partition closer to the start of
the disk.
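(You can see this effect directly, if you're curious. Here is a rough
sketch -- not something I ran at the time -- that times raw reads near
the start and near the end of the disk. It assumes GNU dd, and the
skip count is illustrative; it would need adjusting for the actual
drive size.)

dd if=/dev/hda of=/dev/null bs=1M count=64             # outer tracks: fast
dd if=/dev/hda of=/dev/null bs=1M count=64 skip=28000  # inner tracks: slower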
I bought a used 30G drive from PCs for Everyone (they now call
themselves Thinkmate.com), and planned to divide it up as follows:
New partitions:

partition     size    blocks   what
/dev/hdb1     2G               DOS
/dev/hdb5     20M       7929   /boot
/dev/hdb6     300M             swap
/dev/hdb7     1G       62389   /
/dev/hdb8     3G      195476   /home
/dev/hdb9     5G      329336   /usr/local
/dev/hdb10    2G      622439   /usr
/dev/hdb11    2G       17698   /var

Total: 15.32G

Notice that I'm still only using slightly more than half the disk . . .

Notes:
The partition contents (the part of the file structure shown in the
"what" column) were chosen to reflect common patterns of file usage
(i.e. ephemeral stuff like system logs in /var, important stuff
in /home, various kinds of code development in
/usr/local, etc.). This was driven by my backup strategy; I
wanted to be able to back up /home frequently, the root
filesystem infrequently, and everything else at intermediate frequencies
that changed depending on usage, all independently of each other.
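(For concreteness, here is a sketch of the sort of schedule this
layout makes possible, using the BSD dump(8) utility. The levels,
file names, and backup mount point are illustrative; this is not my
actual backup script.)

dump -0u -f /mnt/backup/home-full.dump /dev/hda9   # /home: frequent full dumps
dump -0u -f /mnt/backup/root-full.dump /dev/hda8   # root: rare full dumps
dump -2u -f /mnt/backup/usr-incr.dump /dev/hda11   # /usr: incrementals in between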
The first four partitions of any IDE drive (i.e. hda1 through hda4) are
special; they are physical partitions,
whose location is specified in the MBR. I left the first partition
untouched, and the third and fourth are not used (this appears to be the
convention). /dev/hda2 is a special "extended" partition that
incorporates the rest of the disk (or the part that is used at any
rate); the Linux partitions appear as logical partitions within it.
/dev/hda6 is not used; see the second tale of
woe for how and why it was created.
As implemented (according to "fdisk -l"):

Disk /dev/hda: 255 heads, 63 sectors, 3720 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot   Start     End    Blocks  Id  System
/dev/hda1    *       1     261   2096451   6  FAT16
/dev/hda2          262    2261  16065000   5  Extended
/dev/hda5          262     264     24066  83  Linux
/dev/hda6          265     525   2096451  83  Linux
/dev/hda7          526     563   305203+  82  Linux swap
/dev/hda8          564     694   1052226  83  Linux
/dev/hda9          695    1086  3148708+  83  Linux
/dev/hda10        1087    1739   5245191  83  Linux
/dev/hda11        1740    2000   2096451  83  Linux
/dev/hda12        2001    2261   2096451  83  Linux
[I have since given /dev/hda6 an ext2 file system, and mounted it as /scratch; this partition is never backed up. I use it for such things as 250MB of Squid cache space, and as a holding area for backup files waiting to be cut to CD. -- rgr, 7-Jul-02.]
Here is how df reports the state of the mounted filesystem partitions on the new disk:
Filesystem   1k-blocks     Used  Available  Use%  Mounted on
/dev/hda8      1018298    54919     910768    6%  /
/dev/hda5        23300     7924      14173   36%  /boot
/dev/hda12     2028098    30275    1893001    2%  /var
/dev/hda11     2028098   622426    1300850   32%  /usr
/dev/hda10     5065710   349576    4453875    7%  /usr/local
/dev/hda9      3043987   192556    2693996    7%  /home
Reboot the system to ensure the partition table is correctly updated.
(This may be obsolete, as "fdisk -l" showed the new information
immediately, but I didn't take chances.)
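(The formatting commands themselves don't appear in my notes, but they
would have looked something like the following sketch, with the new
drive still attached as /dev/hdd.)

mke2fs /dev/hdd5    # the new /boot; likewise /dev/hdd8 through hdd12
mkswap /dev/hdd7    # the new swap partition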
mkdir /new/home
mount -t ext2 /dev/hdd9 /new/home
(cd /home; tar cf - .) | (cd /new/home; tar xBfp -)
(cd /home; tar cf - .) | (cd /new/home; tar dBf -)

The first tar line copies, the second one compares. Repeat for all
partitions, being careful about the order of mounts due to directory
nesting.
[NB: I had to copy /usr and / differently in order to avoid copying all subdirectories. -- rgr, 17-Mar-01.]
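(I no longer have the exact commands, but the idea is sketched below,
assuming GNU tar's --exclude option: leave out the nested mount points
so that each partition's files get copied exactly once.)

(cd /usr; tar cf - --exclude=./local .) | (cd /new/usr; tar xBfp -)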
umount /new/home
e2fsck -f /dev/hdd9
mv /home /old-home
mkdir /home
mount -t ext2 /dev/hdd9 /home

This allows a certain amount of testing before punting the old drive.
[I don't remember whether I did this or not, but it's kinda silly; I
never actually threw the old drive away, so the files are still
available. -- rgr, 23-Dec-04.]
Old:
/dev/hda6    /        ext2   defaults   1 1
/dev/hda5    /boot    ext2   defaults   1 2
/dev/hda7    swap     swap   defaults   0 0

New:

/dev/hda8    /           ext2   defaults   1 1
/dev/hda5    /boot       ext2   defaults   1 2
/dev/hda12   /var        ext2   defaults   1 2
/dev/hda11   /usr        ext2   defaults   1 2
/dev/hda10   /usr/local  ext2   defaults   1 2
/dev/hda9    /home       ext2   defaults   1 2
/dev/hda7    swap        swap   defaults   0 0

Note that these are all still described as partitions of /dev/hda, and
not of /dev/hdd, because the new drive will be installed as /dev/hda
before we need this fstab.
To do this, create a new file, /etc/lilo.conf.new for example, with the following contents, modified appropriately for your configuration. Do not replace your current /new/etc/lilo.conf file, as this will probably only need small changes (see below).
disk=/dev/hdd
    bios=0x80              # Tell LILO to treat the second disk as if
                           # it were the first disk (BIOS ID 0x80).
boot=/dev/hdd              # Install LILO on second hard disk.
map=/new/boot/map          # Location of "map file".
install=/new/boot/boot.b   # File to copy to hard disk's boot sector.
prompt                     # Have LILO show "LILO boot:" prompt.
timeout = 300
vga = normal
read-only
image=/new/boot/vmlinuz-2.2.17-14   # Location of Linux kernel.
    label=linux            # Label for Linux system.
    root=/dev/hda8         # Location of root partition on the new
                           # hard disk. Modify accordingly for your
                           # system. Note that you must use the name
                           # of the future location, once the old
                           # disk has been removed.
    read-only              # Mount partition read-only at first, to
                           # run fsck.

Note: In this configuration file, the disk is referred to as /dev/hdd
and the names of files on its boot partition use /new/boot/ because
lilo needs to access the disk as it is now to set it up for booting
later.
Once you have this file set up, do "lilo -C /etc/lilo.conf.new" to get
LILO to install itself on the new disk.
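(A dry run first might be prudent; lilo's -t flag goes through the
motions without actually writing the boot sector, and -v makes it show
its work.)

lilo -v -t -C /etc/lilo.conf.new
lilo -C /etc/lilo.conf.new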
[This is based on section 8 of the Hard-Disk-Upgrade mini-HOWTO.]
Voila; you are done already, and you didn't even have to do any more reboots than it took for the OS installation. You now have two choices:
I wrote this page partly out of a need for catharsis, and partly
because of the chance that I may be able to save others some amount of
aggravation. May you have better luck than I did in avoiding the
"stupid rays."
It occurred to me to try booting the other kernels I had lying around
(I was running 2.2.17 at the time), but 2.2.16 gave me more or less the
same thing. (I also tried Windows 98, but that was completely useless;
it just gave me the splash screen and hung.) The 2.2.5 kernel gave me
something slightly different, though. It also gave a "kernel NULL
pointer dereference" error, though at a slightly different address, and
a different "oops" code, but it got further into the boot process before
crapping out.
Things looked even worse when it continued to fail in the same way
after I disconnected the new disk; I was afraid I had damaged the
original drive or its cabling somehow. It was about 7 PM then, and
my system seemed to be dead in the water, so I figured I might as well
take a dinner break. After dinner, I planned to put the machine back
together in order to bring it into the shop the following morning, since
that seemed to be my next bet.
When I finished dinner, I decided to try my luck again, since it
would have been a colossal waste of time to take the box in to the shop
if (I had my fingers crossed) the problem had miraculously taken care of
itself while I ate. This time, my "luck" consisted of trying both
2.2.17 and 2.2.5 with the new drive still connected; both kernels still
died in the same idiosyncratic ways, but I noticed that 2.2.5 (the
kernel that shipped with Red Hat 6.0) first printed the following, well
before "oopsing" out:
This led me to test the thing I should have suspected all along: My
other upgrade task, the new memory module. Sure enough, I pulled the
module out and pushed it back in again, and it seemed to seat better the
second time. Then, I booted the machine, this time without any trouble.
I've only done one other memory upgrade, to my wife's old Macintosh
Performa system, but I seem to recall that upgrade job required two
tries as well. Perhaps all such connectors are stiff initially, and
need a few insertions to get "broken in" properly.
The moral, of course, is to do only one thing at a time, so that you
can test one thing at a time. At least I only wasted an hour or
two, if you exclude having dinner ruined for being bummed about my
broken system.

Count your cylinders carefully

Disk configuration reeks of deceit. The drive lies about its C/H/S
configuration to the BIOS, and the BIOS lies to the operating system,
either by passing on whatever tall tale the disk provided, or possibly
by embellishing a little along the way.

To cap it all off, the very notion of C/H/S addressing is obsolete; it
doesn't even apply to modern disk drives, which are designed with
variable numbers of sectors per track, depending upon the cylinder
address. Modern drives use what is known as LBA addressing, in which
sectors are numbered sequentially from 0 to umpty-million. However,
the age-old BIOS design requires C/H/S addressing, with a hardwired
24-bit format that only allocates 10 bits for the cylinder field.
Hence the need to exaggerate the number of heads and sectors in order
to report no more than 1023 cylinders. Operating systems that work
through the BIOS are therefore limited to 8.5GB. This is just the tip
of the iceberg; further sorry details can be found in the Large-Disk
mini-HOWTO (also in the /usr/doc/HOWTO/mini/Large-Disk file).
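(The arithmetic behind that limit: 1024 cylinders times 255 heads
times 63 sectors times 512 bytes per sector.)

echo $(( 1024 * 255 * 63 * 512 ))   # 8422686720 bytes: the oft-quoted ~8.5GB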
Fortunately, Linux skips the BIOS entirely, and talks directly to the
disk, so once it is booted, Linux is not affected by archaic BIOS
protocols. This also means that Linux can access much bigger disks than
are made these days. Unfortunately, LILO has to use BIOS disk calls in
order to pull Linux off of the disk, so BIOS arcana can't be entirely
ignored.
I was rudely reminded of this sad state of affairs when I attempted
to reboot off of my snazzy new disk . . . and couldn't even get to the
LILO prompt; it hung with "LI", and that's all she wrote.
Several attempts to reboot after reseating connectors met with the same
result (though the BIOS seemed to be finding the drive, so that
shouldn't have been an issue). Finally, suspecting I had made a mistake
in /etc/lilo.conf, I reverted to the two-disk configuration;
fortunately I hadn't actually moved the drives nor replaced the CDROM at
that point, so reverting was easy. (If I still hadn't been able to
reboot then, I'd have probably pushed the damn thing off a bridge.)
Well, /etc/lilo.conf seemed to be what the HOWTO said it
should, so, in desperation, I started hunting through all of the LILO
documentation I could find. While ploughing through
/usr/doc/lilo-0.21/doc/tech.tex looking for clues, I learned
that the incomplete "LI" prompt means that LILO thought it had
loaded the secondary boot loader (having printed the "I"), but it failed
to start (having not printed a second "L"). That made me suspect the
1024-cylinder problem, though I had thought for sure I was "safe" in that
regard. Here's what the partition table on the new disk looked like at
the time:

Disk /dev/hdd: 255 heads, 63 sectors, 3720 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot   Start     End    Blocks  Id  System
/dev/hdd1    *       1     261   2096451   6  FAT16
/dev/hdd2          262    2261  16065000   5  Extended
/dev/hdd5          262     522   2096451   6  FAT16
/dev/hdd6          523     525     24066  83  Linux
 . . .

I forget now why I left that second FAT16 partition from the original
setup; I certainly wasn't worried about reclaiming all of the available
space just yet. In any case, the boot partition (shown as /dev/hdd6)
is well under the 1024-cylinder limit.
Then, in a flash of inspiration, I realized that these numbers were
based on fdisk's fictional notion of disk geometry, using the
maximum values of 255 heads and 63 sectors (and still it shows almost
four times as many cylinders as DOS could use). If the BIOS had a
different notion of geometry, then it might be using cylinder indices
that were at least a factor of two greater. Sure enough, the original
layout (see above) was reported using 128 heads
instead of 255. And, though lilo normally warns if it tries to
use files beyond the 1024-cylinder limit, it would have no way of
knowing that the BIOS was using different geometry.
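(A worked example of the mismatch, as a sketch: the same absolute
sector lands on very different "cylinders" under the two geometries,
because one has 255*63 = 16065 sectors per cylinder and the other only
128*63 = 8064.)

lba=16450560             # an arbitrary sector out on the disk
echo $(( lba / 16065 ))  # 1024 -- just past the limit under 255 heads
echo $(( lba / 8064 ))   # 2040 -- way past it under 128 heads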
So then I did the wrong thing: I moved the partition. Since the
boot partition is really small, at least this didn't waste much time.
For reference, here's what was involved:

This was not only not necessary, it was not even sufficient; upon
rebooting, LILO froze at "LI" just as before. The underlying
problem was that the BIOS and LILO were using two different addressing
schemes (both fictitious, of course), so when LILO told the BIOS to
fetch the secondary boot loader from such-and-such a place, the BIOS
actually read blocks from somewhere different. Unfortunately, I didn't
understand this at the time; I had thought I was really dead in the
water.
But since I had nothing better to do, I poked around in the BIOS disk
setup menu, and fortuitously discovered that the geometry of the old
drive had been hardwired for the first disk. Once I changed the first
disk to "Auto/LBA" (which seemed to have worked when it was the second
disk), Linux booted like a charm. Of course, this probably would have
worked without moving the partition, but I hope you'll excuse me for not
feeling like moving it back to try.
For completeness, I should also note that the Large-Disk
mini-HOWTO, which I had read (or at least skimmed) while developing
my upgrade procedure, spends a whole chapter on the need for agreement
on geometry between LILO and the BIOS. Here is the first paragraph from
this chapter (titled "Consequences"):

    What does all of this mean? For Linux users only one thing: that
    they must make sure that LILO and fdisk use the right geometry
    where `right' is defined for fdisk as the geometry used by the
    other operating systems on the same disk, and for LILO as the
    geometry that will enable successful interaction with the BIOS at
    boot time. (Usually these two coincide.)

It is also worth noting that the problem was ultimately due to explicit
specification of geometry in the BIOS; the HOWTOs all warn against
overriding the defaults, because chances are it'll get screwed up.
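(One documented way to force agreement, had I known: lilo.conf accepts
an explicit geometry for a disk, overriding what LILO would otherwise
guess. A sketch using this disk's fdisk-reported values; see
lilo.conf(5) for the parameters.)

disk=/dev/hda
    bios=0x80
    cylinders=3720
    heads=255
    sectors=63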
The moral for this tale? I would like to say, "Steer clear of
ancient operating systems and their obsolete protocols," but this is
hardly possible. Perhaps a better one would be "Nobody is completely
unaffected by the Wintel monopoly."

When disk errors are not disk errors: a postscript

Once installed, my new disk ran without any sort of trouble for more
than a year, and then I started getting mysterious crashes. At odd
intervals, I would find that the machine had powered itself off in the
night, with no explanation. Sometimes it was still powered up but
completely hung; I couldn't get anything to happen from the console,
nor could I connect to it from another machine. In either case, I
would power it up again, fixing whatever minor disk data structure
damage had occurred along the way, and things would seem to be back to
normal again.
Then, on 15 June, I sat down in front of my machine, and found that I
couldn't do anything with it. Although it looked normal, and emacs and
X11 both seemed to work fine, whenever I did something that required
forking (e.g. trying to start something new in an emacs subshell), emacs
would hang, and I couldn't get it back no matter what I tried.
Attempting to start a new shell by connecting from another machine via
ssh was completely fruitless.
After rebooting, I looked at /var/log/messages, and found a
number of weird entries from early that morning, starting with the
following, which had transpired (or been logged, at least) at 01:05:27:

Unable to handle kernel NULL pointer dereference at virtual address 00000056
current->tss.cr3 = 0059e000, %%cr3 = 0059e000
*pde = 00000000
Oops: 0000
CPU:    0
EIP:    0010:[check_tty_count+16/116]
EFLAGS: 00010202
eax: 00000002   ebx: 00000001   ecx: c0241fdc   edx: c1867000
esi: 00000001   edi: c0012930   ebp: c18c8960   esp: c000bf10
ds: 0018   es: 0018   ss: 0018
Process gpm (pid: 734, process nr: 36, stackpage=c000b000)

This looked really bad, especially since gpm provides mouse
functionality for the console, but I was running X11 at the time, and
so wasn't even using gpm when moving the mouse, let alone when sound
asleep in my bed at a little after one in the morning. I disabled the
gpm process (the fresh one in my newly rebooted system), but later in
the day it dawned on me that this wasn't necessary, or even relevant,
since a low-level intermittent hardware glitch could readily explain
odd problems in gpm, or any other process for that matter. I think
that's when it first occurred to me that the disk might be flaking
out.
Finally, on Monday 17 June, it crashed and I couldn't reboot it; the
superblock for /dev/hda8, my root partition, was corrupt. This
necessitated booting from a rescue floppy, at which point I attempted to
use e2fsck to restore the root partition. This didn't seem to
be working; it found the corrupted superblock problem OK, but then it
kept giving me scores of really outrageous problems, things such as ref
counts in the billions, that couldn't possibly be even close to being
correct. It seemed conceivable that there could be a handful of such
errors on a partition, if part of an inode sector had been destroyed by
a bad pointer before writing, but for the list to go on and on like that
just didn't seem reasonable. It occurred to me that the boot disk I was
using was made with kernel version 2.2.5 (the one shipped with Red Hat
6.0), but I was running 2.2.19 with updated ext2 filesystem tools, so
perhaps that was why it was seeing nonsense. So, rather than complete
the operation, I powered the machine off. I was wrong, of course; ext2
is very well established and quite stable, and would not have changed so
drastically. Even if it had been changed, such major changes would not
have come without dire compatibility warnings plastered all over the
package. Despite being wrong, it's a really good thing I gave it up;
writing these "fixes" would probably have destroyed the partition
utterly.
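(For reference, e2fsck can be pointed at one of ext2's backup
superblocks when the primary is corrupt. A sketch, assuming a 1k
filesystem block size:)

e2fsck -b 8193 /dev/hda8    # 8193 for 1k blocks; 32768 for 4k blocks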
So I was left with an unbootable system, possibly with flaky hardware
-- time for a coffee break and some cool-headed thinking. I grabbed
several sheets of paper and a pen (being a computer guy, I don't usually
run around with pens or pencils in my pockets), and lit out for our
local Carberry's bakery and
cafe. I needed to scope out my options, because whatever I did,
getting the system back up would probably be painful and time-consuming,
and I only wanted to go through it once.
By then it had dawned on me that the kernel version thing couldn't
explain all the errors I was seeing, and that the most likely
explanation for all the symptoms I had seen -- indeed, the only
reasonable one -- was that my disk had Alzheimer's, of a sort. (This
was also incorrect, as it turned out, but at least it gave me something
concrete to work on.) With some physical and mental space between me
and the computer, plus the welcome distraction of coffee and a scone
(Carberry's makes awesome maple-oatmeal scones, BTW), I began to plan
what I could do to get my data (and my sense of normalcy) back.
The following is a rendering of my notes from that Monday morning
session:

Option 1: Resuscitate
    May have to rebuild root partition; screwed if so.

Option 2: Punt
    No; can preload dumps onto old partition.

Cost/benefit   Opt 1   Opt 2
$$             +/0     -
time           +/-     --
upgrade                +
risk           -       +

Based on this, it looked like it was going to be a roughly equal
amount of pain either way. The second option would take a lot longer,
but would also get me over the Linux upgrade hump as well. But it
would also cost significantly more, so I decided to give the first
plan a try; at worst, I'd only have to give up and use the second plan
anyway. Here is what I tried:

At this point, I had intended to reboot on the old disk in order to
fix the new one, but was surprised to find that I couldn't even get
the box to power up. No lights, no whirring fans, no response to
pushing buttons, nothing. Without the cooling fans, the silence in my
attic office was oddly distracting. At that point, I figured I had no
choice but to take it in to the shop, and let them diagnose the
problem.
At PCs for Everyone, the
guy agreed that I had a bad power supply, but they didn't have the
145-watt supply I needed in stock, and in any case it would be another
two weeks before they could even look at my disk problem. I must have
blanched at the prospect of being without my computer, the "main server"
for my life and livelihood, for all of two weeks. After some
back-and-forth, we came up with an alternative plan: He could attach a
300-watt supply, even though that was too big to install properly within
the enclosure, after which I would be able to continue flogging the
disk problem. When a new 145-watt supply came in, he said he'd give me
a call, and I could return the old one under their 14-day refund policy
and replace it with something that would fit the box. Since this seemed
the only path to getting my machine up in any sort of reasonable time, I
readily agreed, and after executing the sale, he popped out the old
supply and hooked up the new one, leaving it trailing wires out the back
of the box. (Even though this was a 3-year-old machine, PCs for
Everyone has a lifetime labor warranty.) I left my name and phone
number so he could give me a call when the 145-watt units came in, and
carefully carried my Dr. Frankenstein PC back to the car.
Once home again, I picked up the rescue process where I had left off
late that morning. I moved the new drive back to where the CD-ROM was,
and booted off the old drive. To my surprise, the new drive no longer
seemed quite so flaky. I was able to use e2fsck to repair the
bad superblock, plus a few other minor partition data structure
consistency problems I had seen before as a result of unexpected
shutdowns, and that was it. (Just to be sure I wouldn't run afoul of
version problems, I used the e2fsck on the old drive to check
out /dev/hda8 on the new drive, and then used the new drive's
e2fsck (which itself lives on /dev/hda8) to recheck
/dev/hda8, and then fix the other partitions on the new drive.)
Was that all there was to it? Had all of my "disk problems" been due
to a flaky power supply? That would have been the first bit of good
luck I had seen all day, so I was disinclined to believe it. I checked
each partition with the current e2fsck version, then checked
them again to be sure they were still clean, and finally backed up all
partitions, so I would have the latest data in case the new disk resumed
misbehaving when I booted off of it. (Doing "cross-disk" backups turned
out to be tricky -- I had to cut-and-paste entries from the new
/usr/local/etc/dumpdates to the old
/usr/local/etc/dumpdates so that dump used the correct
starting dates, changing the partition names from
/dev/hdaX to /dev/hddX as I went. Then,
after I was done, I had to move the entries for the backups I had just
made to the new drive, undoing the partition renaming.)
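(Something like the following one-liner would have done the renaming
mechanically -- a sketch, assuming the new drive's root is mounted
under /new; my actual procedure was cut-and-paste.)

sed 's|^/dev/hda|/dev/hdd|' /new/usr/local/etc/dumpdates \
    >> /usr/local/etc/dumpdates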
Sure enough, it worked. The system booted properly, all services
came up without problems, and it has been working flawlessly since then.
(Well, more or less flawlessly.)
For consistency, perhaps I ought to conclude with a moral that
summarizes what I learned from the experience. Certainly, the lesson I
learned (relearned, actually) is not to assume I know what the problem
is until I've actually fixed it. This is also a good maxim for
software, but it goes double for hardware, especially for me, since I
have much less hardware troubleshooting experience. But I can't think
of anything really pithy and original that conveys this. If you're
willing to forego originality, we could use "It ain't over until the
fat lady sings." But I'll understand if you're unwilling.
There's actually slightly more to the tale -- the subsequent
de-frankensteinification of my system. This took longer than expected,
partly because the guy from PCfE never called, but I wound up getting
the new power supply and swapping it for the Franko-monster without
incident. I also got a new CD-burner at
the same time.
So perhaps a good moral would be, "Choose your hardware supplier
carefully." Kudos to PCs
for Everyone for taking such good care of an old (and not
particularly rich) customer.
Another one bites the dust
Unfortunately, this power supply lasted slightly less than two years.
On 19 June 2004, the system crashed, wouldn't reboot, and couldn't even
be booted via the rescue CD (I had upgraded to SuSE 8.1 in the
meantime). At that point I wasn't sure what was wrong, but the next
morning
it wouldn't even power up -- no BIOS beeps or anything -- so I knew it
had to be a hardware problem. This time I was better prepared, as I had
bought a new desktop machine only a few months before, and was able to
put it into service over a period of several days. This involved
creating new partitions, restoring files from backup, and rebuilding,
installing, and configuring the necessary servers, so it was not
straightforward. Still, it wasn't as hard as it could have been; the
new system runs SuSE 9.0, which has such things as xntpd
pre-installed, and I had already been using the Apache 2 Web server on this machine
to test and preview my Web site. The biggest headache was getting mail
service to work again, as I had to recompile and install qmail
and ezmlm. Since then I have learned that RPMs are available
(at least for qmail), which might have made the job much
easier.

Once I had the spare system in place, I was in much less of a hurry
to get the old system fixed, so I didn't take it in for servicing
until about a month later. The previous winter, two older systems at
work had failed with what appeared to be the same symptoms, and both
turned out to be due to bad processor chips; the CPUs had fried
themselves, and one had also taken out its motherboard. Based on this
experience, I was expecting a repair bill of at least a hundred
dollars, so I was relieved to learn that it was only the stupid power
supply (again), and the bill only came to $51.45. But I was lucky
they had the right unit; as it was, I just managed to snag the last
one that PCs for Everyone had in stock. It seems that they had
recently decided to discontinue this model on account of
unreliability.

So if the new power supply is similarly short-lived, I can look
forward to another such failure in the summer of 2006, at which time I
may have to get rid of the machine. But even if so, I will have
gotten my money's worth; the system will have lasted more than seven
years in that case, and even with the cost of repairs and the monitor
thrown in, it works out to about $12/month, which is pretty darn good.
My high-speed Internet connection costs four times as much.
Bob Rogers
<rogers@rgrjr.com>