I think there is some confusion here about PCI slots.
Your normal PCI slots are 33 MHz and 32-bit. The two 66 MHz slots on the K7D are 66 MHz and 64-bit. Now, if your PCI card is normal length but has two keys cut out, it is 66 MHz (or 66 MHz compatible). If it only has one key cut out, it is just 33 MHz.
If your card is extra long (longer than a regular PCI card), then it is 64-bit. That extra connector length is where the extra 32 bits come from. (If you have a 66 MHz, 64-bit PCI card, it will have three cutouts.)
Not all 66 MHz cards are 64-bit, and not all 64-bit cards are 66 MHz.
You can very well have 66 MHz, 32-bit PCI cards, and you can also have 33 MHz, 64-bit PCI cards. My SCSI card is 33 MHz (but 66 MHz slot compatible) and 64-bit. My old RAID card was 66 MHz and 32-bit.
Here's the bandwidth lowdown.
-The peak bandwidth of a "normal" 33 MHz, 32-bit PCI slot is 133 MB/s.
-A 33 MHz, 64-bit PCI device has a peak bandwidth of 266 MB/s.
-A 66 MHz, 32-bit PCI device has a peak bandwidth of 266 MB/s.
-A 66 MHz, 64-bit PCI device has a peak bandwidth of 533 MB/s.
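Those numbers are just clock rate times bus width in bytes. A quick sketch in Python (the odd-looking 133/266/533 figures fall out of the true PCI clocks being 33.33 and 66.66 MHz):

```python
def pci_peak_bandwidth_mb_s(clock_mhz: float, width_bits: int) -> int:
    """Peak PCI bandwidth in MB/s: one transfer per clock, width/8 bytes each."""
    return int(clock_mhz * width_bits / 8)

# The four combinations from the list above (true PCI clocks: 33.33 / 66.66 MHz)
combos = [
    ("33 MHz, 32-bit", 33.33, 32),
    ("33 MHz, 64-bit", 33.33, 64),
    ("66 MHz, 32-bit", 66.66, 32),
    ("66 MHz, 64-bit", 66.66, 64),
]
for label, clock, width in combos:
    print(f"{label}: {pci_peak_bandwidth_mb_s(clock, width)} MB/s")
```

Doubling either the clock or the width doubles the peak, which is why 33 MHz/64-bit and 66 MHz/32-bit land on the same 266 MB/s.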
Now, the MPX chipset has this behavior with its 66 MHz, 64-bit PCI slots.
If you run cards in both of the 66 MHz, 64-bit slots and one is a 66 MHz card but the other is only 33 MHz (66 MHz compatible), the whole bus slows down to 33 MHz. Generally this isn't a big deal, but if you put a 33 MHz (but 66 MHz compatible) NIC or something in one of those slots, it will cut the bandwidth of the other device in half, if that other device is a true 66 MHz device.
If, on the other hand, one card is 64-bit and the other is 32-bit, the bus does not slow down. The clock is shared, but the bus width is negotiated per device, so the 64-bit device gets its normal bandwidth and the 32-bit device gets its normal bandwidth.
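The rule above can be sketched as a little model: the shared bus clock drops to the slowest card's clock, while width stays per-device. This is a hypothetical illustration of the MPX behavior described here, not chipset documentation:

```python
def effective_bandwidths(cards):
    """cards: list of (clock_mhz, width_bits) tuples sharing one PCI bus.

    Model of the MPX 66 MHz bus behavior described above: the clock is
    shared (drops to the slowest card), the width is negotiated per device.
    Returns each card's effective peak bandwidth in MB/s.
    """
    bus_clock = min(clock for clock, _ in cards)
    return [int(bus_clock * width / 8) for _, width in cards]

# A 66 MHz, 64-bit card next to a 33 MHz (66 MHz-compatible), 32-bit card:
# the bus drops to 33 MHz, so the fast card is cut to 266 MB/s.
print(effective_bandwidths([(66.66, 64), (33.33, 32)]))  # [266, 133]

# A 66 MHz, 64-bit card next to a 66 MHz, 32-bit card: no slowdown,
# each keeps its normal bandwidth.
print(effective_bandwidths([(66.66, 64), (66.66, 32)]))  # [533, 266]
```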
One last thing:
The 33 MHz PCI bus serving those "normal" PCI slots below the 66 MHz, 64-bit slots has a known issue.
This bus (totally separate from the 66 MHz bus) has a latency problem that strangles its peak bandwidth. These slots SHOULD be able to reach 133 MB/s, but they don't; the latency issue chokes them down to about 40-60 MB/s. For most devices (sound cards, NICs, TV cards, etc.) this doesn't hurt anything, because they never come close to the peak bandwidth anyway. For disk controllers (SCSI, RAID, etc.), however, it's a very bad issue. Hard disks actually reach 100 MB/s peak transfer rates or higher, so this latency problem really hurts their performance. RAID controllers reach past 100-133 MB/s in sustained transfer rate, and choking that down to half ruins the whole point of running RAID.
You don't want to EVER run ANY kind of disk controller in the 33 MHz PCI slots. Run disk controllers in the 66 MHz slots, even if they are only 33 MHz (but 66 MHz compatible) cards; otherwise your performance will be awful.