
Overclocking Sandbox: Tbred B DLT3C 1700+ and Beyond

I'd go ahead and set the voltages the same as the first unless there are cooling issues with the first rig. Then after the OS is installed, set it to 75% of the first rig's OC. Ramp it up after that if all is well.
 
c627627 said:
I got one for you hitech man:

Let's say you buy two identical systems. Same CPU codes, etc...

Your first system gradually reaches such and such an overclock, stability tested.


What do you do with the second system from the get-go, I mean in what increments, considering you know approximately what the first one did?

Do you boot at stock at all, or do you jump straight to some FSB and voltage, since the system is identical to the first one? Let's say it's a PC2100 RAM situation and let's say an 1800+ B. Let's say 152 FSB max with a 15 multiplier, Vdimm up one notch, 1.8 volts Vcore...

How many reboots in what increments to approach the first system's overclock?

Before a scientific or engineering discovery, it sometimes takes 10, 20 years, or more to figure certain things out. But once something is known, even without complete details, it might take others a relatively short time (maybe only days) to confirm and improve on it.

Scaled down to specific problems like optimizing computers, once some data points are known, such as how high a CPU or motherboard can be clocked, or about what mod can be done, experienced people would be able to replicate the results quickly, by skipping many steps.

So back to your question above about two identical systems:
- Once one has some data points to chew on, the time to optimize the second system should be much shorter. It does not matter whether the knowledge comes from the first system or from some other source (such as the internet).
- I would say one would not have to follow all the steps used to set up the first one or some other system, since there are data points and know-how.
- Experienced people should be able to just start with stock settings, as a sanity check.
- Then jump to the vicinity of the already established settings (from the first system or from other sources), and narrow down the search to finalize the setup.

Since each additional electronic component does not behave exactly the same (as the first one), the process becomes statistical. Even with the same stepping, the same magic code, …, one would not, in general, get exactly the same results for voltage, MHz, temperature, …. There is only a certain probability of matching the results of the first one. Also, the probability would follow the composite probability distribution of the individual components, NOT the behavior of the first system built, even if all the settings are exactly the same as the first one's.

An example that seems to confirm this is the NF7-S rev 2.0, one of the most popular boards for FSB overclocking. Two such motherboards obtained from the same vendor, within a few days, maybe even in the same order, can be completely different in terms of highest FSB, stability characteristics, behavior with respect to bios settings, …. I have posted about how tricky that motherboard is to overclock.


-- to be continued, next post --
 
What is the probability of achieving a certain level of overclocking

Components and the composite system follow a normal distribution. Each component centers around a mean and has a certain standard deviation (or sigma). In a normal distribution, 84.1% of the population is below +1 sigma, 97.7% is below +2 sigma, 99.9% is below +3 sigma, and about 90% is below +1.3 sigma.

(1) Looking at the poll about the Tbred B 1700+ DLT3C in this forum, even though there are many unknowns (or randomness, including voltage, cooling, skill, degree of overclocking, margin of error, ...) about how each number was obtained, the 360+ data points show something. As of today (10/18/2003), the distribution centers within the interval 2300-2400 MHz.
...
Probability for 2101 – 2200 MHz = 11.36%
Probability for 2201 – 2300 MHz = 14.68%
Probability for 2301 – 2400 MHz = 19.94%
Probability for 2401 – 2500 MHz = 18.28%
Probability above 2501 MHz = 11.91 %.

Approximately, it shows that the 1700+ DLT3C max overclocking frequency centers around 2350 MHz, with a sigma of 125 MHz. Assuming a normal distribution, 84.1% of the chips would be below 2475 MHz.
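The percentile figures above can be checked against the standard normal CDF. A minimal Python sketch, plugging in the mean (2350 MHz) and sigma (125 MHz) estimated from the poll:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X < x) for a normal distribution with mean mu and std dev sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 2350.0, 125.0  # estimates from the poll data above

# 84.1% of chips below mean + 1 sigma = 2475 MHz
print(round(normal_cdf(mu + sigma, mu, sigma), 3))       # 0.841
# 97.7% below +2 sigma, 99.9% below +3 sigma
print(round(normal_cdf(mu + 2 * sigma, mu, sigma), 3))   # 0.977
print(round(normal_cdf(mu + 3 * sigma, mu, sigma), 3))   # 0.999
```

The same function gives the probability for any bin, e.g. `normal_cdf(2400, mu, sigma) - normal_cdf(2300, mu, sigma)` for the 2301-2400 MHz interval.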


(2) Usually, on average, the Barton does about 100 MHz less (older Bartons are 100-150 MHz less) compared to the DLT3C 1700/1800, so I would extend the distribution for the Barton to:

The distribution centers within the interval 2200-2300 MHz.
...
Probability for 2001 – 2100 MHz = 11.36%
Probability for 2101 – 2200 MHz = 14.68%
Probability for 2201 – 2300 MHz = 19.94%
Probability for 2301 – 2400 MHz = 18.28%
Probability above 2401 MHz = 11.91 %.

Approximately, it means the Barton max overclocking frequency centers around 2250 MHz, with a sigma of 125 MHz. Assuming a normal distribution, 84.1% of the chips would be below 2375 MHz.


(3) We can do a similar thing for the other major components for overclocking:
- Max FSB for motherboard, e.g. distribution for 190, 200, 210, 220, 230, … MHz
- Max overclocking frequency for memory, …


With this information, we can calculate the probability for a system, assuming the overclocking behavior of each component is independent (at least to first order).
Prob(CPU, FSB, memory) = Prob(CPU above x MHz) Prob(FSB above y MHz) Prob(memory above y MHz)

E.g. for NF7-S rev 2.0, with certain bios setting, voltage setting, …
Prob(motherboard, above 200) = 99% (assume 1% may not do 200 MHz !!!)
Prob(motherboard, above 210) = 85%
Prob(motherboard, above 220) = 50%
Prob(motherboard, above 230) = 15%
Assume the same distribution for PC3200 memory.

Then the probability for a system with CPU above 2400 MHz, FSB and memory above 220 MHz is
Prob(CPU, FSB, memory)
= Prob(above 2400 MHz, above 220 MHz, above 220 MHz)
= (0.18+0.12) (0.50) (0.50)
= 0.075 = 7.5%

If the components follow the above distributions, there is only a 7.5% chance that the second system can do 2400 MHz for the CPU and 220 MHz for the FSB and memory. That is, the second system follows the composite distribution of the components, not the first system.
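The arithmetic above, as a minimal Python sketch. The component probabilities are the example numbers from this post, not measured data:

```python
# Composite probability for the example system above, assuming the
# overclocking behavior of each component is independent.
p_cpu_above_2400 = 0.1828 + 0.1191  # top two CPU bins from the poll, ~0.30
p_fsb_above_220 = 0.50              # example NF7-S distribution above
p_mem_above_220 = 0.50              # assumed same distribution for PC3200

p_system = p_cpu_above_2400 * p_fsb_above_220 * p_mem_above_220
print(round(p_system, 3))  # 0.075, i.e. about a 7.5% chance
```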

If one can establish a higher probability of achieving a certain level of overclocking for some component types or steppings, ..., then the above probabilities for some components (e.g. some CPU stepping) would be higher, and the results would be slightly different (higher). But still, a second system would follow the composite distribution of the better components (of higher probability for performance), not the first one.
 
What is the channel length of a MOS transistor

There are two key dimensions in the (MOS) transistors of a chip (CPU), namely the transistor width and the transistor channel length. The 180 nm, 130 nm, 90 nm, 65 nm, ... commonly mentioned are the nominal design lengths of the smallest transistors of a technology. The design length is the length used by chip designers to describe each individual transistor when making the masks for manufacturing. There is another length, called the effective length, which is shorter than the design length (to be discussed later); it is the actual, physical distance the electrons (or holes) in a transistor have to travel in order to switch the transistor ON and OFF.

Channel length (either in terms of design length or effective length) is a key metric describing a (lithographic) generation of silicon technology, from which (among other parameters, such as gate oxide thickness, transistor threshold voltages, nominal supply voltage, ...) transistor delay, clock frequency, device density, ON and leakage current density, power density, … can be derived. Almost all transistors in a given technology are made with the smallest length to take advantage of the speed offered by the technology, since the smaller the transistor channel length, the higher the clock can be.

Inside a chip, each transistor and wire (among other things) is described by a set of rectangles for manufacturing. E.g. a FET (field effect transistor) is basically represented by two rectangles, one for diffusion (D) and one for polysilicon (P), plus some additional rectangles to define the well. The intersection of D and P defines a transistor gate, under which is the channel region of the transistor, where electrons (or holes) move to turn the transistor ON and OFF, controlled by the gate-to-source and drain-to-source voltages. The length of P is the channel length, whereas the width of P is the width of the transistor. The two ends of the D rectangle outside of P define the source and drain of the transistor, respectively. The shorter the length of P, the faster the transistor can switch and the faster the clock of the chip can be.

Impurities have to be implanted into the source and drain diffusion regions (as described above). Due to the small size (sub-micron, 1 micron = 1000 nanometers (nm), or 1 meter = 1,000,000,000 nanometers) of the current state of silicon processes, during manufacturing the impurities diffuse into the adjacent channel region under the transistor gate. The impurity is conductive and hence effectively reduces the length of the channel region of the transistor. That means the actual effective length is smaller; the electrons (or holes) need only move a distance shorter than the design length. So the transistors can actually switch faster and be clocked faster. For sub-micron technology, such reduction is very significant, about 25-40% of the design length, as published; e.g. for 90 nm technology, the effective channel length seen is around 50-70 nm.
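A quick sketch of that reduction in Python; the 25-40% range and the 90 nm design length are the figures quoted above, used here purely as illustration, not from any particular process spec:

```python
# Effective channel length after lateral dopant diffusion shortens the
# channel by 25-40% of the design length (illustrative figures from above).
design_length_nm = 90.0

for reduction in (0.25, 0.40):
    l_eff = design_length_nm * (1.0 - reduction)
    print(f"{reduction:.0%} reduction -> L_eff = {l_eff:.1f} nm")
# 67.5 nm and 54.0 nm, consistent with the ~50-70 nm quoted for 90 nm
```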


What is the gate breakdown voltage

MOSFETs, which make up most of a CPU and of CMOS chips, have very high gate impedance, since the gate is insulated from the underlying source-drain channel of a FET, and the gate itself does NOT conduct current (except for the orders-of-magnitude smaller tunneling current due to quantum effects) even when the gate voltage is above the threshold voltage (about 0.2 – 0.3 V for 0.18, 0.13 and even 0.09 micron regular FETs). So applying 1-2 V to the gate per se (at least up to 0.13 micron chips) should not be a concern.

MOSFETs have a so-called gate breakdown voltage. The gate of a FET transistor is insulated from the underlying source-drain channel by a thin layer of silicon dioxide (SiO2), forming a MOS capacitor. The oxide is very thin (on the order of 20-30 A for 90 and 130 nm, and getting thinner with each generation, approaching a thickness of 10 atoms or so). So when a high enough voltage is applied to the FET gate, the intense electric field (electric_field = voltage/oxide_thickness) may damage the gate dielectric (dielectric breakdown). I estimated that voltage to be somewhere between 2 and 3 V, depending on the oxide thickness. So if a DMM or analog multimeter has a high open-circuit voltage, it can indeed potentially damage FETs.
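The estimate follows directly from E = V / t_ox. A small sketch; the 2.5 V and 25 A inputs are mid-range picks from the figures above, and ~10 MV/cm is the commonly cited breakdown field of SiO2:

```python
# Electric field across the gate oxide: E = V / t_ox.
def field_mv_per_cm(volts, t_ox_angstrom):
    t_ox_cm = t_ox_angstrom * 1e-8  # 1 angstrom = 1e-8 cm
    return volts / t_ox_cm / 1e6    # convert V/cm to MV/cm

# 2.5 V across a 25 A oxide:
print(round(field_mv_per_cm(2.5, 25.0), 1))  # 10.0 (MV/cm)
```

So 2-3 V across a 20-30 A oxide is right at the ~10 MV/cm ballpark where dielectric breakdown becomes a risk, which is why the 2-3 V estimate comes out where it does.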

For commercial chips, the internal FETs are well insulated from the external package pins. A package pin should not be directly connected to the internal, smaller transistors without going through some stage of I/O buffers, which can also stand a higher voltage (e.g. via a thicker oxide) than the internal small and fast transistors. There are also protective diodes at each package pin to minimize damage due to electrostatic discharge, ….
 
Enermax 92mm Fan important discussion:
hitechjb1:
"For some motherboards, if the auto fan speed protection is on, the motherboard would not start if the fan is OFF or the speed is below certain RPM. Hence in your case, the system did not boot."

c627627:
What happens on your mobo with the 92mm Enermax Fan at default settings? I gather that if "auto fan speed protection is off," then my nForce2 Epox 8RDA+ would not be affected by this fan like it is now when it's set for low spin which shuts down the system. Is there another name for "auto fan speed protection" because I can't seem to find it in BIOS...

hitechjb1:
"Have you looked at the CPU health section (or something like that) of the bios, which shows Vcore, temperature, ....? Mine has auto shutdown if the fan is off or the CPU is above a certain temperature. When I first put my Enermax 92 mm fan on, it did not turn, but the motherboard was still booting for 5 - 10 sec until I rushed to shut the computer down. The fan defaulted to OFF or low, very scary. Maybe some bios have built-in shutdown also. Anyway, I am going to post some results on 4 fans, ... see my AMD post. I found that the Enermax 92 mm fan is useless for overclocking, not enough CFM, ..., will post detailed results later."

c627627:
Sorry, the fan disables the system when it's connected to any of the three fan connectors on the mobo. It only works at the max setting. If the computer is on and while it's on if I try to turn down the fan speed just a little - instant shutdown. So "auto fan speed protection" affects all three fan connectors? Shouldn't it only apply to CPU fan connector?

hitechjb1:
It looks like the shutdown may not be due to fan shutdown protection. "It is due to CPU temp protection. When the fan speed is turned down, the temp rises quickly above the temp protection limit and triggers the shutdown. Forget that Enermax 92 mm fan, it is not good for overclocking, not enough CFM."

c627627:
Thank you for this reply. I couldn't find anything in the BIOS either. I'd like to talk about this. You believe it's the auto shutdown kicking in. But the mobo can't start at all with the fan set on low. It doesn't have time to heat up. Did you try turning off the auto shutdown feature? What if the mobo doesn't start even when the auto shutdown feature is off? Would that not mean it's not the temperature?

I'd like to find out the reason for this. Please make this a part of your fan testing results.

This fan is quieter than 80mm fans and it does seem to keep my temps at 50 C at 1.85 volts and FSB near 200, so at $5.99, given the low noise level, shouldn't it be dismissed by extreme overclockers only?

It may get thumbs down because of the way it affects the system...
 
Here's what I've got running now:

Abit NF7-S v2.0
AMD Barton AXDA2500DKV4D AQXEA 0331XPMW
Thermalright SLK947U & Zalman ZM-F2 92mm fan
Kingston HyperX KHX3200A 512Mb DDR400

I'm not really overclocking it now (technically yes) but it's 99% sure to be an XP2500 rebadged XP3200 core.

Default vcore 1.6v/200fsb/2200MHz 25c sys/42c CPU @ 100% SSE FAH gromacs load 24/7.

Kingston HyperX 2.5-3-3-8 set at most stable optimum setting by default.

CPU interface disabled by default.

My fan is a 92mm 60+ CFM 36dB that has a lot of overhang air spillage on the VRM MOSFETs/coils/caps & ICs around the ZIF socket. The system temp sensor is obviously getting a good blast of air too, as it's only a few degrees above ambient.

I haven't even started overclocking yet as I just selected the XP3200 Barton Choice in the BIOS v18.

This seems to be an amazing CPU so far. Maybe this will be my ticket to 2500MHz on air.

After a couple of weeks of proven stable operation running FAH Gromacs with SSE, I'll start pushing it a little at a time.
 
Testing 4 fans w/ a SLK-947U on a Tbred B 1700+ DLT3C

The following four popular fans were tested for overclocking
- a Tbred B 1700+ DLT3C WPMW 0310
- to 2.57 GHz (223 x 11.5) @ 1.95 V (Prime95 stable)
- NF7-S rev 2.0 motherboard (Vdd 1.7 V)
- 2x512 MB 3500 memory @ 6-3-3-2 (Vdimm 2.9 V)
- SLK-947U

The four fans are
1. 92 mm Vantec Tornado, 119 CFM @ 4800 rpm, 56.4 dBA
2. 80 mm Vantec Tornado, 84.1 CFM @ 5700 rpm, 55.2 dBA
3. 80 mm TT SF II, var speed, max at 75.7 CFM @ 4800 rpm, 48 dBA
4. Enermax 92x25 mm^2 (UC-9FAB), var speed, max at 64.15 CFM @ 2800 rpm, 34.3 dBA

Test Results

fan.................................CPU frequency…......load temp (CPU/sys)……..fan speed reported
Vantec 92 mm Tornado.......2.57 GHz.….............45 / 16 C………....…....………4821 rpm
Vantec 80 mm Tornado………2.57 GHz.….............37-40 / 15 C……......…………5444 rpm
TT SF II 80 mm….......……….2.57 GHz.….............48 / 18 C…………….......…....4561 rpm
Enermax 92 mm………...……….2.34 GHz (1.875V)...48.5 / 24 C…....................2377 rpm


Observations:

The Vantec 92 mm Tornado did NOT perform as well as the 80 mm Tornado, and not because of the fan noise: its CPU temperature is about 7 C worse (tried a few times, with the system turned off in between)!! That means, under the 80 mm Tornado, the CPU should be able to go higher (above 37 C), provided the CPU can take the higher Vcore.

The larger 92 mm diameter and higher CFM are not useful. From my measurement, both the 80 mm and the 92 mm Tornado have the same size of dead spot at the center. The 92 mm is rated 119 CFM compared to 84.1 CFM for the 80 mm, so my conclusion is that the 119 CFM air flow is wasted over its larger 92 mm diameter and not concentrated over the CPU area to cool the CPU.

The Enermax 92 mm performed poorly for overclocking, even though it is the quietest among them. First, its fan speed is only around 2400 rpm (knob at max speed), not 2800 rpm as specified. Hence there is not enough CFM to cool the CPU for overclocking. Its highest stable CPU frequency is 200+ MHz below the other three fans.

Similar to the 92 mm Tornado, its high 64 CFM @ 2800 rpm did not help compared to the TT SFII. It was working more like a 30-40 CFM fan, due to below-spec rpm and wasted air flow.

Further, the speed knob came set by default to the lowest speed, and the fan did not even turn (due to friction, I think) when the system was powered up. The motherboard booted without a fan for 5 – 10 sec and I had to rush to shut down the machine. The fan-fail protection of the bios apparently did not kick in either (reason unknown).

When the fan speed is at LOW, I have to spin the fan blades or tap the fan a few times to get it to overcome friction in order to turn, very unfriendly and unreliable at the LOW setting. When the speed is at LOW, I measured the fan with an ohmmeter; it is not OFF. So it just does not start, due to friction.

Conclusion:

From the test result, among these four fans, I would use either

- The 80 mm Tornado for maximum CPU overclocking (at least for testing, or add a fan speed control), or

- The TT SF II for 24 / 7 usage, with speed adjusted to an acceptable noise level. It achieves around 50 MHz less than the 80 mm Tornado when pushing for the final MHz.

The 119 CFM 92 mm Tornado performs way below the 84.1 CFM 80 mm Tornado; the high CFM of the 92 mm is wasted. My explanation is that the air flow is not concentrated onto the CPU, due to the fan's larger size overhanging the narrower heat sink (in one direction). That is, its CFM efficiency is low compared to the 80 mm Tornado; as a result, I think its effective CFM over the CPU area is smaller than that of its 80 mm counterpart.

The Enermax 92 mm does not have enough CFM for the last 10% of overclocking, losing 200+ MHz. Due to its unreliable low speed (the one tested did NOT start at the low setting when the system powered on; neither did c627627's 92 mm Enermax), I do not recommend it for cooling a CPU. Even when aiming for quietness, its actual CFM per dBA is not as usable as the TT SFII's at various speed settings or when turned down to lower speeds.


The tests were not done to their full extent, such as testing at various speeds, measuring noise, ...; the main goal was to see how the fans perform under full load.

The main surprise of the tests is that the bigger 92 mm fans, even though they have more rated CFM, lose CFM efficiency due to their larger size and larger overhang over the heat sink. The air flow spreads out and is not as concentrated over the CPU die area as that from an 80 mm fan. That is, the actual amount of air flow (CFM) over the CPU may be even less than that from an 80 mm fan.

I think this reduced CFM efficiency for 92 mm fans will extrapolate to slower fan speeds, even though I don't have test results to support it at this point. I conjecture that 92 mm fans (of similar type to the 92 mm fans tested) will not perform as well as 80 mm fans at slower fan speeds with the same noise level, due to loss of air flow for the bigger fans. I hope others can confirm or disprove that.


So for the SLK-900U or SLK-947U HS, which can take 92 mm fans, I would stay with 80 mm fans, as the two 92 mm fans did not perform well at all.


Comments are welcome, confirming or otherwise.
 
--- to continue ----

The next few posts show screen shots of prime95 runs using the different fans.
 
Vantec Tornado 92 mm

TbredB_1700_DLT3C_223_2567_SLK947U_tornado92_c.JPG
 
seems like something isn't right between the tt sf2 and the 80mm vantec in that you're getting an extra 11C of cooling for only 9cfm difference.

Is ambient the same for each? 15C for sys temp is damn cold.
 
My fan's overhang seems to be cooling the mobo components (NB included) as well as the CPU. With the smaller hub, the 80mm might just be able to push more air down those fin spaces better than the 92mm.

Once I try overclocking, I'm sure my sys/cpu temp spread will be greater than 17c like it is now. That 80mm Tornado has a very tight 22c spread for such a high overclock. Only con is the shopvac spl levels you have to endure. My 2575 RPM fan is almost silent but not as effective as your Tornado.
 
johnoh said:
seems like something isn't right between the tt sf2 and the 80mm vantec in that you're getting an extra 11C of cooling for only 9cfm difference.

Is ambient the same for each? 15C for sys temp is damn cold.

The measurements of the Vantec 80mm and TT SF II were done within two hours, with no full temperature control of the ambient. I also redid the run; the Vantec Tornado 80 mm reading can vary by a few C, between 37-40 C, depending on the ambient, which can itself vary by a few C.

So the spread between the TT SFII and the Tornado 80 mm can be between 8 - 11 C.


Looking back at some of the screen shots in this thread (pages 1 and 2), I got these numbers under Prime95 for both fans with the same heat sink, an SK7, and roughly the same system ambient:

- TT SF II + SK7, 4891 rpm, 2.52 GHz, 48 / 28 C (A7N8X-DLX sensor)
- Tornado 80 mm + SK7, 5625 rpm, 2.54 GHz, 42 / 27 C (A7N8X-DLX sensor)

This shows a 6 C spread, but the CPU is running 20 MHz lower for the TT SF II, and 20 MHz is about 2 C in temperature. So the spread is again 8 C between the 80 mm Tornado and the TT SF II.

Also another data point:
- Tornado 80 mm + SLK800U, 5113 rpm, 2.6 GHz, 38 / 27 C (A7N8X-DLX sensor)
 
So would you say perhaps the sf2 75 cfm is overstated? Seems like the 50% higher fins of the vantec would give it more than an 83cfm vs 75cfm advantage, especially given the rpm difference.
 
johnoh said:
So would you say perhaps the sf2 75 cfm is overstated? Seems like the 50% higher fins of the vantec would give it more than an 83cfm vs 75cfm advantage, especially given the rpm difference.

The CFM specifications such as
- 119 CFM for the Tornado 92 mm
- 84.1 CFM for the Tornado 80 mm
- 75.7 CFM for the TT SFII
are air flows measured when the fans are operating in open air, with both sides of the fan in open air space, i.e. the pressure on both sides of the fan is the same, a zero pressure difference. The air flow (rate) in CFM is maximum at that operating point, and that number is quoted for the fan specification.

When the fans are mounted over a heat sink, or over a heater core or radiator (as in water cooling), or an air duct, the pressures on the two sides of the fan become different. Using a heat sink as an example, the actual air flow (rate) through the heat sink is determined by the intersection point of
- the pressure vs flow characteristic of the fan and
- the load characteristic (pressure vs flow) of the heat sink
Since87 had some good pictures showing how a fan and its load interact in the pressure vs flow rate characteristics
Since87 said:

...

fancurves.gif


[Ignore the blue line. I added the blue line to indicate the pressure drop vs flowrate curve for a heatercore in another post.]


The flow resistance of the load (heat sink) is a function of the static pressure (exerted by the air from the fan) and the air flow. The intersection of the two curves determines the operating point of a particular fan and heat sink; it gives the static pressure at the interface boundary of the fan and heat sink, and the actual air flow, which is much lower than the open-air flow rate.
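A minimal numeric sketch of finding that operating point. Both curve shapes and all numbers here are made-up illustrations, not measured fan or heat sink data:

```python
# Operating point = intersection of a (hypothetical) fan curve and a
# (hypothetical) heat sink load curve, found by bisection.

def fan_pressure(q, p_max=5.0, q_max=84.0):
    """Hypothetical fan curve: static pressure falls as flow rises."""
    return p_max * (1.0 - (q / q_max) ** 2)

def sink_pressure_drop(q, k=0.003):
    """Hypothetical heat sink load curve: pressure drop grows ~ k * Q^2."""
    return k * q * q

# fan_pressure - sink_pressure_drop falls from +p_max at Q = 0 to negative
# at Q = q_max, so bisection finds the single crossing.
lo, hi = 0.0, 84.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if fan_pressure(mid) > sink_pressure_drop(mid):
        lo = mid
    else:
        hi = mid
q_op = 0.5 * (lo + hi)

print(f"operating point ~ {q_op:.1f} CFM")  # well below the 84 CFM open-air rating
```

With these made-up curves the fan delivers less than half its rated open-air CFM through the sink, which is the qualitative point of the discussion above.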

The actual pressure profile over the fins of the heat sink exerted by the fan (and vice versa) determines how much air flow (rate) goes through the heat sink to cool the CPU. The pressure difference between the front and back of the fan is the same as the static pressure difference between the fan-heat sink air interface and the atmosphere.

The Tornado 80 mm fan has an open-air flow (rate) about 8 CFM (= 84 - 75.9) higher than the TT SFII, as you pointed out. But it is conceivable that the Tornado, having a 38 mm casing and actually longer spacing (~ 10 mm) between the fan blades and the heat sink, ends up with a different load characteristic of pressure vs flow (i.e. more flow per unit pressure). Hence for the same heat sink, the Tornado 80 mm operates at a higher pressure and an effective flow-rate advantage much greater than 8 CFM, percentage-wise.
 
That makes sense to me. Especially since the 25mm high sf2 will result in a higher flow resistance since the meat of its fins is about 33% closer to the sink.

But even in open air, I have a hard time believing a 10% cfm difference between a 25mm fan versus a 38mm fan operating at 20% higher rpms.

So aside from your work (nice work here btw) showing that an undervolted tornado may be the best fan for normal use (I couldn't handle the volume of it at 5000+ rpm), I think it shows that tt may be overstating their case.
 
johnoh said:
That makes sense to me. Especially since the 25mm high sf2 will result in a higher flow resistance since the meat of its fins is about 33% closer to the sink.

But even in open air, I have a hard time believing a 10% cfm difference between a 25mm fan versus a 38mm fan operating at 20% higher rpms.

So aside from your work (nice work here btw) showing that an undervolted tornado may be the best fan for normal use (I couldn't handle the volume of it at 5000+ rpm), I think it shows that tt may be overstating their case.

I did some estimates, and they show that it is possible for the Tornado 80 mm to have 10% more CFM than the TT SFII while spinning at 20% higher RPM in open air. The reason is that the TT SFII has a larger effective fan area, or equivalently, the Tornado has more wasted, dead area.

Rough measurement:

Fan.................Outer fan diameter (OD) ..........Inner fan diameter (ID)
Tornado 80 mm...........73 mm ............................42 mm
TT SF II ...................72 mm ............................37 mm
RPM .........................5700 ..............................4800

area = (pi/4) (OD^2 - ID^2) = (pi/4) (OD + ID) (OD - ID)
CFM = constant (area) (RPM)

So
area_Tornado_80mm / area_TT_SFII = [(73+42)(73-42)] / [(72+37)(72-37)] = 0.934

CFM_Tornado_80mm / CFM_TT_SFII = 0.934 (5700) / 4800 = 1.11

That is, the Tornado 80mm has 11% more CFM than the TT SFII in open air (agreeing with the fan specs, 84 CFM vs 75.9 CFM, also an 11% difference).
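The estimate above, as a small Python sketch. The diameters and RPMs are the rough measurements from the table; the CFM ∝ area × RPM model is the same first-order assumption used in the post:

```python
import math

def annulus_area(od_mm, id_mm):
    """Swept (annulus) area between outer diameter and hub diameter, mm^2."""
    return math.pi / 4.0 * (od_mm ** 2 - id_mm ** 2)

# Rough measurements from the table above.
area_ratio = annulus_area(73, 42) / annulus_area(72, 37)  # Tornado 80mm / TT SFII
cfm_ratio = area_ratio * 5700.0 / 4800.0                  # CFM ~ area * RPM

print(round(area_ratio, 3))  # 0.934
print(round(cfm_ratio, 2))   # 1.11, matching the ~11% spec difference
```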

Then, when they are put over a heat sink, the pressure/air-flow slope may be different for the two fans, in favor of the Tornado with its 38 mm duct compared to the 25 mm of the TT SFII. The effective CFM could be higher for the Tornado 80 mm at the operating point of the combined fan pressure-flow characteristic and heat sink flow resistance.
 
Does that suggest that if you put a 7mm spacer under the sf2 to decrease the sink's flow resistance to be about equal to the tornado's, the sf2 will substantially close your temperature gap between the sf2 and the tornado? At low speeds, having the fan right up against the sink would seem to help create pressure, which is good, but at high speeds the proximity of the blades to the sink would create flow resistance, which is bad.

btw the above area calculations do not appear to take into account the increased size of the tornado fins due to the 38mm fan height, but are two dimensional only. Shouldn't the blade height figure in somehow?
 