There are many kits that aren't widely available at retail. It's hard to find ICs that will run at DDR4-4333 settings. There are other Trident Z kits which are black/yellow and cost less. On most motherboards you won't be able to set more than ~4133 anyway (on some, not even 3600+), and anything past 3733 is way overpriced.
Personally I recommend Trident Z 3200 CL14/15 or 3600 CL16. Both should run at 4000/1.35V with relaxed timings.
I don't know what motherboard you have, but from ASRock only the OC Formula runs above 4000 without issues. ASUS has maybe 2-3 boards from the ROG series, but even the Hero/Extreme can't make much more than 4000. From MSI you can find a couple of models with guaranteed 4266. I'm not sure about Gigabyte, but most users have problems with them above 3200-3600.
The 4333 kit is simply designed for the OC Formula, which has shorter traces than regular boards and is more like an ITX board on a micro ATX PCB.
I was actually going to get an ASRock Z170M OC Formula board.
What's the sweet spot for DDR4 then, speed/timings wise?
There is more to it than that... also, the timings come from the XMP profiles on the stick AFAIK. Voltage to the sticks may be a factor, but most boards will run up to 1.8V, and a lot of DDR4 ICs don't scale much past that anyway... I'm thinking it's more in the BIOS and architecture of the board, really.

It is mostly the voltage capability of the board + the sub-timings.
When you are transmitting a bit pattern, you typically want less than 1 error per billion bits. Running all of the bit permutations creates an eye-diagram; for the link to be stable, the signal must always stay outside the eye-mask, which delimits the zone where fuzzy logic occurs and the receiver can't tell whether it is receiving a 0 or a 1. By raising the voltage, you expand the eye-diagram and allow more bits to fall outside the eye-mask, increasing stability. If your timings are too short, the rise-time of the signal won't be fast enough to reach a 1 state or fall to a 0 state, resulting in an undetermined state.
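A rough way to see why widening the eye helps: model the receiver decision as a threshold in additive Gaussian noise, so the bit error rate is just the Gaussian tail of the margin between threshold and signal level. A minimal Python sketch (the eye heights and noise level are invented illustrative numbers, not measured DDR4 values):

```python
import math

def q_function(x):
    """Tail probability of a standard normal: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bit_error_rate(eye_height_v, noise_sigma_v):
    """BER for a binary signal in additive Gaussian noise.
    The decision threshold sits mid-eye, so the margin is half the eye height."""
    return q_function((eye_height_v / 2) / noise_sigma_v)

# Hypothetical numbers: 25 mV of RMS noise, two different eye heights.
for eye_mv in (250, 320):
    ber = bit_error_rate(eye_mv / 1000, 0.025)
    print(f"eye height {eye_mv} mV -> BER ~ {ber:.2e}")
```

With these made-up numbers, only the larger eye clears the 1-error-per-billion-bits target, which is the same mechanism as raising VDIMM to expand the eye.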
I dunno how much firmware plays into it, really. Unless there is some sort of pre-emphasis built into the board to compensate for their specific board design.
Each board routes the traces between the memory controller and the DIMMs differently. Increasing trace length (dielectric/conductor losses, compensated with more voltage) and reflections from discontinuities (slower rise-times, compensated with looser sub-timings) will affect the signal integrity too.
For example, the speed of light is 300 mm/ns in air, and in a board made of FR4 it's going to be roughly 150 mm/ns. 4 GHz clocks are going to have a rise-time of roughly 0.09 ns, so the spatial extent of the rising/falling edge for the bits will be 13.5 mm. Any impedance discontinuity (turns/bends in the trace) of 1/10th that feature size will produce reflections that degrade the signal. At some point the "eye" in the eye-diagram will close and you will consistently fail to pass the eye-mask, which leads to instability.
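The arithmetic above can be checked directly; the FR4 speed comes from dividing the free-space speed by the square root of the dielectric constant (assumed here to be about 4):

```python
# Worked version of the numbers above. The FR4 permittivity is an
# assumption; the exact value varies with the laminate and frequency.
C_AIR_MM_PER_NS = 300.0                   # speed of light in air
ER_FR4 = 4.0                              # assumed relative permittivity of FR4
v_fr4 = C_AIR_MM_PER_NS / ER_FR4 ** 0.5   # ~150 mm/ns in the board

rise_time_ns = 0.09                       # assumed rise-time for a ~4 GHz clock
edge_length_mm = v_fr4 * rise_time_ns     # spatial extent of the edge on the trace
critical_feature_mm = edge_length_mm / 10 # 1/10th rule of thumb for discontinuities

print(f"edge occupies {edge_length_mm:.1f} mm of trace")         # 13.5 mm
print(f"features above ~{critical_feature_mm:.2f} mm matter")    # ~1.35 mm
```

So at these edge rates, even a bend or via on the order of a millimetre starts to count as a discontinuity.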
Firmware plays some role. There are plenty of instances where memory is clocking better with BIOS updates. Not sure why or how though.
What makes boards unable to handle faster memory speeds? It can't just be sub-timings, or is it?
Motherboard design. Generally, shorter traces = less interference = higher max clock, as the signal is more stable. You can see that 2-memory-slot motherboards like most ITX boards are overclocking better. One of the cheaper higher-series Z170 boards is the MSI Z170I Gaming Pro AC, which has no issues running at 4133, and the max on its tested memory list is 4266. This board held a world record some time ago and the first official DDR4-5000+. The ASRock OCFM was made in a similar way: it's like ITX-length traces with additional stability on a micro ATX board.
What also counts, and is probably more important, is how the BIOS is designed. Most motherboards have a BIOS tuned for stability, not max overclocking. That doesn't mean overclocking-series boards are less stable. Right now the best memory-overclocking boards are the OC Formula series from ASRock and the higher MSI Gaming and OC boards. All other brands can barely pass 4000. Even the highest ROG boards often have issues passing 3866 stable in dual channel.
Timings depend on board design but also on general system stability and the OS. It's weird, but you can't lower timings too much or Windows won't be able to address the memory without errors, even though the memory itself can be stable.
On Skylake, memory scales well up to at least DDR4-3600. However, in games etc. you won't see much difference above ~3200. Best is to buy a good 3200 CL15-15-15 kit and overclock it if you want higher frequency. Trident Z 2x8GB 3200 CL14-14-14 and 15-15-15 kits use the best Samsung ICs, which in most cases can handle 4000-4133 without issues.
Across ATX and ITX boards, how do they shorten the traces when the CPU-to-memory distance is fixed by the ATX/ITX standard (and by clearance for the Intel cooler)?
Also, I read ASUS has a customized trace layout for reduced crosstalk and coupling noise on the Z170-A. Does it do better than all the other boards at memory overclocking? What do they change in the BIOS for better memory overclocking, besides VCCIO and VCCSA?
Crosstalk and coupling noise are the same thing. Crosstalk is energy from one conductor coupling to an adjacent line through mutual inductance and/or mutual capacitance. This always occurs during a rising or falling edge because there is a dI / dt at that moment where the edge is located on the transmission line as it propagates. It is also sometimes referred to as switching noise. Crosstalk comes in two types, near-end crosstalk (NEXT) and far-end crosstalk (FEXT). The only difference being which direction the energy travels after it couples over to the adjacent conductor.
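To get a feel for the dI/dt dependence, here is a crude inductively-coupled crosstalk estimate (the mutual inductance and current swing are invented illustrative numbers, not values for any real board):

```python
# Rough inductive-crosstalk estimate: V_noise = Lm * dI/dt.
# All numbers are hypothetical, for illustration only.
L_MUTUAL_H = 1e-9        # assumed mutual inductance between adjacent traces (1 nH)
DELTA_I_A = 0.02         # assumed current swing on the aggressor line (20 mA)
RISE_TIME_S = 0.09e-9    # the ~0.09 ns edge rise-time from earlier in the thread

di_dt = DELTA_I_A / RISE_TIME_S   # current slew rate during the edge
v_noise = L_MUTUAL_H * di_dt      # voltage induced on the victim trace

print(f"induced noise ~ {v_noise * 1000:.0f} mV")
```

Note the rise-time in the denominator: a faster edge means more dI/dt, which is exactly why the coupling only shows up during rising and falling edges and gets worse as you push frequency.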
As Woomack pointed out, even if the length (or time delay) of the conductor (which really should be called a transmission line) is constant, there are other effects that cause signal degradation, namely reflections. Any time a signal sees an impedance discontinuity, part of the energy of the signal will reflect off that interface and travel in the backwards direction, and the signal travelling in the forward direction will be altered. So if there are additional resistances, capacitors, inductors, adjacent traces, changes in dielectric constant, bends, turns, vias, etc., those will introduce minor reflections which increase the noise in the transmission line, and it gets worse as the frequency goes up. This is because as we overclock and raise the frequency, the signal is able to "see" more imperfections in the transmission line. See my previous reply above on the spatial extent of the signal.
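The size of each reflection follows the standard reflection coefficient, Gamma = (ZL - Z0) / (ZL + Z0). A small sketch, assuming a 50-ohm trace for round numbers (actual DDR single-ended trace impedances vary by design):

```python
def reflection_coefficient(z_load, z_line=50.0):
    """Gamma = (ZL - Z0) / (ZL + Z0): the fraction of the incident
    voltage wave that bounces back at an impedance discontinuity."""
    return (z_load - z_line) / (z_load + z_line)

# Hypothetical discontinuities on a 50-ohm line. A perfect match (50 ohm)
# reflects nothing; any mismatch sends part of the wave backwards.
for z in (50.0, 60.0, 40.0, 75.0):
    print(f"Z = {z:5.1f} ohm -> Gamma = {reflection_coefficient(z):+.3f}")
```

Even a 10-ohm mismatch bounces back around 10% of the wave, and those small reflections are what pile up as noise when there are many discontinuities along the path.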
Another important point is that having fewer receivers (DIMMs) will result in a higher overclock. The transmitter or driver must be routed to all its receivers, and at the split there is an impedance discontinuity. Upon arrival at a receiver, the line must be perfectly matched to the characteristic impedance of the transmission line; if it's not, you get a reflection off the receiver which travels backwards. So the more branches we have, the more small reflections we expect to be propagating along the path before they dissipate from the losses of the transmission line.
A quick example is the step from DDR2 to DDR3. The main difference between the two technologies is the routing topology to the receivers. With DDR2, they were using a "star" topology, where the signal is split and branches out, and the branches are then individually routed to the receivers. The advantage here is that each receiver is the same length (or time-delay) from the transmitter, so the timing of when to expect to read the bits is the same for all receivers. The problems with this are the issues I mentioned above: the discontinuity at the split and the reflection from the non-ideal termination at the receiver. The reflections would then propagate backwards and go down the other paths, causing more noise the higher the frequency was raised (the impedance of the transmission line is not constant with frequency).
For DDR3, they switched over to a "daisy-chain" topology, where the line is tapped at a different point along the transmission line for each receiver. The advantage is that there is less noise from reflections propagating down the other paths. The disadvantage is that each receiver is a different length (or time-delay) from the transmitter, so the timing for reading a bit is different depending on which receiver is reading it. With some work you can get around that, though, by adding a built-in offset based on the difference in electrical length (which changes with frequency) from the first receiver to receiver n.
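That per-receiver offset idea can be sketched numerically. The tap positions below are invented for illustration, and the propagation speed is the rough FR4 figure from earlier in the thread:

```python
# Per-receiver timing offsets on a daisy-chain topology.
# Tap positions are hypothetical; 150 mm/ns is the rough FR4 speed.
V_FR4_MM_PER_NS = 150.0
tap_positions_mm = [40.0, 46.0, 52.0, 58.0]   # assumed trace length to each tap

delays_ns = [pos / V_FR4_MM_PER_NS for pos in tap_positions_mm]
offsets_ps = [(d - delays_ns[0]) * 1000 for d in delays_ns]  # offset vs first tap

for n, (d, off) in enumerate(zip(delays_ns, offsets_ps)):
    print(f"receiver {n}: flight time {d:.3f} ns, offset vs first {off:.0f} ps")
```

With these made-up positions, each receiver sits a few tens of picoseconds behind the previous one, which is the kind of skew the controller's built-in offsets have to absorb.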
How do they test or measure the reflection off the receiver that travels backwards? You would think there would be a set specification for all motherboards' memory traces with DDR3 or DDR4.