Why would 1.45V be healthy for the CPU part and unhealthy for the NB/GPU part? Is it simply because it's 23% more voltage than stock? Perhaps the reason the voltage was kept that low is for power consumption reasons, not longevity reasons. Don't forget that the Trinity series were designed primarily as mobile chips that had max 100W TDPs. What if 1.18V just happens to be the minimum voltage to be able to run DDR3-1600 and an 800MHz GPU core? As far as my testing has gone, it IS absolutely the minimum.
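To put the gap in numbers (a rough sketch; the 1.18v stock and 1.45v figures are from the post above, and the quadratic relation is the standard first-order CV²f dynamic-power model, not a measured result):

```python
# Rough first-order comparison of 1.45v vs the 1.18v stock NB/GPU voltage.
# Dynamic power scales roughly with V^2 at a fixed clock (classic C*V^2*f model).
stock_v = 1.18
pushed_v = 1.45

overvolt_pct = (pushed_v / stock_v - 1) * 100
power_pct = ((pushed_v / stock_v) ** 2 - 1) * 100

print(f"Overvolt: {overvolt_pct:.0f}%")     # ~23% more voltage than stock
print(f"Dynamic power: +{power_pct:.0f}%")  # ~51% more dynamic power at the same clock
```

So even before longevity enters the picture, that "23% more voltage" is roughly a 50% dynamic-power increase on a part originally sized around a 100W mobile budget.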
Let's take a look at the general behavior of AMD CPUs and Intel CPUs from the past:
AMD:
Phenom II X4/X6 (45nm): 1.5-1.6v on the CPU is fine on air/water. Go much above 1.35-1.4v CPU-NB, though, and degradation will occur in a matter of days/weeks/months. Set 1.6v CPU-NB on air and your chip will insta-pop dead.
Llano - Not much overclocking information on these, since overclocking them is fairly pointless: little gain in clockspeed, power consumption through the roof, etc.
However, Llano and Bulldozer use a very similarly structured CPU-NB:
Bulldozer (32nm): 1.4-1.5v on the CPU is fine on air/water; degradation may occur above that. Above 1.5v is generally considered "unsafe", much like pushing 1.6v+ into a Phenom II - fine for short benchmarking runs, not 24/7. Push the CPU-NB much above 1.3v and it will degrade; I've already seen 4 CPUs in experienced hands do this. There's no reason to run above 1.3v anyway, because scaling stops at 1.25-1.3v for most CPUs and temperatures go through the roof rather quickly.
Trinity and Piledriver use essentially the same IMC, with minor tweaks similar to the change from Phenom II X4 to X6.
Intel:
Sandy Bridge (32nm): VCCSA and VCCIO are kept under ~1.1-1.25v at all times because going further is useless, while some people run the CPU core at 1.5v.
Ivy Bridge: VCCSA and VCCIO are kept at 1.2v or less most of the time, while it is fine to run 1.4-1.45v on the CPU.
Then look at 28nm/40nm and even 55nm GPUs - why are voltages such as 1.1-1.2v used, maximum ~1.3-1.4v for overclocking and 1.5-1.7v LN2 overclocking? Because the GPUs die much above that. Why would you assume the 32nm SOI GPU will take 1.45v with ease?
So, because of this information...
What logic would make you think 1.45v is safe 24/7 for the CPU-NB?
Generally, the rule of thumb is also that when you shrink a node, the tolerable/useful voltages shrink with it - why do you think 30nm DRAM runs at 1.35v stock while 50nm DRAM runs at 1.5v?
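The intuition behind that rule of thumb can be sketched numerically: if insulator/feature dimensions shrink with the node while the voltage stays put, the electric field across the gate (the thing driving breakdown and degradation) goes up, so voltage has to come down to hold stress roughly constant. A toy illustration, using the DRAM numbers above (the linear node-to-thickness scaling here is a simplification for illustration, not real process data):

```python
# Illustrative only: why smaller nodes want lower voltages.
# If insulator thickness shrinks with the process node but voltage doesn't,
# the field (volts per unit thickness) rises. Node names stand in for
# thickness here purely as a proportional toy model.
def relative_field(voltage_v, node_nm, ref_voltage_v=1.5, ref_node_nm=50):
    """Field stress relative to 50nm DRAM at its 1.5v stock voltage."""
    return (voltage_v / node_nm) / (ref_voltage_v / ref_node_nm)

# 30nm DRAM held at the old 1.5v: ~67% more field stress than 50nm at 1.5v.
print(relative_field(1.5, 30))
# 30nm DRAM at its 1.35v stock: the stress comes back down toward baseline.
print(relative_field(1.35, 30))
```

Same story for the CPU-NB: the 32nm IMC simply doesn't have the margin the 45nm one did, so the safe ceiling moves down, not up.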