
AMD 390X reference card will ship with AIO cooler

I think my point is easier to explain with an analogy: comparing the 40nm GPUs of the HD 5000 series to the HD 7900 series. The HD 5870 was considered "top of the line" for single GPUs. It was AMD's first series on the 40nm shrink from 55nm, and it had a TDP of 228 watts. It did not exploit the full, mature 40nm process; it packed more transistors into a smaller space with some comfort room left over. That means a shrink can improve performance at a lower TDP, but each generation tends to increase in TDP to allow more transistors; otherwise we would be left with the same number of shader cores at a lower TDP.

The next generation, the HD 6970, was also a 40nm part, but it had a TDP of 250 watts. It basically maxed out the TDP on that node before the TDP became "too much" for an air-cooled solution. Only then did it become more economical to release the 28nm 7970, which again used 250 watts. The difference between the 7970 on the new node and the 5870 was that the 7970 started out at a higher TDP, with less headroom "left over" than there had been on 40nm. Then the R9 290X was released with a TDP of 290 watts; 28nm is starting to be maxed out in performance per watt, with less optimization still available.

So what is very much possible, if this pattern holds for two more generations, is a "top of the line" R9 390X around 200-225 watts, or possibly 250 watts (possibly on a new node), and then the year after an R9 490X with a TDP of 275 or 290 watts again. It's not that the GPU can't handle more TDP; it's that AMD is historically imitating Intel's tick-tock strategy: a smaller die first, then "maturing" the generation after with more shaders and so on, even if not much else changes on the same node. All the while, GPUs have increased in TDP over the much longer term of ten years: in 2006, the X1950 XTX had a TDP of 125W, but in 2007, the HD 2900 XT had a TDP of 225 watts at 80nm. Here is the link where I got this info: http://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Comparison_table:_Desktop_GPUs

Not sure what you're on about; it doesn't square with what EarthDog and I were saying.
 
Not sure either...

It just seems obvious (to me) that the reason for the AIO is a hefty TDP. The only card above 300W I recall with an air cooler is PowerColor's 295X2, which is a one-off. I don't see the point of an AIO at 200W...

Only time will tell though!
 
If it were a midrange 200W card 'for overclocking', then that would leave no room in the market for the high-end card. If this is true, I expect it to be 250W-300W stock. There are plenty of 250W cards out there (7970/R9 290/290X, 780 Ti, etc.) that don't require water cooling.

These would be reference cards. These are, from the latest rumors (first post of this thread, not a 3-month-old rumor from the same website), a high-end card that would seem to require water cooling.

... but it's all rumors for now. Some just make more sense, with a little logical thinking behind them, than others.

Not sure either...

It just seems obvious (to me) that the reason for the AIO is a hefty TDP. The only card above 300W I recall with an air cooler is PowerColor's 295X2, which is a one-off. I don't see the point of an AIO at 200W...

Only time will tell though!

I wrote all that to explain why I think the R9 390X will start (stock) at a 225W or lower TDP, rather than 250W. The assumption people tend to make is that each new generation starts at 250W or more, when historically that was not necessarily true. I already explained why I think the AIO would be used (for overclockers), but whether the TDP starts at 225 or 275 makes no difference, because there's always "extra room" as you say in the 250 watt range, since GPUs can be overclocked to 350 watts, etc. Just because stock is 225 watts doesn't mean they are going to bin at the absolute TDP limit, so I don't see why they can't give us extra slack as a bonus, or as a competitive measure against the Maxwell series, which is very power efficient on 28nm.

There are only two non-dual-GPU (non-395) cards I think this applies to: an R9 390, which might have a TDP of 200 watts, and an R9 390X, which would have a TDP of 225-250 watts. It may be more cost-effective for AMD to include a water cooler so the chip can be overclocked than to manufacture a chip with more transistors and a higher likelihood of manufacturing defects, similar to what's happening with EUV development. Here's one hypothetical scenario: if one GPU design with 2 billion transistors has a 90% wafer yield compared to a 3 billion transistor design with a 60% wafer yield, I think AMD would make the reduced GPU chip that still has more performance than an R9 290X, at a TDP of 225W with 2 billion transistors stock, rather than 3 billion transistors and 275 watts. Only after this would the chip be overclocked, which would explain why there could be a wider variety of cards that vary from the reference design. The cost of development always increases at nodes below 20nm, so yields become part of the manufacturing strategy, as does binning.
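To put rough numbers on that yield scenario, here's a back-of-envelope sketch. The wafer cost, usable area, and area-per-transistor figures are invented round numbers for illustration, not real foundry or AMD data; only the yields and transistor counts come from the hypothetical above.

```python
# Rough die-cost math for the 2B @ 90% vs. 3B @ 60% yield hypothetical.
# Wafer cost, usable area, and density are assumed round figures.

WAFER_COST = 5000.0     # USD per processed wafer (assumed)
WAFER_AREA = 70000.0    # usable wafer area in mm^2 (assumed)
MM2_PER_BILLION = 90.0  # die area per billion transistors (assumed)

def cost_per_good_die(transistors_b, yield_rate):
    """Cost of one working die given its size and the wafer yield."""
    die_area = transistors_b * MM2_PER_BILLION
    dice_per_wafer = int(WAFER_AREA // die_area)
    good_dice = dice_per_wafer * yield_rate
    return WAFER_COST / good_dice

small = cost_per_good_die(2.0, 0.90)  # 2B transistors, 90% yield
large = cost_per_good_die(3.0, 0.60)  # 3B transistors, 60% yield
print(f"small design: ${small:.2f} per good die")
print(f"large design: ${large:.2f} per good die")
print(f"cost ratio:   {large / small:.2f}x")
```

With these made-up inputs, the bigger die costs well over twice as much per working chip: the smaller die loses less area per defect *and* yields better, which is why strapping a cooler onto a smaller chip can beat building a bigger one.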
I would imagine that a reference card with water cooling would benefit from a mild overclock. Let's say it uses an extra 25 watts over a stock 225W TDP at a 1000MHz core clock and increases the speed to 1250MHz; this could make it as fast as a GTX 980. But let's say there's another vendor that wants to overclock it to 1500MHz, way beyond the reference design. Having the water cooler helps there, and that is what might push the TDP to around 300 watts. That would appear normal by today's standards, but it shows that the fastest cards don't have to start at super high TDPs just because they're expected to...
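The mild-vs-aggressive OC scenario above can be sketched with the first-order CMOS dynamic-power rule, P_dyn ∝ f·V². All the figures here (stock power, voltage, and the 70% "dynamic share" of board power) are made-up assumptions to show the shape of the curve, not 390X specs:

```python
# First-order sketch of GPU board power under overclocking.
# Dynamic power scales with clock and voltage^2; the rest (VRAM, fans,
# leakage) is treated as fixed. All figures are illustrative assumptions.

STOCK_POWER = 225.0    # W, hypothetical stock board power
STOCK_CLOCK = 1000.0   # MHz, hypothetical stock core clock
STOCK_VOLT = 1.10      # V, hypothetical stock core voltage
DYNAMIC_SHARE = 0.7    # assume ~70% of board power scales with clock/voltage

def scaled_power(clock_mhz, volt):
    """Estimate board power at a new clock/voltage operating point."""
    static = STOCK_POWER * (1.0 - DYNAMIC_SHARE)
    dynamic = STOCK_POWER * DYNAMIC_SHARE
    scale = (clock_mhz / STOCK_CLOCK) * (volt / STOCK_VOLT) ** 2
    return static + dynamic * scale

# Mild OC at stock voltage vs. an aggressive OC with a voltage bump:
print(f"1250 MHz @ 1.10 V: {scaled_power(1250, 1.10):.0f} W")
print(f"1500 MHz @ 1.15 V: {scaled_power(1500, 1.15):.0f} W")
```

The point of the sketch is just that a voltage-assisted jump to 1500MHz lands in the ~300W-plus range the post describes, while a clock-only bump stays far below it.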

These two articles point out how cost drives so much of the Moore's law progress. http://www.extremetech.com/computin...y-milestone-as-euv-moves-closer-to-production

http://www.extremetech.com/computin...y-450mm-wafers-halted-and-no-path-beyond-14nm
 
Wow..

I get bins, yields, and how that works with pricing etc...but appreciate the links. :)

The thing is, AMD themselves, meaning on a reference card, will likely not strap on a more expensive cooler like an AIO just to have more overclocking headroom. Understand that overclockers are a TINY portion of the people who buy cards, so there is hardly a market for it, especially considering that you don't get much more clock speed out of a small 120mm rad anyway.

I again point out that it makes no sense to me why they would leave shedloads of headroom to handily beat a higher-positioned card. That is shooting themselves in the foot. Simple business there.

I like your thinking, and wish it were like that, but that doesn't make good business sense.
 
GPUs must be a lot different from CPUs then. CPUs are water cooled when they are overclocked, I'm assuming from 125 watts to upwards of 200 watts. But a GPU that uses 200 watts at stock doesn't need it. I think it was designed for all the board manufacturers that like to overclock it.
CPUs can be air cooled too, even when overclocked. I guess the best air coolers can handle 140W CPUs at a medium OC (up to 4.4GHz or so) without problems; it's all a matter of size and quality. Water cooling is usually for enthusiasts who like to OC by a huge margin (above 4.4GHz or so), but only a tiny percentage of people OC so far that a water cooler would be the only valid option. Finally, it depends on how hard a CPU has to be cranked to reach its GHz target; the harder that gets, the greater the need for special coolers, and such a CPU is certainly not my favorite.

One of the main reasons Intel CPUs need very strong coolers (AMD should be easier) is that the process node is extremely small, so the density of the CPU is very high. There is a lot of heat in a very small area that has to be transferred to the heatsink, and it can become very difficult to move that much heat out of such a small space. A GPU has a larger die over which to transfer the heat, so it is easier to cool in that respect. Another problem with CPUs is that the heat can be extremely uneven; there is a higher risk of "hot spots" compared to a GPU with its massive arrays of cores. In the end, a CPU is a good margin more challenging to cool at a comparable TDP. It is not always a technical limitation either: Intel wants to save cash and is not using a properly soldered IHS, but cheap thermal paste instead. A company simply wants margin... so they may create "crap products" as long as the bag is full of coins.
 
Chip efficiency has a lot to do with cooler requirements. There is TDP, but there is also power leakage. Look at how AMD CPUs can be 140W but require better cooling than Intel 140W CPUs. In the same way, if a card has a 200-250W TDP, it doesn't mean that 100% of that power ends up as heat the cooler must handle.
Current chips also have a range of VIDs, so there can be a difference of up to 10% even at stock settings.
The next thing is the surface area from which the cooler has to draw the heat. New chips usually have smaller dies, and it's harder to cool them when watts per square inch are much higher. Older cards/CPUs, even at the same wattage as new chips, were easier to keep cool.
TDP is also not a clear value. Every manufacturer specifies it in its own way, so you see chips with a 125W TDP that in reality can draw 150W+ or 100W+. A good example is Intel's Pentium, i3 and i5: there are CPUs from the same generation with the same TDP even though they have different core counts, cache, etc. That's simply not possible. The GTX 970/980 have a much lower TDP than their real wattage, and the same goes for most other cards.
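The watts-per-area point is easy to see with a quick heat-flux comparison. The die sizes below are rough, assumed figures, chosen only to show the effect of the same wattage on a smaller die:

```python
# Same wattage on a smaller die means a higher W/mm^2 for the cooler to
# pull out. Die areas here are rough assumed figures, not specific GPUs.

def power_density(watts, die_mm2):
    """Average heat flux across the die in W/mm^2 (ignores hot spots)."""
    return watts / die_mm2

big_old = power_density(250, 550)    # e.g. a large older-generation die
small_new = power_density(250, 350)  # same power on a smaller, denser die
print(f"large die: {big_old:.2f} W/mm^2")
print(f"small die: {small_new:.2f} W/mm^2")
```

With these numbers the smaller die has roughly 57% higher average heat flux at identical wattage, which is why "same TDP" does not mean "same cooling difficulty".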

AIOs cost a lot, so I don't think any manufacturer would use one if it wasn't necessary. They also add one more point that can fail = higher RMA costs, especially since high-end cards usually have a 3-year warranty. No one wants that, but we still hear that AMD wants to add it to the new high-end cards.
 
TDP is a difficult story, well known to me, but in the end I need some measurement spec in order to talk about it. In reality it is far more complicated and always hard to compare between products. If I truly took every factor into account I could write almost a book; that's not going to work in a post, and the majority of users would run away... it's simply too much to handle, and actually they just want some simplification.

Efficiency and leakage can be different with every chip, adding another complicated layer to an already complicated matter. People want "hard specs", but they don't really exist; every chip is simply different, though still not as different as humans are.

I don't have much experience in CPU cooling, but you have to separate the amount of heat a chip produces (I call it TDP, but that's a crazy simplification) from the amount of heat a cooler is able to draw away per mm². My experience with Intel CPUs, in my case a 3570K, is that it is very difficult to draw the heat away from the IHS even though the TDP is actually very low. So this 77W TDP chip is actually demanding on a cooler. I had an easier time cooling a 990X with a comparable cooler, because the heat actually arrives at the heatsink; it transfers well... and that is a 130W TDP CPU on paper.

@Ivy, I'm not sure if you have any idea how Maxwell scales with voltage. You can set 1500MHz+ without raising the voltage on most GTX 970s and many GTX 980s, which barely raises wattage. The GTX 750/Ti scale in a similar way.

AMD are pathetic if you look at efficiency scaling while overclocking. Look at every series in the last 5 years: the 7970 needed a huge voltage bump to gain 200MHz more, and the 290/290X are the same. So without really big changes in the architecture (right now there is not much except that memory bandwidth thing), I don't expect anything great from the new series.

Maxwell is actually a sly little wolf in sheep's clothing, because most cards cannot be volt-"bumped" without mods or special software. It simply works differently: it is capped by its thermal and power design, and as long as the thermal and power limits aren't raised, everything else is secondary. I have graphs showing that Maxwell loses a lot of efficiency at high clocks (from 100% down to ~65% above 1500MHz), even without a voltage bump, so a voltage bump is not required in order to lose efficiency. I bet many 970s/980s run far above a 200W TDP without any voltage bump, yet people barely notice and tell us "that card does those clocks out of nothing..." but this is not true. Nvidia simply used a dynamic approach, because they know very well that thermal and power capacity vary a lot and not every PC or card can handle it, so a very dynamic design is the result.
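The "loses efficiency even without a voltage bump" claim is just perf-per-watt arithmetic: if performance scales roughly with clock but power climbs super-linearly, efficiency falls. The base clock/power and the OC power draws below are illustrative guesses chosen to match the ~65% figure, not measured GTX 970/980 data:

```python
# Perf-per-watt at an OC point relative to stock, assuming performance
# scales linearly with clock. All wattage figures are illustrative guesses.

def relative_efficiency(clock, power, base_clock=1100.0, base_power=165.0):
    """Relative perf/W versus the stock operating point."""
    perf_gain = clock / base_clock
    power_gain = power / base_power
    return perf_gain / power_gain

print(f"1300 MHz @ 210 W: {relative_efficiency(1300, 210):.0%} of stock perf/W")
print(f"1500 MHz @ 340 W: {relative_efficiency(1500, 340):.0%} of stock perf/W")
```

With these guesses, a card drawing ~340W at 1500MHz sits at roughly two-thirds of stock efficiency, which lines up with the graph the post describes.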

I would imagine that a reference card with water cooling would benefit from a mild overclock. Let's say it uses an extra 25 watts over a stock 225W TDP at a 1000MHz core clock and increases the speed to 1250MHz; this could make it as fast as a GTX 980. But let's say there's another vendor that wants to overclock it to 1500MHz, way beyond the reference design. Having the water cooler helps there, and that is what might push the TDP to around 300 watts. That would appear normal by today's standards, but it shows that the fastest cards don't have to start at super high TDPs just because they're expected to...
I actually have a hard time following you, more than with most of the other stuff. The thing is, yes, AMD was already putting reference water coolers on high-end CPUs as far as I remember; this approach was done in the past. But that was a totally different situation: AMD was already using all the TDP headroom their CPU could get, and the only chance of barely competing with Intel was to add a water cooler and sell it as a "water-cooled CPU", making some price-sensitive enthusiasts happy and trying to boost their standing on the benchmark lists. That was the whole reason for doing it, not the very low... truly very low margin.

The situation with the Radeon cards is different. They don't lack the power to compete; they have a realistic chance to compete without special solutions. The only risk is the flagship single GPU, because that GPU is aimed at enthusiasts. It is a minor share of the market, but it can be critical for marketing and reputation: "hey, they make a Formula 1 car, so we'll buy from this Formula 1 manufacturer, but just a street car". In the end it's not just about serving the few enthusiasts, but about setting a "mark" in the mind of every customer, even those who never buy flagship stuff.

So what AMD may want is to drive the flagship single GPU to insane levels out of the box, not through some "do it at your own risk, you sorry user" approach. Now, a crazy out-of-the-box OC is not going to do the trick, because it would be bad for efficiency, bad for remaining headroom, and ultimately bad for reputation; the target would be missed. AMD will have to use a sheer resource-pumped GPU, comparable to the 290X, to hit the target. The 7970 was already bumped a lot at release; AMD had never made such "resource bumps" before, but their target is clearly to beat... to impress and to set a mark. The margin wasn't great on those resource-bumped parts, but it paid off: the 7970, for example, was a very successful card, so the risk of the investment was fully covered. I guess they will keep doing so, but for the flagship only, because it sets the important "psychological mark" in the mind of every user, not just enthusiasts. So it may well be that the next single-GPU flagship becomes a 300W water-cooled reference card... who knows. There isn't a huge market of users buying it, but that's not the point; they won't have a yield high enough for mass production anyway. The big money is made with midrange cards, not flagships; flagships have another role.

AMD clearly wants to prevent users in forums saying "oh... I'll just run SLI 980s, better than the 390X". That would destroy an important psychological mark, so the target is to offer something competitive even for the "I love SLI" users.

Regarding 225W TDP cards... no, they will not get a water cooler; it goes against every economic rule and target. Those cards are much like a 970: a good gamer card at an affordable price. A water cooler may be used at 300W+ and no less. For OC there will be many aftermarket designs from third parties; it was always like that and it will stay this way, at least for any card that isn't "flagship grade". And really only enthusiasts enjoy such stuff; not a big market, but a market that has a voice on the web.

So what is very much possible, if this pattern holds for two more generations, is a "top of the line" R9 390X around 200-225 watts, or possibly 250 watts (possibly on a new node), and then the year after an R9 490X with a TDP of 275 or 290 watts again. It's not that the GPU can't handle more TDP; it's that AMD is historically imitating Intel's tick-tock strategy: a smaller die first, then "maturing" the generation after with more shaders and so on
Many interesting thoughts, but I think it's a pretty feisty claim for the 300 series to go your way, because the 300 series is almost certainly a "tock" (new architecture) already. AMD cannot beat or match Nvidia using a "tick" (a 20nm shrink without a new architecture) for the 300 series; it would be impossible to reach the required efficiency and performance even with a 300W TDP design. So that option is unrealistic, and with the 20nm node still immature, it would end in a financial fiasco; I strongly disapprove of doing so. Taking this into account, it can't be a tick (20nm shrink) according to this "rule". On top of that, AMD hasn't released a new architecture for over 3 years (it was all based on the old GCN with only minor improvements in a few spots). There is some reputation to uphold, and the best way to do it is to make a serious investment and not look at the margin too much, which historically is not AMD's weak spot anyway. Nvidia is the one fighting the margin battle, and very successfully. Since the release of GCN, AMD hasn't gone the margin-battle way; they were on a road to truly set a reputation mark in many respects, and they were successful, because their overall market share is the highest they have ever had, so that target was achieved. You also have to take into account that the current process node is already very mature, and the chance is high that the 300 series will still use 28nm. That means manufacturing (28nm yields, even at insane transistor counts) is not an issue, and the resources can be boosted a lot (comparable to the 290X).
The next series, the 400 series, will probably be the same architecture but as a "tick" (a shrink of that architecture down to 20nm). AMD will probably use the new 20nm node to increase resources even more, and they can have large headroom if a liquid cooler is used as the reference design. So it makes good sense to treat the 300 series as a tock (new architecture); that is almost certain, there is no doubt about it, because the old GCN has been at the limit of its headroom since the release of the 290X, which is basically the last of the "old gen" chips.

Your ideas could be correct if AMD used a "tick" (20nm shrink) for the 300 series, but I think that is rather unlikely to happen. The new 20nm node is still challenging in manufacturing efficiency, which means AMD would lose the chance to impress with a "highly boosted tock" (a new architecture with high resources on the old 28nm node). That could be the better option, saving the "tick" (20nm shrink) for the 400 series at a time of improved 20nm maturity. The 20nm node, next year or so, would give new TDP headroom, so they could make a "tick" with a resource boost included while using the same cooler (maybe a liquid cooler, who knows). But it would surely be insane to try to challenge Nvidia's Maxwell with a 20nm shrink of the old GCN architecture (a tick); it might grant up to 30% more performance at the same TDP, and that would be insufficient to beat Maxwell. Especially because Nvidia could then move Maxwell to 20nm too and double the resources, which would put Nvidia so far ahead of Radeon in efficiency that Radeon users would look like "weak volcano riders"... a pretty hot issue for sure. A 20nm Maxwell would be more efficient and more powerful at the same time... so it would simply be foolish to keep riding the old GCN without a "tock" (new architecture). So in my mind, once again, the shrink (tick) is not the priority for now... architecture is the key.
 
http://www.kitguru.net/components/g...-its-radeon-r9-300-series-lineup-at-computex/

If it's really true and the premiere of AMD's new top graphics card series is at Computex in June, then it's another AMD fail.
I understand they want to clear the stock of older series, but they already missed the best moment to release a high-end series, when there was so much noise about the GTX 970. Knowing AMD, most new cards will be rebranded older series anyway, so they could at least release the highest models when they have a good opportunity to earn some money. After all, high-end hardware is mainly for marketing purposes, and mass sales of cheap stuff are what bring in the money. Still, many users look at the top-of-the-line graphics and then buy a lower series.

Titan X is ready and AMD still has nothing to beat even the lower Nvidia series. Once more it will be exactly the same story: AMD will release the 380/390X, and Nvidia will show a GTX 980 Ti, or whatever it will be called, 1-2 months later.
 
If AMD waits too long to release the 3xx cards, it will just make them cheaper when I am ready for a new card. Timing really doesn't matter; at the end of the day it's the performance of said cards at whatever point you are using them. You have to assume that something new is coming at any point.
 
From our perspective timing doesn't really matter, I agree. But from a business/marketing perspective, I think they missed the boat here with the fallout over the GTX 970 'issue'. They could have capitalized on those who panicked... on the flip side, though, with that panic they likely sold more R9 290/290X and moved old stock.
 
True there for sure. There will always be something faster in a short bit; you can't be sweating it constantly or you will dehydrate in a hurry.
 
I just don't think AMD will improve these cards in the next 2-3 months, and I bet they are already manufacturing them. Timing doesn't matter for us (at least most of us), but if they don't release anything exceptional, they'll just repeat the same story as with every high-end graphics card premiere in the last 5-6 years: they release new graphics cards, shout that they have the fastest GPU on the market, and then Nvidia releases something faster 1-3 months later and keeps the fastest GPU for 8-12 months, only adjusting the price (if it's worth it).
From the user's point of view, a faster AMD premiere = a faster Nvidia premiere = faster price drops on older cards. I don't think everyone wants old AMD GPUs that were already refreshed from the 7000 series (or even earlier) and have been on the market for about 3 years. The market is waiting for something new, and not necessarily the highest series.
Nvidia has nothing interesting between the GTX 750 and GTX 970. The same goes for AMD, but they are not even thinking of filling this gap with new, interesting products on time.

I haven't really noticed higher R9 290/X sales, as prices went up some time ago on the EU market (I didn't really follow the US market). In January I could find a new 290X for about ~$300; now the cheapest I see costs ~$400. That's our local price with tax etc. I see the distribution stock, and they are barely selling anything from these series. Some distributors don't even keep them in stock as there is no demand. I bet it looks better in the US, but at least in the EU barely anyone cares that on paper AMD cards are cheaper now and the GTX 970 has memory issues.

AMD mainly wants to clear stock of all these refreshed series that not many users want. There are many more AMD card manufacturers than Nvidia ones; they generate a higher volume of lower-series cards in stock, and later it's harder to sell. At least that's what I see looking at distribution and many online stores.
 
Sadly, I am planning to boycott Nvidia in the future. So I have to have high hopes for everything AMD going forward. It kills me that a company can act like Nvidia does and still have many people stepping up and publicly supporting them, acting like they did nothing wrong.
 
It kills me that people have such hatred for them when nothing is proven (the writing is on the wall, I agree, but that doesn't mean it was true). It's like the world ended for some people (the other extreme from me). But, to each their own. :)

I mean what else did they do? Not release their PhysX stuff (which they JUST did now).

I digress though as this thread is about the new AMD card and not rehashing 970 woes.
 
In my opinion, it's OK if it launches a little later than expected. Hopefully that means that there will be fewer bugs in the design and more importantly less issues with drivers. If it takes an additional couple of months to have a closer to bug-free release of the next omega drivers, then so be it.

Of course, I would like to see it released now with some great 980-competing/beating performance for cheaper, but unless it can hold its performance against the inevitable release of the 980 Ti, I have a feeling its prominence in the news as the fastest GPU will be short-lived (and of course the Titan X, when it comes out, will likely be quite powerful too).
 
they lied about the abilities of another card, and did nothing about it
 
I don't think the release will be too soon (I guess I already stated that somewhere), not sooner than May or so, so Kitguru's info could be correct. Additionally, a rebrand of the older models would be unfortunate, because they simply can't hold up well against a new GCN 1.3 architecture and Maxwell. The gap would simply be too big, so it is somewhat foolish to mix up architectures at this point (as stated in my last post, an entire line of new architecture is critical). In the last rebrand they did, they actually just used a resource-boosted version of the old architecture, plus GCN 1.2, which is a minor improvement in the form of the Radeon 285, but surely nowhere close to the 1.3 spec (or later). An unlocked GCN 1.2 chip at this stage is useless... too little juice... a bigger step is needed. So a rebrand won't be an option anymore; in my mind it would be foolish. It was used far too much in the past, and now they may have to make a serious overhaul.

Now, the best option would truly be to sell out all the old GCN stock and then start with an entirely new line... which means the release could be May or even June, but not sooner than that.
I mean what else did they do? Not release their PhysX stuff (which they JUST did now).
You know exactly what they did: they didn't tell the truth, for whatever reason, and when it was revealed they called it an "unfortunate accident" and tried to sugarcoat the matter... everyone does that, without exception. No one stands up and says "yes, we messed up, we failed hard and will improve as hard as we can"; there is always tons of sugarcoating involved, because apparently there is never a true failure... just a "so-so, half-baked failure". It somewhat feels like we have a big "con vs. pro" going in this thread, but I think the middle way is the highest gain.

they lied about the abilities of another card, and did nothing about it
Ultimately it's useless to hate a manufacturer, for whatever reason, because there is probably not a single big manufacturer without a skeleton in their closet; a sad truth, I assume. It would just lead to a "revenge act" in a never-ending story... I don't recommend it. But I do recommend knowing the companies inside and out, because many of them always use the same kinds of behavior patterns, and once you know them it is much harder to fall into their traps, because you know what to look out for. Nvidia's is memory issues... all kinds of memory cuts and such... that's their weak spot*, and I guess they'll never change it except on a crazily overpriced Titan, which is not an option for most of us. AMD is notorious for problems with "catching up" in many respects and gets abused by other manufacturers in an endless "taunt war" in which they simply exploit the quasi-competition... although even a slow hen can someday lay a golden egg, so there is always hope; just patience and trust in small wonders are required. *It will probably always stay a weak spot, because Nvidia's entire philosophy is highly margin-based, and by cutting valuable cache resources from the transistor budget, which is critical for high memory performance, they can save a lot of resources, and in turn cash... a trick that will probably never grow old.

Although I totally agree with you that nowadays the issue is that almost every company is based on "profit" and "competition", they rarely ever look at the "altruism spec"; they usually care little whether you (or any other customer) are truly happy as long as their profit is alright. But you can pin that on almost any big company, with the exception of small ones with a much more personal approach. So I hope in the future they pay more attention to the "power of reputation through altruism", but as long as almost every powerful company is the same, the altruism factor means little, because customers have no choice. Regarding the IT market... the fact is we simply have a few rulers and just some quasi-competition, which means it's hard to avoid the "rulers". They know it and abuse the situation, but ultimately it is no use placing a never-ending revenge act on them; it will just make someone unhappy and never solve anything. We aren't made for hatred... it can settle short-term matters but never long-term ones. I recommend raising awareness and spreading the knowledge, but there's no use going into "hatred mode".

I bet it looks better in the US, but at least in the EU barely anyone cares that on paper AMD cards are cheaper now and the GTX 970 has memory issues.
I think US customers are generally more suspicious when it comes to company matters, all the more because, for example, they don't even know if they are eating GMO bread; even that is usually hidden from them. In the EU there is actually higher transparency in many respects, but the majority of customers don't even value it. It is as if the ones crying for transparency aren't getting it, while the others, who aren't asking for it, may still get a higher level of it. So in the US there is an unusually high rate of suspicion against companies, and although there are many exaggerated expressions of it, I am aware of where the issue comes from; the seed is well known. In any case, the customers seeking genuine transparency are more or less just the topping on a tiramisu, and the majority is in a state of never-ending acceptance and lack of interest in countless respects.

In an attempt not to spread unnecessary fire, I recommend closing the 970 matter; on top of that, it is somewhat OT and already stale... past "tasty", I feel.
 
The main problem with EU pricing is that manufacturers sell hardware translating $ to € at 1:1 when in reality there is about a 20% difference. The other issue is the high number of sub-distributors, every one of which adds logistics costs and margin.
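As a rough illustration of how the 1:1 "translation" plus stacked sub-distributor margins inflates an EU street price, here is a small sketch. All numbers (exchange rate, MSRP, markups, VAT rate) are illustrative assumptions, not actual market data:

```python
# Sketch of the EU pricing complaint above: the dollar MSRP is reused as
# the euro number (ignoring the exchange rate), then each sub-distributor
# stacks logistics/margin on top, and VAT is applied last.
# All figures below are assumptions for illustration only.

USD_PER_EUR = 1.20  # assumed exchange rate (~20% difference)

def fair_eur_price(usd_msrp: float) -> float:
    """EUR price if the MSRP were actually converted at the exchange rate."""
    return usd_msrp / USD_PER_EUR

def translated_eur_price(usd_msrp: float, vat: float = 0.20,
                         distributor_markups=(0.05, 0.07)) -> float:
    """1:1 number 'translation', then stacked distributor margins, then VAT."""
    price = usd_msrp  # "$549" simply becomes "549 EUR"
    for markup in distributor_markups:
        price *= 1 + markup  # each link in the chain adds its cut
    return price * (1 + vat)

msrp = 549  # hypothetical card MSRP in USD
print(f"fair conversion: {fair_eur_price(msrp):.2f} EUR")   # 457.50 EUR
print(f"street price:    {translated_eur_price(msrp):.2f} EUR")  # 740.16 EUR
```

With these assumed numbers, the buyer ends up paying roughly 60% more euros than a straight currency conversion would give, which matches the complaint that the gap comes from the 1:1 translation plus the distribution chain rather than from any single party.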

Regardless of price, I thought the 390X or some other faster card would be released in March so I could replace the graphics cards in my daily PC and benching rig. Right now I have a GTX 980 that is a bad overclocker, so I wanted to move it to the gaming PC, which has a GTX 580 that doesn't overclock at all and already has some additional issues, so I'm not sure how long it will live. The only choices now are the overpriced GTX 970, the pathetic GTX 960, or the GTX 750/Ti, which is slower than the GTX 580. On the AMD side there are either many old/refreshed and not really special cards, or the R9 290/X, which is overpriced in local stores and auctions. Simply put, there is nothing worth buying, and that probably won't change for a while.
 
Well, prices... I am used to high prices in my country; it is usually worse here than in most other countries in the EU or Europe (not all of Europe is part of the EU), with the exception of Norway. I guess on average I pay around 10% higher hardware prices compared to the US (including taxes), and it is probably not the vendors' fault. Some distributors simply charge a bonus price for shipment to my country or other countries in the EU/Europe, so the issue arises at many points. Most vendors seem to have a very low margin, especially on PC parts. I know the issue pretty well because I talked a lot with a vendor, and he told me that selling PC parts usually brings a pretty low margin; I guess his main income comes from assembling systems and various consumer services (fixing PCs and such), but selling PC parts itself is usually not rewarding at all. I guess the truth is that every country is simply charged according to its local market economics; not every country is treated the same, and the prices may be dictated by the ones truly in a position to do so, not the vendors. A vendor is just a weakling at the end of the "food chain", and unless it's a huge vendor comparable to Amazon or Newegg, they have very little power. If a part is turned in for RMA by a customer, the vendor (and the customer too) has to handle the RMA cost, and the manufacturer simply checks it out and then says yes or no; if they say "no, we do not give warranty" or "it is not covered", the vendor and customer are out of power... a pair of weaklings at the end of a food chain, that's it. If a bonus price is charged, it most likely comes from some intermediate link in the trade chain: the company itself, or a local relative such as an importer or distributor.
In many cases, avoiding that intermediate line can lead to lower prices, but it is not officially allowed, and the manufacturer may punish vendors that try to bypass the official distribution, so the issue is a bit tricky. But I am pretty certain that in most cases the vendors are not charging a bonus price, and if they are, it is pretty marginal.

Hardware prices are actually pretty fair, so I can't complain, and I like to buy hardware... I don't feel robbed. The truly high prices are on other expenses such as food; those prices are much higher compared to the US and even a lot higher compared to the EU. You have to take into account that tax rates within the EU can be very different: some EU countries have VAT of 20% or even higher, 23% for Poland. That adds a lot to the price, but in my country the rate is only 8%, lower even than the sales tax in most US states (US rates run up to about 10%, but vary). Being a big company isn't always an advantage; for example, in Germany the huge vendors are hit with high taxes (up to 19%) while the very small vendors are tax-exempt. I think it's because in Germany the huge vendors were monopolizing almost the entire market and nearly wiped out the small vendors, so I guess as a matter of proper politics, the taxes were removed for the small vendors. This advantage doesn't exist in Switzerland; even the smallest vendor has to pay taxes, so it can happen that a distributor gets lazy and simply favors the biggest vendors. For example, at launch, when the PS4 consoles were delivered, the small vendors got nothing at all and the big vendors got almost everything, a clear advantage... unfair, but that's just how it was done.
 