Is Anandtech Any Help In Selecting A Computer?
In my last article I talked about trying to decide on a new computer system. My current plan is to put something together after I see how much difference Bloomfield and Deneb make. Information would be nice.
People keep asking me what is wrong with Anandtech's reviews. In all honesty I would love for Anandtech's reviews to be helpful and solid because this is not an easy time to be looking for a new system. The problem is that as I go over Anandtech's information I keep coming up empty.
According to Anandtech's graph above, the Q9300 draws less power than the E6750. This would be quite impressive if true. However, this graph is easily falsified because according to Intel the E6750 is 65 watts whereas the Q9300 is 95 watts. So either Intel's documentation is wrong or Anandtech is. I'm pretty sure that Intel knows more about their processors than Anandtech so I'm going to have to assume the chart is wrong. And, this means that the comparison with AMD's processors is equally unreliable.
Okay, so no help on power draw. How about prices? The easiest way to see where the good prices are is to graph price versus clock. Generally the curve rises slowly and then much more steeply at some point. Anandtech seems to graph everything except these price curves. Plotting my own, I can see that AMD's curve is incredibly flat, so there is no reason to buy anything less than a 9950 BE unless you are putting together a low end system and shopping for a dual core.
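The price-versus-clock idea above can be sketched in a few lines. The chips are real model names but the prices and clocks below are illustrative placeholders, not actual 2008 street prices; the point is only to show how the "knee" of the curve falls out of the numbers.

```python
# Sketch of the price-versus-clock "value curve" described in the text.
# Prices here are made-up placeholders, not real 2008 data.
chips = [
    ("X4 9550",    2.2, 135),   # (model, GHz, $)
    ("X4 9750",    2.4, 150),
    ("X4 9850 BE", 2.5, 165),
    ("X4 9950 BE", 2.6, 175),
]

# Dollars per extra 100 MHz between adjacent models. A flat curve means
# the top part is the best buy; a steep jump marks the point to stop.
for (n1, c1, p1), (n2, c2, p2) in zip(chips, chips[1:]):
    step = (p2 - p1) / ((c2 - c1) * 10)   # $ per 100 MHz
    print(f"{n1} -> {n2}: ${step:.1f} per 100 MHz")
```

With a flat curve like this the incremental cost per 100 MHz stays small all the way up, which is the argument for buying the top model.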
Of course dual cores and integrated graphics go hand in hand on a bargain system since you can get plenty of each for under $100. Unfortunately, Anandtech's 780G Preview isn't much help because they tossed out common sense by using the 45 watt, 2.5Ghz 4850E instead of the perfectly acceptable 65 watt, 2.7Ghz 5200+ which is the same price. A far better value, however, would be spending $10 more for the 65 watt, 2.8Ghz 5400+ Black Edition with its unlocked multiplier. They also didn't bother doing any real testing, just video playback, so the article is nearly worthless.

Well, how about the IGP Power Consumption article? No, this article is particularly daft because it should already be obvious to everyone that integrated graphics consume less power than discrete video cards. Secondly, the article has no performance information, so it is impossible to determine any value. So, how about the NVIDIA 780a: Integrated Graphics article? This article is also lacking because they use a 125 watt 9850 quad core. This is really bizarre considering their earlier insistence on a 45 watt chip. Wouldn't common 65/95 watt chips make more sense? They also leave out any comparison with Intel systems. This is quite odd since the inclusion of nVidia should have allowed a cross comparison by normalizing nVidia/AMD against nVidia/Intel. So, this is no help in choosing between an AMD and Intel system.

Finally, the Intel G45 and AMD 790GX articles are only token announcements with no actual testing. Clearly, if we are looking for a low end system, Anandtech is no help at all.
But since I'm not looking for a bargain system I'm really more interested in quads. Even though it is obvious that the 9950 BE is the best value in AMD quads, the value of the 750 southbridge can't be determined by a price graph. Anandtech does have a 750SB article which shows increased overclocking as well as an increase in northbridge clock, but since they fail to do any actual testing you have no idea what this might be worth. If we want to find out about Intel overclocking then there is more information. Well, sort of. In Anandtech's Overclocking Intel's New 45nm QX9650 article the author does overclock all the way up to 4.4Ghz. Curiously missing, however, are power draw and temperature graphs. In fact, besides synthetics there is almost nothing in the article. The article itself even admits this:
We hope to expand future testing to include real-world gaming results from some of the newest titles like Crysis, Call of Duty 4: Modern Warfare, Unreal Tournament 3, and Gears of War. Stay on the lookout for these results and others in our next look at the QX9650 when we pair this capable processor with the best motherboards companies like ASUS, Gigabyte, MSI, abit, DFI and Foxconn have to offer.
The problem is that this article is from December 19, 2007 and eight months later there is still no follow-up article. Anandtech does have articles on Atom, Larrabee, and Nehalem, but these also have little value in choosing a system today. However, Anandtech's Nehalem article is quite odd because it says:
We've been told to expect a 20 - 30% overall advantage over Penryn and it looks like Intel is on track to delivering just that in Q4. At 2.66GHz, Nehalem is already faster than the fastest 3.2GHz Penryns on the market today.
If this is true then Intel hasn't just shot itself in the foot; it has taken a chainsaw to both legs. This would mean that the value of Intel's entire 45nm quad line has just dropped through the floor and the effect on the 65nm line would be even worse. Considering that today Intel is only at 1/3rd 45nm production by volume this would mean that Intel would have massively reduced the value of most of its line. And, if this is true then no one should buy anything higher than Q6600 until Bloomfield is released. The problem with this scenario is that Intel already went down this road with PII and PII Celeron so I doubt it will make this mistake again.
Graphics are also a bit strange at Anandtech. For example, in the HD 4870 article Anand says:
For now, the Radeon HD 4870 and 4850 are both solid values and cards we would absolutely recommend to readers looking for hardware at the $200 and $300 price points. The fact of the matter is that by NVIDIA's standards, the 4870 should be priced at $400 and the 4850 should be around $250. You can either look at it as AMD giving you a bargain or NVIDIA charging too much, either way it's healthy competition in the graphics industry once again (after far too long of a hiatus).
So, he likes the HD 4870. And, in the NVIDIA GeForce GTX 280 & 260 article he said:
the GeForce GTX 280 is simply overpriced for the performance it delivers. It is NVIDIA's fastest single-card, single-GPU solution, but for $150 less than a GTX 280 you get a faster graphics card with NVIDIA's own GeForce 9800 GX2. The obvious downside to the GX2 over the GTX 280 is that it is a multi-GPU card and there are going to be some situations where it doesn't scale well, but overall it is a far better buy than the GTX 280.
Keep in mind that the 9800 GX2 is dual GPU and cost $500 at the time of the article. The tone changes however in the Radeon HD 4870 X2 article. Anand admits:
The Radeon HD 4870 X2 is good, it continues to be the world's fastest single card solution
He also says:
But until we have shared framebuffers and real cooperation on rendering frames from a multi-GPU solution we just aren't going to see the kind of robust, consistent results most people will expect when spending over $550+ on graphics hardware.
So, he doesn't have a problem endorsing a $500, dual GPU card from nVidia yet he balks at endorsing a $550, dual GPU card from AMD that is much more powerful. This does seem more than a little arbitrary.
I might be interested in Linux but Anandtech hasn't done a Linux review since 2005. It is possible that they lost all of their Linux expertise when Kristopher Kubicki left. That's too bad. With a quad core I am also very interested in mixed performance, but Anandtech likewise has not done mixed testing since 2005, even under Windows. Yet this is what the processor would typically be doing for me: running two or three different applications. Under normal circumstances I wouldn't be running four copies of the same code, nor would I be splitting up one application among four cores, but this is the only type of testing Anandtech does these days. This leaves a huge gap between typical Anandtech testing, which is only suitable for single and to some extent dual cores, and the all-out quad socket/quad core server benchmarks that they ran.
The bottom line is that Anandtech's testing is only dual core caliber (when it is accurate). However, the poor integrated graphics testing offers no help for a typical system that would be used with a dual core. But even in the area of quad core/discrete graphics where Anandtech should be strong you have to deal with their schizophrenic attitude about multi-card/dual card graphics and their ambivalent attitude about power consumption. Pricing gets similar ill treatment at Anandtech, used mostly as a crutch when it happens to support their already drawn conclusions. So, my conclusion would have to be that Anandtech isn't much help in choosing a computer system.
37 comments:
The power numbers look about right to me. Here is what Tech Report got.
http://techreport.com/articles.x/14573/15
That first graph you posted is about encoding times, not about power draw. Was this an error in posting the wrong graph?
"According to Anandtech's graph above, the Q9300 draws less power than the E6750."
The graph you've linked to is just in terms of seconds required for encoding. Your conclusion seems incorrect based on the cited graph.
Thanks guys for pointing out the incorrect graphic. I had changed the url but hadn't reloaded the graphic so it showed the wrong one but if you clicked on it it was right. I have it fixed now.
qurious63ss
I'm not impressed by the Tech Report numbers since they only ran Cinebench to "load" the cores. Do you think Intel uses Cinebench to determine power draw? Prime95 Large FFT would have been the most accurate for measuring just the CPU power draw under load.
In regards to the power consumption...
X-Bit has similar results
enumae
True, Xbit uses Prime95 but they don't say how. To do proper testing you have to turn on affinity to tie the application to one core, then run three additional copies of Prime95, each tied to its own core. It says they ran Prime95 but it doesn't say they ran 4 copies.
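The pin-one-copy-per-core procedure described above can be sketched with plain worker processes. This is an illustrative stand-in, not Prime95 itself: the busy loop is placeholder load, and `os.sched_setaffinity` is a Linux-only call (Prime95/mprime have their own affinity settings).

```python
import os
from multiprocessing import Process

def burn(core: int, iters: int = 1_000_000) -> None:
    """Pin this worker to a single core, then run a floating-point
    busy loop as a stand-in for one Prime95 Large FFT worker."""
    os.sched_setaffinity(0, {core})       # Linux-only affinity call
    x = 0.0001
    for _ in range(iters):
        x = x * x + 1e-9                  # arbitrary FP busywork

if __name__ == "__main__":
    # One pinned worker per core, up to four, as the post describes.
    cores = sorted(os.sched_getaffinity(0))[:4]
    procs = [Process(target=burn, args=(c,)) for c in cores]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f"ran {len(procs)} pinned workers")
```

The point of pinning is that without it the scheduler may migrate the load between cores, so a "four copies" run does not reliably saturate all four at once.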
ho ho
"So how many 65nm CPUs you see on sale in 3-5 months?"
Intel's 45nm production should reach 100% in Q1 2009. However, I have serious doubts that Nehalem can ramp fast enough to displace the 45nm Yorkfields. Are you also claiming that Yorkfield production will be zero by Q1?
"Also, did you know they are having a shift in prices where $500 CPUs drop to around $260 when Nehalem comes?"
I guess you don't understand the problem. Today, Intel's Q9400 is sitting at the $275 price point where the 2.66Ghz Bloomfield is supposed to be. However, Anandtech claims that this processor is faster than the QX9770 which currently runs $1,500. If this is true then the price of the QX9770 would have to drop to where the Q9300 is now. This would effectively eliminate all of the 45nm dual cores since the 45nm Yorkfields would be occupying the same price range. Frankly, I don't see this happening.
ho ho
Yes, Intel does have a special program to test thermal loading but running a copy of Prime95 on each core on Large FFT is sufficient.
The x264 encoding doesn't seem to use all 4 cores of the Q9300 CPU, otherwise the quad-core would be a lot faster than the dual-core. So the power looks ok, one would expect two 45 nm cores to draw less power than two 65 nm cores. Also the TDP of the Q9300 is a bit overrated. Intel only specifies a few discrete TDPs, so if the Q9300 had a real TDP of 76 watts they would still say it's a 95 watts part.
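The commenter's point that Intel publishes only a few discrete TDP classes can be sketched as a round-up-to-bin lookup. The bin values match the classes discussed in this thread (65/95/130 W); the sample draws are the figures quoted by commenters, used here only for illustration.

```python
import bisect

# Desktop parts of this era carry one of a handful of published TDP
# classes; a chip's label is the smallest class covering its real draw.
TDP_BINS = [65, 95, 130]  # watts

def tdp_label(measured_watts: float) -> int:
    """Round a measured maximum draw up to the published TDP class."""
    i = bisect.bisect_left(TDP_BINS, measured_watts)
    if i == len(TDP_BINS):
        raise ValueError("exceeds highest published class")
    return TDP_BINS[i]

# The commenter's example: a quad that really tops out near 76 W
# would still ship labelled as a 95 W part.
print(tdp_label(76))   # -> 95
print(tdp_label(57))   # -> 65
```

This is why a 95 W label and a sub-65 W measured draw are not necessarily in conflict, which is the crux of the disagreement in this thread.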
Scientia...
"...This would be quite impressive if true. However, this graph is easily falsified because according to Intel the E6750 is 65 watts whereas the Q9300 is 95 watts."
Your point was that Anandtech had falsified the power consumption numbers.
There have now been two posts showing you that the numbers are not false, and they both show that the 95W 45nm Q9300 is using about the same amount of power as a 65W 65nm processor.
While these results may not use the application you want, or the configuration you would like, what are the chances of any user loading his system to the level of 4 copies of Prime95?
And does this really reflect any real-world situation?
The results shown by Tech Report and Anandtech reflect the maximum load that any mainstream user or even some enthusiasts would ever put on said system and would be an accurate representation of real-world power consumption.
So why is it you won't answer my question?
http://www.legitreviews.com/article/695/13/
E6750 and Q9300, all 4 cores loaded with POV-Ray, and within 2 watts of each other.
blueneutrino
Yes, that is a good point but if only two cores are loaded then you have half a test.
enumae
I'm not sure if English is your native language but you are missing a fine point. I never said that Anandtech falsified the scores; I said the results are falsifiable. This is not the same thing.
orly
"Loading" cores with Povray is a joke.
enumae & orly
I guess I'm puzzled why the two of you are having so much trouble understanding the word "maximum". It doesn't mean typical or average or what a normal consumer might see.
When I used to drive a 15 passenger van there was nothing that I could do during the course of normal driving that tested the limits of the vehicle. However, when I was towing a 30', 7,000lb travel trailer up the hills in Ohio I got a very good test. I can say that even though the load was within the factory specifications I would have preferred more torque or a higher axle ratio.
It sounds like the two of you would have driven the same van with 5 people onboard, towing nothing on a flat road and pronounced your test solid and complete. I'm sorry but that isn't a real test.
"Are you also claiming that Yorkfield production will be zero by Q1?"
Of course not, don't put words in my mouth.
"If this is true then the price of QX9770 would have to drop to where Q9300 is now"
Or they could do what they have almost always done: deprecate the old high-end CPUs and start selling the new ones at the old price points.
Also, give me one good reason you needed to censor every single one of my posts.
Scientia, there is a difference between TDP and actual power consumption. Intel tends to use 'family' TDPs, in that a whole series of chips all have the same TDP. For example, an E2140 (1.6GHz) and E6850 (3.0GHz) both have a 65W TDP, but obviously the E2140 doesn't consume the same amount of power.
http://www.lostcircuits.com/cpu/intel_yorkfield/4.shtml
Here's a QX9650 (130W TDP) consuming 65W under load using Prime95. An E6750 uses 57W. So it's entirely plausible that a Q9300, clocked 500MHz slower and with 1/2 the cache of a QX9650, could pull ~10W less, which would put it right in line with Anandtech's results.
And before you question the results, it's obviously stressing all 4 cores, just look at the 65nm C2Q results.
Epsilon
"there is a difference between TDP and actual power consumption."
True. TDP should represent maximum power consumption.
"Intel tends to use 'family' TDPs, in that a whole series of chips all have the same TDP."
Link?
I looked at the Lost Circuits article. I couldn't find anywhere in the article where they stated what they used to load the cores or how many.
" using Prime95"
You'll have to link to this because I couldn't find any mention of Prime95.
enumae
If you can show me where a review site has used Large FFT with Prime95 on all four cores of a 45nm Intel quad for measuring power and temperature that would be good. If no site has done this then I'm not sure what your point would be.
Do you not want people to debate you? Is that why you delete and/or cut and paste posts?
---------------------------------
You don't need a link to know Intel groups their processor into TDP families. E1000 - E8000 are all 65W.
Like I asked, do you believe that an Intel E1000 would use 65W?
PS: The lost circuits article says Prime95 at the top of the image.
Also, if you search old articles, when testing dual cores they would run multiple copies of Prime95 large FFTs. That's about as close as you're gonna get.
orly
Thank you for the link to Lost Circuits. The problem I have is that in this Lost Circuits article they claim that AMD's quad Phenom X4 9950 only draws 75 watts. That is only 54% of AMD's rated full power. I'm having a hard time believing that AMD adds 46% padding to their estimates. Occam's Razor suggests that the testing is inaccurate.
ho ho
"Or they could do what they have almost always done: deprecate the old high-end CPUs and start selling the new ones at the old price points."
Okay, see if you can answer a simple question.
Assume Anandtech is correct and that the 2.66Ghz Bloomfield really is faster than the 3.2Ghz quad QX9770.
Assume that this same 2.66Ghz Bloomfield will be released at the same price as Q9400.
Are you saying then that Intel will happily drop the price of the QX9770 down to where Q9300 is today or are you saying that people who buy Intel processors are dumb enough to pay more for a Penryn processor when a cheaper and faster Nehalem is available? Which one are you saying? Or are you saying that Anandtech was wrong?
enumae
"E1000 - E8000 are all 65W.
Like I asked, do you believe that an Intel E1000 would use 65W?"
As I recall Intel doesn't normally rate processors below 65 watts so the 65 watt rating should include everything that is 65 watts and below. Right?
'The problem I have is that in this Lost Circuits article they claim that AMD's quad Phenom X4 9950 only draws 75 watts.'
One thing we noticed was that different motherboards gave a substantial spread in the maximum power numbers. For example, everything else being equal, on Gigabyte's MA790FX-DQ6, both CPUs drew approximately 10% more power than on the ASUS M3A32 MVP (latest BIOS in both cases). In exact numbers, the 9350e punched 58W on the Gigabyte board and the 9950 came in at a whopping 112W.
Reading is hard.
'That is only 54% of AMD's rated full power. I'm having a hard time believing that AMD adds 46% padding to their estimates.'
This isn't new, read some of the other results for AMD.
'Occam's Razor suggests that the testing is inaccurate.'
The numbers for other tests are consistent with other power consumption tests and are correct. How does a board problem with Phenoms even affect the results of other platforms?
Face it, you're wrong.
BTW how is xbitlabs cheating to favour intel when they use the same overclocking methods on amd cpus?
"The problem I have is that in this Lost Circuits article they claim that AMD's quad Phenom X4 9950 only draws 75 watts."
It does? I see it drawing 100.4W. Are we looking at different charts?
"Are you saying then that Intel will happily drop the price of the QX9770 down to where Q9300 is today or are you saying that people who buy Intel processors are dumb enough to pay more for a Penryn processor when a cheaper and faster Nehalem is available? Which one are you saying?"
My guess is they will stop producing CPUs >= Q9650 and switch their high-end to the new architecture. They've done it with pretty much every generation change.
Scientia wrote:
Are you saying then that Intel will happily drop the price of the QX9770 down to where Q9300 is today or are you saying that people who buy Intel processors are dumb enough to pay more for a Penryn processor when a cheaper and faster Nehalem is available? Which one are you saying? Or are you saying that Anandtech was wrong?
I think it's possible we will see further cuts on Yorkfield prices once Nehalem is out. Going back to the Conroe launch, Pentium D prices literally halved overnight:
http://www.anandtech.com/showdoc.aspx?i=2795&p=2
About the Lost Circuits power consumption tests, look more carefully, it says 'Max Power (Prime95) W' at the top of the chart.
Now, either all reviewers have got their power tests wrong, or you can accept the trend that 45nm C2Ds/C2Qs don't consume anywhere near the stated TDP wattage under full load. If you're so hell bent on attacking Anandtech, why aren't you criticizing all the other review sites that come to the same conclusions?
Sorry about the double post but I just noticed this:
Scientia wrote:
As I recall Intel doesn't normally rate processors below 65 watts so the 65 watt rating should include everything that is 65 watts and below. Right?
You asked for proof earlier; that's exactly what I meant by the 'family' TDPs. The entire C2D lineup is 65W, even the 45nm variants.
On the same note, the entire C2Q lineup (45nm and 65nm G0) is 95W, with the exception of the Extreme Editions at 130W/136W/150W.
You really can't judge CPU power draw by the TDP rating alone, actual power testing clearly proves there is often a big discrepancy, especially with the 45nm Yorkfields.
Despite your hatred of Anand and misunderstanding of what TDP actually means, here is a benchmark of Nehalem running the single-threaded Pi calculation and being about as fast at 2.93Ghz as the existing Penryn parts are at 3.2Ghz.
I remember that you said there was a conspiracy to eliminate single-threaded benchmarks, but it looks like Nehalem is easily faster than Intel's previous generation on single-threaded marks as well.
Hi,
Scientia, I need some help deciding what to buy for my wife's business. We need to build a server (to run apps (FormDocs, Word, Excel) and store files); the server will hopefully be strong enough to allow us to establish 7 to 10 NComputing L230 (access point) workstations that will run off the server. Any recommendations would be greatly appreciated.
ken
"We need to build a server (to run apps (FormDocs, Word, Excel) and store files); the server will hopefully be strong enough to allow us to establish 7 to 10 NComputing L230 (access point) workstations that will run off the server. Any recommendations would be greatly appreciated."
I haven't put together any servers like that in the past few years so my advice would be of limited value. If you really need advice I would suggest asking this on AMDZone since there are people there who do have current experience.
Are you saying then that Intel will happily drop the price of the QX9770 down to where Q9300 is today or are you saying that people who buy Intel processors are dumb enough to pay more for a Penryn processor when a cheaper and faster Nehalem is available? Which one are you saying? Or are you saying that Anandtech was wrong?
Isn't that what happens when a new architecture is released? You end up with more performance at a lower price. When the Core 2 was released, the E6300 was fast enough to beat Intel's $1000 Pentium D Extreme Edition yet the E6300 cost only $180.
Intel isn't going to cut the price of the QX9650 or QX9770 CPUs, but I don't expect many people would buy them.
Of course, buying Nehalem will entail the additional expenses of a new X58 board and 3 sticks of DDR3 memory. We're talking a minimum of $250 for a motherboard, probably more, and three 2GB sticks of DDR3 would cost at least $200, probably more.
With all those expenses in mind, Penryn still makes sense for those who want to upgrade an existing system, or those who want to build a new system with cheaper memory or a cheaper motherboard (P35, P45 etc).
Gentlemen
I have to say that this has been one of the strangest learning experiences I have had in a while.
Several people here pointed me to Lost Circuits to bolster Anandtech's testing and prove I was wrong about the site. I wasn't expecting the testing at LC to be any better than the other websites.
Eventually, someone pointed me to a thread in the Lost Circuits Forum where KTE who is apparently an Intel engineer is discussing power testing with MS, the person who does the reviews and testing at Lost Circuits.
I have to say that the discussion is excellent. These two are clearly professionals and knowledgeable about what they are doing. KTE has apparently done testing of his own and he backs up MS's testing methods and results. I have no doubt that these two know what they are talking about and clearly have more experience on the subject than I do. So, I have to conclude that the power draw testing at LC is accurate.
The problem though is that KTE also says:
The inaccuracies and shortcoming in the reviewers own knowledge, methodology, end results, effort, criticism, commentary and hardware usage is now becoming so rampant at every other review that with short time, you have to learn to ignore them but the trusted and consistently accurate websites. AnandTech for a few months has fallen out of this group. Not something I want to comment on, but in my and many of my online friends views, they are the new THG with a twist -- with exceptions to Kris, Johan and a few others.
So KTE seems to support what I've been saying about the testing being poor in many places. He also seems to agree that Anandtech is not a good review site right now.
Chuckula
Sorry, your language prevents me from posting your comment as it was written but your link to What Nehalem is really about at Anandtech is a good article. What is interesting is that I was going to point out that this was written by Johan and not Anand before I read KTE's comments. I would agree with him that Johan is still worth reading even if Anand is not. As you mentioned, Johan says:
Nehalem is about improving HPC, Database, and virtualization performance, and much less about gaming performance. Maybe this will change once games get some heavy physics threads, but not right away.
Given this comment by Johan I think we can conclude that pricing won't be an issue, that we will not suddenly see dirt cheap Bloomfields that beat the 3.2Ghz quads. I'm sure this is going to be a more gradual replacement into 2009.
For power draw on 9850 and 9950 KTE says to MS:
I arrived at CPU only power figures of 97-101W for 9850BE and 100-105W for 9950BE (DC), along with many others for different CPUs. All of my numbers reconcile with yours very much.
There were bands of users claiming Phenom 9850BE uses around 100W more than C2Q Q6600 on a similar setup system for a long while now, some radicals even generalised this to Phenom 9500. Now obviously I am not going to waste time over such fanatic drivel, I am not interested in them or their thick skulled motives but I was appalled at how misleading this lie was to onliners who were following this blindly as if Bible speak. Personally, having both of them, I knew very well what the power differences were and articles such as yours are excellent sources to make such trolls quit regurgitating nonsense. It shows very clearly than 65nm Phenom 9950 at 2.6 GHz has lower power draw than 65nm Core 2 Quad at 2.67GHz has.
KTE also explains in technical detail what TDP is so I get it now:
TDP - is a theoretical figure for cooling solutions as you know. It obviously depends highly on the core materials since 65nm SOI at 5 atoms thick oxide layer is going to have immense leakage, i.e. not used but wasted power, that will add to the heat. TDP is worked out using Supply Vdd * Icc. The maximum current available to K10 arch. is 110A and 20A to the CPU-NB, same for Shanghai. However, each CPU has the max current available locked to a certain value giving its TDP. The max current available is encoded within a CPU register at boot, so you can retrieve it (and use VID to work out max TDP). The difference in 125W and 140W isn't VID or voltage supply at all, they remain the same but the maximum current available has increased. I don't remember the 9850BE values (around 23.5A per core IIRC) but the 9950 in my estimate should have maximum 110A current available and that is why its the best and the top model to come AFAIK.
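KTE's rule above (TDP = supply Vdd times maximum available current) can be checked with quick arithmetic. The 110 A core current and 20 A CPU-NB current come from the quote; the ~1.1 V supply voltage is an assumed round figure for illustration, not a value from the quote.

```python
# KTE's rule: TDP = supply voltage * maximum current available.
# 110 A for the K10 cores and 20 A for the CPU-NB are from the quote;
# the 1.1 V supply voltage is an assumption, used only to illustrate.
vdd = 1.1          # volts (assumed typical value)
icc_cores = 110.0  # amps, max current available to the K10 cores
icc_nb = 20.0      # amps, max current available to the CPU-NB

tdp = vdd * (icc_cores + icc_nb)
print(f"implied TDP ~ {tdp:.0f} W")
```

At that assumed voltage the formula lands near the 140 W class KTE mentions for the 9950, which is consistent with his point that the 125 W to 140 W difference is a current limit, not a voltage change.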
scientia.. very nice article, and also great information on the discussion at LostCircuit forum.
Correction
I'm sorry I misread one of his lines:
the 2nd being an Intel engineer, apart from me
I had first read this that he was saying an Intel engineer apart from him. I now realize he was just saying that he had only talked to two people apart from him that knew that power draw increased as the chip got hotter. So, KTE is not an Intel engineer.
If I remember correctly, Anandtech's big break came when an OEM slipped them a K6-3 for an early review. Where is the love?
http://reviewage.co.uk/content/view/33/1/
Deneb in the news!
'So either Intel's documentation is wrong or Anandtech is.'
'So, I have to conclude that the power draw testing at LC is accurate.'
I'll take door 3, scientia is wrong. Might want to update the article on that one.
Scientia, I'm not trolling here. I'm taking the time to respond to your article; the least you can do is respond to my posts. There's no trolling about it. I'm asking simple questions and getting nothing back while being heavily moderated. I'll have a conversation with you but you seem hellbent on moderating it to hell and back.
I did get around to reading your post on amdzone and find it rather odd.
'I'm curious why you would object to using the tougher test.'
Where did I object? I linked to LC and they showed data backing anandtechs (and the communities) testing.
'Xbit is obviously being dishonest when they claim that they are using the toughest test when they clearly are not. '
I can't seem to find this claim. Show where they claimed this.
'Are you concerned that the tougher test would pull down Intel's maximum clock more than it would AMD's? '
Why would I be concerned? There's no data to say one way or the other. My speculation is that it would take both down 1-2 speed grades. The difference between small and large FFTs is that small tests memory more than CPU. AMD has the memory controller onboard so it'd possibly limit their clocks more, who knows (again, speculation).
'So you are suggesting that Intel's rating is wrong? You believe that the Q9300 is in reality a 65 watt chip.'
The rating is the cooling that an OEM must provide. It's easier to rate a bunch of chips at x watts and have one cooler cover them all. AMD has done similar things for a number of years now.
'Again comparing with everyone else. I already stated the reason: you use a price/clock curve to show the best value. This is obvious to everyone familiar with graphs and statistics even if they aren't familiar with computer hardware.'
I'm not sure what you want here, scientia. Every single site provides a nice table of price/speed (clock speed) and some other somewhat relevant features. It applies to just about anything where the more you pay, the less value for money you're getting. That applies here.
'Which changes nothing. 9800GX2's are still $150 cheaper than GTX 280. This shift in endorsement seems to be only related to the vendor and nothing more. So, either Anand is schizophrenic or Anand is biased. Which is it?'
Scientia, where did Anand endorse the 9800 GX2? He simply said that it is $150 cheaper but has the obvious downside of being two cards in one. I agree with Anand that if you're going to spend that kind of money the 9800 GX2 is better value for money despite the dual GPU problems. This problem will depend on what games you play as well, a very big factor!
The price has come down on the GT200 series quite a bit thanks to the R700 series (thank you ATI!) I'm not sure of the price off the top of my head.
'Well, it could be that Anandtech lost all of its technical Linux expertise when Kubicki left or it could be that Anand didn't like seeing Intel get beaten in the Linux tests.'
Or it could be that most (if not all) of the readers at Anandtech don't use Linux. I love Linux to death but it has very poor gaming (and other app) support. If no one cares about it (outside of servers, but Windows IS gaining momentum there) why bother reviewing it?
'Anand himself did mixed testing under Windows back in 2005, however, Intel did poorly. This may be why mixed testing was never done for C2D. Mixed testing is not that difficult to do.'
Key word there is may. You're speculating.
You may believe that I'm a troll and fanboy but I'm not. I read everything you have to say and respond to it. The very least you can do here is do the same for me. I think you're an ok guy and it's cool that you're pointing out bias in the media (it's there and all) but I believe you're taking it too far.
So, perhaps you could explain asset smart again?
Hey Scientia,
think you'll find some time to write something about Nehalem and Shanghai? I'd love to read your thoughts.