Friday, February 15, 2008

Bandwagon Journalism (AMD is sooo last year)

Much like wearing Prada, it has become fashionable to attack AMD. Often it seems that web authors say negative things less because there is any valid reason and more because they simply want to be part of the crowd. A good example of this type of trash journalism is this Extreme Tech piece by Joel Durham. Durham and many others suggest that the evidence is everywhere that AMD became lazy and stopped innovating. The reality is that there is no such evidence.

Let's start with the argument that AMD has been generations behind Intel in terms of process technology. In early 2003, Intel's Northwood P4 was at 3.2 GHz while AMD's Barton K7 was at 2.2 GHz. Both were using a 130nm process.

Intel P4
2003, 3.2 GHz - Q4 2006, 3.8 GHz: 19% increase in 14 quarters

AMD K8
2003, 1.8 GHz - Q4 2006, 2.8 GHz: 56% increase in 14 quarters
From parity with the fastest K7, 2.0 GHz - 2.8 GHz: 40% increase
From parity with the fastest P4, 2.2 GHz - 2.8 GHz: 27% increase

After changing processes twice, the P4 topped out at 3.8 GHz on 65nm, a very modest 19% clock increase. AMD increased clock by 56% over the same period. Of course, it could be argued that K8's initial 1.8 GHz was slower than the fastest 2.2 GHz Barton of the time. However, whether we measure from the 2.0 GHz point where K8 matched the fastest K7 or from the 2.2 GHz point where K8 surpassed the fastest P4, AMD still increased clock speed more than Intel did over the same period.
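The percentages above are simple ratios of ending to starting clock. As an illustrative check (the helper function is mine, not from the article), they can be recomputed:

```python
def clock_increase(start_ghz: float, end_ghz: float) -> float:
    """Percentage clock-speed increase from start to end."""
    return (end_ghz / start_ghz - 1) * 100

# Intel P4: 3.2 GHz (2003) to 3.8 GHz (Q4 2006)
print(round(clock_increase(3.2, 3.8)))  # 19

# AMD K8: 1.8 GHz launch to 2.8 GHz (Q4 2006)
print(round(clock_increase(1.8, 2.8)))  # 56

# From the 2.0 GHz K7-parity point and the 2.2 GHz P4-parity point
print(round(clock_increase(2.0, 2.8)))  # 40
print(round(clock_increase(2.2, 2.8)))  # 27
```

Whichever baseline is chosen for K8, the ratio exceeds Intel's 19% over the same 14 quarters.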

The second argument is that AMD has been doing much worse with 65nm than it did with previous process technology and is way behind where it should be. Compared to AMD's track record with 130nm SOI, this is simply not true. It took AMD about a year to match the 2.2 GHz that K7 had reached on the old process and deliver 2.2 GHz K8s in volume in early 2004. We see an almost identical scenario now, with 2.8 GHz 65nm K8s arriving about a year after their 90nm counterparts.

The third argument is that AMD's 65nm process is broken. The supposed evidence is that K8 hit 3.2 GHz on 90nm while 65nm is only now at 2.8 GHz. This may sound convincing, but it isn't a fair comparison. AMD stopped developing the process used for K7 because K7 was on the old Socket A and therefore had a limited lifespan. Had K7 development continued, it too would very likely have reached higher speeds by early 2004 when the 2.2 GHz K8 arrived; we could easily have been comparing a 2.2 GHz K8 to a 2.4-2.6 GHz K7, much as we see today. In fact, this very thing did happen to Intel with the P4. Intel continued to develop the Northwood core on the old 130nm process and reached 3.46 GHz, which exceeded the initial 90nm clock speeds. If we use Intel's highest 130nm clock as the baseline, Intel's efforts look particularly poor: a tiny 10% increase in clock speed to 3.8 GHz on 65nm over the next three years. By the logic of the bandwagon analysts Intel's 90 and 65nm processes must have been broken.

However, the reality is quite a bit different from such a superficial view. Between 2003 and 2006 both companies shifted P4 and K8 to dual core, which slowed clock increases. We really can't compare a dual core Tulsa at 3.8 GHz one to one with a single core Northwood Xeon at 3.46 GHz. We clearly saw that even though AMD's 90nm process was mature by 2005, the initial X2 clock speeds were 400 MHz slower than the single core speeds. Factoring in the core doubling, the actual clock increases were greater than the apparent ones. Similarly, today we see speeds being held back by the shift from dual core to quad core.

It is clear then that K8's clock speeds advanced at a normal pace and that 65nm matches the rate of development of AMD's 130nm SOI process. That leaves the question of why the notion that AMD became arrogant and lazy has been so pervasive since 2006. There is no doubt that Intel scored a big win by introducing an architecture with higher IPC and increasing clock speed at the same time. This is similar to Intel's introduction of the Northwood P4, where IPC increased over Willamette and the improved 130nm process allowed a faster clock. Compared to this, AMD's necessary shift to Revision F for DDR-2 seemed very disappointing. Thus at the end of 2006 Intel was at 3.0 GHz on a 65nm process with quad core, while AMD was just introducing 65nm and was only at 2.8 GHz with 90nm dual core. Some have claimed that AMD should have moved to 65nm earlier, but FAB 30 was not capable of 65nm production, and any money spent upgrading its outdated 200mm tooling would have been wasted. AMD had to wait on FAB 36, and the 65nm ramp there seems in line with industry expectations.

So, looking at AMD and Intel more closely, we simply don't find the arrogance, laziness, or lack of innovation that it has become so chic lately to attribute (with an airy wave of the hand) to AMD. The change to a DDR-2 controller no doubt consumed development resources but added no speed to the processor itself. The bottom line was that Intel's 65nm process was mature when C2D arrived, having already been wrung out with Presler and Yonah, and there was simply no way that K8 with its internal 64-bit buses was going to compete with C2D's new 128-bit buses. Intel basically got lucky with quad core, since the shared FSB architecture was adaptable to it. I have seen many people claim that AMD should have done an MCM-style design with K8, but I still haven't figured out how well this would have worked with a single memory controller feeding two chips and the second chip fed via an HT link. Presumably the performance would have been very similar to dual socket with only one chip connected to memory, and those configurations only showed 50% performance for the second chip. I doubt that such a part at 2.8 GHz would have had much effect in late 2006, and the memory bottleneck would only have gotten worse as speeds increased to 3.0 and 3.2 GHz. Rather than laziness, it is clear that 2006 found AMD with very few options for responding to Intel's successes.

I've seen comment after comment claiming that the purchase of ATI was a huge mistake. I'll admit that it cost a lot of money when AMD had none to spare, but what exactly was the alternative? If AMD had not purchased ATI, the five quarters of losses would have been the same; nothing about the ATI purchase affected AMD's processor schedule. I've also seen claims that AMD overpaid for ATI, and the supposed proof is the $1.5 billion goodwill write-down charged in Q4 07. The problem with this idea is that the purchase price had to include ATI's prospects, including business from Intel. Naturally, ATI lost that business when it was purchased by AMD. Since the loss of Intel business was a direct result of AMD's purchase, this loss of value at ATI was inevitable. On the positive side, however, AMD acquired the 690G chipset, which remained the most popular chipset for AMD systems through 2007, and it is a certainty that the 790 chipset will be in 2008. AMD also gained a purpose-designed mobile chipset; the lack of one had prevented the superior Turion processor from matching the Pentium M for the past several years, and the importance of gaining it is difficult to overstate. This also puts AMD in a much more competitive position with Fusion. There is no doubt that AMD has troubles, but I can't see any that were caused by the ATI purchase. Without the purchase AMD would have more money, but its competitive position would be worse.

I've unfortunately found that when people state my position they usually only get it right about a third of the time, so I'll try to put this clearly. It is obvious that AMD is behind, and the clearest indication of this is the lack of FX chips. The Black Edition doesn't count since it is actually a volume chip, more like a poor man's FX. True FX chips are at the bin limits and therefore only available in limited quantities. The fact that FX chips have been replaced with Black Editions shows that even AMD knows it is behind. AMD's official highest 65nm clock for X2 is 2.8 GHz, yet this X2 4000+ review at OC Inside shows 2.93 GHz at stock voltage. That roughly 200 MHz margin is the difference between common and low volume. In other words, AMD should be capable of delivering FX 65nm X2 chips at 3.0 GHz. Of course, there would be no reason to, since these would not be competitive. Applying the same pattern to X4, we would expect 2.4 GHz at common volume and 2.6 GHz at FX volume. Again, a 2.6 GHz X4 would not be competitive as an FX chip, so there are none.
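The binning reasoning here reduces to a fixed offset: FX-grade parts sit roughly one 200 MHz bin above the highest common-volume clock. A minimal sketch of that pattern (the `fx_bin` helper and the fixed 200 MHz step are illustrative assumptions drawn from the paragraph above, not AMD specifications):

```python
FX_MARGIN_MHZ = 200  # assumed gap between common-volume and FX-grade bins

def fx_bin(common_mhz: int) -> int:
    """Projected low-volume FX bin given the highest common-volume clock."""
    return common_mhz + FX_MARGIN_MHZ

print(fx_bin(2800))  # X2 at 2.8 GHz common volume -> 3000 (3.0 GHz FX)
print(fx_bin(2400))  # X4 at 2.4 GHz common volume -> 2600 (2.6 GHz FX)
```

The same offset applied to both X2 and X4 is what yields the 3.0 GHz and 2.6 GHz projections in the text.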

These clocks match closely with what we've actually seen. I have seen accounts of 2.7 and 2.8 GHz on Phenom X4 at stock voltage. This contrasts sharply with suggestions from places like Fudzilla that Phenom will top out at 2.6 GHz, since one would assume that another quarter or so would give 2.8 GHz common volume for X4 in Q3, which would then seem to allow 3.0 GHz at FX volumes. Whether AMD could truly deliver 2.8 GHz in Q3, and whether AMD would consider 3.0 GHz fast enough for an FX-branded chip, are both good questions. I have seen suggestions that AMD will abandon 65nm in favor of 45nm at mid-year. However, this would not match AMD's previous behavior, since 65nm chips use the same socket and therefore would not be end-of-life the way K7 was in 2003. It seems more likely that AMD will continue to rely on 65nm during 2008 for the bulk of its chips and its highest speeds, and that 45nm, even at reasonable volumes in Q4, will not reach competitive speeds until early 2009. In other words, barring a big process leap for 45nm, I would expect AMD's best in 2008 to be 65nm. I don't suppose we will get any real idea of AMD's 45nm process until someone gets hold of some 45nm ES chips, and that probably won't happen before late Q2.

21 comments:

Scientia from AMDZone said...

AMD is clearly behind Intel; there is no doubt of that. And, there have been delays with K10. True indeed. But articles that smugly bash AMD like this one are simply nuts:

Extreme Tech

"You might have gotten lazy when you were the dominant one and tried to coast on your rep. Well, reputation only goes so far. You know I hate clichés, but as they say, if you talk the talk you gotta walk the walk. You've been crawling."

There is no evidence of this at all. I suppose in hindsight (which is always better) AMD might have thrown in a few tweaks with the Revision F upgrade for DDR-2. Maybe too, in hindsight, they would have tried to get FAB 38 up a quarter sooner. But based on what AMD knew, their course was reasonable. And we see a steady advance of clock speeds on 90nm: 2.8 GHz in late 2006 was in keeping with AMD's expected pace. The fact that Intel was able to get 3.0 GHz is a feather in Intel's cap but no evidence of laziness on AMD's part.

The fact is that AMD got hit from three different sides in 2006. If C2D had not had such a bump in IPC then 3.0 GHz versus 2.8 GHz would not have been such a big deal. And if Intel had not released quad core then again AMD would not have been so far behind in comparison. But dual core K8 with 64-bit buses at 2.8 GHz was no match for quad core Kentsfield with 128-bit buses at 3.0 GHz.

I haven't seen anyone show any better course of action that AMD could have taken in 2006. Nor other than the dubious idea of having more cash have I seen any good reason not to purchase ATI. AMD is behind but not because of complacency, arrogance, or laziness.

Aguia said...

Well Scientia another very well done article.

I just think AMD doesn't release faster 65nm dual core parts because they don't want to break the 65W TDP. I'm pretty sure AMD could do a 3.0 or 3.2 GHz dual core K8 at 65nm, but not within the 65W TDP; more likely the 89W TDP.

Scientia from AMDZone said...

polonium

I'm not sure why you deleted your comments. Naturally, I can read them without any trouble but I wouldn't repost without your permission.

*****

The problem with the idea that AMD got lazy or arrogant is the timeline. Without simply making things up, the timeline does not fit.

AMD had a lot of trouble with yield on the new 130nm SOI process, much worse than the bulk 130nm process used for K7. Then AMD's volume actually fell in 2004 as the larger K8 dies displaced the older K7 dies so there would have been no complacency. If you look at the actual share history you can see that AMD's volume share hardly changed until the second half of 2005.

Now, anyone familiar with CPU design would know that Revision F would have been finished by then, so AMD could not have slacked off on it. FAB 36 would also have been nearing completion, so again nothing to slack off on. The 65nm ramp likewise doesn't fit the idea of holding back; it appears to have been a maximum effort once FAB 36 was operational.

That only leaves the question of K10. The best possible case that anyone could reasonably make would be that AMD slacked off on K10 design starting late Q3 2005 and then realized the problem in early Q2 2006 when C2D was shown. However, this is a far cry from the accepted view that AMD began slacking off in 2003 when K8 was launched.

I doubt that in reality AMD took any such holiday in late 2005 and early 2006 with K10 development. The word was already out about C2D in late 2005, so AMD would have been cautious. I think it was simply the case that C2D was better than AMD expected and there was little AMD could do to accelerate the plans it already had in place.

I've thought about possible tweaks to K8 in Rev F, but I can't get around the cold, hard reality that C2D used 128-bit buses to K8's 64-bit. There's just no getting around that.

Scientia from AMDZone said...

Let's look at these two statements:

"while you and ATI just kind of, well…you don't seem willing to take charge like you used to. You might have lost the will to compete."

On Feb. 16, MemoryExtreme Team Italy, a group renowned for record-breaking overclocking achievements and better known online as giampa, Leghorn and giorgioprimo, set a new world record 3DMark05 score of 39,133, surpassing the previous record also set using ATI Radeon(TM) HD graphics cards. The same day, the team captured the 3DMark06 record by posting a score of 30,662, edging the previous world record by 56 points.

As of Feb. 20, 2008, ATI Radeon(TM) graphics cards take a clean sweep of the current top 20 records in 3DMark05 and 16 of the top 20 3DMark06 scores, for an aggregate 36 of the 40 best 3DMark05 and 3DMark06 scores posted on the Futuremark website.

*****

These two statements just don't seem to match. How does losing the will to compete get you to a world record? I'm not saying that ATI is ahead of nVidia (which is doing a fine job) but apparently ATI is not doing as badly as claimed. The truth is that ATI is competitive today both in chipsets and discrete graphics.

Scientia from AMDZone said...

Copied this from roborat's blog. Since it is addressed to me I might as well answer it here.

InTheKnow said...

"Well, Scientia's latest blog was almost provocative enough to get me to post there."

Actually I wasn't going for provocation, just clearing up some misinformation.

"The first thing that I find interesting is that the "analysis" starts in 2003 and ends in 2006. It's rather curious that 2007 seems to have passed without being accounted for."

Well, not exactly. I did mention 2007 in reference to 65nm Brisbane speeds.

"That gives Intel Strained Si, low-K dielectrics and nickel silicide in 2004."

Well, the problem with this statement is that base technology is largely irrelevant. The only real factors are cost and performance. You need low enough cost to make a profit and high enough performance to be competitive.

"AMD was clearly introducing new tech faster than Intel at 130nm and has not been since."

These two are not comparable at the process level since we would be comparing SOI with bulk. Again, cost and performance is important, not specific chemistry.

"I have one simple question regarding this statement. What happened to 90nm? Did this get left out because it didn't support the conclusion?"

Well, your comment is insulting, ignorant, and rude all at the same time. AMD introduced the 130nm 3800+ Newcastle core 2.2Ghz in June 2004. When the 90nm Winchester core was introduced in October 2004 it was only at 2.2Ghz. AMD didn't reach 2.4Ghz on 90nm until April 2005 which would be 10 months after 130nm. Again, close to a year later.

"However, something does seem to be wrong as both K8 and K10 have both performed poorly at this node."

Not that I'm aware of. 65nm K8 seems to work as well as 90nm K8. Obviously there is no 90nm K10 to compare with.

"That same process node produced a good product in the 90nm/Pentium M combo."

Not exactly. Pentium M put far fewer demands on Intel's 90nm process.

"So I would conclude that AMD has hit the thermal wall on 65nm just as Intel did at 90nm and needs a micro-architecture that will allow them to compensate for that."

This could be true. I really have no idea. This was one of the questions at the end of my article as to whether X4 could reach 2.8 or 3.0 Ghz on 65nm. Right now, 2.6Ghz does look like the near term limit but I'm not sure that it is an actual process ceiling.

"Clock speeds alone aren't an indicator of performance. Multi-core and micro-architecture also have impacts. The total package is best evaluated through extensive benchmarking."

Well, not exactly. I mentioned earlier comparisons of K8 with P4 and K7 but beyond changes in power draw there wasn't much change in performance/clock with K8 over that time span.

"It does seem odd that all of the precede analysis was based on a that "superficial view", however."

Not by a long shot. I could have mentioned AMD's ability to keep power draw within 89 watts as clock speeds increased, as well as Intel's ability to reduce Prescott's high power consumption in the shift from 90nm to 65nm. However, this is not typically necessary, since the chip at the upper thermal bound is normally also the fastest common chip. I excluded leading edge chips (EE, FX, etc.) because these are not as consistent.

By the way, to get to your claim of a superficial view on my part you must have missed this line:

By the logic of the bandwagon analysts Intel's 90 and 65nm processes must have been broken.

The point was to show the absurdity of claiming that AMD's process was broken by using the same superficial comparison with Intel. Remember, this comparison is used all the time to "prove" that AMD's process is broken. Clearly, Intel's 65nm process worked fine with C2D and it would be silly to suggest it was broken.

Polonium210 said...

I was trying to correct a typo and things went wrong, but you have my permission to publish the comment, typo included!

Scientia from AMDZone said...

Reposted: Polonium210 said...

I have to agree with your analysis regarding process technology and "bandwagon journalism". I also think that people STILL just do not get the imperative of AMD purchasing ATI. I shall save my breath on the subject as I think that by 2009, people will see the benefits.

Most people miss the point about what has gone wrong at AMD and criticise them for the wrong reasons, namely process technology and the ATI purchase. Where I think there is justifiable criticism, however, is in the cancellation of two or three projects to improve K8; at least this is the information that has been passed to me. Whether it is true or not, the fact that Intel had a static target to aim for simply helped them beat AMD with C2D. The situation would have been very different had K8 been improved and C2D shown a smaller gain in performance over K8 on release, instead of the larger one it had. Couple this with the various delays to K10 and you have the present predicament AMD finds itself in.

I remember people arguing that K10 was so important to AMD's future that it would be executed flawlessly. I also remember Phil Hester waxing lyrical about the accuracy of the simulations that AMD ran when designing K10, a claim that made me laugh at the time, as I can regale you with stories of many a research career sacrificed at the altar of simulations.

Every design is a compromise, and I appreciate that time constraints are paramount, but I am of the view that no company has any business bringing to market a design with only a 15% improvement in IPC over its predecessor UNLESS it is operating at the limits. Since AMD is claiming IPC enhancements for Shanghai over Barcelona, I can only assume that time constraints prevented these enhancements from being included in Barcelona. It could, of course, be that AMD is referring to Shanghai's increased cache sizes when it calls these cores IPC-enhanced, but since it also mentions the caches in the same breath, it would seem that it doesn't just mean the caches.

Well, this post is probably going to get me off the Christmas card list of some at AMD but I think that no one helps AMD by pretending that everything is hunky-dory.

Ho Ho said...

scientia
"These two statements just don't seem to match. How does losing the will to compete get you to a world record?"

Too bad that ATI GPUs are only good for records while running under loads of LN2. They simply chose a process technology and architecture that reminds me of NetBurst: loads of heat, but when you can cool it you can do miracles (>8GHz P4). Of course, having ridiculously high memory bandwidth helps too.

In the real world they are beaten to the ground by more than year-old GPUs in terms of performance. NV has GPUs with half the theoretical peak bandwidth and shader power that can beat Radeons in most games quite easily. 3DMark is not very good at showing real-world performance, just as SuperPi isn't for CPUs.

Aguia said...

ho ho,

Last time I checked, the Nvidia cards are superior in AA modes and ATI without AA.

But it all depends on which tests are done.
For example, I wonder whether Nvidia's superiority is "just" at 4x AA or extends to all the other AA modes (2x, 6x, 8x, ...).

NIKOS said...

It's only at 4xAA that nV has the upper hand; at higher AA modes ATI is king of the hill. I know that from first-hand experience, as I owned an 8800GTX and then bought a 3870 X2.

Scientia from AMDZone said...

ho ho

I never claimed that the record proves that ATI is on top. In fact, what I really said was:

I'm not saying that ATI is ahead of nVidia (which is doing a fine job) but apparently ATI is not doing as badly as claimed.

Again, the record in 3DMark does not prove that ATI is in the lead. However, it does prove that ATI has not just been sitting idle. ATI has been far more competitive since the 3xx release than has been suggested.

"they are beaten to the ground by more than year-old GPUs in terms of performance"

This is just ridiculous exaggeration on your part. ATI does not get beaten into the ground versus nVidia offerings of the same price. I've only seen slight advantages in the tests. Or are you talking about some of those demos that have nothing to do with actual gameplay?

Scientia from AMDZone said...

You can look at the HardOCP 3870 X2 vs 8800GTX review for comparison.

For example, if you were to claim that the 3870 gets badly beaten in Crysis, Call of Duty, and Unreal Tournament, then the 8800 GTX is massacred in Half Life. So, overall, it looks like the GTX is a little ahead. The GTX goes for as little as $440, but I've seen it for more than $500, so the GTX is a better deal if you shop carefully. Unless, of course, you want SLI and you have an Intel motherboard, in which case ATI is a bargain.

Ho Ho said...

Nice, comparing a dual-die GPU against the oldest G80 and complaining that the older one gets beaten in a game based on a relatively old engine when it wins in the latest and greatest.

Did you notice that NV used higher shader quality settings in Crysis and still achieved much better results? In CoD4 NV used higher AA and won (explain that, nikos). Also, NV had much more stable FPS in pretty much every game; only in HL2 were the two quite similar.

It kind of makes one wonder why Radeon is so inefficient when that X2 has around 25% more memory bandwidth and 2x more computing power.

Btw, want to make some comparisons of 9600gt vs anything on AMD side in terms of performance and/or price?


My point is that sure, ATI is delivering new GPUs quite rapidly but it still is far from getting back to be the leader.


"ATI does not get beaten into the ground versus nVidia offerings of the same price"

So what does ATI have to fight against 9600GT at around $180? 3870 with much lower quality settings doesn't look all that great. Feel free to bring examples from other price points.

Scientia from AMDZone said...

ho ho

"Nice, comparing dual-die GPU against the oldest G80"

This was what was available at the time. If HardOCP has a review of a newer nVidia product, feel free to link it. The age also works somewhat against AMD, since drivers improve over time.

"My point is that sure, ATI is delivering new GPUs quite rapidly but it still is far from getting back to be the leader."

ATI is delivering new GPUs and is not in the lead. Saying it is far from the lead, however, is a stretch.

"So what does ATI have to fight against 9600GT at around $180? 3870 with much lower quality settings doesn't look all that great."

Again, do you have a HardOcp review to reference?

Ho Ho said...

scientia
"Again, do you have a HardOcp review to reference?"

Didn't you read my post? I had a link there. Sure, it is OC'd by around 4%, but that is a small difference from the reference GPU.


"The BFGTech GeForce 9600 GT OC’s gaming performance is very interesting. The first thing to note is that in all three games tested the 9600 GT was in fact faster than AMD’s fastest single GPU video card, the ATI Radeon HD 3870. At this price point of $169-$189, the 9600 GT competes more with the Radeon HD 3850, but lo and behold, it was besting the 3870 in our game testing. In Crysis it simply had faster shader performance which allowed several in-game settings to be set at “High” versus “Medium” on the 3870."

Aguia said...

What do you guys think of this Phenom wins?

Scientia from AMDZone said...

ho ho

Let's put it this way: the 8800GT is a little better than the 9600GT, which is a little better than the 3870, which is a little better than the 3850. Realistically, the 3870 should be priced a little lower than the 9600GT if gameplay is the primary criterion. Again, however, there is no massacre, no beating into the ground, and no great advantage; these are small differences. ATI has done a great job of closing the gap from the 2900 series.

Secondly, the 690G chipset was the most popular chipset for AMD (beating nVidia) and the 780/790 chipsets are certain to be as well. Overall, ATI is doing reasonably well.

Scientia from AMDZone said...

In this Hexus interview with Phil Hester he talks about AMD's position with 65nm. This is right from the horse's mouth:

Hester:

You have to separate yield and speed; those two things don't correlate. Okay, so the yields along the 65nm technology literally since we first started ramping have been fine. There've been no issues. Contrary to the FUD that Intel might be creating there are zero yield issues on the technology itself.

Anytime you introduce a new microprocessor you have to kind of tune that microprocessor to the technology. You have to find all the critical paths, the speed paths in it. You have to go validate all the new pieces of logic. That, if you will, matching and validation has taken longer than we would like.

And so, getting higher speed grades really requires kind of tuning if you will the design points, looking at how the individual transistors and the critical path are specified and work and mapping that onto the technology. So, that's the process right now that really needs the execution improvement that I talked about, is that mapping process. The underlying silicon has been fine since day one.

*****
So, again according to AMD, their 65nm process is not broken, is not behind, and has never had poor yields. This is quite believable as the clock ramp does match the clock ramps for both 130nm and 90nm SOI.

That leaves the question of whether Intel has been able to ramp faster due to tighter design restrictions, as some have suggested, or whether Intel simply has more manpower to tweak the design after the fact. Either way, such an advantage for Intel would be bad for AMD.

Scientia from AMDZone said...

Enumae, as I mentioned, I don't wish to have a big discussion about George Ou's biases here; it's off topic. However, if you want to discuss the chart that Ou talked about showing Nehalem outrunning Shanghai that is fine.

Scientia from AMDZone said...

Here is the chart that was leaked from Sun's website.

If the Nehalem EP is:

4 core - Very impressive
6 core - Modest improvement
8 core - Very little improvement

In other words, if this is quad core then Shanghai is likely to lose ground in early 2009; if it is 6 core then Shanghai could be competitive; and if it is 8 core then Shanghai should be fully competitive, with Nehalem EP offering only a modest increase over Dunnington. We should find out for real perhaps by the end of Q2; that is about when I would expect ES information to start leaking or demos to be done. However, it is possible that Intel will hold back Nehalem information to promote Dunnington. That may even be why this chart was not made public.
