The Top Developments Of 2007
It looks like both AMD and Intel have been as forthcoming as they are likely to be for a while about their long-range plans. The most significant items, however, have little to do with clock speeds or process size.
The two most significant developments have without doubt been SSE5 and motherboard-buffered DIMM access. AMD has already announced its plan to handle motherboard-buffered DIMMs with G3MX. This is significant because it means the end of registered DIMMs for AMD. With G3MX, AMD can use the fastest available desktop DIMMs with its server products. This is great for AMD and server vendors because desktop DIMMs tend to be both faster and cheaper than registered DIMMs. This is also good news for DIMM makers because it would relieve them of making registered DIMMs for a small market segment and allow them to concentrate on desktop products. Intel may have the same thing in mind for Nehalem. There have been hints by Intel but nothing firm. I suppose Intel has reason to keep this secret since it would also mean the end of FBDIMM in Intel's long-term plans. If Intel is too open about this it could make customers think twice about buying Intel's current server products, which all use FBDIMM. So, whether this happens with Nehalem or not until later, it is clear that both FBDIMM and registered DIMMs are on their way out. This will be a fundamental boost to servers since their average DIMM speed will increase. It could also be a boost to desktops, since adding the server volume to desktop DIMMs should make them cheaper to develop. This also avoids splitting the engineering resources at memory manufacturers, so we could see better desktop memory as well.
SSE5 is also remarkable. Some have been comparing it with SSE4, but this is a mistake. SSE4 is just another SSE upgrade like SSE2 and SSE3. SSE5, however, is an actual extension to the x86 ISA. If AMD had been thinking more clearly they might have called it AMD64-2. A good indication of how serious AMD is about SSE5 is that they will drop 3DNow support in Bulldozer. This clears away some bit codes that can be used for other things (like perhaps matching SSE4). Intel has already stated that it will not support SSE5. On the other hand, Intel's statement means very little. We know that Intel executives openly denied any intention of supporting AMD64 right up until they did. And Intel has every reason to dissemble about SSE5. The 3-way instructions can easily steal Itanium's thunder, and Intel is still hoping (and praying) that Itanium will not get gobbled up by x86. Intel is also stuck in terms of competitiveness because it is too late to add SSE5 to Nehalem. This means that Intel would have to try to include it in the 32nm shrink, which is difficult without making core changes. This could easily mean that Intel is behind on SSE5 until 2010. So, it wouldn't help Intel to announce support until it has to, since supporting SSE5 now would only encourage development for an ISA extension that it will be behind in. Intel is taking the somewhat deceptive approach of working on a solution quietly while claiming not to be. Intel can hope that SSE5 won't become popular enough that it has to support it. However, if it does, then Intel can always claim to be giving in to popular demand. It's dishonest, but it is understandable for a company that has been painted into a corner.
AMD understands being painted into a corner. Intel has had the advantage with MCM quad cores since separate dies mean both higher yields and higher clock speeds. For example, on a monolithic quad die you can only bin as high as the slowest core, whereas Intel can pick and choose individual dies and put the highest-binning ones together. Also, Intel can always pawn off a dual-core die with a bad core as a lowly Conroe-L, but it would be a much bigger loss for AMD to sell a quad die as a dual core. AMD's creative solution was the Triple Core announcement. This means that any quads with a bad core will be sold as X3's instead of X4's. This does make AMD's ASP look a bit better. I doubt Intel will follow suit, but then it doesn't have to. For AMD, having an X4 knocked down to an X2 is a big loss, but for Intel it just means having a Conroe knocked down to Conroe-L, which is not so big. Simply put, AMD needs triple cores but Intel doesn't. On the other hand, just as AMD was forced to release a faster FX chip on the older 90nm process, so too it seems Intel has been forced to deliver Tigerton not with the shiny new Penryn core but with the older Clovertown core. Tigerton is basically just Clovertown on a quad-FSB chipset. This does suggest at least a bit of desperation since, after working on this chipset for over a year, Intel will be lucky if it breaks even on sales. To understand what a stumble Tigerton is you only have to consider the tortured upgrade path. In 2006 and most of 2007 Intel's 4-way platform meant Tulsa. Now we get Tigerton, which uses the completely incompatible Caneland chipset. No upgrades from Tulsa. And, for anyone who buys a Tigerton system, oops, no upgrade to Nehalem either. In contrast, 4-way Opteron systems should be upgradable to 4-way Barcelona with just a BIOS update. And, if attractive, these should be upgradable to Shanghai as well. After Nehalem, though, things become more even as AMD introduces Bulldozer on an incompatible platform.
2009 will without doubt be the year of new sockets.
For the first time in quite a while we see Intel hitting its limits. Intel's 3.33GHz demo had created the expectation of cool-running 3.33GHz desktop chips with 1600MHz FSBs. It now appears that Intel will only release a single 45nm desktop chip in 2007 and it will only be clocked at 3.0GHz. The chip has only a 1333MHz FSB and draws a whopping 130 watts. Thus we clearly see Intel straining to deliver something faster, much as AMD did recently with its 3.2GHz FX. However, Intel is not straining because of AMD's 3.2GHz FX chip (which clearly is no competition). Intel is straining because of AMD's server volume share. In the past year, AMD's server volume has dropped from about 25% to only 13%. Now, with Barcelona, AMD stands to start taking share back. There really isn't much Intel can do to prevent this now that Barcelona is finally out. But any server chip share that is lost is a double blow because server chips are worth about three times as much as desktop chips. This means that any losses will hurt Intel's ASP and boost AMD's by much more than a similar change in desktop volume would. So, Intel is taking its best and brightest 45nm Penryn chips and allocating them all to the server market to try to hold the line against Barcelona. Of the 12% that Intel has gained it is almost certain to lose half back to AMD in the next quarter or two, but if it digs in, then it might hold onto the other half. This means that the desktop gets the short end of the stick in Q1 2008. However, by Q2 2008, Intel should be producing enough 45nm chips to pay attention to the desktop again. I have to admit that this is worse than I was expecting since I assumed Intel could do a 3.33GHz desktop chip by Q1. But now it looks like 3.33GHz will have to wait until Q2.
AMD is still a bit of a wild card. It doesn't appear that they will have anything faster than 2.5GHz in Q4, but 3.0GHz might be doable by Q1. Certainly, AMD's demo would suggest 3.0GHz in Q1, but as we've just seen, demos are not always a good indicator. Intel's announcement that Nehalem has taped out is also a reminder that AMD has made no such announcement for Shanghai. AMD originally claimed mid-2008 for Shanghai, and since chips normally appear about 12 months after tapeout we really should be seeing a tapeout announcement very soon if AMD is going to release by Q3 2008. There is little doubt that AMD needs 45nm as soon as possible to match Intel's costs as Penryn ramps up. A delay would seem odd since Shanghai has fewer architecture changes than Penryn. AMD needs a tapeout announcement soon to avoid rumors of problems with its immersion lithography process.
331 comments:
Sorry for the moderation but Gutterat can't seem to stop trolling.
Gutterat's comments seem to be based mostly on the notion that if Intel didn't announce it then it can't be important. Such an idea is of course absurd since it was AMD that did the 64-bit extension to x86.
The official guide for SSE5 is here: AMD64 Technology: 128-Bit SSE5 Instruction Set
The reason SSE5 is different from SSE4 is because SSE5 uses the DREX byte. This extends the ISA in a way similar to AMD64's use of the REX byte.
SSE4 includes new bit codes for instructions but does not include another control byte. Obviously, AMD wouldn't have been able to support SSE4 in, say, Shanghai in the brief period since the SSE4 announcement.
It is possible that Bulldozer might support some of the SSE4 instructions; however, it is also possible that the 3DNow bit codes interfered with these. This may be why 3DNow was dropped.
For anyone who is actually familiar with assembler SSE5 is amazing. Specifically:
• Fused multiply accumulate (FMACxx) instructions
• Integer multiply accumulate (IMAC, IMADC) instructions
• Permutation and conditional move instructions
• Vector compare and test instructions
• Three and four operand instructions
The above are the very things that programmers on other architectures have felt were much better than x86. Multiply-accumulate instructions save an entire instruction. Conditional moves and vector compares can potentially save several instructions. Standard x86 instructions have two sources but use one of the sources as the destination. This means that when you need both original values you have to copy one elsewhere to save it. 3-way instructions, which have a separate destination, avoid this step (saving an instruction). These can make SSE processing much faster.
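The instruction saving is easy to see written out. Here is a minimal sketch in Python, with a dict standing in for the register file; the functions model instruction semantics only and are not actual SSE5 mnemonics or encodings:

```python
# Destructive 2-operand x86 semantics vs. a non-destructive 3-operand
# form. Semantics sketch only; not real SSE5 mnemonics or encodings.

def add_2op(regs, dst, src):
    # Classic x86: dst = dst + src; the original dst value is destroyed.
    regs[dst] += regs[src]

def add_3op(regs, dst, src1, src2):
    # 3-way form: dst = src1 + src2; both sources survive.
    regs[dst] = regs[src1] + regs[src2]

# Goal: compute a + b while keeping both a and b.
# With 2-operand instructions it takes a copy plus the add:
regs = {"a": 3.0, "b": 4.0, "tmp": 0.0}
regs["tmp"] = regs["a"]        # extra MOV just to preserve a
add_2op(regs, "tmp", "b")      # tmp = a + b (two instructions total)

# With a 3-operand instruction the copy disappears:
regs2 = {"a": 3.0, "b": 4.0, "c": 0.0}
add_3op(regs2, "c", "a", "b")  # c = a + b in one instruction
```

The same accounting applies to the fused multiply-accumulate forms, which additionally collapse a multiply and an add into a single instruction.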
These things are the very items that professional programmers have been wanting for years. For someone to try to downplay these additions I assume they would have to basically know nothing about assembler programming.
I apologize again but the only way I can do these in order is to copy them. It seems fitting to begin with the strongest objection first.
intheknow said:
Fascinating.
The two biggest developments for 2007 are things that won't show up on the market until 2009, ~1.5 years from now.
No mention of the fact that Intel just demoed a system running quickpath (aka CSI) and an IMC. I know you have tried to paint these as challenges for Intel, but they are also the two biggest advantages that AMD has held for the last several years. I'd really like an explanation of why this is an advantage for AMD and a disadvantage for Intel.
Then there is Silverthorne. The power of a Pentium M with a 0.5W power draw. A more than worthy competitor for the Geode.
But despite two examples of working products demoed in 2007 you chose the most significant developments to be something that won't see the light of day until 2009.
intheknow
"Fascinating."
Obviously you feel that I am trying to shortchange Intel somehow by ignoring its accomplishments.
"The two biggest developments for 2007 are things that won't show up on the market until 2009, ~1.5 years from now."
Correct. These are of the same timeframe as AMD's Torrenza and Intel's Geneseo initiatives. If you'll recall, the timeframe for Intel's TeraFlop initiative was even longer.
"No mention of the fact that Intel just demoed a system running quickpath (aka CSI) and an IMC."
Yes, this will allow Intel to create distributed-memory systems. However, this isn't new. AMD has already shown that just putting a registered-DIMM memory controller on the chip doesn't solve all of the problems. And Intel, to its credit, tried to solve the same problems in a novel way with FBDIMM. Judged on the whole, we can see that the 128-bit data path of AMD's IMC is a losing strategy, as is relying on registered memory. We can also see that although FBDIMM solves the bus-width problem, it too has limitations in cost and power consumption.
Motherboard-buffered memory access is quite clever because it allows the narrow bus width of FBDIMM, which makes board layout so much easier, while also using fewer buffer chips for lower power draw. It is brilliant because it does not rely on the memory manufacturers for its advanced features. The memory is nothing but commodity DIMMs, with the real advances being in the processor and the motherboard-based buffer chips. This seems to be exactly what both AMD and Intel need to maintain high memory bandwidth while keeping cost and power draw low.
I don't know exactly what Intel has for Nehalem, but if it isn't quite like G3MX I'm sure it will be soon. For example, Intel could just put a derivative of the FBDIMM communications chip on the motherboard and use the same FBDIMM port on the processor. There is also no reason why the motherboard chip couldn't increase the fanout from 1 per DIMM to 1 per 2 DIMMs, which would immediately cut the extra power draw in half. Again, Intel will have no trouble coming up with a similar solution.
"they are also the two biggest advantages that AMD has held for the last several years. I'd really like an explanation of why this is an advantage for AMD and a disadvantage for Intel."
They aren't disadvantages for Intel. Think of it this way: Intel's first improvement to 4-way was Tulsa in 2006, second was Tigerton in 2007, and third is Nehalem in 2008. If AMD weren't releasing DC 2.0 then Nehalem would completely catch up with AMD's 8xxx line. As it is, Nehalem does catch up in 4-way leaving AMD's advantage only in 8-way and above which is a smaller market. Or, it could be that this was intentional on Intel's part to try to leave room for Itanium.
"Then there is Silverthorne. The power of a Pentium M with a 0.5W power draw. A more than worthy competitor for the Geode. "
True indeed. But this is not a new area for Intel, which has been involved with embedded processors for a while. This is just a better processor. We can see that both Intel and AMD have the same idea about replacing the previous ARM- and MIPS-based embedded processors with x86-based ones. In terms of overall function I don't see it as being that different, but it does simplify development both for AMD and Intel and for the makers of palmtops and cell phones.
"But despite two examples of working products demoed in 2007 you chose the most significant developments to be something that won't see the light of day until 2009."
Right. Because I see those two things as fundamental shifts rather than just progress. The only other area that I can think of that might be similar is virtualization which could have profound consequences in servers. I've seen estimates of a machine reduction to 1/4 of what some companies field now. That's quite a shift. The only reason I didn't mention that is because I'm not certain that this will have an effect on the desktop while I'm certain the others will.
Mo said:
Sci.
Intel is putting the binning where it's needed the most: the server space. The 3.0GHz, 1600MHz FSB Xeon is only 80W.
The enthusiast chip (which wasn't even supposed to launch this year) is 130W due to lower binning.
Does the enthusiast care about TDP? Hardly; it's one of the last options they consider. This same enthusiast who will be running two 8800GTXs or two 2900HDs hardly cares about TDP.
You like to talk about server space a lot, so let's look at the 45nm Xeon, which is 80W at 3.0GHz.
pop catalin sever said:
I find it a little disturbing that Intel will launch only a 3.0GHz Penryn this year, as some people were expecting 4.0GHz Penryns.
Still, AMD is very quiet about the Phenom X2, which I would have expected to be available in great quantities by year's end and Christmas shopping time.
Any idea what volume of Phenom X2s will be available in Q4 2007, and at what clock speeds?
mo
I wasn't saying that Intel didn't have any chips that could clock above 3.0GHz or chips that were lower power. Obviously these are going into server offerings. Given the limited initial volume this does seem to be the best solution.
My point was that many people seemed to assume that Intel would have enough volume right away to do whatever they wanted and clearly this is not the case. This shows a distinct rationing on Intel's part. I'll repeat what I've been saying which is that Intel won't have enough chips for this until Q2.
pop
Obviously the article says "eventually 4.0GHz", not 4.0GHz at launch. I think, however, that the 3.33GHz demo did suggest to some people that Intel might be able to deliver a 3.33GHz chip at launch. Obviously, this was not the case. I thought that Q1 for 3.33GHz was more likely, but now with the January date for the 3.0GHz QX chip it is looking more like Q2.
I have never been expecting great quantities of Phenom in Q4. As I've already mentioned the general plan was:
Q3 - K10 server release
Q4 - K10 server volume, K10 desktop release
Q1 - K10 desktop volume, mobile release
Q2 - mobile volume
This was confirmed by AMD executives during the Q2 earnings report.
gdp77 said:
Any idea what this is all about?
Xtremesystems
AMD
gdp
The picture certainly looks provocative. Although, if you look closely you can see that there is no rattle on the tail.
Other people have said that it appears to be an ATI announcement (because the server is actually ATI's). Perhaps the rattlesnake image is to combat the skull image that Intel has been using. Given the image I'm thinking a chipset announcement is unlikely so either a discrete card or something new for Crossfire.
AndyW35 said:
I don't think AMD will have trouble with the architecture of Shanghai because, as you said, it is similar to Barcelona, and although Barcelona has a few issues, which meant a late release date, there should not be the same big problems to overcome.
However, considering the jump from 90nm to 65nm for AMD on its SOI process, it does make you wonder what sort of problems the jump to 45nm is creating using the same SOI-style technology. Intel had to go to entirely new materials to (seemingly) get good 45nm results; is AMD struggling with its SOI move to 45nm? 45nm would have high heat output per mm², which would have to be overcome.
Given the above I can easily see Shanghai slipping and then coming out once again in the midstream and low voltage sectors rather than the high end, as per the 65nm process for K8.
Intel, on the other hand, seems to have a more difficult hoop to jump through with Nehalem than with the die shrink to Penryn. It's certainly a non-trivial task to produce not only your first native quad core but major architectural changes as well. True, it is loading operating systems currently, which is a good sign, but I can see Intel's release of Nehalem slipping as well, possibly into Q4.
Given the above, AMD and Intel have a longer period to duke it out with K10 and Penryn. Therefore it is vital for AMD that they get the clock speeds up. I'm guesstimating we will see a 2.8GHz part in Q1 08 for AMD against a 3.2GHz part for Intel.
The worrying thing for AMD must be not the top-performing Intel part but the possibility of having to price against billions of cheap 45nm cheeseburgers flying out of the Intel kitchens... to quote a phrase.
If a Q6600 is at such a low price now it does make you wonder what they can price the 45nm version at and stay financially at the same level or better.
Christian M. Howell said:
I think that G3MX is the biggest thing this year because of the engineering resources. DRAM prices have taken a hit in the last 6 months, so an initiative that will allow Corsair, etc. to limit the number of different SKUs is welcome.
Also, as far as Shanghai, I don't think it's the same kind of "tape-out" as it's just a shrink. If you look at Brisbane, it taped out only a few months before it was available, if I'm not mistaken.
AMD did show off 45nm wafers a while ago, so I would say that from the engineering standpoint they are ready, but financially they may need to wait until Barcelona gets a foot in the door.
It has begun already, as Newegg actually has the 2347 now, though it's EXTREMELY EXPENSIVE right now.
Also, InfoWorld has a piece about how AMD is working for ISVs and OEMs not the other way around.
Every major vendor is in line behind Barcelona - unlike the Opteron launch in 2003.
And since Barcelona fits into Socket F, upgrade cycles are handled for the Googles, Blizzards and Rackables, not to mention Microsoft, which uses nearly 100% AMD for compiling Windows. I would think that Barcelona will give them 30-70% faster builds, which means a lot when a build of Server 2008 takes a large part of a day.
Anand also showed a preview of Harpertown which should have shown more as it was 33% higher clocked.
Another big one is HTX which is set to make appearances in HP servers with Barcelona. That gives AMD at least a year lead over Geneseo and the coherent nature of Torrenza will be more powerful than the PCIe versions of APUs.
By the time Geneseo appears it's possible that companies like ClearSpeed will have Socket F APUs while IBM has already committed to Socket F for some of their CPUs in the future.
I have long said that AMD's cooperative nature will keep them propped up until Fab 38 comes online. That will allow AMD to provide 40% of the world market for CPUs.
I'd say that AMD is doing fine as a platform company, while Intel is still the CPU company that has to muscle into whatever they can.
I mean, introducing an EPP competitor won't get them SLI from nVidia.
Yes, Newegg (and others) have Barcelona so that should remove the notion that it was a paper launch.
I'm not sure what to say about the $790 price tag though. I doubt it will stay that high for long.
ho ho
I saved yours for last because it is loooong. So, rather than post and then copy and comment I'll answer yours inline.
"Isn't G3MX kind of like external memory controller? I wonder how much latency will it add."
It will add some without doubt. You can look at it several ways, though. For example, DDR3 would increase latency anyway; secondly, AMD now has enough L3 and robust enough prefetching to hide some latency; and finally, the latency wouldn't be worse than FBDIMM's.
"[SSE5] Though for now it is simply a marketing stunt as real HW won't be available for 1.5-2 years. Until then AMD will need something else to be competitive."
It isn't a marketing stunt. The lead time is the only way for AMD to get development in place before Bulldozer is released. Secondly, the lead time pre-empts Intel, since they won't have anything similar any sooner. Without it, Intel could have announced a different spec in early 2009, or perhaps at the 2008 IDF, ahead of Bulldozer's release. Obviously it won't help AMD in the short term.
"3Dnow wasn't all that important actually. It was a big deal when Intel didn't have SSE but after that came it was superior to 3dnow. I've sometimes thought why did AMD keep 3dnow for as long as it did."
3DNow was better in terms of horizontal instructions until SSE3. I still don't think you are seeing the big picture though. AMD is willing to evolve x86 while Intel can only do this if they are willing to sacrifice Itanium.
"[SSE5] My guess would be that they have their own somewhat similar version coming. I'm sure that Nehalem will have something more than SSE4.x."
Not likely. If Intel had something like SSE5 in Nehalem they would have announced it right away since their hardware would easily have trumped AMD's plans. Again, I think Intel is conflicted about extending x86 and putting pressure on Itanium.
"Your theory is interesting but I wouldn't count on it. There are still things left that will be there on Itanium and not on x86 CPUs. Pure FP throughput is not essential for it."
It's more than a theory. Popcnt was an Itanium instruction; AMD didn't need it to compete with Xeon. The other items seem to be geared towards Itanium as well since again Xeon doesn't have 3-way instructions.
Seriously, the only desktop competitor in this class was Apple's G5 Mac with the powerful AltiVec instructions. Since the PowerMacs are gone, AMD isn't doing this to compete with them.
"As I said before I'm quite sure they have something new added to it. They have talked about much increased single threaded performance and to get that they will need something new. I wonder if macro-op fusion to generate madd's would be doable."
Nehalem may very well have something but it won't be an ISA extension like SSE5.
"I would think that 32nm Nehalem refresh comes before that around 2009 or so assuming Intel keeps to its tic-toc model."
Yes, the question has been whether SSE5 could be added at 32nm and still work well without additional architecture changes.
"As I said it takes a long time before SSE5 becomes available. Before that there really is nothing that could make it popular as nobody can use it. After the HW becomes available and Intel does have SSE5 support in 32bit Nehalem upgrade it won't give AMD long time to be alone on the market with SSE5 supporting CPUs."
You're missing the point. I was talking about Intel's reasons for claiming now that they wouldn't support it. Your comment seems to assume they will.
"Tigerton is a platform. As no Penryns are yet available they will simply support the other things that are, Clovertowns. I can't see what's such a big deal with it. Should Intel have waited with the release until a new CPU is available? What good would it have done?"
I'm sorry, but your statement makes no sense. We are talking about Penryns being released in November. That is close enough for Tigerton to have used Penryn instead of Clovertown. I think Intel is all too aware that it doesn't matter, since even Penryn is going to get clobbered in 4-way and Clovertown is only slightly slower.
"So basically with Intel people have one year with 4P Tigertons and with AMD they have at around 1.5-2 years with Socket F, assuming that Bobcat won't be delayed too much."
I assume you mean Bulldozer. Now, that is 3 years for socket F with upgradability versus 1 year for Tigerton. I think it is clear that Tigerton is a stop gap between the outdated Tulsa and the distributed memory Nehalem.
"What about AMD 3GHz desktops, when will we see those?"
I've said before that it takes 6 months for production to catch up to cherry-picking. So, I'm assuming Q1 08. This should prove that Intel's demo was also cherry-picked and not production.
" They were shown long before the release of desktop chips but they aren't even on the roadmap."
Let's not play that game. When Intel demos something, everyone claims that it represents production, and then when they find out that it doesn't they fall back on the roadmaps. I've never claimed that AMD's demo was production; I said 6 months. Do you think Intel is going to have 3.33GHz in Q1?
"I highly doubt that AMD can get several huge boosts with Barcelona revisions. One might be doable and provide +500MHz but I doubt they could do a second one that soon that could give just as much. Assuming that AMD indeed does get up to 2.5GHz by the end of the year it would mean it has reached the point it was hoping to reach months ago when Barcelona was supposed to be released."
Okay, so you don't see a 3.0GHz K10 in Q1 as possible. I think it is entirely doable as a quad-core FX chip.
"When looking at the prices of the server chips things are not that clear, at least not with 1-2P. Of course 4P costs way more but it also has considerably less share."
I'm not sure what you are trying to say. Again, the ASPs for server chips are higher than desktop chips.
"Do you honestly believe that AMD could get that much share back before the year ends? I'd be surprised if they can get more than couple of percent as Barcelona doesn't seem to have too great availability when looking how long it takes for the big guys to start selling the systems based on it."
Yes, that is true. If they take until November to deliver Barcelona systems, then Q1 seems the more likely timeframe to gain back 6%.
"They have 65W 3.33GHz dual-core Xeons coming in Q1; I see no reason why a quadcore based on them couldn't be delivered. To my knowledge Intel hasn't shown its Q1 desktop roadmap yet so anything is possible."
That is true. Intel could come up with a 3.33GHz chip near the end of Q1.
"Considering how long it take for AMD to get Barcelona finally working I wouldn't be surprised if they had similar problems with Shanghai. After all it is a whole new production line with quite a few differences compared to the older one."
It's a different set of problems though. Barcelona was architecture while Shanghai is process.
"Are you sure there is no cmov versions for vectors available on current CPUs?"
Let me check. There are none listed. Also, chapter 6.6 says, "Use muxing constructs to simulate conditional moves in SSE or MMX code." Presumably, if these have to be simulated they aren't in there already.
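For reference, the muxing construct the optimization guide is talking about can be sketched like this: a Python simulation of the four-step SSE sequence (compare, and, and-not, or). Real SSE code would use CMPPS, ANDPS, ANDNPS, and ORPS on 128-bit registers; the small integers here just stand in for 32-bit lane contents.

```python
# Simulating the SSE "muxing" idiom for a per-lane conditional move:
# result[i] = a[i] if mask[i] else b[i].
# In SSE this takes four instructions (CMPPS, ANDPS, ANDNPS, ORPS),
# which is exactly what a true vector conditional move would replace.

LANE = 0xFFFFFFFF  # an all-ones 32-bit lane

def cmp_gt(x, y):
    # CMPPS-style compare: all-ones lane where x > y, else all-zeros.
    return [LANE if xi > yi else 0 for xi, yi in zip(x, y)]

def mux(mask, a, b):
    # (mask AND a) OR (NOT mask AND b), per 32-bit lane.
    return [(m & ai) | (~m & LANE & bi) for m, ai, bi in zip(mask, a, b)]

# Per-lane max(x, y), built from the idiom:
x = [5, 2, 9, 1]
y = [3, 7, 4, 8]
result = mux(cmp_gt(x, y), x, y)
print(result)  # [5, 7, 9, 8]
```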
"MADD and other three-operand instructions in SSE5 seem to use three inputs and output will be written to the place of the third input. Of course it will still be better than doing the same with two separate instructions."
Yes, that is true for the 4-operand instructions. In other words, the 3-operand instructions are true 3-way, with two sources and a separate destination. The 4-operand instructions are the same as 3-way with the third source coming from the destination register. In this sense they are not true 4-way instructions. They are still better than standard x86 instructions since the destination is still specified explicitly.
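Spelled out, the constraint looks like this. This is a semantic sketch only, not real SSE5 encodings, and the second function is a hypothetical fully general form for contrast:

```python
# Sketch of the multiply-accumulate operand forms described above
# (semantics only; not real SSE5 encodings).

def madd_4way(regs, dst, s1, s2):
    # "4-way" MADD as described in the quote: the third source is
    # constrained to be the destination register, so dst = s1*s2 + dst.
    regs[dst] = regs[s1] * regs[s2] + regs[dst]

def madd_true_4op(regs, dst, s1, s2, s3):
    # A hypothetical fully general form with an independent third
    # source; per the discussion above, SSE5 does not provide this.
    regs[dst] = regs[s1] * regs[s2] + regs[s3]

regs = {"r0": 10.0, "r1": 3.0, "r2": 4.0}
madd_4way(regs, "r0", "r1", "r2")  # r0 = 3*4 + 10 = 22; r1, r2 intact
```

Either way it is one instruction instead of a MUL followed by an ADD.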
"Just for the record, what are your experiences with low-level programming?"
I learned 6502 assembler about 25 years ago. Since then I've learned Z80, Motorola 68000, PDP, VAX, MIPS, IBM 360, and x86. And, just for fun, PIC.
____
"One thing that I wonder is if AMD will also add new XMM registers. With only 16 of them things will get crowded, especially when someone wants to use lots of those new three-operand functions."
It doesn't look like it. If you look at the DREX bit fields they are only showing 16 registers.
"Talking about CPUs with lots of SIMD power, does anyone have any predictions what kind of functionality will Larrabee contain? My guess would be at least 32, possibly 64-128 512bit SIMD registers with pretty much all the functions one could find in current GPUs, including MADD and other nice things. There will likely be additional functions also, one being texture filtering. I wonder how will the fixed-function filtering be incorporated into the instruction set. Perhaps a special kind of memory read instruction? I remember that with Power one could set up memory reads in a way you could texture a pixel with single instruction, perhaps something similar will be available with Larrabee?"
Right, but how would these functions be accessed? SSE5 is part of the cpu ISA. Adding Larrabee functions with a driver would be far less sophisticated.
"There have also been rumors that Intel next architecture after Nehalem will be able to reach >200GFLOPS with double precision. I wonder what tricks are they using to reach that high performance. Currently it would take Tigerton with the fastest quadcore Penryns to reach anywhere near that performance level."
Unless you are talking about 16 cores with double the current SSE performance, that large an increase would have to be GPU-based. The only question is how tightly the GPU is integrated.
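The comparison is easy to sanity-check with a back-of-the-envelope peak-FLOPS figure. The platform parameters below are assumptions for illustration: a 4-socket Tigerton box with quad-core Penryns at 3.2GHz, each core retiring 4 double-precision FLOPs per cycle (one 2-wide DP SSE multiply plus one 2-wide DP SSE add):

```python
# Rough peak double-precision GFLOPS for an assumed 4-socket,
# quad-core Penryn (Tigerton) system.

sockets = 4
cores_per_socket = 4
dp_flops_per_cycle = 4   # 2-wide DP mul + 2-wide DP add per core/cycle
clock_ghz = 3.2

peak_gflops = sockets * cores_per_socket * dp_flops_per_cycle * clock_ghz
print(peak_gflops)  # 204.8, right around the ~200 GFLOPS mark
```

So a single post-Nehalem chip reaching over 200 DP GFLOPS would indeed need far wider SIMD units, many more cores, or GPU-style hardware.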
_______
"As for Nehalem 8-core I've yet to get a confirmation if it is native or MCM. Intel itself seemed to talk about native but others have reported MCM too. If it indeed is native then it would make the whole Intel talk about why it didn't create a native quadcore with 65nm moot as native 8-core CPU should be even bigger than 65nm Barcelona. Could it perhaps be just the same kind of marketing talk that AMD used when praising its native quadcore as being a better solution?"
I don't see a monolithic 8-core Nehalem as practical. Such a chip could easily be 520 sq mm. This seems especially true since the resurrected HyperThreading would already give a quad core the appearance of 8 cores. MCM seems more likely.
"Another thing I read today was rumors about RV670. If the specs are right and there are no new shader units added then HD2950 seems to get closer to 8800GTX performance but I doubt it could pass it. Problem will be that by that time NVidia should have its new generation out and I doubt the GPU would be slower than G80."
spam said:
intheknow:
No mention of the fact that Intel just demoed a system running quickpath (aka CSI) and an IMC.
This is "The Top Developments Of 2007", not the "Top developments of AMD and Top developments of Intel".
So, with the context of the blog entry in mind, CSI and IMC are old, old news.
The two biggest developments for 2007 are things that won't show up on the market until 2009, ~1.5 years from now.
I suppose it's an interesting observation, but SSE5 has already been developed and spec has been released. I'm not sure how true this is of G3MX though.
Then there is Silverthorne. The power of a Pentium M with a 0.5W power draw. A more than worthy competitor for the Geode.
Just as with CSI and IMC, you list impressive technology, but not "Top Developments". Silverthorne uses less power and is pretty fast. This is nice, but is not a game changing technology.
G3MX is set to totally alter the landscape of server memory. This is a game changing technology.
SSE5 takes x86 to places it has never gone before, with the addition of 3 and 4 operand instructions. This is huge for x86, and is a game changing technology.
However, I would have liked to see Scientia mention Intel's on chip silicon lasers. These are really starting to shape up, and fiber optic interconnects are also a game changing technology.
spam
I think fiber links on the motherboard itself would be significant technology as well.
I think there is no doubt that signal to noise ratio is going to be a problem if link speeds keep increasing. Fiber doesn't suffer from radio interference.
In one tick of the clock at 3.0Ghz light can only travel about 4" (10 cm). It seems to me that this is going to become a problem if speeds keep increasing. It may be possible though to run more than one channel through fiber. This wouldn't help with latency but it would increase overall bandwidth.
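That back-of-the-envelope figure is easy to check: distance per tick is just the speed of light divided by the clock frequency.

```python
# Distance light travels in one clock tick: d = c / f.
c = 299_792_458   # speed of light in m/s
f = 3.0e9         # 3.0 GHz clock

d_m = c / f       # metres per tick
print(f"{d_m * 100:.1f} cm per tick")   # ~10.0 cm, about 4 inches
```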
I could potentially see fiber being used to update USB. Imagine one USB port that could not only handle all peripherals (including external harddrives) but could act as a network port as well. Hmmm, sounds a bit like Appletalk on steroids.
giant
"Of course you won't post this Scientia,"
Why wouldn't I? It's just a news item; it isn't trolling.
" but Penryn will eventually reach 4Ghz next year. They're already testing 3.4Ghz versions of Penryn:" Skulltrail + benchmarks
The system description shows a dual socket Skulltrail motherboard running dual 3.4Ghz Penryns with 1600Mhz FSBs and dual nVidia 8800GTX graphics cards. The only somewhat odd item is the DDR2-800 FBDIMM memory.
This makes me wonder if it is intended to be a workstation or a gaming system.
Anyway, the second system seems to match the roadmap for a QX9650 processor at 3.0Ghz. Perhaps this means that Intel will indeed be able to release 3.33/3.4Ghz before the end of Q1.
Maybe 3 cores isn't such a bad idea after all. Epic comments
Between the Lines at Zdnet. Yes, I would say this pretty well sums up the truth about 3 cores.
Oddly enough, this observation was inspired by James Reinders, director of product marketing and business at Intel. He stopped by the CNET offices Thursday and we were discussing the semiconductor landscape with a focus on what folks will actually do with more cores. Reinders views the news through an engineer's lens. His view on AMD's yields was just a hunch on his part, but it was dead on.
When you read the rest of the article and see the author's goofy estimates of 85% yields plus another 10% from triple core, it is clear that he is correct when he says he knows nothing about it on his own.
It doesn't surprise me at all though that Reinders was dead on. The bottom line is that selling triple cores is a good bounce for AMD's quad core profitability and something that Intel would do as well if they needed to.
"The system description shows a dual socket skulltrail motherboard running with dual 3.4Ghz Penryn's with 1600Mhz FSBs and dual nVidia 8800GTX graphic cards. The only somewhat odd item is the DDR2-800 FBDIMM memory."
Sure, it shows that... water cooled... How is this any indication that Intel can get these out by Q1?
spam
I assume you are getting that from Skulltrail system image.
It looks like water cooling to me too but I'm not really up on leading edge systems.
Assuming that this is the same system referenced in the memo then, yes, water cooling would most likely mean a TDP over 130 watts. Best case would probably be getting TDP down after one more production run which is about 3 months total. This would be consistent with hitting an April release.
I'm curious why people have been talking about Silverthorne. If my understanding is correct, Intel's Tolapai is intended as a controller for things like HDTV, media players, and settop boxes. Silverthorne seems to be planned only for UMPC-class handhelds. The only other mention I've seen for this is possibly for value priced notebooks for emerging markets.
Larrabee is interesting but I haven't seen anything so far that would suggest that it doesn't use a driver. All the current external processing engines like Stream and the Physics Engine use drivers.
If you look at the Ars Technica article it seems clear that this is an external component and not part of the cpu itself.
Also, was there anything from IDF that contradicted Beyond 3D:
Justin Rattner, Intel's CTO, also noted that Larrabee will be their "first tera-scale processor" and that it is aimed at a 2010 release, or possibly 2009 if things go especially smoothly.
That sounds like late 2009 at the earliest.
Oh, one more thing about Larrabee. My understanding is that this is one TeraFlop with four Larrabee cards, or 250 GFLOPS per card.
Thanks for the explanation, you are looking for disruptive technologies. In that case, you and spam are right on Nehalem, it isn't new or disruptive, but it is a nice piece of current technology.
Regarding Si lasers, which I think is potentially disruptive, I would refer you here. This indicates that it will be fed into a fiber optic system. With the big emphasis that Intel is starting to put on SOC I don't see a 1-2" distance limit being an obstacle.
As to why I'm talking about Silverthorne, and not Tolapai, it is because of where I see Silverthorne taking us, not where it is now. I see this as a large step towards a future of hand-held, voice-operated devices with full PC capabilities. Silverthorne is a long way from this but I see it as a major evolutionary step that has to be taken before a disruptive breakthrough can occur.
Sci
First time commenter here
My question on your G3MX piece and commodity dimms is ECC
most ECC dimms are registered
do you see a plethora of non registered ECC DIMMS or how do you see ECC being accomplished?
wyrmrider
I wonder how many people realize that wyrm is an archaic term for dragon.
Anyway, you are making an understandable mistake. You have it backwards. Buffered or registered memory is ECC (at least I've never seen any that wasn't). However, you can get plain unregistered DIMMs that are ECC. There are 45 examples of this at NewEgg. ECC is nothing hard to add to a DIMM; it's just an extra memory chip to hold the check bits.
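As an aside, here's a simplified sketch of the idea. Real ECC DIMMs store 8 Hamming-code check bits per 64-bit word (single-error-correct, double-error-detect), not a lone parity bit; this just shows the even-parity principle underlying it:

```python
# Simplified sketch of why ECC needs only a little extra storage: a 64-bit
# word protected by check bits held in one extra DRAM chip. Real ECC DIMMs
# use a SECDED Hamming code (8 check bits per 64 data bits); even parity is
# shown here only to illustrate the principle.

def parity64(word):
    """Even parity bit over a 64-bit word."""
    word &= (1 << 64) - 1
    return bin(word).count("1") % 2

data = 0xDEADBEEF12345678
p = parity64(data)

# A single flipped bit is detectable because the parity no longer matches:
corrupted = data ^ (1 << 17)
assert parity64(corrupted) != p
```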
InTheKnow
"Thanks for the explanation, you are looking for disruptive technologies."
Well, I don't consider them disruptive, just more of a shift.
One interesting shift occurred some years ago with typewriters and printers. Back when I was younger you had to use a daisywheel printer to get letter quality. It was good but it was slow. Dot matrix printers were more common but still not cheap.
I recall when early inexpensive dot matrix printers either used thermal paper or a rotating platen with a single element print head instead of individual pins. Around this time Coleco tried to make a low cost daisy wheel printer for their Adam computer (because it was the only way to get letter quality). Unfortunately, by the time they got their printer slow enough that it wouldn't fall apart it needed about 10 minutes per page.
Eventually, the pin technology caught up and we had first near letter quality and then letter quality with dot matrix. Of course, dot matrix was still noisy.
Then the initially expensive ($3,000) ink jet technology caught up and replaced dot matrix. Laser printers were also originally very expensive ($3,500) but they came down in price too. I recall when Atari offered a laser printer with no controller. You used the computer itself as the laser printer controller and this knocked the price down to $1,500. The same with scanners. I recall when a 4" handheld scanner was $600 and a nice flatbed was $2,500. You probably don't recall when Apple used a tiny fiber optic scanner cartridge in place of the ribbon on the Apple ImageWriter to make a slow but low cost scanner. Same thing with photocopy and fax machines. Copiers used to cost thousands of dollars and fax machines were so expensive that FedEx originally leased them (ZapMail). That has sure changed. You can get a combination fax, copier, printer, scanner now for a few hundred. This was a huge change for small business/home office.
And it wasn't so long ago that only doctors and lawyers had pagers. Now everyone has a cell phone.
Interestingly, electric typewriters then shifted from individual keys to daisywheel technology because the slow speed was still very fast compared to a human typist.
Nehalem and Silverthorne are both advances for Intel but they are not industry shifts. Intel's Robson technology could represent an industry shift if computers start booting from memory cards instead of harddrives. In an amazing twist we would be back to what we used to have with magnetic core memory.
"Regarding Si lasers, which I think is potentially disruptive, I would refer you here."
I think they are on track for a photonic link but photonic switching is a problem. Right now, the gain is negative and actual switching circuitry has to have gain. Even the old vacuum tubes had gain.
"I see this as a large step towards a future of hand-held, voice-operated devices with full PC capabilities."
That would be interesting. We're more than two years away from that though. Maybe if they applied some of that massive parallel processing to speech it could work better.
That rattlesnake should be RD790 / 780. And yes RD790 should be getting Quad X fire.
AndyW35 said
True it is loading operating systems currently which is a good sign, but I can see Intels release of Nehalem slipping as well, possibly into Q4.
Maybe you should check your eyes more often.
The simple fact that Nehalem booted several OSes , ran in DP mode a 3D app on A0 silicon is truly outstanding engineering execution.
Which is a world apart from the pathetic K10 struggles.
Tell me again then : why should Nehalem slip instead of being pulled in?
Given the above, this gives AMD and Intel a longer period to duke it out with K10 and Penryn. Therefore it is vital for AMD that they can get the clock speeds up. I'm guesstimating we will see a 2.8GHz part in Q108 for AMD against a 3.2GHz part for Intel.
So in 2Q AMD gains 40% speed (2 -> 2.8Ghz) while Intel basically stays flat (a 3.16Ghz QC will be released on Nov 12). Outstanding display of logic from you.
What were you saying about your green tainted glasses ?
What happens when Intel prices the QuadCore at the AMD tri-core level?
oh and sorry for going off on you earlier.
a man can only take so much. It REALLY frustrates me when the conversations are ON topic and you still decide to delete them. Hopefully you saw the error in deleting MY post and not GREG'S.
Another note you might want to consider: you can't be right 100% of the time; you're human like us and you can make mistakes too. If you made a mistake in the last entry and got called out on it, go and fix your entry and say you made a mistake. Instead you deleted all the posts, corrected your entry and pretended like it never happened. This just lowers your stature.
This is a discussion blog, Just because things are not going your way, does not justify censorship.
Savantu said
"Maybe you should check your eyes more often.The simple fact that Nehalem booted several OSes , ran in DP mode a 3D app on A0 silicon is truly outstanding engineering execution."
In response to me saying
"True it is loading operating systems currently which is a good sign"
Maybe Savantu can check his eyes more often to actually be able to see what I wrote. You just repeated what I said but with a different spin on things. There is a long way to go before Nehalem is released and there are still a lot of things that can go wrong or cause delays.
You then said:-
"So in 2Q AMD gains 40% speed ( 2-> 2.8Ghz ) while Intel basically stays flat ( 3.16Ghz QC will be released on Nov 12 ).Outstanding display of logic from you."
Where are you pulling these figures from, out of your ass? I am pulling them from what is benched at the moment, which is 2.5Ghz for AMD and 3Ghz for Penryn, not old past CPUs and not projected future CPUs as you do because it suits your cause.
Sorry, but that is logical, use what you have at present and extrapolate using past history as a guide. What you are doing is picking your numbers to suit your argument because you are one of the most biased fan boys on the planet.
You then said
"What were you saying about your green tainted glasses ?"
I said nothing about them. This is something from your imagination again you made up to prove a point.
Don't bother trying to refute my posts until you can escape from looking like an imbecile when you do. Just a little tip for you. No charge.
Fiber-optic and USB mentioned in the same sentence? How very odd. Now, Fiber-optic, lasers and *Firewire* in the same sentence would actually make some sense.
USB is inherently a stop-and-wait half duplex protocol design, you will always be giving up at least 50% of your theoretical throughput at these very high speeds. It would be an utter waste.
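The throughput penalty of a stop-and-wait design is easy to model: the link sits idle during every turnaround. This is a simplified generic model with invented numbers, not USB's actual protocol timing:

```python
# Stop-and-wait effective throughput: the sender transmits a packet, then
# idles waiting for the turnaround before sending the next one.
# effective = packet_bits / (tx_time + rtt). Numbers are illustrative only;
# USB 2.0's real protocol overhead differs in detail.

def effective_throughput(link_bps, packet_bits, rtt_s):
    tx_time = packet_bits / link_bps
    return packet_bits / (tx_time + rtt_s)

link = 5e9          # hypothetical 5 Gbit/s link
packet = 8 * 1024   # 1 KiB packet, in bits
rtt = 2e-6          # 2 microsecond turnaround

eff = effective_throughput(link, packet, rtt)
print(f"{eff / link:.0%} of theoretical")   # prints 45% of theoretical
```

Note the trend: as the link gets faster, transmit time shrinks but the turnaround does not, so the wasted fraction grows.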
mo
From your point of view, the problem only began when I deleted posts. That is not my point of view and I'm not going to agree with you on this. The problem began with Savantu's post:
"For your wonderful Linpack tests allow me to portray your lack of understanding :"
The entire issue concerned an assumption or inference about the configuration because the actual test configuration is not stated explicitly. That's a very aggressive statement from someone who is doing nothing more than making a different assumption.
Axel's first comment was the most on point:
"Could a logical explanation possibly be incompetence on the part of Techware? I've never heard of Techware anyway."
I realized that my original assumption was no more likely than that Techware was wrong (which was what savantu and axel were assuming). So, I edited my article and posted a statement saying that I did.
"I edited the main article to show that the configuration and therefore the interpretation is not clear."
If I were trying to hide this (as you incorrectly implied) I certainly wouldn't have posted a comment about it.
Now we get to the second problem. Axel became more aggressive and continued to post based on the original article. Axel is wrong when he says that he did this before my article was edited. I edited the article and then posted the comment about revising the article and then Axel posted this:
" Therefore the entire foundation for your latest blog entry is flawed, and hence so are any conclusions you made."
He did include a lot of information about his assumptions but there was nothing new, just more detail about his previous assumptions. I wasn't feeling particularly charitable with Axel's spouting off based on nothing more than a different assumption. And, it only pertained to the original article which I had already corrected (and axel ignored). My patience had pretty much run out.
Naturally, you, axel, savantu, and giant only cared about the deletions. You didn't see that I was already putting up with overly aggressive and insulting posts. You didn't seem to notice that claims were made based on nothing but assumptions. Nor did you notice that people apparently couldn't be bothered to read my comment saying the article had been corrected.
Greg's post was more accurate:
"maybe you should try calming down a bit and drinking something soothing before you post."
Now, I understand that to you this is an issue of being right or wrong. But it isn't. You get to claim you are right when you have facts on your side, not just different assumptions. I can't say that Axel's or Savantu's assumptions were right because there isn't enough information. However, what I allowed for was that they certainly could be right.
Now, you say that this is a discussion blog and nothing justifies deleting posts. Well, that could be true but you need to realize that nothing justifies being a total jerk. I am not going to put up with unlimited amounts of aggressive and insulting posts and misstatements about my arguments. You need to understand that the fact that your opinion is different does not give you license to be insulting.
If you want your comments to stay on my blog then acquaint yourself with courtesy. I understand that verbal jousting, attacks, insults, sarcasm, and hyperbole are seemingly second nature on other blogs. However, that is not going to be the style here.
savantu
Yes, I deleted your post. Stop being so aggressive. Yes, Andy's reply was aggressive but you started it.
This isn't a place where you get to vent all of the day's frustrations. If you think someone is saying something factually incorrect then you can say it without personal attacks.
Let's look at facts for a change. We know that Intel demonstrated 3.33Ghz Penryn in January. Typically, this level of maturity would suggest release 6 months later as it did for Woodcrest. But there was no Penryn in July. So, what can we infer from this?
We have to assume that in spite of Penryn's clock speed that the processor must have been using some BIOS patches that would not have been acceptable for release. I have to say that I'm also puzzled because the V8 system in January didn't seem to use any special cooling (at least not that I could see) but now the 3.4Ghz Penryn shows up with water cooling. This one baffles me because processes don't get hotter over time. I'd like to know what honest conclusions you can draw from this for Nehalem.
savantu
I'll copy the one factual point you made in the post I deleted:
There's no such thing as an AMD 2.5GHz chip , it has no official ship date. OTOH Xeon 5460 is a QC part that will be released on Nov 12.
Quite true. We do expect a 3.16Ghz Xeon but we really don't know what AMD is going to release. We know that AMD is supposed to release at least 2.3Ghz. 2.5Ghz is likely but not certain and we have heard rumors about faster clocks which are far less likely.
We could have a 5% increase in clock for Intel versus 25% for AMD but this is mostly irrelevant given that AMD at 2.5Ghz would still be well behind Intel. Perhaps the clocks will be closer in Q1.
Scientia why is there NO update on Shanghai TO, why is there no update on 45nm progress?
Because, like Barcelona, AMD needs to keep quiet about things as they aren't going well. You can't promise something is on time when you already know it isn't and is late. Better to keep your mouth shut or face a shareholder lawsuit.
Roadmaps are fine, execution is what matters in the business world. At the moment AMD has shown it isn't executing anywhere but in making fancy powerpoints.
Can you substantiate why you believe this claim
"). Intel is straining because of AMD's server volume share. In the past year, AMD's sever volume has dropped from about 25% to only 13%. Now with Barcelona, AMD stands to start taking share back. There really isn't much Intel can do to prevent this now that Barcelona is finally out"
Or you can delete this too
lex
Can I substantiate which part? That AMD lost server chip volume share or that they might take it back?
I knew that AMD had lost server volume share because I ran through their ASP's and the numbers wouldn't match up unless you assumed that their server chip sales were way down. This was later confirmed by another source.
Let's be VERY CLEAR: it is scientia's "assumption" that AMD will take back server share. It's not a fact.
I just want to be clear that assumptions work both ways.
mo
I'm trying to decide what to make of your post. It comes across as someone who is sulking and trying to pick a fight.
Obviously, your emphasis on assumption has nothing to do with what I just said because I said:
"or that they might take it back"
I think it is reasonable to assume several things. First, if AMD hasn't lost more share by now then they probably won't. Secondly, Barcelona is an improvement on K8 so it should prevent any further losses. Third, we know that Intel has done very well in HPC since November 2006 but we also know that AMD has several HPC projects for Barcelona. Finally, Barcelona seems to have good support among vendors. Obviously I don't know that AMD's server volume will increase but it seems very likely.
So, I assume that the real point of your post is a feeble attempt to draw some parallel between what I just said and what I said earlier about Savantu's and Axel's assumptions concerning the Techware piece. If you honestly can't understand the difference then I'll explain it to you:
Show me where I claimed that someone was completely wrong and unable to understand the facts based on nothing more than my assumption that AMD's server share would increase.
lex
"Scientia why is there NO update on Shanghai TO, why is there no update on 45nm progress?"
This is a good example of the false arguments that I deal with here. You state this as though you are disagreeing with me when in reality you are just repeating things that I've already said. This is from my current article:
Intel's announcement that Nehalem has taped out is also a reminder that AMD has made no such announcement for Shanghai.
So, why are you pretending to disagree?
"At the moment AMD has shown it isn't executing anywhere but in making fancy powerpoints."
Obviously this isn't true. AMD has delivered the first batch of Barcelonas. We'll have to see about the second batch though.
"Let's look at facts for a change. We know that Intel demonstrated 3.33Ghz Penryn in January. Typically, this level of maturity would suggest release 6 months later as it did for Woodcrest. But there was no Penryn in July. So, what can we infer from this?"
Umh no.
Core taped out in May 2005 and was released in June 2006.
Penryn was taped out in January 2007 and will be released in November 2007.
Core validation took 13 months (fairly typical for a new CPU, which is 12-18 months); Penryn took 10 months, again slightly less than what we were used to (12 months).
If anything, the health shown in January carries through to launch day: everything is running smoothly.
"We have to assume that in spite of Penryn's clock speed that the processor must have been using some BIOS patches that would not have been acceptable for release. I have to say that I'm also puzzled because the V8 system in January didn't seem to use any special cooling (at least not that I could see) but now the 3.4Ghz Penryn shows up with water cooling. This one baffles me because processes don't get hotter over time. I'd like to know what honest conclusions you can draw from this for Nehalem."
We have to assume nothing. As I said, Penryn was a home run for Intel and it shows.
About the V8, don't forget we're dealing with an extreme system, a system meant for overclocking and lots of GPUs.
The CPUs obviously don't need watercooling, as even current Kentsfields reach 3.6GHz on air on a daily basis; however, that's not "extreme".
What they do with V8 isn't rational; they are simply trying to impress. V8 was shown factory overclocked to 4GHz. I doubt an air cooling system would allow such flexibility.
Think a little :
-QC Yorkfield running at 3GHz has an 80w TDP
-DC Wolfdale running at 3.16GHz runs at 40w TDP
These are outstanding values and a testament to Intel's work on Penryn. The 130W TDP doesn't mean the chip burns 130W; it probably burns 110W, maybe even less since it uses stock voltage (1.125V), but they use a family TDP.
As for Nehalem, what's to worry about? That it runs fine on A0 silicon, boots different OSes, runs 3D rendering apps in dual processor mode?
Glenn Hinton, Nehalem's chief architect, said they are ahead of schedule and progress is fantastic.
Pat Gelsinger mentioned that Westmere, Nehalem's shrink to 32nm, is also on track and advancing nicely.
Contrast this execution with what the greens are doing.
ho ho -
"Isn't G3MX kind of like external memory controller? I wonder how much latency will it add."
Why do you have to wonder? I thought you've already read my blog on G3MX, haven't you?
ho ho
"Are you sure there is no cmov versions for vectors available on current CPUs?"
Nope. The closest are the BLENDx and PBLENDx instructions in SSE4, but they are nowhere near comparable to the CMOV in SSE5.
"MADD and other three-operand instructions in SSE5 seem to use three inputs and output will be written to the place of the third input."
If you think about it, with as few as 16 registers this is a better approach than having true 4-operand instructions.
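To make the semantics under discussion concrete, here is a scalar sketch in Python. This models only the lane-wise behavior, not the real SSE5 encoding or register file:

```python
# Sketch of the two SSE5 semantics discussed above, modeled on Python lists.
# Illustrative pseudocode for the per-lane behavior, not the real encoding.

def vec_madd(a, b, c):
    """Destructive 3-input multiply-add: the result overwrites the third
    input's register slot, i.e. c[i] = a[i] * b[i] + c[i] in every lane."""
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

def vec_cmov(mask, src_true, src_false):
    """Per-lane conditional move: pick from src_true where the mask lane is
    set, from src_false otherwise (what SSE4's BLENDx only approximates)."""
    return [t if m else f for m, t, f in zip(mask, src_true, src_false)]

a, b, c = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
print(vec_madd(a, b, c))                   # [8.0, 14.0]
print(vec_cmov([True, False], [10, 20], [30, 40]))   # [10, 40]
```

The destructive form is why three register specifiers suffice: the destination is always the same register as one of the sources.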
Again you can go to my latest blog to take a look at some details of AMD's SSE5.
"3DNow was better in terms of horizontal instructions until SSE3."
scientia, 3DNow barely has horizontal instructions. The extension has only two, PFNACC and PFPNACC, which are about useless for anything but limited 3D graphics.
The 3DNow extension was definitely not better just for these two meager instructions.
Lex, AMD's K10 is hardly "garage sale junk". It's a very fine architecture, with great performance per clock and per watt. It scales admirably with sockets and clockspeed. AMD will improve its clockspeed over time, thus making it more attractive to more x86 users. Be thankful we have the choice.
Given the reality of K10 (architecture, performance), I think we'd all be happy to buy them at "garage sale prices" .. especially when the Phenom variants are released at higher clocks :)
Lex Said:
"Game over AMD, fanbois get used to garage sale prices on garage sale junk from the guys in green"
Wow lex, not hostile or obnoxious in the least.
I'm really really glad there are people like you out there that hate a free market. I mean, if you had your way, we'd get rid of stupid things like reasonable prices, a reasonable pace in the advance of technology, equipment that actually does what the customer wants, and companies that aren't mostly about marketing and management.
Way to fight the good fight man, seriously...
Believing that AMD having a reasonable pricepoint, with far better performance/power or heat metrics, and a more flexible and scalable product isn't likely to gain it market share shows intentional ignorance or denseness.
Anyone with an ounce of sense could tell you that even being MORE competitive should give AMD a huge advantage in terms of likelihood to gain market-share. It doesn't matter if their product isn't as competitive as it was expected to be, it's still more competitive (and yes, competitive is actually relative, so you can't say that AMD's product is not "competitive").
Also, Gutterrat, even though I was completely incapable of understanding half the things in Abinstein's blog at the time I read it (though they make perfect sense now), I was still perfectly capable of skipping to the near-conclusion and seeing where he specifically points out the latency added by G3MX. So, even if you can't understand the article, or are pathetically impatient (I mean, it's a good and pretty straight-forward, no-fluff or much opinion article) you could have been reasonable enough to take the time to skim through it and see what is, essentially, his reply.
And your childish reply about psychology is entirely unnecessary. As I just pointed out, it takes no effort or amount of intelligence to find the information you needed, and it is in no way necessary to wade through anything remotely similar to an opinion piece.
Gutterat, I'm not defending abinstein. I'm merely pointing out that your replies are becoming offensive to anyone with an ounce of reason. If you're going to talk so much, at least do so in a way that doesn't take up so much of this board's space and that doesn't get deleted so often.
Gutterrat -
"His blog is the last place I'd go to attempt to get educated on 'G3MX' or 'SSE5.'"
I can understand why you say that - because my blog is heavily technical, scientific, and accurate. Unfortunately these seem to be too much for you.
I also understand why Intel fans or maybe employees like you can do nothing but spread FUD about my blog. Just take a look at my analysis on G3MX, SSE5, and Barcelona scalability. There are facts and reasoning showing the truth that Intel doesn't want its fanboys to understand.
"Abistein is an AMD fanboi. His 15 minutes were over long ago."
No, I am not a fanboi of either Intel or AMD, and unlike you, I seek no minutes of fame. As I said, it is completely fine with me that people choose to be ignorant, but if they somehow don't, I make sure there is one place where they can get accurate analysis.
What I've been seeing OTOH is a pack of Intel fans and employees attacking whoever dares to speak the truth about x86 microarchitectures whenever it slightly favors AMD. Sadly for them, AMD has been right and Intel has been wrong in too many places in recent years - scalability, memory architecture, SSE5, native quad-core, ...
Just looking at some of the same reviews again and figured some of you could comment on this image, or review.
Thanks
Gutterratt, your replies in general have always done that, but when you reply to abinstein, you become particularly childish. It's annoying, and not much fun to deal with.
Particularly in that you refuse to simply look at the last couple paragraphs of his blog just to find one number. I mean, it's that simple, and yet you somehow are too stuck-up to bring yourself to do it. If you want anyone to respect you, you should compose your replies in a way that doesn't make you sound like you hate them.
Abinstein, let's be clear about native quad core. Intel never said there weren't advantages to it. What they said is that native quad core wasn't cost effective at 65nm.
Since IDF we have a little more information about Nehalem than we had before. So let's revisit my earlier guesstimates about die size.
First, our reference point, quad core Conroe at 2 X 143mm^2 for a total area of 286mm^2 for a native quad core solution.
Next is Penryn at 2 X 107mm^2 for a total of 214mm^2 for a quad core solution. The net reduction in die size is on the order of 25% while increasing cache sizes substantially.
Penryn has a total of 890M transistors. We are told that Nehalem has a total of 731M transistors. Since both Penryn and Nehalem are built on the same 45nm technology, transistor count should be a fairly good estimate of die size. So Nehalem should be 731/890*214mm^2= 175mm^2
This puts Nehalem quad core with IMC and QuickPath at only 22% larger than a single Conroe dual core die.
I think that it is a no-brainer to see that quad core Nehalem on 45nm at 175mm^2 will yield better than quad core on 65nm at 291mm^2 (Barcelona). I think the numbers bear out Intel's official statements on why quad core on 65nm wasn't worth doing.
To be honest, I think another big enabler for Intel was QuickPath. Without the need for huge caches to overcome the weakness of the FSB, the transistor count dropped and an additional reduction in die size was possible.
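The scaling arithmetic is easy to check, using the transistor counts and die areas quoted in this thread:

```python
# Die-size estimate by transistor-count scaling on the same 45nm process,
# using the figures quoted in this thread (two Penryn dice at 107mm^2 each,
# a Conroe dual-core die at 143mm^2).

penryn_transistors = 890e6
nehalem_transistors = 731e6
penryn_quad_area = 2 * 107.0   # mm^2, two dual-core dice

nehalem_area = nehalem_transistors / penryn_transistors * penryn_quad_area
print(f"{nehalem_area:.0f} mm^2")   # prints 176 mm^2, i.e. ~175mm^2

conroe_dual = 143.0
print(f"{nehalem_area / conroe_dual:.2f}x a Conroe dual-core die")  # 1.23x
```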
enumae said ...
Just looking at some of the same reviews again and figured some of you could comment on this image, or review.
It looks to me like the big difference here is idle power. The AMD chips run cooler at idle. Since the time frame that they are collecting data over contains a fairly large amount of idle time, you would expect that to weight the numbers towards AMD.
If you look at the integrated power usage over the render time only you get a very different picture. The Intel processors use more power, but also render the image substantially faster. So the Intel processor is actually more power efficient during the actual render operation.
The question that needs to be answered before determining which processor is more power efficient is what types of loadings will a specific server see and what will the ratio of process time to idle time be for each configuration. That would not be a trivial piece of modeling.
My takeaway from that is that if Intel can get away from the power hungry FBDIMMS they may actually be able to claim the power efficiency title. Until then, it will probably go to AMD.
Thanks intheknow.
intheknow -
"Penryn has a total of 890M transistors. We are told that Nehalem has a total of 731M transistors. Since both Penryn and Nehalem are built on the same 45nm technology transistor count should be a fairly good estimate to die size. So Nehalem should be 731/890*214mm^2= 175mm^2"
Your estimate is somewhat inaccurate.
First, note that the 890M transistors of Penryn include 12MB of L2 cache, which takes 12MB*8*6=576M transistors just for the data part; together with cache tag and mux it's more than 600M transistors for L2, and less than 290M for core logic. However, looking at Penryn's die photo, the 290M core-logic transistors take almost half of the die area. Assuming that's 100mm^2, we get ~2.9M core-logic transistors/mm^2.
Then look at Nehalem's 731M transistors. Let's assume it (like Shanghai) has 6MB L2 cache, taking ~300M transistors and leaving 431M transistors, or 431M/2.9 = 149mm^2 of die area, for the core logic. Adding back the cache die area of roughly 50mm^2 (half of Penryn's), Nehalem should have a die area of roughly 200mm^2.
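The decomposition above can be sketched as follows. Every input is an assumption taken from the comment, not a confirmed spec: 6T SRAM cells, decimal megabytes, ~100mm^2 of Penryn core logic, and a hypothetical 6MB Nehalem L2.

```python
# Cache-vs-logic density decomposition, per the assumptions in the post.
def sram_data_transistors_m(mbytes):
    """Data-array transistors (millions) for a 6T SRAM cache, decimal MB."""
    return mbytes * 8 * 6        # bytes -> bits -> 6 transistors per bit

penryn_total_m = 890
penryn_l2_data_m = sram_data_transistors_m(12)   # 576M for the 12MB L2
penryn_logic_m = penryn_total_m - 600            # ~290M after tag/mux overhead
logic_density = penryn_logic_m / 100             # ~2.9M transistors per mm^2

nehalem_total_m = 731
nehalem_l2_m = 300                               # hypothetical 6MB L2 incl. tag
nehalem_logic_mm2 = (nehalem_total_m - nehalem_l2_m) / logic_density
nehalem_area_mm2 = nehalem_logic_mm2 + 50        # add back ~50mm^2 of cache
print(round(nehalem_logic_mm2), round(nehalem_area_mm2))  # ~149, ~199
```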
"This puts Nehalem quad core with IMC and QuickPath at only 22% larger than a single Conroe dual core die."
The more accurate estimate shows quad-core Nehalem (200mm^2) roughly 40% larger than dual-core Conroe (143mm^2).
Yes, Barcelona's 283mm^2 is large, probably the largest x86 microarchitecture ever released. But unlike Intel, AMD probably does it because it can be done. At any rate, the native quad core, integrated memory controller, and HyperTransport of Barcelona offer better scalability, better performance under heavy (server/workstation) workloads, and higher power efficiency.
No, Barcelona's not more efficient than Penryn when running a single task of Povray, but I'm not sure how many people really need Povray performance. It's ridiculous to see Intel fans using Povray for comparison when they readily disregard SPECfp, knowing that Povray is but a subtest of the latter.
But I appreciate your technical point of view, and I believe that through discussion with you we get a closer look at the real trade-offs.
intheknow -
My estimate of Nehalem die size was actually too conservative. Since the tag overhead of the L2 cache can be as high as 1/8 of the data storage (probably even higher), the number of transistors used for L2 is most likely greater than 576*9/8=648M, leaving only 242M transistors for the core, or just ~2.4M/mm^2.
Using this more accurate estimate Nehalem's die area would be (731-324)/2.4+50=220mm^2 if the 731M transistors include 6MB L2, or (731-432)/2.4+67=192mm^2 if the 731M transistors include 8MB L2. In either case (especially the former) it's much larger than the simple 175mm^2 estimate.
intheknow -
I'll have to modify my estimate yet again due to a possible typo in the information you gave previously: Penryn quad-core has 2*410=820M, not 890M, transistors.
Using this value -
Penryn L2 cache: 648M transistors, ~80mm^2.
Penryn cores&others: 172M transistors, ~1.7M/mm^2.
Nehalem L2 cache (6MB/8MB): 324M/432M transistors, 40/53mm^2.
Nehalem cores&others: 407M/299M transistors, 239/176mm^2.
Nehalem total die area: 279/229mm^2. Hardly any better than Barcelona.
Yes, Barcelona's 283mm^2 is large, probably the largest x86 microarchitecture ever released.
Tulsa was over 1.3bn transistors with a die size of 435mm^2. This is far larger than Barcelona. Indeed, Tigerton is much cheaper to produce than Tulsa.
savantu
"Penryn was taped out in January 2007 and will be released in November 2007."
You did a great job of avoiding the question. So, I'll ask again. If Penryn was showing at 3.33GHz in January, why was Intel unable to release it until November?
1. Was it process? Was the demo not representative of real manufacturing?
2. Was it errata? Did these chips have some defect (in spite of being able to run software) that prevented their release?
3. Was it overheating? Was Intel able to hide the fact that Penryn was running too hot at 3.33GHz?
So, again. Why wasn't Intel able to go into production in February and have 3.33GHz Penryns available in May?
intheknow & enumae
As I'm sure intheknow already realized, the power draw test is extremely sloppy. Aside from the fact that Povray isn't a representative load for anything (except running Povray), there are other factors. So, let me say this as simply as possible.
If you compare K10 to K8 you can get some idea if there has been improvement. Likewise if you compare Penryn to Core you can get some idea. However, if you try to compare across platforms (K10 to Penryn) you won't get the right answer.
To be able to compare directly you would need much better testing. For example if you want loaded testing you could run the entire SPEC suite. That wouldn't be a bad starting place because of all the differing code sections.
intheknow
"Since both Penryn and Nehalem are built on the same 45nm technology transistor count should be a fairly good estimate to die size."
I'm going to have to disagree. The transistors used for cache and those used for logic have differing densities. Specifically, cache is denser. So, if you reduce cache on Nehalem and then add logic you don't get 1:1 scaling. Nehalem is going to be larger.
"So Nehalem should be 731/890*214mm^2= 175mm^2"
Only if the ratio of cache to logic were the same.
"I think that it is a no-brainer to see that quad core Nehalem on 45nm at 175mm^2 will yield better than quad core on 65nm at 291mm^2 (Barcelona)."
My current estimate is 260mm^2 for Nehalem.
abinstein
My best estimate is that the logic circuitry shrank by 28%. However, Intel managed a 47% reduction for the L2 cache.
Just estimating the die size from the Penryn and Nehalem wafers gave me a rough estimate of 259mm^2. What is interesting is that when I allow for the difference in transistor density based on a comparison of Conroe and Penryn I get the same estimate.
Okay, your current estimate for Nehalem is 229mm^2 versus my estimate of 259mm^2. It will be interesting to see who is closer. However, it is clear that the actual size is not 175mm^2.
Scientia
Okay, your current estimate for Nehalem is 229mm^2 versus my estimate of 259mm^2. It will be interesting to see who is closer. However, it is clear that the actual size is not 175mm^2.
If this picture is to be believed, both of you are way off.
"You did a great job of avoiding the question. So, I'll ask again. If Penryn was showing at 3.33Ghz in January why was Intel unable to release until November?"
So they won't dent their own margins. AMD hasn't put any competition for above-3.0GHz Intel CPUs on the market => Intel does not need to release >3.0GHz CPUs. They can hold the top shelf with 3.0GHz Penryns while also having increased yields. By having the top-performing CPUs in the high end, Intel doesn't need to concentrate on the high end any more but on other segments; they need to decrease costs and increase performance in the low and mid tiers. Which is something Intel is doing with 45nm starting in 2008.
axel
What can you tell by looking at the packaging? We would need a picture of the actual die.
pop
That didn't really answer the question. Why didn't Intel replace the 3.0GHz Clovertown with Penryn in May? Why wait until November?
If someone really is trying to claim that Intel didn't choose to then we can ignore that argument. Obviously, Intel would have been happy to replace Clovertown with a cheaper chip with more cache.
Scientia
That didn't really answer the question. Why didn't Intel replace the 3.0GHz Clovertown with Penryn in May? Why wait until November?
That one's obvious. D1D did not have the volume in May to supply the mass market. Also it doesn't make sense to try to meet market demand with D1D alone six or seven months ahead of output from at least one other large 45-nm fab, which IIRC is slated to begin its volume ramp in November or December.
Scientia
What can you tell by looking at the packaging? We would need a picture of the actual die.
In most FC type packaging as shown, the CPU die extends essentially out to the edge of the epoxy material. People who've damaged their CPUs when installing the heatsinks have discovered this the hard way, by applying excessive pressure on one side of the die without balancing on the other side. All it takes is a tiny sliver off the edge of the epoxy and part of the die is damaged.
axel
Well, if that is true, then Intel lags at least a quarter behind AMD. Last year, AMD did 65nm testing in Q2, then 65nm production in Q3, and the chips were then released in Q4. If Intel is equal to AMD, then one would expect that if they were doing 45nm testing in Q1, production chips would be available in Q3. Q4 lags by one quarter.
Secondly, I'm not going to jump to conclusions on die size based on a picture of the package. Also, there is no way that Nehalem is half the size of Penryn. But, feel free to believe that if it comforts you somehow.
Giant -
"Tulsa was over 1.3bn transistors with a die size of 435mm squared."
Tulsa is not an (integrated) microarchitecture. It is two dies sitting side by side to save cost. The microarch is single-core and is just over 200mm^2.
Scientia
If you compare K10 to K8 you can get some idea if there has been improvement. Likewise if you compare Penryn to Core you can get some idea.
This is what I wanted to hear from you...
Let me quote a portion of one of your comments in the last topic or article ...And, what happened to the big reductions in power draw for 45nm?......
A reduction of 92W or 30% would be considered a big reduction, no?
Also, if you look at the next page, it is about 64W and about 20% when using SPECjbb, and very similar results were shown on Anandtech.
Hopefully this answers your question.
scientia -
"Okay, your current estimate for Nehalem is 229mm^2 versus my estimate of 259mm^2."
There are various different estimates, and my 229mm^2 is more like the lower bound. In this blog the author estimates Nehalem die size at around 270mm^2. It really depends on how "efficiently" Nehalem can use transistors in its L2 cache.
My purpose above is just to show that straightforwardly interpolating die size can lead to quite wrong (25-33% too low) estimates. The main reason is that cache and logic have very different transistor densities.
scientia -
After a closer look, it seems to me Nehalem die size is probably ~270mm^2, or closer to my upper-bound estimate.
I'm not saying this from any percentage-scaling PoV, but from this wafer picture. The 300mm diameter squeezes in about 22 Nehalem dies horizontally and about 15 vertically. This makes Nehalem about 13.6mm by 20mm, or roughly 270mm^2.
gutterrat -
"abistein can claim he's not an AMD fan all he wants but the evidence suggests otherwise."
Are you not the one who refuses to read the facts in my blog and still blindly calls me a fanboy?
I dare you to read my analysis on SSE5 vs. SSE4, memory bandwidth, G3MX, scalability, ... and point out which part is not factual. Which part hints at a bit of fanboyism?
You, like the Intel that you adore, are indeed a master of FUD.
Scientia
Secondly, I'm not going to jump to conclusions on die size based on a picture of the package. Also, there is no way that Nehalem is half the size of Penryn.
Looking into this a bit more closely, it looks like Fudzilla is spreading FUD as usual, whether intentionally or not:
The technical documentation on Core 2 Duo shows that the substrate is 37.5 mm square. By using a ruler on the bottom CPU in the Fudzilla image and scaling, you'll find that each of the Yorkfield MCM dies is 11.7 mm x 9.2 mm. This works out to 107.6 mm2, which is spot on with the reported Penryn die size of 107 mm2.
Using the same ruler technique on the upper CPU in the Fudzilla image, we find that the die is about 8.8 mm on a side for a total area of 77 mm2. This is clearly impossibly small for four-core Nehalem at 45-nm.
So then what is that CPU? One possibility is that it's a dual core Nehalem without an IMC. This would make the quad core Nehalem somewhere between 135 and 170 mm2, depending on L3 cache size and whether the IMC is included.
Or it could be a budget Wolfdale with reduced L2 cache, which could make the rectangular standard Penryn die into a square.
Either way Fudzilla proves once again that you can't trust a damn thing on that site without doing your homework first.
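Axel's ruler-and-scale technique boils down to one proportion against the known 37.5mm substrate edge. A sketch; the pixel measurements are hypothetical stand-ins, since only the ratios matter:

```python
# Scale photo measurements by a known reference dimension (the substrate).
SUBSTRATE_MM = 37.5   # Core 2 substrate edge, per the technical documentation

def die_mm(substrate_px, die_px):
    """Convert a photo measurement to mm using the substrate as the scale."""
    return die_px * SUBSTRATE_MM / substrate_px

# Hypothetical example: substrate measures 375 px, die 117 px by 92 px.
w, h = die_mm(375, 117), die_mm(375, 92)   # 11.7 mm x 9.2 mm
print(round(w * h, 1))                     # 107.6 mm^2, the Penryn die size
```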
enumae
There are two problems with your statement. The first problem is that the 3.0GHz Penryn in the test is rated at 80 watts instead of the 120 watts for Clovertown. It is not clear how much of this comes from the 45nm process and how much comes from selective binning. It certainly appears that the 45nm parts that don't bin as well will simply be sold as desktop chips.
Secondly, the article itself says:
"I used a beta BIOS for our SuperMicro X7DB8+ motherboard that supports the enhanced idle power management capabilities of G-step chips. Unfortunately, I'm unsure whether we're seeing the full impact of those enhancements. Intel informs me that only newer revisions of its 5000-series chipset support G-step processors fully in this regard. Although this is a relatively new motherboard, I'm not certain it has the correct chipset revision."
Again, this limits the number of conclusions that can be drawn.
Scientia
...80 watts instead of the 120 watts for Clovertown. It is not clear how much of this comes from the 45nm process...
Intel did not have a 80W quad-core at 3.0GHz on the 65nm node but they will at 45nm, so why try to avoid comparing the two?
The 45nm processor performs better at the same clock speed while using less power.
What is it if not 45nm?
You wouldn't be claiming that Intel could release a 80W 3.0GHz quad-core on the 65nm node, would you?
"I used a beta BIOS for our SuperMicro X7DB8+ motherboard that supports the enhanced idle power management capabilities of G-step chips..."
Please look at your quote... "idle power states".
All of my numbers were referencing load and they all show a large reduction in power usage.
If I am missing something please point it out.
Thanks
a possible typo in the information you gave previously: Penryn quad-core has 2*410=820M, not 890M, transistors.
You are correct. That would put my original estimate at 214*731/820 = ~191
You are also correct in saying that the cache is more dense than the logic. But unlike Penryn, which is a die shrink, Nehalem is a new design from the ground up. So the logic circuitry will be better optimized for the die layout. I'm assuming that the QuickPath and IMC circuitry will be comparable in density to the logic. So we will add 15% to the raw comparison to account for that. I'm now at 219mm^2. Let's just call it 220mm^2 for a round number.
I'm not saying this from any percentage scaling PoV, but from this wafer picture. The 300mm diameter is squeezed about 22 Nehalem dies horizontally and about 15 vertically. This makes Nehalem about 13.6mm by 20mm, or roughly 270mm^2.
Let's refine this a bit. First of all, a 3mm edge exclusion is typical, so the usable wafer diameter is 294mm.
In addition the scribe lines need to be accounted for. Call each of those 0.15mm. We get n-1 scribe lines in each direction.
In the 15 die direction we have 14 scribe lines or 2.1mm. So die size in the 15 die direction is 291.9/15 = 19.46. As crude as we are being here 19.5mm is as close as we are going to get.
In the 22 die direction I have 21 scribe lines or 3.15mm. So die size in the 22 die direction is 290.85/22 = 13.22mm. We'll call this 13.3.
So guesstimating from the wafer photo gives 13.3mm x 19.5mm = 259.3mm^2 die area.
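The geometry above can be sketched in a few lines (assuming, as the post does, a 3mm edge exclusion and 0.15mm scribe lines; the unrounded product comes out near 257mm^2, which the post's rounding to 13.3mm x 19.5mm pushes up to 259.3mm^2):

```python
# Back out die dimensions from a wafer photo: subtract the edge exclusion
# and (n - 1) scribe lines in each direction, then divide by the die count.
WAFER_MM = 300
EDGE_EXCLUSION_MM = 3     # typical, per the post
SCRIBE_MM = 0.15          # assumed scribe-line width

def die_pitch_mm(n_dies):
    usable = WAFER_MM - 2 * EDGE_EXCLUSION_MM          # 294 mm
    return (usable - (n_dies - 1) * SCRIBE_MM) / n_dies

long_edge = die_pitch_mm(15)    # ~19.46 mm
short_edge = die_pitch_mm(22)   # ~13.22 mm
print(round(long_edge * short_edge, 1))  # ~257.3 mm^2 before rounding up
```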
By my calculations this puts Nehalem somewhere between 220mm^2 and 260mm^2. I'll split the difference and put my stake in the ground at 240mm^2.
Now let's compare this to quad-core MCM Conroe. Our quad-core Conroe comes in at 283mm^2. Nehalem with IMC and QuickPath comes in at 240mm^2 (my number). This is ~85% of the 65nm solution. While this isn't great, it isn't horrible when you consider the addition of the IMC and QuickPath, which, if I remember right, was estimated at ~20-25% of the die area on Barcelona.
Just for kicks, I've fed various die size estimates into the ICKnowledge die calculator.
My assumptions were a Poisson model with a 0.05 defect density and a 3mm edge exclusion. Let's look at Net die per wafer since the concern here is the economics of the whole thing. I'll use a price of $360 since this is about what Barcelona (1.9GHz) is going for on Newegg.
At a die size of 220mm^2 you get 241 die or $86760
At a die size of 230mm^2 you get 225 die or $81000 and leave $5760 on the table.
At a die size of 240mm^2 you get 213 die or $76680 and leave $10080 on the table.
At a die size of 250mm^2 you get 205 die or $73800 and leave $12960 on the table.
At a die size of 260mm^2 you get 197 die or $70920 and leave $15840 on the table.
At a die size of 270mm^2 you get 186 die or $66960 and leave $19800 on the table.
At a die size of 280mm^2 you get 171 die or $61560 and leave $25200 on the table.
At a die size of 290mm^2 you get 166 die or $59760 and leave $27000 on the table.
Now all the money left on the table is relative to 220mm^2. That is the low end of my estimate, so let's say that Nehalem comes in at 240mm^2 or $76680 per wafer.
Moving to 260mm^2 would give $70920 per wafer and leave $5760 per wafer on the table.
Moving to 280mm^2 from 240mm^2 leaves $15120 on the table for each wafer, which incidentally is where Barcelona is running.
When you multiply the delta from 240 to 280 out by 20K wafer starts per month, you get $302.4M of unrealized gross revenue per month. Or nearly a billion dollars per quarter if you prefer.
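For anyone who wants to rerun the economics, the list above reduces to a short script. The net-die counts are the ICKnowledge calculator figures quoted in the post, not re-derived here:

```python
# Revenue left on the table vs. a 220mm^2 baseline, at $360 per die.
PRICE = 360  # dollars, the quoted Barcelona 1.9GHz street price

net_die = {220: 241, 230: 225, 240: 213, 250: 205,
           260: 197, 270: 186, 280: 171, 290: 166}  # mm^2 -> net die/wafer

baseline = net_die[220] * PRICE                     # $86,760 reference
for size, dies in net_die.items():
    print(f"{size}mm^2: {dies} die, ${dies * PRICE}, "
          f"${baseline - dies * PRICE} on the table")

# The 240mm^2 -> 280mm^2 delta at 20K wafer starts per month:
monthly = (net_die[240] - net_die[280]) * PRICE * 20_000
print(monthly)  # 302400000, i.e. ~$0.9B per quarter
```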
It will be interesting to see where Nehalem really comes in at for die size. With a solid number we can get a better feel for what waiting for 45nm before going to native quad core really saved Intel. Or, conversely, what going to native quad core on 65nm cost AMD.
My apologies for the length of this. The ability to paste a simple table would have made this much simpler to follow.
intheknow -
I appreciate your detail, but there's a flaw in your analysis. If you're going to use the die size to calculate # die/wafer, then you cannot exclude the edge lines, scribe lines, or any inter-die overhead. The reason is simple: they occupy wafer space as well.
For yield/volume purpose the 270mm^2 should be used.
Abinstein, think about what you are saying here. When the die size is measured, the die is already cut out of the wafer. Just look at all the images of dice that are shown. They are cut out of the wafer.
The scribe line is destroyed during the cutting process. That is why it is there. To allow you to cut out the die without destroying the circuit. When you measure the final die area, it is gone. Scribe lines are not part of the die area.
If you can find a way to cut out a die without destroying material, patent it, because you will be a very wealthy man indeed.
intheknow -
"The scribe line is destroyed during the cutting process. That is why it is there. To allow you to cut out the die without destroying the circuit."
Yes, that's why I said for yield and # die/wafer purposes, you need to count in the scribe line, which is not part of the die but still part of the wafer.
"If you can find a way to cut out a die without destroying material, patent it, because you will be a very wealthy man indeed."
Actually you should patent your way of making scribe lines not occupy wafer area, if you still think your way of calculation was not flawed.
Abinstein, let me try to explain this another way. Let's say that I'm processing a wafer and something goes wrong and I dump hundreds of particles on it. If all of those particles, through some freak of nature, ended up in the scribe lines, I would be high-fiving people in the aisles. Why? Because anything in the scribe line is going to be cut out and thrown away. The die are still good. So once again, the scribe line does not count as die area and doesn't matter for yield. In fact, metrology tools don't even scan the scribe lines when the wafer is examined, since there is nothing of value there.
If that doesn't make it clear I give up. I'll leave you to suffer in ignorance. You can view it how you want. I'll go with the industry definition which is that die size does not include the scribe lines. We'll just have to wait and see what the official die size comes out to be. I'll tell you now, it won't be bigger than 260mm^2.
I'm sorry that I cannot make a long comment, I just wanted to ask something.
Abinstein, would you repeat your calculations with any other CPU whose die size is known and compare the size on the wafer to the real CPU size? I did a really quick and dirty calculation with Barcelona and came to a die size of ~300mm^2, much more than the official 283mm^2.
Btw, according to chip-architect, Intel can cram 1MiB of cache into a 6mm^2 area at 45nm. With 8MiB of it, Nehalem should have around 48mm^2 under cache, leaving quite a bit more to the core and other non-SRAM units. If the cache indeed takes roughly 1/3 of the die, then this extremely crude calculation would put the die size at only around 150mm^2, and that cannot be right. Perhaps Nehalem will have more than 8MiB of cache?
intheknow -
"If all of those particles through some freak of nature ended up in the scribe lines, I would be high-fiving people in the aisles."
The area occupied by scribe lines is not affected by defects, but they do occupy wafer area. IOW, excluding the scribe lines will only affect the percentage of good dies, not the number of dies per wafer.
At any rate, you can count the number of dies on this wafer, can't you? I did a rough, conservative count and there are ~226 whole Nehalem dies. Believe it or not, this is exactly the number of 270mm^2 dies per 300mm wafer, using the standard dies-per-wafer equation.
Let me remind you the 270mm^2 is approximated from 20mm x 13.5mm. Let's now exclude the scribe-line widths and make it 19.7mm x 13.2mm (0.3mm, or two scribe-line widths, along each axis). Die size is now ~260mm^2, and you can use this number together with your favorite defect model to calculate yield.
Please note that yield is heavily affected by both technology and circuit complexity, so I have no idea what yield Nehalem would have, or even its relative yield compared to Barcelona. But I'm sure of one thing: the yield of 260mm^2 dies is very close to that of 270mm^2 ones. Besides, you are not going to get 0.05 defect density on Intel's 45nm, and thus all your $$$ calculations seem - sorry for the bluntness - worthless to me.
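The "standard dies per wafer equation" referred to here is commonly written as DPW = pi*(d/2)^2/A - pi*d/sqrt(2*A). A quick check; note this closed form is only an approximation, so it lands somewhat under the hand count of ~226:

```python
# Approximate gross dies per wafer from die area and wafer diameter.
import math

def dies_per_wafer(die_area_mm2, wafer_mm=300):
    r = wafer_mm / 2
    # First term: wafer area / die area; second term: edge-loss correction.
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_mm / math.sqrt(2 * die_area_mm2))

print(int(dies_per_wafer(270)))  # ~221, the same ballpark as the ~226 counted
```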
Ho Ho -
"I did a really quick and dirty calculations with Barcelona and came to a die size of ~300mm^2, much more than the official 283mm^2."
Actually, with the "scribe line" overhead, Barcelona die size is roughly (300/18)x(300/17) ~ 294mm^2, or 4% larger than the 283mm^2 proper. The former (294mm^2) number is used by many, including "intheknow" himself, to calculate # die/wafer for Barcelona.
"If the cache indeed takes roughyl 1/3 of the die then this extremely crude calculation would put the die size to around only 150mm^2 and that cannot be right."
It's possible if you're talking about dual-core Nehalem. For quad-core Nehalem it should be clear to anyone who can count that the die size is roughly 270mm^2.
abinstein
"It's possible if you're talking about dual-core Nehalem"
But I wasn't; I was talking about quad-core. If it indeed has 1/3 of the die under cache and the size is ~270mm^2, then it should have a total of around 90mm^2/6 = 15MiB of cache, and that is pretty much out of the question.
My guess why the die shots show cache as a rather small part of the die is that Intel was able to pack logic better than with Penryn, and that higher-associativity caches take a bit more space.
I'm not quite sure what the die size will be; I'll have to examine things a bit closer. I do think that 270mm^2 is a bit too big.
Forgot to say a few things about caches.
I'm quite sure that Nehalem has all of its highest-level cache shared by all cores and they have equal latency to it, just the same as with Barcelona. The fact that caches show up as two or four separate areas doesn't mean anything. It has been like this with Conroe and with Barcelona and they all have unified shared cache.
To be honest I'm actually surprised that Barcelona has it unified as L3 is rather far from two of the cores.
Another thing about Nehalem die shot is that I'm not sure if the dark areas on the side of the chip have anything to do with caches. Nehalem should have multiple Quickpath connections and if it is similar to Barcelona it would have two on the sides and two in one end of the chip.
Very good abinstein,
I don’t know why there is all this estimated die size talk when you can “simply” count how many dies are on the wafer.
One thing is for sure: it was impossible for Intel to create a quad core with less than 200mm2, unless Intel now has several different dies, some without L3, others without IMC, ...
But since Intel (this also applies to AMD and others) never had more than two different dies (normally just one), I don’t see them changing their strategy now.
PentiumD = 2 x Pentium 4
Celeron = Pentium 4 with less cache
(Same die for 3 processors)
Core 2 Quad = 2 x Core 2 Duo == Pentium E2xx = Core 2 Duo with less cache == Celeron = Core 2 Duo with disabled core
(Same die for 4 processors)
Ho Ho -
"I'm quite sure that Nehalem has all of its highest-level cache shared by all cores and they have equal latency to it, just the same as with Barcelona. ..."
According to AMD's patents, different parts of Barcelona's L3 cache most likely have variable latencies to different cores.
As to Nehalem's L2, it's most likely four separate structures latched together. It's definitely very different from Barcelona's L3.
aguia -
That's quite true... Intel probably just has one or two processor designs. It doesn't make sense to make an optimized design with an IMC and another without.
My guess is the integrated northbridge in Nehalem is probably the part with the lowest yield, and chips that have the integrated northbridge not working (well) will be sold as low-end non-IMC ones.
Isn't an IMC and the FSB two very different ways of communicating with the rest of the system? I'd find it odd if Nehalem's die had logic for both IMC and FSB. That sounds to me like wasted die space.
Core 2 Quad = 2 x Core 2 Duo == Pentium E2xx = Core 2 Duo with less cache == Celeron = Core 2 Duo with disabled core
(Same die for 4 processors)
There are two dies: one with 2MB L2, one with 4MB. The 4MB die is used for the higher-end C2Ds and the quad-core CPUs. The 2MB die is used for the E4xxx and the Pentium E2xxx as well as the Celeron D 4xx.
Originally, the E6300 and E6400 were based on the Conroe die, with half the cache disabled. It wasn't until the Allendale die that parts started having only 2MB cache onboard. This has the obvious advantage of being cheaper to produce thanks to a smaller die size.
Jason -
Nehalem probably won't use conventional FSB at all, even if the chip version doesn't have a (working) IMC.
I did a rough conservative count and there are ~226 whole Nehalem dies. Believe it or not, this is exactly the number of 270mm^2 dies per 300mm wafer, using the standard dies per wafer equation.
I agree that 226 seems like a pretty good number, but might I suggest you try using an online tool like the one here.
If you enter a die size of 20x13.5=270 the calculator gives 213 gross die per wafer. To get 225 gross die per wafer at this aspect ratio, you need to put in 19.6x13.23= 259.3mm^2. I have no intention of tweaking this until it gives 226. 225 is close enough for me.
But I'm sure of one thing: yield of 260mm^2 dies is very close to that of 270mm^2 ones.
You aren't thinking in terms of volume if you believe that.
The same tool will give net die given a specific defect density and yield model. To make you feel warm and fuzzy let's pick a really rotten defect density that Intel will be sure to exceed. We will pick a defect density of 0.16. That corresponds to a 65.1% yield. Intel would have been bankrupt long ago if they couldn't exceed that yield target.
So for our 270mm^2 die we get 138 good die. For the 259.3mm^2 die I will get 146 good die. So you get 8 more good die per wafer. Not much on its own, but now do the volume calculations.
8 times 60K wafers per month (this is probably conservative for Intel once they are fully ramped) gives 480K die per month. So over the course of a quarter, that is 1,440,000 more die. At our $360 Barcelona price tag, that is $518,400,000. In short, enough to put AMD back in the black.
As you assume lower defect densities the difference in good die between the two die sizes gets larger. So the 1/2 billion dollar estimate is a low end figure.
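The Poisson arithmetic in this argument can be reproduced as follows; the gross die counts (213 and 225) are the calculator figures quoted earlier. The 259.3mm^2 case comes out near 148 here rather than the post's 146, which suggests the online calculator's model differs slightly from pure Poisson:

```python
# Poisson yield model: Y = exp(-D0 * A), defect density D0 in defects/cm^2.
import math

def poisson_yield(die_area_mm2, d0_per_cm2):
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100)  # mm^2 -> cm^2

y270 = poisson_yield(270, 0.16)     # ~0.649, i.e. the ~65% yield quoted
good_270 = int(213 * y270)          # 138 good die per wafer
good_259 = int(225 * poisson_yield(259.3, 0.16))  # ~148 (the post says 146)

# Revenue swing using the post's 8-extra-die figure, 60K wafers/month:
quarterly = 8 * 60_000 * 3 * 360
print(good_270, good_259, quarterly)  # 138 148 518400000
```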
Besides, you are not going to get 0.05 defect density on Intel's 45nm
That would be purely an assumption on your part. Since neither you nor I know their yields on 65nm it would be impossible to predict what they will get on 45nm.
Please don't try and sell me your opinion as fact. In return, I'll try and make it clear when I am giving an opinion and when I'm giving known facts.
all your $$$ calculation seem - sorry for the bluntness - worthless to me.
Since I'm not emotionally invested in having you value my calculations, that isn't a problem for me. :)
intheknow -
"To get 225 gross die per wafer at this aspect ratio, you need to put in 19.6x13.23= 259.3mm^2."
No, you input the die size proper into the calculator, which in the case of Nehalem should be somewhere between 250mm^2 and 270mm^2. What I estimated, the "270mm^2", OTOH is the effective wafer area taken by each die. These two are different.
You can read more on the comment area here.
"So for our 270mm^2 die we get 138 good die. For the 259.3mm^2 die I will get 146 good die. So you get 8 more good die per wafer. Not much on it's own, but now do the volume calculations."
There is one truth and one flaw in your "volume calculation" argument.
The truth is that the volume calculation shows chip manufacturing is very much a high fixed-cost business. Even a few more dies per wafer (by slightly shaving off some die area, or even making the die more square) can make the company a lot more money.
The flaw is that there is no need to compare 270mm^2 vs. 260mm^2 in terms of die output per wafer, because we already know the wafer generates ~220 Nehalem dies. As I said, multiply that die output per wafer by your favorite yield estimate and you get the number of good dies per wafer. Any other calculation is unnecessary as long as one knows how to count. The "8 good dies" difference is thus non-existent.
If you guys are interested it is actually 228 die on the Nehalem wafer.
Axel -
Where's your ruler? You can use it here...
... result: Quad-core Nehalem > quad-core Yorkfield.
(Note to above: in terms of die size)
No, you input the die size proper into the calculator, in the case of Nehalem should be somewhere between 250mm^2 to 270mm^2.
I see you didn't even bother to follow the link.
You can read more on the comment area here.
Come now, using your own blog to support your own argument?
Even a few dies per wafer (by slightly shedding off some die area, or even making the die more square) can make the company a lot more money.
Square is actually not the best shape. Nehalem's aspect ratio gives more die per wafer for a 283mm^2 die than a square would.
The flaw is that there is no need to compare 270mm^2 vs. 260mm^2 in terms of die output per wafer, because we already know the wafer generates ~220 Nehalem dies. As I said, multiply that die output per wafer by your favorite yield estimate and you get the number of good dies per wafer. Any other calculation is unnecessary as long as one knows how to count. The "8 good dies" difference is thus non-existent.
I believe I finally see where you are coming from here. If I understand you correctly, then you are saying we use die size to calculate die per wafer. Since we already know die per wafer, we don't need die size to calculate yield, correct?
As to the difference between 270mm^2 and 260mm^2, you made the claim there was very little difference. You gave no qualifiers to this statement. I was just pointing out the fallacy of that argument. The fact still stands that a 10mm^2 reduction in die size translates into big bucks.
So we have 225 Nehalem die on a wafer from the die calculator vs. 205 die per wafer for a square Barcelona at 283mm^2. In order to match good die output, Intel can afford up to 1.8 times the defect density of Barcelona. Therefore, as long as Intel's defect density stays below 1.8 times AMD's, they will be in better economic shape than AMD when producing "native" quad-cores.
And don't bother responding with claims that this will be an advantage for AMD when Shanghai comes out. We can have that discussion when it appears.
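The break-even arithmetic above can be sketched with a simple Poisson yield model, Y = exp(-D*A). The baseline defect densities below are assumed values (the thread quotes none), so this is only a sanity check of the 1.8x figure, not a claim about either company's actual fab:

```python
import math

# Gross die per 300mm wafer and die areas (cm^2), per the figures above
N_INTEL, A_INTEL = 225, 2.60   # Nehalem, ~260 mm^2
N_AMD, A_AMD = 205, 2.83       # "square" Barcelona, 283 mm^2

def breakeven_ratio(d_amd):
    """Defect density Intel could tolerate, relative to AMD's, so that
    good die per wafer match under a Poisson yield model Y = exp(-D * A).
    Solves N_i * exp(-d_i * A_i) = N_a * exp(-d_a * A_a) for d_i."""
    d_intel = (math.log(N_INTEL / N_AMD) + d_amd * A_AMD) / A_INTEL
    return d_intel / d_amd

# The tolerable ratio depends heavily on the baseline density assumed for AMD:
for d in (0.05, 0.2, 0.5):  # defects per cm^2 (assumed values)
    print(f"D_amd = {d:.2f}/cm^2 -> break-even at {breakeven_ratio(d):.2f}x")
```

Under these assumptions the 1.8x figure only falls out at a fairly low baseline density (~0.05 defects/cm^2); at 0.5/cm^2 the tolerable ratio shrinks to roughly 1.16x, so the economic cushion is smaller than a single number suggests.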
If you guys are interested it is actually 228 die on the Nehalem wafer.
Nice work Enumae. Though I think you come in at just a tad under 228 since it looks like you left out the 3mm exclusion zone at the edge of the wafer.
Scientia, The more I look at Nehalem, the more convinced I become that the die size is ~260mm^2, which I believe was what you were guessing. Nice work.
What I'm puzzled over is why Nehalem is so much bigger than 2 Penryns. Quad Penryn should be 214mm^2. If I recall correctly, you said that the IMC and HT are about 25% of the Barcelona die. Plugging that in would give 267.5mm^2 for quad Penryn with an IMC. That's right in the ballpark for Nehalem.
But the cache on Penryn is around 50% of the die area. It's only around 25% of the die area on Nehalem. The core area seems to be almost half the die, so this would mean bigger cores than Penryn. So I find myself wondering, what does all that extra core area bring to the table? Anyone have any thoughts?
intheknow
Nice work Enumae...
Thanks.
I know the image is not perfectly clear (AutoCAD printing to PDF shifts the lines a little), nor is it symmetrical, nor is my line work perfect, but the scribe line is clear of the exclusion zone when you zoom in in AutoCAD.
Areas where the image and the sloppy line work could be called into question are...
1. Top and bottom rows, numbers 1 and 6
2. Top and bottom number 1 on rows with 14
3. Top and bottom number 14 on rows with 14
Hope that makes sense.
They are clear of the exclusion zone.
Did you possibly mean another area?
If not, then there are 228 potential die.
The Top Developments of 2007
AMD continues to talk and not deliver. Losses and debt mount, executives leave, and products are late and slower than promised. Lots more talk about future plans and lawsuits, but still no action.
INTEL continues to show roadmaps and deliver products. 45nm comes in on schedule and the tick-tock products are humming. 65nm C2D continues to dominate across the board. Profits in the billions.
And that is the status of 2007 and will be the same in 2008
AMD will continue to deliver products late and slow, as their inferior silicon and limited manufacturing hamper output, speed, and product learning.
INTEL will ramp 45nm volume faster than planned, cut prices as yields come in better than forecast, and beat revenue and profit guidance.
Draw the trend lines from 2007 through 2008; nothing that AMD has done or said shows that anything will change.
AMD has no Opteron II trick to surprise INTEL, and INTEL has got its ship on the right track.
You can delete this too, but that is the truth!
abinstein
Where's your ruler?
Using the same technique as I used before on Fudzilla's bogus image, I get a Nehalem die of 17.3 mm x 13.6 mm = ~235 mm2. This is for eight cores, according to the image. Ydinta in Finnish means core.
So according to that site, a quad-core Nehalem would be around 120-150 mm2, depending on the size of the IMC and how much L3 cache was shared between all the cores.
However, there is a possibility that the site is mixing up cores with threads. As each of Nehalem's cores can handle two threads, the site may have misinterpreted quad-core as eight-core.
I said
However, there is a possibility that the site is mixing up cores with threads.
Running that Finnish page through a translator, it appears the authors are claiming that the pictured Nehalem can handle 16 threads. So indeed, they do believe that it's an octa-core Nehalem.
However, my opinion is that they're mistaken. That picture is of a quad-core Nehalem.
Axel -
"Using the same technique as I used before on Fudzilla's bogus image, I get a Nehalem die of 17.3 mm x 13.6 mm = ~235 mm2."
You could be right, although this number is somewhat smaller than the number estimated from the wafer picture, by scientia's scaling projection.
"This is for eight cores, according to the image. Ydinta in Finnish means core."
As you said, the author confused threads with cores. Just look at Nehalem's die photo - it's clearly 4 cores, not eight. AFAWK an 8-core Nehalem doesn't exist yet.
intheknow -
"What I'm puzzled over is why Nehalem is so much bigger than 2 Penryns."
This one is simple: 2-way SMT. To do it right, some noticeable die area must be spent (i.e., critical structures replicated for each thread).
"If I recall correctly, you said that IMC and HT is about 25% of the barcelona die. Plugging that in would give 267.5mm^2 for quad Penryn with IMC. That's right in the ballpark for Nehalem."
That's probably by chance. :) You didn't account for the reduction from less L2 cache and the increase from SMT & other core tweaks.
That's probably by chance. :) You didn't account for the reduction from less L2 cache and the increase from SMT & other core tweaks.
Which is what puzzles me about the die size. It may be the 2-way SMT, but the 4 cores still take up nearly 50% of the die. I'm wondering what else they have done to the core, because that seems to make it quite a bit larger than Penryn's core.
Enumae, no, you have identified the dice in question. So it looks like there will be 228 per wafer.
New Image.
This is an image of the die, and the die with the scribe lines showing both sizes as best I could.
Hope this helps?
enumae -
I believe the scribe line width you used there was too large. A usual scribe line should be more like 0.2-0.3mm in each direction, not the ~0.7mm you used.
intheknow -
I did try the "tool" you mentioned but really see no contradiction with what I said. To use the proper die size excluding the scribe line (supposedly 0.3mm), you should enter 13.2 by 19.7 (260mm^2 die size proper); with an edge exclusion between 2mm and 3mm you get 225 to 229 dies.
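For readers without the calculator at hand, the classic die-per-wafer approximation gets close to these counts. This is only the textbook estimate; the function name and the 2-3mm exclusion values simply mirror the numbers discussed in this thread:

```python
import math

def gross_die_per_wafer(die_w_mm, die_h_mm, wafer_d_mm=300.0, edge_excl_mm=3.0):
    """Classic approximation: DPW ~= pi*r^2/S - pi*d/sqrt(2*S),
    where S is the die area and d the usable wafer diameter.
    The second term accounts for partial dies lost at the wafer edge."""
    d = wafer_d_mm - 2 * edge_excl_mm
    s = die_w_mm * die_h_mm
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# 13.2 x 19.7 mm die (260 mm^2 proper), 2-3 mm edge exclusion:
print(gross_die_per_wafer(13.2, 19.7, edge_excl_mm=3.0))  # 220
print(gross_die_per_wafer(13.2, 19.7, edge_excl_mm=2.0))  # 223
```

The approximation lands slightly under the 225-229 range because it ignores row-by-row packing; an exact rectangle-packing count (as in the AutoCAD drawing) can squeeze a few more die onto the wafer.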
Also, it seems to me the Nehalem "cores" occupy about 40% of the die area, not 50%. A Penryn core is about 23mm^2; suppose a Nehalem core is 15% larger (just a rough guess), then it would be roughly 26.5mm^2. So four cores amount to ~106mm^2, or ~40.7% of 260mm^2.
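The arithmetic is easy to check. A tiny sketch taking the post's figures (23mm^2 Penryn core, a guessed 15% growth, ~260mm^2 die) as given:

```python
penryn_core = 23.0                 # mm^2, figure quoted above
nehalem_core = penryn_core * 1.15  # rough guess: 15% larger for SMT etc.
four_cores = 4 * nehalem_core
fraction = four_cores / 260.0      # against a ~260 mm^2 die
print(f"{four_cores:.1f} mm^2 -> {fraction:.1%} of the die")  # 105.8 mm^2 -> 40.7%
```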
Lex, did someone ask you to be obnoxious, or did you grant yourself the liberty?
The only reason to post here is if you have an argument. Reasonable people on all sides of the discussion here are likely peeved that you waste this board's space when there are actual VALID conversations taking place.
Besides, Intel is not "early" with 45nm, as all sides agree. Ramping is a controlled process: it happens either as fast as it possibly can (which Intel would mostly know in advance) or at a rate planned to maximize profitability. Thus Intel either wouldn't want it to go faster than planned, or doesn't know exactly how fast it plans on ramping.
Do you understand what legal proceedings are actually taking place? It's not that AMD hasn't done anything in the courtroom yet (in fact, there's been far too much news for you to have gone without reading any of it), but that the courts cannot act yet because they have not gathered enough information. Also, AMD has been successful at litigating in Japan, the EU, and Korea, so the US is really all that's left.
I would draw a trend line from 2007 to 2008, but being that neither is finished, and 2008 has not yet come, that would be impossible...
Axel
I would draw a trend line from 2007 to 2008, but being that neither is finished, and 2008 has not yet come, that would be impossible...
What's so difficult about this?
- K10 benchmarks are out and it will not beat Penryn per clock in the markets that matter for revenue.
- Penryn and Harpertown pricing has been revealed and it shows that the price war will only intensify going into 2008. Due to Intel's IPC, clock speed, and die size advantages, they will set the pricing to ensure that K10 will not turn a profit for AMD.
- It looks like Intel are set to execute the 45-nm ramp as planned.
The trend line is easy: AMD's losses will continue to mount and they will not make it without a major restructuring next year. Period. End of story.
I have yet to see a single compelling counterpoint expressed by anyone on any of these blogs to the above logic that has been obvious all year. It doesn't matter anymore if K10 ramps to 3.0 GHz by the end of Q1. It's simply not enough.
- Penryn and Harpertown pricing has been revealed and it shows that the price war will only intensify going into 2008. Due to Intel's IPC, clock speed, and die size advantages, they will set the pricing to ensure that K10 will not turn a profit for AMD.
- It looks like Intel are set to execute the 45-nm ramp as planned.
The trend line is easy: AMD's losses will continue to mount and they will not make it without a major restructuring next year. Period. End of story.
I have yet to see a single compelling counterpoint expressed by anyone on any of these blogs to the above logic that has been obvious all year. It doesn't matter anymore if K10 ramps to 3.0 GHz by the end of Q1. It's simply not enough.
It should show that Intel is abusing its monopoly position to set prices low, while they keep a premium on certain chips.
Paul Otellini is taking lessons from Tony Soprano. Fuck you, pay me. Or maybe that's Paulie from GoodFellas.
christian m. howell
"It should show that Intel is abusing its monopoly position to set prices low, while they keep a premium on certain chips."
What the ...? Do you yourself believe what you are saying?
There is no way AMD could supply the entire market, or even a majority for that matter. Would you like Intel to price all its chips high? Wouldn't that also be abusing its monopoly? Is it Intel's fault that AMD doesn't have a competing product?
abinstein, there is some interesting discussion going on on Roborat's blog with some corrections to your post. I'd suggest you take a look at it and fix your post.
It should show that Intel is abusing its monopoly position to set prices low, while they keep a premium on certain chips.
So... are you saying that Intel should raise prices so we as consumers will have to pay more for our CPU's and allow AMD to make money?
Ho Ho -
"abinstein, there is some interesting discussion going on on Roborat's blog with some corrections to your post."
Thanks, Ho Ho, for the heads up. I suppose they (or even you) are free to make comments on my blog, if they believe there is anything to "fix". Otherwise I'm not in the business of tracking comments all over the net, and I especially have no time for a bunch of fanboys who don't dare write to me directly but only behind my back.
I appreciate your note, though.
lex -
"Barcelona is out and AMD has talked to their roadmap for the next two years. There is NOTHING."
I'd say the shared "non-inclusive" L3 cache is something very special. No other processor that I know of has done that. The split power planes and separate clocking domains are quite novel, too.
Also, the split memory controllers can reduce effective memory access latency quite a bit; at higher frequencies and under heavy loading the system could scale to higher performance.
"How is it that INTEL make billions and the competitor loses money?"
The absolute amount of cash is of no significance because of the scale (you're talking about hundreds of millions of chips). What is important is gross margin - and Intel's is higher, but not by much.
And the main reason Intel can have a higher gross margin is its sheer size and monopolistic tactics. Intel can lose money in the segments where AMD has competitive products to drive down AMD's sales, while at the same time profiting in the segments where AMD has no competitive products. This is illegal in most countries, and I'm sure it is in the US as well.
For example, let's say AMD's Opteron is competitive in servers, but Turion not so much in notebooks. What Intel does is lower its Xeon prices so that it sells 100 chips for the price of 75, effectively (ideally) eliminating AMD's revenue there, while strong-arming the OEMs and driving up prices on notebooks.
I'm afraid that if you don't recognize these simple dynamics, then you have a rather insufficient understanding of the semiconductor market.
abinstein
"For example, let's say AMD's Opteron is competitive at the server, but Turion not so at notebooks. What Intel does is to lower its Xeon prices so that it sells 100 chips for the price of 75, effectively (ideally) eliminate AMD's revenue there, while strong arming the OEMs and drive up prices on the notebooks."
What makes you think Intel is losing money on selling their Xeons? At most they won't be making big profits from the low-end chips, but they are definitely not losing any money on them.
Ho Ho -
"What makes you think Intel is losing money on selling their Xeons?"
Please, use your brain. When I said "lose money at the segment" it doesn't need to show up as red on the balance sheet. If you don't make as much as usual, then you lose. Plus, in my example I didn't say Intel "loses money on Xeon", so you can stop putting words in my mouth.
At any rate, what I said to lex applies to you as well.
abinstein
"If you don't make as much as usual, then you lose."
So in what way is AMD doing differently by pricing their K8 at extremely low prices? After all at the low-end AMD does offer better value for the money.
It is kind of funny that when AMD was dominating Netburst and selling better performing CPUs at lower prices, everyone welcomed competition, but now that positions have changed people tell a whole different story. So was AMD also "abusing monopoly" back then, even though it was not a monopoly?
Ho Ho -
"So in what way is AMD doing differently by pricing their K8 at extremely low prices?"
Because AMD with <20% market share is not a monopoly.
"It is kind of funny that when AMD was dominating Netburst and selling better performing CPUs at lower prices everyone welcomed competition"
First, thanks to Intel's illegal business tactics, AMD has been dominating even though Athlon64 was superior to anything Intel released for 3 years.
Second, AMD could never do what Intel did to the OEMs because it is the much smaller player. Basically this is precisely what makes some tactics illegal for a monopoly company.
correction- AMD "has never been dominating"
abinstein
"Because AMD with <20% market share is not a monopoly."
Yes, it is not, but it still prices lower than equivalently performing products, the same as Intel does with Xeons.
"First, thanks to Intel's illegal business tactics, AMD has been dominating even though Athlon64 was superior to anything Intel released for 3 years."
By domination I was thinking of performance, performance per $ and performance per watt compared to competition.
"Second, AMD could never do what Intel did to the OEMs because it is the much smaller player."
Is Intel doing it to OEMs at the moment? It might have done it in the past to keep K8 from taking over more share from Netburst, I don't argue with that. Just that I'm not so sure it is still doing it today.
"Basically this is precisely what makes some tactics illegal for a monopoly company."
So what should Intel do? Price up all its products, lower all of them, or something else? If it priced everything up, people would also start complaining that Intel abuses its monopoly, wouldn't they?
From what I know about competition and monopoly laws, it is illegal to set prices so low that the company starts losing money selling the products. From what it looks like, Intel is not doing that, certainly not with Xeons.
First, thanks to Intel's illegal business tactics, AMD has been dominating even though Athlon64 was superior to anything Intel released for 3 years.
As I said before, AMD had that golden opportunity from 2004 to mid 2006 to gain all the market share they could. But AMD was stuck with only FAB30 as its single source of CPUs. They were selling all the CPUs they could make during this period.
They had FAB36 under construction, but that and the deal with Chartered didn't come online until 2006. By the time FAB36 was ready and producing CPUs, Woodcrest and Conroe were here, and AMD no longer had a huge performance advantage.
It's hardly Intel's fault that AMD only had one FAB operational then. There was nothing magical that AMD could have done to change this.
"They were selling all the CPUs they could make during this period."
They sold all their CPUs because they sold them at low prices. AMD didn't always sell everything it made in the years prior to the introduction of K8.
However, even though AMD had clearly superior products, Intel's illegal tactics ensured that for 3 years most OEMs kept AMD chips out of their high-end systems.
You must be blind if you can't see it.
Giant
They were selling all the CPUs they could make during this period.
This is precisely why it's ludicrous to claim that Intel's steep discounts to OEMs, aggressive marketing, etc comprised any sort of illegal activity. For the last several years AMD have sold everything they fabbed! Look at their quarterly statements, they didn't have inventory writeoffs like Intel did. They could not have taken any more market share than they already did, because they were at full fab capacity. If anyone had CPUs piling up in the warehouses, it was Intel with Netburst.
And that is also why we can confidently put a fork in AMD's current way of doing business. When you sell everything you make and still lose $2 billion in a year, it's time for massive changes.
The other problem is that in 2008 AMD is continuing to ramp Fab 36 and Intel will begin to flood the market with 45-nm chips from four fabs. So we may have a glut of CPUs on the market if Asian demand doesn't take off as expected. If so, one of the two companies (or both) will experience rising inventory levels and it will become a price war like never before, until inevitably AMD is forced to concede.
Ho Ho
"Yes, it is not but it still has a price lower than equivalently performing product, same as Intel has with Xeons."
If it's not a monopoly, then it can't be dumping products below cost, and it can price them lower without breaking any law.
"By domination I was thinking of performance, performance per $ and performance per watt compared to competition."
Then AMD has dominated since the introduction of K7. You only show that Intel's illegal tactics trace back even earlier.
"So what should Intel do? Price up all its products, lower all of them or something else? "
I reckon either you can't read or you simply refuse to read what I said previously. I'm sorry, but I'm not going to explain the simple market dynamics over and over.
They were selling all the CPUs they could make during this period.
For the last several years AMD have sold everything they fabbed!
So what you guys are trying to say is that Intel, for example, doesn't sell everything it fabs. Is that it?
Having inventory or not having inventory means nothing!
Sci, what is your prediction on the upcoming numbers?
Q3 is almost over and I'd love to see what you have to say.
Also, it seems like your reducing-losses-per-quarter prediction is not holding up. It is expected that AMD will post a loss of around $500-580 million.
Ideas?
We can expect a ~$400-500 Million dollar loss for AMD for Q3'07.
Midpoint would be $450 Million.
Aguia
So what you guys are trying to say is that Intel, for example, doesn't sell everything it fabs. Is that it?
Having inventory or not having inventory means nothing!
I guess you haven't heard of an inventory writeoff. This is when a company essentially throws excess product in the trash because it can't sell it. For example, Intel wrote off about $100 million in Q3 2006. But generally, both AMD and Intel will price older CPUs through the floor to force sales, and inventories remain stable. AMD are doing that now with K8, unfortunately to the disastrous detriment of their financials. It's just that last year Intel couldn't unload its excess Netburst inventory because K8 was so clearly a superior product. Over the last year or so Intel have had to throw hundreds of millions of dollars worth of Netburst into the garbage.
abinstein
"If it's not monopoly then it can't dump products below cost and it can price them lower without breaking any law."
Let me ask directly. What would be the real cost of a 2GHz quad-core Xeon? If you don't know that specifically, you can choose whatever other Intel CPU you like.
"Then AMD has dominated since the introduction of K7. You only show that Intel's illegal tactics trace back to further earlier."
Illegal tactics, or bad marketing from AMD? What kind of illegal tactics has Intel used during the last year, while AMD has shown huge losses? I sure hope you don't say that when a bigger company out-engineers a smaller one, it's illegal :)
"I reckon either you can't read or you simply refused to read what I said previously"
I'm sorry, but I did not see any place where you said what Intel should do to make its CPU pricing not illegal. You only talked about how, by pricing its Xeons low, Intel can still keep its ASPs up, and that was it; you didn't say anything about a solution.
aguia
"So what you guys are trying to say is that Intel, for example, doesn't sell everything it fabs. Is that it?"
They didn't sell a whole lot of P4's.
mo
"It is expected that AMD will post a loss of around 500-580Million."
My guess would be around $450-550M. If I remember correctly, then according to Scientia's previous estimates of AMD reducing losses by $100M per quarter, they shouldn't lose more than $300-400M. If this reduction is indeed pushed back by a couple of quarters, then AMD will be out of cash long before breaking even.
Architectural Support for Fine-Grained Parallelism on Multi-core Architecture
"We demonstrate that the proposed architectural support has significant performance benefits. First, it delivers much better performance than optimized software implementations: 88% and 98% faster on average for 64 cores on a set of loop-parallel and task-parallel RMS benchmarks, respectively. In addition, it delivers performance close to (about 3% on average) an idealized hardware implementation of a dynamic task scheduler (i.e., operations are instantaneous)."
How's that for a top development of 2007? I'd say it is much more important than even SSE5. There was a lot more stuff talked about; some of it you can find here.
Ho Ho -
"Let me ask straightly. What would be the real cost of a 2GHz quadcore Xeon?"
Maybe you can tell me how much it costs to make one Xeon?
The fact that you'd calculate chip manufacturing cost in a naive way like you did only shows you don't know the business.
"Illegal tactics or bad marketing from AMD? What kind of illegal tactics has Intel used during the last year when AMD has shown huge losses?"
I have heard far too many times from buyers: I'd much rather have an Athlon than a Pentium, but IBM/HP/Dell don't offer them!
The question is: what makes those companies not offer the Athlon when their customers ask for it? You can put that question to the Japanese, Korean, and EU governments.
"I'm sorry but I did not see any place where you said what should Intel do to not make its CPU pricing illegal."
Maybe it really is that you can't, not that you won't.
lex -
"abinstein don't read, don't bother to take time to understand the larger issues facing AMD."
Not only will I not read it, I'd recommend nobody read the stuff you wrote, where you mixed up (chip) cache with (company) cash and used arguments about one to support those about the other.
Lex…
what is illegal is to attempt to maximize profits in a way that hurts consumers or in a way that is simply uncompetitive.
It’s also illegal to make sure other companies don’t sell to consumers.
I reject the argument that without AMD, choice for the consumer would disappear and we would all have to buy a 2000-5000 dollar PC like we did in the IBM/COMPAQ days.
Even today this happens. If Intel wanted the best for consumers, it wouldn't be selling crappy T5xxx CPUs; it would be selling T7xxx CPUs instead. Both CPUs are the same; the difference is that in one Intel didn't disable half of the cache!
In my country's computer stores the notebooks are: 1 Celeron, 1 T7xxx, and 18 T5xxx. Now do the math!
2000-5000 dollar PC like we did in the IBM/COMPAQ days
There was a company named Sinclair that sold you a computer with CPU, memory, sound, Video card, Keyboard, … for cheap.
And I think you are confusing Intel (a component manufacturer) with system manufacturers (Acer, IBM, Toshiba, Dell, …), but OK.
In fact, when guys like you say AMD should pay for the Intel x86 license, then Intel should pay IBM for the IBM PC standard still used today!
Remember that AMD exists because IBM needed a second source.
IBM ruled because they were the designers of everything: the IBM XT, the IBM AT. Maybe you forgot that not so long ago the words "IBM PC compatible" on the specs were more important than any Intel sticker used today!
Without AMD, INTEL would still have gone down this road.
The road Intel has been taking is very bad in terms of products, especially in innovation; isn't the glued CPU a good example? Maybe it is.
I don't want to think about what would have happened if AMD didn't exist. Itanium inside our desktops? Or something even worse than a Pentium D 820?
Sorry about the deleted post, tried to log in and didn't expect the form to post the comment as well :/
Anyway, what I was going to say was:
aguia said..
The road Intel has been taking is very bad in terms of products, especially in innovation; isn't the glued CPU a good example? Maybe it is.
I don't want to think about what would have happened if AMD didn't exist. Itanium inside our desktops? Or something even worse than a Pentium D 820?
I distinctly remember one of the arguments for Intel not releasing >3GHz Core 2 CPUs is that "Intel doesn't need to". Now, anyone who makes this argument simply cannot deny AMD's importance in advancing the x86 industry. If it were not for AMD, Intel wouldn't have "needed to" release anything faster than a Pentium II 266. Or even a Pentium III 500MHz.
And, my bet is, if it were not for AMD, we'd all be either using Itanium, or have switched architecture (PowerPC perhaps?). This would have depended on Microsoft and which architecture they decided to have Windows run on.
As far as technology goes, I perceive Intel as doing things the brute-force way, whereas AMD uses more elegant solutions. This is especially true for the K8 and K10 generations. The proof is in the scaling. Intel is going the same route with Nehalem as AMD did with K8 and K10, so obviously AMD did it "right", or Intel wouldn't be doing the same thing. On that note, I should say that Core 2 Duo (there is no real Core 2 Quad available yet - that's Nehalem) is a great design. Intel has achieved superb performance on a clunky old FSB (at least for single and dual socket anyway). Well done to that.
(there is no real Core 2 Quad available yet - that's Nehalem)
glued CPU isn’t a good example? Maybe it is.
I hope you all are planning on booing and hissing at AMD when they release an 8 core MCM. Since these are apparently evil and awful.
In the meantime I'm happy with my Q6600 @ 3Ghz. It lets me multitask in such ways that would have slowed my old E6600 to a crawl. Of course, Houdini just loves CPU performance. The more cores the better. I can't wait to see how an 8 core/16 thread Nehalem CPU would do in something like Houdini.
giant -
"I hope you all are planning on booing and hissing at AMD when they release an 8 core MCM. Since these are apparently evil and awful."
IBM also made multi-cores with MCM. But IBM's multi-core is a lot more scalable than Intel's. Thus MCM by itself is not a bad example, but using MCM to glue two dual-cores together the dumb way is.
giant said..
I hope you all are planning on booing and hissing at AMD when they release an 8 core MCM. Since these are apparently evil and awful.
No, I don't plan to boo and hiss at AMD if they do an 8-core MCM of K10, just the same that I didn't boo or hiss at Intel in my post. I merely made the observation that there is no Core 2 Quad. The apparent quad-core Core 2s available now are simply two Core 2 Duos on the same package. We all already know that. I'm just pedantic.
In the meantime I'm happy with my Q6600 @ 3Ghz. It lets me multitask in such ways that would have slowed my old E6600 to a crawl. Of course, Houdini just loves CPU performance. The more cores the better.
The MCM approach seems to work fairly well for Intel in single and dual socket configurations, and well, you can buy them today at high clock speeds (or overclock them, just as you have, Giant). We need to wait a bit longer for AMD to achieve the same with K10. I'm already saving for my Phenom X4 :D
I can't wait to see how an 8 core/16 thread Nehalem CPU would do in something like Houdini.
Out of interest, when do you expect an 8 core Nehalem to be released? If 4 core Nehalem on 45nm is in the realm of 260mm^2, would that mean 8 core Nehalem would be best done on 32nm? Intel could do a native design at 45nm, but I'd expect the die to be somewhere around 550mm^2 (some extra logic to service interconnects for 8 cores) or larger (need an L3?).
Random thoughts now. I personally expect Intel to release tri-core CPUs with Nehalem. It makes absolutely no sense to release a good tri-core as a dual-core when the tri-core can fetch a higher price than the dual-core and cost the same to produce. AMD's introduction of Phenom X3 will make it "ok" for Intel to do this, since by the time Nehalem comes out, Phenom X3 will have been around for a while.
As for native 8 core CPUs... I do wonder if either AMD (once they get there) or Intel will entertain the idea of 5, 6, and 7-core CPUs. It would probably cause an SKU labyrinth, but it would be the best way to make the most money from their CPU production.
abinstein
"The fact that you'll calculate chip manufacturing cost in a naive way like what you did only shows you don't know the business."
And you know how to calculate the real price? Then tell me, how much does Intel lose per (low-end) CPU? So far you haven't said a thing; you simply keep repeating yourself without saying anything of real value.
"I have heard too many times from buyers saying: I'd very like Athlon than Pentium, but IBM/HP/Dell don't offer them!"
But they do sell them.
abinstein
IBM also made multi-core with MCM. But IBM's multi-core is a lot more scalable than Intel's. Thus MCM by itself is not a bad example, but using MCM to glue two dual-core the dumb way is.
Intel's MCM approach might be "uncivilized", yet it readily outperforms native quad core Barcelona.
They can be clocked higher, and yields are a lot better (Intel ~90% vs. ~30% for AMD).
There is a reason Intel has sold over a million quad cores, while AMD sold... how many?
Abinstein
Not only will I not read it, I'd recommend nobody read the stuff you wrote, where you mixed up (chip) cache with (company) cash and used arguments about one to support the other.
And why would I read the comments of someone who lacks understanding of fabrication processes, die size, and performance?
You're a joke, Abinstein.
Ho ho, (and abinstein)
But they do sell them.
Here is a little story Ho ho,
Some section of my company wanted to upgrade their servers.
Well, after analyzing the market offers at the time, it seemed that the best system they could buy was an AMD dual-core Opteron-based server.
They made a draft of the specs, with a dual-core CPU as one of the requirements. The proposal was sent to Dell, HP, Fujitsu and Gateway.
Well, out of the four, obviously Dell didn't propose any AMD system, but neither did any of the others. Intel at that time didn't have a dual-core Xeon yet, so some even proposed servers based on the Pentium D. They thought: what's happening here?!
They started making calls and asked why no Opteron-based server had been proposed.
Well they (HP, Fujitsu, Dell, Gateway) argued:
-We can, but we can’t give you the same warranty if you buy the Opteron system.
-We won’t give you support if you buy that system.
-Price will be much higher (It wasn’t but OK).
-The Intel Xeon also had dual core Inside (HT technology).
-And other stupid arguments that I can't remember now.
Well, in the end they bought none of those brand servers; they bought from the one company that supposedly didn't offer them: IBM. And IBM didn't say stupid things; they didn't even ask why they wanted AMD.
Here is a nice example that it's not only Intel using illegal tactics; its partners do it too. I think the AMD antitrust case should not have only Intel in court, but also the many companies that benefited from Intel's illegal tactics, like Dell. The one that didn't benefit from this was AMD. Just my thoughts.
Well Aguia, that's a nice little story you just made up there..
Want to provide backup for that?
If those major OEMs refused to sell Opteron processors during the K8 vs. Prescott days, then how do you think AMD got to the top?
Yomama, anyone who has been alive and messing with computers for the past 4 years knows exactly what Aguia is talking about. Saying that you believe he made the story up, or that you don't believe it, shows either a blatant willingness to distort the facts in order to preserve your own perception of reality, or a lack of experience in this field.
Point being, vendors have often, anonymously, been quoted as saying that when Athlon 64 first came out they felt like Intel was holding a gun to their head, in terms of exclusivity.
And they were right to feel that way. Imagine a company with 8x% of all of the marketshare of a certain product telling you either:
a) if you start selling any quantity of our competitors product, we will drastically increase prices on all of our shipments to you
or in the case of smaller vendors:
b) if you start selling any quantity of our competitors product, we will simply cease shipments to you.
I know of a small PC parts vendor that, for some reason, gained the ire of Intel's salesmen for having the balls to sell AMD's products. Now they only sell AMD's products and can't buy processors directly from Intel (and thus can't actually compete on those processors' prices).
This wouldn't make any difference if it weren't for the fact that Intel gained its massive share through an amazing and very admirable marketing campaign. This makes it so that anyone without any experience with computers thinks their computer isn't fast unless it has an Intel processor. You can go to any store and see this. I've literally watched 3 people in a row turn down an athlon 64 x2 system in favor of a Celeron D system, simply because they hadn't heard of AMD and weren't willing to pay the extra $150 for components which were all superior to anything the Celeron system had.
Is this AMD's fault?
How about we phrase the question this way: is it AMD's fault that, with nearly 1/6th of Intel's market share, a mere fraction of their income, and 1/10th of their number of employees, they weren't able to out-advertise, out-research, and out-monopolize them?
Obviously, this isn't happening anymore, but AMD is in a lot of hurt because when it first put out Opteron, it got no support and no one willing to use it, despite better performance, scaling, and pricing. And this happened with dual core too. If you can't capitalize on a product, that's money you won't ever be able to re-capitalize on later, and that affects you for a very long time.
So even though a few years ago seems like a long time to you, it does make a very big difference.
greg
"This wouldn't make any difference if it weren't for the fact that Intel gained its massive share through an amazing and very admirable marketing campaign."
Are you talking about Core2? If not, then Intel didn't gain any market share; it already had it. AMD just took some from it with its CPUs. If yes, then did AMD gain its server market share the same way?
"Is it AMD's fault that with nearly 1/6th of Intel's marketshare, a mere fraction of their income, and 1/10th of their number of employees, they weren't able to out-advertise, out-reasearch, and out-monopolyse them?"
So, being considerably smaller, what logic says they should have a considerably bigger market share than they already have? Let me remind you: they already sell everything they make; they cannot possibly gain any more market share without increasing their factory output.
"And this happened with dual core too."
Let's assume you are correct about that. AMD did really well until H1 2006. What happened in H2? Wasn't it Intel's superior CPU that made it really hard for AMD to ask high prices?
Greg
Obviously, this isn't happening anymore, but AMD is in a lot of hurt because when it first put out Opteron, it got no support, and no one willing to use it, despite better performance and scaling and pricing.
The bottom line is that AMD sold every Opteron they put on the shelves, and at premium prices too. They did not have the fab capacity four years ago that they have today, hence they could not have taken market share at any faster rate. Therefore AMD's position today would not be materially better even if Intel had played nice. The only thing they would have is a few tens of millions of dollars saved in legal fees. Certainly not enough to have built another fab and taken more market share.
Thanks to Core 2, AMD are now in the unenviable position of having built out more fab capacity than the demand for their products warrants. It's kind of like being back in 1910 and deciding to invest in lots of capacity to manufacture horse buggy whips while the competition is switching over to making spark plugs. Compared to Intel's current and next generation products, K8 and K10 are dead end products that do not justify the fab capacity investments. Without compelling technology and product to sell at high ASPs, the fixed costs of AMD's substantial fab capacity are killing them. Big changes are around the corner.
Is SCI MIA or what?
I really would like to hear his thoughts on the projected AMD Loss numbers.
If those major OEM refused to sell Opteron processors during the K8 vs. Prescott days, then how do you think AMD got to the top?
Then you didn’t read the story? We still bought the Opterons.
Even recently this still happens; this time we were "forced" to buy systems based on the Intel Xeon 5130. Not a big deal here since they are very good processors, but I think with the 2.4GHz Opteron priced about the same, we would have been better off with those, since our applications run faster on the Opteron systems.
Even recently this still happens, this time we were "forced" to buy systems based on the Intel Xeon 5130.
How were you forced this time? Almost all the vendors now offer AMD platforms, so how were you forced? Or are you just pissed off that the people who make the decision didn't go with the AMD you love so much?
May I ask again:
What kept AMD from generating profits during the last year when they have sold pretty much everything they have made? It cannot be Intel bullying the sellers, because then AMD wouldn't be selling all their stuff, would it?
Why are people so obsessed with what happened a few years ago? It doesn't help with what is going on today. It kind of reminds me of old people talking about the good old times when everything was better than it is today :)
How were you forced this time? Almost all the vendors now offer AMD platforms, so how were you forced? Or are you just pissed off that the people who make the decision didn't go with the AMD you love so much?
Very easy: they priced the AMD systems much higher. That's why I put quotes around the word "forced". We could buy the AMD systems, but at a premium price.
Or are you just pissed off that the people who make the decision didn't go with the AMD you love so much?
I don’t love them, but I'm sure you hate them.
And I'm the guy who makes the calls. The final decision is mine.
aguia
"Very easy, they priced the AMD systems much higher"
Are the Opterons still priced at premium? Can you list some of the offerings so we could check?
Just a theory, but couldn't the AMD CPUs really have been priced at a premium at the time? I remember that the 3800+ cost way more than Intel's lower-end dual-core Netbursts, and IIRC Opterons were priced much higher. I wouldn't be surprised if that premium was because AMD asked much more for their CPUs. After all, their CPUs were superior.
Axel, AMD was selling everything it had on the shelves because it was deliberately throttling capacity to match its very steady (meaning constantly low) demand.
As you can see now, they use companies like Chartered because they actually have demand that exceeds their capacity. They have excess product at times because they have a more diverse portfolio and weren't exactly prepared to manage it, but that's beside the point.
Also, you again assume that AMD isn't run by intelligent people, and that they had no way, at all, of knowing how core 2 would perform (despite probably having a way of getting early samples). Even someone who hasn't ever worked with or in this industry would find your reasoning flawed, if not pathetic.
Ho ho, are you ignoring the direction of the argument, or the entire argument completely. AMD is not suing Intel for its practices now, it's suing them for what happened during the k7 and mid-late k8 years. What AMD is facing now is the inevitable backlash of not being able to capitalize on demand.
Mo, I have a feeling the poor pricing and support from those companies has a lot to do with Intel's "pressure", which it exercised much more flagrantly in the past. If you look at the channel and talk to the people who build those systems for those companies, the Opteron is not being bought by those companies for more than the Xeon, yet they're charging you more to buy it.
Not only that, but the equivalent components are much cheaper for AMD systems (RAM, motherboards, cooling systems, and power supplies). So obviously someone is taking in a lot of cash on, and seriously hampering the market acceptance of, a new product that (by nature of being a new product) they have everything to gain by reinforcing and growing in the market. They're either very stupid (but since they're working in this industry, I'd have to assume not) or have an ulterior motive/pressure.
Lex said:
Bottom line is that there was never any compelling reason to buy AMD in the years past as anything but a cheaper source of parts.
So, we obviously know you're perfectly logical and unbiased. Obviously a cheaper source of parts with better thermals, greater reliability, and cheaper supporting hardware are nothing your customers would ever want (or better prices, as all of the above would lead to).
Also, Lex, thanks for completely avoiding my point about AMD also being an American company. It shows how amazingly mature and capable of intricate logic you are.
Also, AMD's marketshare wasn't growing the entire time between 2002 and 2005, as you claim.
All I have to say to the rest of your comments is... wow. I mean, how old are you?
As an aside, I think you have a point lex. If AMD doesn't win the court case, and the market keeps going the same way, and neither side mis-executes, or continues to execute exactly the same way they have been for the last 2 years, AMD is not viable. In fact, no company can ever be viable against Intel anymore, unless something is forced to change.
Either Intel needs to screw up, someone needs to buy AMD, or AMD needs to very rapidly increase clockspeeds on barcelona in a way that is unprecedented in their history.
I'm sorry I can't shake your boundless faith in Intel, Lex, but they're a company, and all they really care about is making money. Whether they do that by actually innovating, or by simply realizing they don't have to because no one's left, they're just a business, and they're just people like you and me, prone to doing things that aren't beneficial to the industry if they're beneficial to themselves.
Luckily, I think that there are enough rumors flying around that someone probably will buy AMD if Intel does not misstep. Whether you're willing to admit it or not, Lex, AMD has products that are important to the industry.
Are the Opterons still priced at premium? Can you list some of the offerings so we could check?
Don’t confuse web prices and manufacturer prices with the final vendor prices.
I wouldn't be surprised if that premium was because AMD asked much more from their CPUs.
AMD didn’t sell us the servers. Gateway did.
mo
Hmmm, OK, so Intel offers product at a lower price that AMD cannot meet, and somehow it's Intel's fault?
Nope. That's my whole point: it isn't Intel's fault. The final vendors priced the AMD systems much higher, in a way that made us choose Intel. The server market is very different from the desktop. With servers you have to deal with "partners" if you want everything running smoothly.
It must be a crime to lower the prices in your book. If AMD decides to charge a "premium" then please explain to me how the blame lands on Intel? In the end, it is YOUR company saving money right?
We didn't save any money. The Intel systems cost the usual price. The AMD systems were inflated by our vendors.
I don't hate them; I hate fanboys like you who see NOTHING wrong with their angelic AMD. OMG, AMD is so perfect.
Well, I see your point: you hate them. You are a pure anti-AMD guy! Which is even worse than a fanboy.
And I'm gonna call you out right now, the final decision is NOT YOURS
It’s mine. Not yours for sure.
you would have gone with opterons, even if you had to pay a premium, the decision is not yours pal if you didn't get what you wanted to begin with.
If I were you, even if the Intel system cost 10X more I would go for it for sure, even if my company had to go bankrupt because of it.
The problem is that final cost is very important. Multiply $500 by each server and you get a free server for every four servers you buy. Money spent is very important. I don't want to hurt the company where I work because of stupid brand loyalty. If I were a brand guy, everything in my company would be IBM and Sun for sure.
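The arithmetic behind that cost argument can be made explicit. The $2000 base server price here is an assumption chosen to match the "free server for every four" claim; only the $500 premium comes from the comment:

```python
# Sketch of the per-server premium argument above.
# server_price is a hypothetical figure; premium_per_server is from the comment.

premium_per_server = 500    # extra cost per AMD system quoted by the vendor
server_price = 2000         # assumed base price of one server

servers_bought = 4
savings = premium_per_server * servers_bought  # 4 * $500 = $2000

# Enough accumulated savings to buy a fifth server outright:
print(savings >= server_price)  # True
```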
lex
Like I said before EU, Japan, and Korean are loving the opportunity to screw a dominate American technology company in some way to help the local industry complex. Thanks AMD for helping other countries try take down another American iconic manufacturing and technical powerhouse.
But AMD is also American. So they are helping an American company because of an American company?
Your argument is plain stupid.
Greg,
Excellent points.
"Money spend is very important. I don’t want to hurt the company where I work because of stupid brands. If I was a brand guy everything in my company was IBM and SUN for sure."
Wise words. That's the difference between server space and user space summarized, and why AMD managed to gain some server market share against 'all odds'.
As long as AMD provides 'value' in the server space it will prevail. Things in user space are not that simple; there, perceived value often weighs more than real value. Those two markets are at opposite ends. In the server space AMD needs value to win; in user space it needs performance and image. I hope AMD can address the image and performance when it finally launches the Phenom series.
aguia
"Don’t confuse web prices and manufacture prices with the final vendor prices."
So don't the vendors have their prices listed on the web?
"AMD didn’t sell us the servers. Gateway did."
Yes but how much did AMD ask for their CPUs back then?
The Estonian Pricewatch page showed me that on 03.08.06 the Opteron 875 cost around $1800. By 22.02.07 it cost half that, and today only 1/4. The Opteron 2214 started at around $550 on 05.01.07 and today costs half that. Comparatively, the 3.0GHz Xeon 5050 has been priced at around $300 for the last year or so. Can you say that in no way could your Opteron-based servers have contained much higher priced CPUs than comparable Xeons? How can you know that it was the vendors who priced their CPUs that high, and not AMD itself asking insane prices for its CPUs?
Sorry about the typo, I meant "vendors who priced their machines that high"
"The Estonian Pricewatch page showed me that on 03.08.06 the Opteron 875 cost around $1800. By 22.02.07 it cost half that and today only 1/4. The Opteron 2214 started at around $550 on 05.01.07 and today costs half that. Comparatively, the 3.0GHz Xeon 5050 has been priced at around $300 for the last year or so."
Ho Ho, don't you see that this just makes my point: Intel prices extremely low where AMD has competitive products, while raising processor prices in other places (i.e. notebooks) to cover for it?
lex
AMD continues to talk and not deliver.
Not true. AMD did deliver Barcelona.
And that is the status of 2007 and will be the same in 2008
Presumably based on nothing but wishful thinking. The rest of your statements can be summarized as: AMD does nothing right while Intel does nothing wrong.
You can delete this too, but that is the truth!
We'll see if it is the truth in Q4. If AMD can't begin recovering profitability by then, then you might even be right.
Sci, I know you have a lot of catching up to do but can you please comment on the AMD losses for Q3'07?
By your table AMD should lose between $300-400M in Q3, $200-300M in Q4, and be dead even by Q1 next year? Correct?
Hoho, your point is moot, because aguia and I are talking about buying a system now. You can't get an equivalent AMD system for the same price as an Intel system, you won't get the same support, and you won't get a decent warranty.
Why is this the case when AMD's product uses less power, runs cooler, and has cheaper supporting parts?
Intel isn't charging less per CPU (read my previous point) and it isn't easier for vendors to work with Intel or their parts, so why make the machines more expensive, less accessible, and less beneficial to them as a company and to their customers?
mo
With Barcelona coming out late in Q3 I don't know that there will be much revenue shift. Maybe AMD could reduce the loss by $100 Million (due to 90nm ramp down among other things) but I doubt they could cut it in half which would be a $250 Million decrease. Q4 should look better because they will finally have some good volume of K10. AMD may not break even in Q4 but if they can reduce the loss by 2/3rds that would be a good sign.
So in other words, the prediction you had made about AMD breaking even in Q4 is wrong and you would like to adjust it.
Correct?
Hoho, Intel can't use questionable tactics against most of the channel, as the business model there mostly dissuades that. However, their volume share can only reach so high at that point, and it will also only grow at a certain rate, as the channel only reaches a certain number of customers, whose product knowledge can best be described as word of mouth.
Also, "so much" was not very much, considering the actual growth before AMD was supported by large vendors was probably less than 10%. It was once AMD brought the case forward, and slightly after, that they made their major gain in market share.
That last point would lead me to believe that, at least, Intel was trying to pull strings with vendors to keep AMD out of the market, and those strings broke. The timing just correlates too well.
As to server pricing, you'll have to call a large vendor and ask them yourself, since I can't call for you. That's, really, the only decent way to figure out pricing.
greg
"Once AMD brought the case forward is when they made their major gain in marketshare and slightly after they brought forward their case."
Didn't that happen soon after Intel reached thermal limits of Netburst and clock speed stopped increasing whereas AMD was gradually increasing it?
"As to server pricing, you'll have to call a large vendor and ask them yourself, since I can't call for you."
So IBM and Sun with their online lists aren't large enough?
You'll get a better idea with other vendors. Sun and IBM have been offering AMD processors longer than anyone else anyway.
Also, you're right that this started happening when Intel reached the Pentium's thermal limits, but it happened too fast to be entirely because of that. The market just doesn't react that quickly. I'm certain it had to be because people were getting word that AMD was setting up a case against Intel (I mean, it was pretty obvious what was happening even a year beforehand).
giant
"As I said before, AMD had that golden opportunity from 2004 -> Mid 2006 to gain all the market share they could."
You should check your facts. In Q1 04 AMD was still producing more K7's than K8's. Secondly, AMD produced less volume of chips in 2004 than it did in 2003. No golden opportunity there.
" But AMD was stuck with only FAB30 as it's single source of CPUs."
AMD expanded FAB 30 by 50% during this time but this wasn't completed until late 2005. AMD's volume and revenue share increased during this period.
Now, why was this in fact Intel's fault? Intel first pressured manufacturers not to support K7. This is why AMD had to supply its own 750 chipset. Then Intel used the same tactic with K8, which is why not a single motherboard maker was at the K8 launch event. This is also why it took nearly half a year for K8 to be supported by other chipsets. Intel's tactics had nothing to do with price advantage or quality.
ho ho
"If indeed this reduction is pushed back by a couple of quarters then AMD will be out of cash long before breaking even."
No, they won't. I can guarantee that if AMD delivers desktop volume of K10 with 3.0Ghz in Q1 08 there will be no question of bankruptcy.
Ho Ho
"Architectural Support for Fine-Grained Parallelism on Multi-core Architecture"
It is trivial until it affects the desktop. When do you estimate that will happen?
lex
If you truly reject the argument that without AMD nothing will change then you are woefully ignorant of both history and the present.
Intel Pentium Pro - desire to compete with RISC based servers and workstations.
Intel Pentium II - reaction to K5
Intel Pentium III - reaction to K6
Intel Pentium 4 - reaction to K7
Intel Core2Duo - reaction to K8
The truth is that without AMD, there would have been no PII, PIII, P4, or C2D. There would also be no Nehalem.
Scientia
Intel Pentium II - reaction to K5
Intel Pentium III - reaction to K6
Intel Pentium 4 - reaction to K7
Intel Core2Duo - reaction to K8
From what you have said before it takes about 4 years to produce a new processor, so in looking at the time line (according to Wiki) how are they responding?
Image for Reference
Please explain.
Thanks
-----------------------------
Also...
I can guarantee that if AMD delivers desktop volume of K10 with 3.0Ghz in Q1...
Dual or Quad?
scientia
"Is trivial until it effects the desktop. When do you estimate that will happen?"
How are SSE5 and G3MX any different from that? When will they be on desktop? When will programs start benefiting from SSE5? Will SSE5 be important if Intel makes up its own new version of it?
Some interesting thoughts on Silverthorne can be found on Ars Technica with a counterpoint of sorts on Cnet.
Just one more reason I think Silverthorne is one of the year's top developments. It has the computing power, thermals, and power envelope for x86 to start seriously competing with ARM.
AMD is American too. Yup they are, and where do they do ALL their manufacturing? Where do they do all their assembly?
AMD is an American-based company. From what I can gather they do the majority of their design in the US and are headquartered here. But let's be serious: all their manufacturing is overseas. Does AMD lead in anything, or is it viewed by anyone as a leader in the industry?
Tell me where are all Intel manufacturing plants located?
Please, there's no need to bring up Israel, Ireland etc. etc.
Because it would completely kill all your logic. Right?
Remember that more than 2/3 of their manufacturing and design is done in the US.
Link?
At least with prescott you got to give INTEL an A for marketing
Which is the most important thing in developing new technology and innovation.
I'm not lex but who cares :)
aguia
"Tell me where are all Intel manufacturing plants located?"
Where does Intel pay its taxes?
"Link?"
http://www.intel.com/community/selectacommunity.htm#US
From the link:
Arizona: Fab 32, Fab 12, Fab 22
Colorado: Fab 23
Massachusetts: Fab 17
New Mexico: Fab 11X
"Intel United States is home to over 45,000 employees and its corporate headquarters located in Santa Clara, California"
Intel has half of its workers in US. How many does AMD have? Any comments?
"Which is the most important thing in developing new technology and innovation."
It earned them money so they wouldn't go bankrupt. Innovation is great but you have to "eat" something to live.
enumae,
AMD released K8 on 23 April 2003, not in September.
There was also Prescott on February 1, 2004, which was a rather big update to Northwood, about as big as K8->K10, if not bigger. It was in fact so big that people wondered why it wasn't called Pentium 5. The only problem was that the CPU was designed to run at much higher frequencies than the manufacturing technology allowed.
Here:
The Best Gaming Graphics Cards
Free marketing.
Nvidia wins one category...
Ati products are bad and late, really ...
Sarcasm in Italic
aguia
"Where does AMD pay its taxes?"
Exactly
"From the link:
China, Costa Rica, India, Ireland, Israel, Malaysia, Philippines, Russia, Vietnam."
Did you bother to check what facilities they have at those places? Apparently not.
"You tell me. Don’t forget to add ATI."
"Like you, who bought a crap CPU. Did you bought it because of Marketing?"
It wasn't the best thing out there, but it surely beat AMD in the bang-for-buck category. Do I have to retell the story once again? And please, grow up.
Can you now comment on whether Intel is more of a US company or something else? After all, it was you who tried to say that it isn't. Can you prove your previous claims? I showed you the link you asked for and I'd like you to comment on it.
aguia
"Nvidia wins one category..."
Any theories on why ATI GPUs win almost everywhere, yet NV still sells around three times as many GPUs and earns massive profits, unlike ATI, which had $200M revenue with $50M losses? Is NV also using monopolistic tactics?
big update
Really, because of the 64 bits or the SSE3?
Only problem was that the CPU was designed to run at much greater frequency than the manufacturing technologies allowed.
If that’s the case where is it today?
Want to bet that at 6GHz it would have consumed more power, performed slower, used a bigger die, and cost more than one Core 2 Duo at 3.0GHz?
It wasn't the best thing out there but it surely beat AMD in bang for buck category.
That's what AMD is doing today to all Intel CPUs. So by your standard AMD is a better buy than Intel.
Can you now comment on if Intel is more of an US company or something else?
So can you tell me: since AMD constructed FAB 30 in Germany, did it go from being a US company to a German company? And is ATI also German now?
After all it was you who tried to say that it isn't.
Where did I do that? All I was saying is that AMD is also an American company. Let me repost:
Like I said before EU, Japan, and Korean are loving the opportunity to screw a dominate American technology company in some way to help the local industry complex. Thanks AMD for helping other countries try take down another American iconic manufacturing and technical powerhouse.
But AMD is also American. So they are helping an American company because of an American company?
Your argument is plain stupid.
It was in fact so big that people wondered why wasn't it called Pentium 5.
Here Ho ho is why:
Prescott's Little Secret
aguia
"Really, because of the 64 bits or the SSE3?"
Not only that, but because of all the updates inside the core that would have made it possible to run at much higher frequencies than Northwood, had there not been the thermal problems.
"If that’s the case where is it today?"
You know that it was rather simple to OC a 65nm Netburst dualcore to over 4.5GHz with mediocre air coolers? Still, as you should know, power usage rises steeply with clock speed (roughly with its cube, since voltage has to rise as well), and Intel had an alternative architecture that proved to be a better solution.
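A quick sketch of why clock speed is so expensive in power terms. Dynamic power goes roughly as C·V²·f, and since supply voltage has to rise roughly with frequency, power grows about cubically with clock speed. The base figures below (100 W at 3.0 GHz) are hypothetical; real chips also add leakage, so treat this as an illustration of the scaling only:

```python
# Illustrative dynamic-power scaling: P ~ C * V^2 * f, with V ~ f,
# so P scales roughly as f^3. Base numbers are assumptions.

def dynamic_power(freq_ghz, base_freq=3.0, base_power=100.0):
    """Estimate power at freq_ghz, assuming voltage scales linearly
    with frequency (so power scales with the cube of frequency)."""
    scale = freq_ghz / base_freq
    return base_power * scale ** 3

# Pushing a hypothetical 100 W, 3.0 GHz part to 4.5 GHz:
print(round(dynamic_power(4.5)))  # roughly 3.4x the power
```

This is why a 50% overclock that works fine on a mediocre air cooler at stock voltage becomes untenable as a shipping product: the power budget grows far faster than the clock.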
"Want to bet that at 6 Ghz consumed more power, performed slower, used a bigger die and more expensive than one Core 2 Duo 3.0Ghz?"
More power? Yes. Bigger die? P4D 920 was around 140mm^2. More expensive? Unlikely.
"That’s what AMD today is doing to all Intel CPUs."
Not all, especially not with the higher priced CPUs.
"So by your standard AMD is a better buy over Intel."
Depending on needs and price range.
"So can you tell if since AMD constructed the FAB30 in Germany became from a US company to a German company"
I've never said that. It was you who was talking about being a US/non-US company. I only showed you the link you were asking for. Now you have it, and I still have no comment from you on it.
"Here Ho ho is why"
I know that Prescott was inferior to Northwood in performance and thermals.
Had you actually read the link you would have seen why Intel designed Prescott as it did. Intel was expecting to have it running at much higher frequencies than Northwood but, as all people should know, they failed. Kind of similar to how AMD was talking about 2.6GHz K10 benchmarks and only released it at up to 2GHz.
P4D 920 was around 140mm^2.
A single-core Pentium 4 with 2MB L2 at 65nm was 81mm^2. Do the correct math.
Intel Pentium D 960 Processor
Presler
Not all, especially not with the higher priced CPUs.
So are you going to tell me that Intel at that time was competing with AMD's high-end CPUs? Only in price maybe, where the EE cost the same as the FX60 but couldn't even beat the X2 3800+.
Now you have it and I still have no comments on it.
But what do you want me to say?! I never said Intel wasn't a US company; it was lex who said AMD wasn't a US company. What I was trying to say was: if AMD isn't a US company, then Intel isn't either. That's why I already told you it's not a good idea to answer replies meant for other people; you may not understand what was in the previous posts/articles/…
Had you actually read the link you would have seen why Intel designed Prescott as it did. Intel was expecting to have it running at much higher frequency than Northwood but as all people should know they failed. Kind of similar how AMD was talking about 2.6GHz K10 benchmarks and only released at up to 2GHz.
Like I have said numerous times, Intel could have done a "native" dual-core Northwood with the same transistor count. It could even have glued two of them together; Northwood's thermals were very low.
If Intel made a huge mistake like that, why didn't it remove the product from the market and wait for the 65nm process?
Prescott didn't help Intel at all.
In fact, one question: where was Intel's magnificent gluing technology in the Pentium 3 days, in fact ever since the first single-core CPU appeared?
aguia
"Pentium 4 Single core CPU with 2MB L2 at 65nm had 81mm^2. Do the correct math."
Interesting. I wonder whose links have the correct information.
"So are going to tell me that Intel at that time was competing with AMD high end CPUs."
No. Back then Intel had vastly better marketing that could convince people to buy an inferior product. Later, in 2005 and the first half of 2006, it got a lot tougher for Intel, and it had to start massive price cuts that led to rather big drops in profitability, though it did mean Intel could offer more bang per buck in the early days of dual-core CPUs, at least in Estonia.
"... EE cost the same as the FX60 but couldn’t even beat the X2 3800+."
That depends on the task you use the CPU for, just the same as today, when K8 can be a better choice than Core 2. As I've said before, there was a time when the P4D 920 was a lot cheaper than the X2 3800+, cheap enough to make up for the performance difference.
"But what you want me to say?!"
You asked where Intel's production plants are located; I told you. You asked for a link showing that the majority of Intel workers work in the US, and I gave you that. I'm not asking for anything specific, just a comment on the information you asked for and I gave.
"Like I said innumerous times, Intel with the same transistor count could have done a “native” dual core Northwood."
In theory, yes. In practice I have no idea what would be needed to do that. Also, why couldn't/didn't AMD create an MCM dual-core with K8 when K10 was late and underperforming, given that it has the technology and will use it in a year or two?
"If Intel did a huge mistake like that, why didn’t remove the product from the market and wait for 65nm process."
What mistake? The market liked dual-cores, and MCM was the only thing Intel had. It was good enough, and they sold quite a few of them. I can't see a reason why they should have pulled them from the market.
"where was Intel magnificent gluing technology in the Pentium 3 days, in fact since the first single Core CPU appeared?"
As I said, I have no idea what technological advances it took to create an MCM dual-core. Intel probably could have done it earlier, but it might have been a bit of a problem if it had started competing against Itanium. Also, back then single-threaded performance was what was most needed from desktop PCs; for servers there were the expensive CPUs, with Itanium leading the pack.
"Interesting. I wonder whose links have the correct information."
Don't tell me that 81 x 2 isn't 162…
And on the same link you provided, the Pentium 4 has 81mm^2, not 70mm^2.
That site doesn't know how to do math very well…
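The arithmetic here is just die-area doubling for an MCM package: Presler-based Pentium D parts are two single-core dice in one package, so the total silicon area is twice the per-die figure. A minimal sketch, taking the 81mm^2 per-die number quoted from the linked site as an assumption:

```python
# Presler (Pentium D 9xx) is a multi-chip module: two single-core
# Cedar Mill dice under one heat spreader, so total silicon area
# is twice the per-die area.
per_die_mm2 = 81        # per-die figure quoted from the linked site
dice_per_package = 2    # MCM: two separate dice in one package

total_mm2 = per_die_mm2 * dice_per_package
print(total_mm2)        # 162, not the ~140 claimed earlier
```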
"Also, back then single-threaded performance was what was most needed from desktop PCs."
Do you have any information showing that a single-core CPU can't do what a dual-core does for most of today's common applications (web, games, office)?
One fact: I would only buy a Core 2 Duo or AMD Athlon X2 because of what they cost today, not because of the performance difference versus single-core CPUs.
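The point that a second core buys little for mostly serial desktop workloads is essentially Amdahl's law: speedup is bounded by the fraction of work that actually parallelizes. A minimal sketch, where the parallel fractions are illustrative assumptions rather than measurements:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only part of a workload parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A 2007-era web/office workload with, say, 20% parallelizable work
# gains little from a second core...
print(round(amdahl_speedup(0.2, 2), 2))   # 1.11

# ...while a well-threaded video encoder (90% parallel) gains much more.
print(round(amdahl_speedup(0.9, 2), 2))   # 1.82
```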
Mo
"So in other words, the prediction you had made about AMD breaking even in Q4 is wrong and you would like to adjust it."
Mo, have you stopped beating your wife?
Why is it so difficult for you to repeat things correctly? AMD made a forecast that it felt it could break even by Q4. That was not my prediction, as you've tried to claim. I've talked about AMD's statement before and never said that I agreed with it. What I'm saying now is that I doubt AMD can hit break-even in Q4.
"Correct?"
Try repeating what I say instead of what you make up. That is the only way you will ever be correct.
scientia
"AMD made a forecast that they felt they could break even by Q4"
Actually, AMD never said it expects to break even; it said it hopes to. Hope != expect. Some people may hope to rule the world; it's not as if they actually expect to do it one day :)
Ho ho, if you have no point with your comments about AMD's American-ness compared to Intel's, then stop commenting on it (and by point, I mean one relevant to there being ulterior motives behind what the EU, Korea, and Japan did). What you're doing is remarkably similar to trolling unless you finally give us some reason for doing it.
ATI did poorly last quarter, and the quarter before that, because it either didn't have R600 or R600 wasn't running very well. It will be doing much better once all the vendors finish stocking their machines with new video cards. In case you missed it, ATI getting WHQL drivers out much earlier than Nvidia meant that vendors used them instead.
enumae
"From what you have said before it takes about 4 years to produce a new processor, so in looking at the time line (according to Wiki) how are they responding?"
I can see why the timeline would be confusing if you aren't familiar with processor design. The PII was identical to the Pentium Pro except for having the L2 cache on a card instead of in an MCM. Essentially, the PII was a cost-reduced consumer version of the Pentium Pro, which is why Intel was able to release it so quickly. The Pentium Pro was never intended to be released on the desktop.
The Pentium 4 (Willamette) was a nearly completed experimental project that was also not intended for the desktop. Willamette shows the classic signs of being rushed; its rough edges were corrected with Northwood.
Your estimate for K8 is off, of course; AMD first demonstrated K8 in 2001. I'm fairly certain that design work on C2D began in 2002, when it became obvious that Prescott had problems.
"Dual or Quad?"
I think AMD needs a 3.0Ghz quad in Q1 08 to be competitive with Intel.
Ho Ho
"How are SSE5 and G3MX any different from that? When will they be on desktop? When will programs start benefiting from SSE5?"
G3MX will benefit the desktop even if it is never used there, because it will reduce the cost and complexity of designing consumer DIMMs. SSE5 will be available on the desktop in mid-2009, and software should begin using it right away.
"Will SSE5 be important if Intel makes up its own new version of it?"
An Intel version would trail by at least a year, and it is doubtful that Intel could get MS to support a second version.
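The significance of SSE5's 3-operand forms is concrete: classic SSE instructions are destructive (the destination register is also a source), so preserving an input costs an extra register copy, while a non-destructive multiply-add does dst = a*b + c in one instruction. A toy register-transfer sketch in Python; the register names and operation counts are illustrative, not a simulator of the real encodings:

```python
def mul_add_two_operand(regs):
    """Classic destructive 2-operand SSE style: xmm3 = xmm0*xmm1 + xmm2.
    Keeping xmm0 intact costs an extra register-to-register copy."""
    ops = 0
    regs["xmm3"] = regs["xmm0"]; ops += 1                 # movaps xmm3, xmm0
    regs["xmm3"] = regs["xmm3"] * regs["xmm1"]; ops += 1  # mulps  xmm3, xmm1
    regs["xmm3"] = regs["xmm3"] + regs["xmm2"]; ops += 1  # addps  xmm3, xmm2
    return ops

def mul_add_three_operand(regs):
    """SSE5-style non-destructive multiply-add: one instruction,
    no source register clobbered."""
    regs["xmm3"] = regs["xmm0"] * regs["xmm1"] + regs["xmm2"]
    return 1

regs_a = {"xmm0": 2.0, "xmm1": 3.0, "xmm2": 4.0}
regs_b = dict(regs_a)
print(mul_add_two_operand(regs_a), regs_a["xmm3"])    # 3 10.0
print(mul_add_three_operand(regs_b), regs_b["xmm3"])  # 1 10.0
```

Same result either way; the 3-operand form just gets there in a third of the instructions and leaves all the inputs alive.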
Scientia
"Your estimate about K8 is off of course. AMD first demonstrated K8 in 2001..."
Thanks for taking the time.
In regards to my estimate, all of the dates shown are from Wiki, and are the supposed release dates of the products.
Sorry for any errors in the image.