Intel's 3.0GHz Barrier
More than a year ago, everyone was talking about Core 2 Duos from Intel that would clock to 3.2 and even 3.33GHz in 2006. However, those speeds were never released. And, rather than asking why, not a single review site has even mentioned it. The common pretense today seems to be that Intel never claimed it would top 3.0GHz in 2006, which, of course, it did.
It isn't hard at all to find evidence of Intel's intention to have Conroes clocked as high as 3.2 or 3.33GHz in 2006.
February 12, 2006, Conroe Extreme Edition
In the Q4 2006 the maker will also add the model E6800 that works at 2.93GHz . . . The Extreme Edition of the Conroe processor will operate at 3.33GHz
May 31, 2006, Intel Confirms Two Upcoming Core 2 Extreme CPUs
Intel representatives just contacted DailyTech with the following information:
The Core 2 Extreme processor (Conroe based) will ship at 2.93GHz at Core 2 Duo launch. We will also have a 3.2GHz version by end of the year.
At Bad Hardware, March 31, 2006, in Conroe Roadmap And Prices, we see:
Core Duo E8000 4MB 3.33GHz 1333MHz Q4 $1199
Core Duo E6900 4MB 3.20GHz 1066MHz FSB Q4 $969
Core Duo EE edition 3.33GHz(L2 4M) 1333MHz Q3 $999
Mike's Hardware originally had this listed as well, but it was later changed. However, we can still find Mike's original roadmap through forum comments made about it at the time. For example, at the VR-Zone Forum on October 4, 2006, we find a reference to Mike's:
Intel Conroe E6900 (3.2GHz) is expected to be released in Q4.
Clearly, everyone (including Intel) expected Core 2 Duos faster than 3.0GHz to be released in 2006, yet these were never released. The common excuse given by Intel proponents is that Intel didn't release faster chips as originally planned because, “It didn't have to.” However, there is good evidence that the reason had more to do with temperature and overheating than a warm and fuzzy feeling of being safely in the lead.
It is very rare to get proper numbers for thermal testing of Intel CPUs. Typically, overclocking is done with premium cooling, and testing with the stock HSF never thermally stresses the CPU. For example, when Anandtech originally reviewed the X6800, they used a massive Tuniq Tower. This is, of course, nothing at all like what ships in the vast majority of computer systems. A term that often gets tossed around is "on air". The truth, however, is that high-end air coolers today are as good as the liquid coolers of four or five years ago, so the term "on air" now means very little. Cooling your CPU "on air" with something the size of a transmission cooler is not much of an accomplishment. But if you could overclock "on air" with the stock HSF, that would be an accomplishment indeed. Unfortunately, review sites seem about as eager to do thermal testing with a stock HSF as they would be to hand-feed a tank full of hungry piranhas.
It appears that we only got some halfway informative numbers from Anandtech by accident, when they were reviewing a cooler instead of the processor itself. I discussed this information in an earlier article, but I'm going to revisit it along with other data since people still seem confused about Core 2 Duo's thermal limitations.
In the second chart of Anandtech's case cooling article, CPU Temperature Under Load, we have some data. Notice that at the stock clock speed of 2.93GHz with the stock HSF, the X6800 reads 56°C. Now, the article says, "The stress test simulates running a demanding contemporary game. The Far Cry River demo is looped for 30 minutes and the CPU temperature is captured at 4 second intervals". Unfortunately, Anandtech's assumption is a bit off, since Far Cry does not really thermally stress the core. According to the Core 2 Duo Temperature Guide:
Intel provides a test program, Thermal Analysis Tool (TAT), to simulate 100% Load. Some users may not be aware that Prime95, Orthos, Everest and assorted others, may simulate loads which are intermittent, or less than TAT. These are ideal for stress testing CPU, memory and system stability over time, but aren't designed for testing the limits of CPU cooling efficiency.
Orthos Priority 9 Small FFT’s simulates 88% of TAT ~ 5c lower.
So, if Orthos Small FFTs runs 5°C lower than TAT, Far Cry would run lower still. The Guide also clearly states that Tcase under load should not exceed ~60c with TAT, and 55c with Orthos.
But at 56°C we have already exceeded 55°C, and that limit assumes we were running Small FFTs. Since Anandtech was only running Far Cry, which stresses the core less than Orthos, the real temperature at full load is probably closer to 60°C. Now, someone will probably claim that this is okay because you would never hit full thermal load under normal circumstances. Unfortunately, that is already figured in. As stated in the guide: "50c Tcase is a safe and sustainable temperature." 55°C and 60°C are not safe or sustainable temperatures. Even though the maximum spec is 60°C, 60°C is hot.
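To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch. The Orthos offset comes from the guide figures quoted above; the Far Cry offset is my own assumption:

```python
# Rough estimate of full (TAT-level) Tcase from a lighter-load reading.
# The Orthos offset is from the Temperature Guide; the Far Cry offset is
# an assumption, since a game loop stresses the core less than Orthos.
measured_far_cry = 56.0      # °C, Anandtech's stock-HSF X6800 reading
orthos_below_tat = 5.0       # °C, Orthos Small FFTs runs ~5 °C below TAT
far_cry_below_orthos = 2.0   # °C, assumed

tat_estimate = measured_far_cry + far_cry_below_orthos + orthos_below_tat
print(f"Estimated Tcase under TAT: ~{tat_estimate:.0f} °C (spec maximum: 60 °C)")
# => ~63 °C, i.e. over the factory limit at stock speed with the stock HSF
```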
I have seen others claim that 60°C is fine because it doesn't exceed the maximum rating. However, this is not the way the rating works. 60°C is the maximum for TAT only because no other program will ever reach that stress level. So, it is clear that the X6800 with the stock HSF will indeed exceed the factory cooling specs at stock speed. This is why Intel has not released a 3.2GHz Core 2 Duo.
But if Core 2 Duo is running hot at 2.93GHz, this still leaves the question of whether anyone would notice. As I've already mentioned, these types of tests seem to be avoided like the plague by regular review sites, so we need something obscure. As luck would have it, we do have something out of the ordinary at Digit-Life, where the dual-core X6800 (2.93GHz) is reviewed. The conditions are very unusual because the author tests in his un-air-conditioned apartment: "In Moscow apartments, unconditioned for several days, the standard daytime temperature was within +25—+30°C. In this case the environment temperature was +28°C." 28°C is 82°F, which would not be unusual for Indiana in the summer either. But he uses the stock HSF. Let's see what happens:
it didn't even occur to us that new Intel Core 2 processors could spring such a surprise with throttling...) Yes! It was throttling!
The chart is excellent because it compares the stock HSF to a better cooling solution. We can see that none of the common benchmarks really thermally stress the CPU. The one that stressed it the most was the SolidWorks CAD & CAE benchmark. The chart only shows an 11% drop because it is a total score; however, "The results of the overheated processor are very low in two applications out of three: SolidWorks 2005 and Pro/ENGINEER Wildfire 2.0." This is proof positive that even with a regular application, a dual-core C2D busts its thermal limits when running at 82°F ambient. The author says that there was no thermal throttling at 72°F ambient.
Now, let's look at that chart again. Notice that with the 3D shooter games (F.E.A.R., Half-Life 2, Quake 4, Unreal Tournament 2004) the drop is only 3%, versus the 11% we saw with CAD. This again casts doubt on whether Anandtech's Far Cry loop thermally stressed the CPU at all.
So, Intel did not release a 3.2GHz processor in 2006 because it couldn't. Such a processor would have exceeded the factory's own limits when running routine applications at ambient temperatures common in the summer. However, we also know that Intel has steadily improved the thermal properties of its C2D chips with each revision. It is possible that one of the newer revisions of C2D would be capable of going over 3.0GHz without busting the factory's thermal limits. After all, one would assume that simply lowering the TDP would help, and we know that TDP has come down. However, I'm not entirely sure that 45nm is actually an improvement. For example, if a given 45nm chip has the same TDP as a given 65nm chip, then logically the 45nm chip would concentrate the heat into about half the die area. This would seem more likely to create hot spots, not fewer.
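To put rough numbers on that concern, here is a toy power-density calculation; the die areas below are illustrative assumptions, not official figures:

```python
# Toy power-density comparison for a die shrink at constant TDP.
# Die areas are illustrative assumptions, not official figures.
tdp = 75.0                  # W, an X6800-class power budget
area_65nm = 143.0           # mm^2, assumed Conroe-class die area
area_45nm = area_65nm / 2   # a full-node shrink roughly halves the area

print(f"65nm: {tdp / area_65nm:.2f} W/mm^2")
print(f"45nm: {tdp / area_45nm:.2f} W/mm^2 (double the heat density)")
# Unless TDP falls with the shrink, the same heat must flow through half
# the silicon area, which is exactly what makes hot spots more likely.
```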
I'm quite certain that Intel will indeed get chips out that exceed 3.0GHz. And I'm pretty certain that if Intel doesn't do it in Q4 of this year, then Q1 08 should be reasonable. It might happen with 45nm as many expect, but I can't see any reason why it couldn't happen with 65nm in another revision or two if 45nm wasn't up to it at first. In other words, Intel would move to 45nm anyway to reduce die size, even if it didn't make any major strides in power draw at first. However, the talk has been that 45nm at Intel is much better with power draw than 65nm. I guess we need to take that with a grain of salt, since it was claimed in January 2006 that Intel had Conroe samples hitting 3.33GHz. It would be nice to have some up-to-date comparison information with newer C2Ds to see if the thermal limits have been improving over time (which they likely have). But since the information we do have about stock cooling is nearly accidental, that seems very unlikely. We may just have to wait until Q4 and see if any 3.2 or 3.33GHz 45nm chips appear.
152 comments:
some links are dead
Well, it's pretty obvious that most current applications are single-core/single-threaded, so it's impossible to max out a Core 2 Duo processor, never mind a quad core.
It's surely impossible to max out the processor with the Far Cry game.
Maybe Intel gave some thought to what would happen to their processors in the future if they were maxed out, not only with the stock cooler but also with lots of dust on the cooler and the fan rotating more slowly.
I'm sure Intel saw that with the first Xeons; remember that Intel goes desktop first, then server. Maybe the server versions were already getting too hot, so they backed off a little.
I always thought they were concentrating on quad-core processors instead of higher-clocked dual cores, but after reading your excellent article I'm having second thoughts.
Very good, Scientia, and keep up the excellent work.
Scientia,
Just some thoughts about this review:
AMD Athlon 64 X2 6000+ (Socket AM2)
Why do you think AMD processors are so bad at encoding but are so good at decoding?
Also, why aren't decoding tests done by other web sites?
If it's just Intel's much better SSE units, wouldn't that also show up in decoding?
Scientia, Intel seems to have finally released the native 2MB L2 part. It uses less power than the old version, which had 4MB of cache with half of it disabled:
B2 vs L2
Maybe this version has different specs. But if the die is smaller (2MB vs 4MB), then, like you said, it could also have more hot spots and be more difficult to cool due to less contact surface.
Also, do you or anyone else know of any single-core 65nm processor reviews (3500+/3800+)?
According to this road map:
Intel Native 2MB L2 Conroe/Merom In Feb '07
Intel will have 3.7GHz processors in Q4/2007. But the same road map put 2MB L2 processors in Feb '07, and we are already in Jul '07. The road map missed its predictions by six months!
It’s from 22Oct06.
Dear Mr. Scientia. It is very interesting to me. I meant your postings. May I add your blog to mine? Thanks from Indonesia.
Dear Mr. Scientia. It is very interesting to me. Intel is enjoying the triumph for now. And their marketing people were so arrogant to claim their C2D is the perfect product.
BTW, May I link your blog to mine? Thanks. From Indonesia.
Your first four links don't work.
Intel's word cannot be trusted at all; they promised 10GHz processors too, but where are they? The points you raise about the Intel proponents are very interesting, especially their attempts to deny that Intel ever talked about 3.3GHz and 3.2GHz Conroes. What is most telling, however, is the "methodology" used in testing the thermal limits of Intel processors; this is clear evidence of an attempt to deceive people. The only question is whether they did it of their own volition or were guided by Intel.
azmount & polonium
Yes, thank you; I hadn't realized that there was a problem. I couldn't see anything wrong with the links in html but I inserted them again and I think they all work now.
Wu Tjin Sen
Yes, you can link to any article of mine. On the first page you can see that there is already a link to this article from another website.
all
Comment moderation is disabled again. Comments will appear right away.
As I understand it, exceeding the temperature threshold of a CPU (within reasonable limits) is primarily a long-term reliability issue. Since Intel warranties its processors for 7 years, I think the clock speed issue is more of a long-term reliability issue than a short-term one. And how many of us are using 7-year-old processors? Not many, I suspect.
On 45nm, you seem to be forgetting that this is not just a dumb shrink. Intel has taken a big step on the power savings front with the introduction of metal gates. This from the Inq on the effects:
Intel is claiming the usual 2x increase in transistor density, a given on the shrink between 65nm and 45nm, and better performance on top of it. 30% reduction in switching power, 20% faster switching speed or 5x lower source-drain leakage, and a 10x reduction in leakage through the dielectric layer.
If this doesn't break the 3.0GHz barrier, then there is a real architectural problem. I would even go so far as to say that Intel could break 3.0GHz on 65nm if it were necessary. But I suspect the resources have been channeled into other areas (read Nehalem and Gesher). So even if C2D does hit the thermal wall at 3.0GHz on 65nm, the wall seems to have been moved at 45nm.
Finally, I'm not sure why you are making a big deal out of this while downplaying Barcelona debuting at 2.0GHz. The Barcelona disappointment seems like a much bigger issue to me. Yet you seem willing to assume the best for Barcelona's potential. Why not do the same for C2D on 45nm?
aguia
One might be tempted to attribute C2D's higher encoding speed to its larger cache. However, there is no detectable change in performance between the 4MB and 2MB C2Ds, or between the 1MB and 512K X2s. So, it isn't cache.
The difference between the two is that encoding takes more processing power. Encoding is more complex because decisions have to be made about which parts of the original dataset to drop to create the final version. C2D is about 50% faster than the X2 because of its greater processing power.
However, once the data is in a regular MP3 format, it takes less processing power to decode because decoding is a simple conversion with no data decisions. The X2 is able to keep up because C2D is not utilizing its full SSE potential.
Half- and quarter-cache C2Ds: the die sizes for these chips are the same; they just have less cache enabled. The cache uses little power anyway, so it isn't much of a factor, and there is no real difference in thermal behavior with changes in cache size. With 45nm, however, the entire die has been shrunk to half the original size, so the power could be concentrated into a much smaller area. This could potentially cause negative thermal effects.
polonium
"What is most telling, however, is the "methodology" used in testing the thermal limits of Intel processors"
We are really talking about two different things. When reviewers test processors, they are primarily interested in performance on common benchmarks, and those benchmarks do not typically thermally stress the core. Trying to stress the core by running something like Prime95 is more typically associated with overclocking, which also tends to use premium cooling. So, thermal testing with the stock HSF is rare.
Intel, on the other hand, has to consider all of these factors when releasing a chip at a given clock speed, and they are very much aware that the vast majority of their chips will run with stock HSFs. Undoubtedly, Intel's testing is much more thorough. Intel obviously did its job by not releasing faster clock speeds than the technology could handle. The challenge has been showing that the technology was indeed thermally limited when thermal testing is not typically done.
Scientia
However, there is good evidence that the reason had more to do with temperature and overheating than a warm and fuzzy feeling of being safely in the lead.
E4000 65W Tcase is 61.4°C
E6000 (2MB) 65W Tcase is 61.4°C
E6000 (4MB) 65W Tcase is 60.1°C
X6800 75W Tcase is 60.4°C
Q6600 105W Tcase is 62.2°C
Q6700 130W Tcase is 64.5°C
QX6800 130W Tcase is 54.8°C
If Tcase is the cause or basis, how do you explain a lower Tcase on a higher-wattage Quad-Core and then claim that Intel cannot release a 3.2GHz Dual-Core?
This is why Intel has not released a 3.2Ghz quad core.
Maybe I don't understand, but your links refer to 3.2GHz and 3.33GHz Dual-Cores; who is talking about a 3.2GHz Quad-Core?
"if a given 45nm chip has the same TDP as a given 65nm chip then logically the 45nm chip would concentrate the heat into about half the die area."
Good point. This is probably why AMD isn't releasing its highest-clocked K8s on 65nm. Even with 2/3 the TDP, the smaller die size probably makes the thermal density too high.
"Why do you think AMD processors are so bad at encoding but are so good at decoding?"
There is a feedback loop in encoding, where the encoder adjusts its output to the desired bitrate/quality. When made multi-threaded, this loop incurs inter-core data sharing (one thread/core's output is consumed by another).
For Core 2, the shared 4MB L2 cache allows straightforward sharing between the 2 cores. For K8, the two separate 512KB L2s aren't good for this type of sharing at all. This is also why the QX6700 performs no better than the E6700 here. Apparently the encoder is optimized for Core 2 Duo (2 cores).
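To make the sharing pattern concrete, here is a minimal sketch of the producer/consumer structure being described; the stage names and the rate-control rule are invented for illustration and are not taken from any real codec:

```python
# A minimal sketch of the two-thread encode pipeline described above.
# The point is that thread 1's output is thread 2's input, so the data
# bounces between cores, and a shared L2 serves that hand-off far more
# cheaply than two private L2s that must go through memory.
import queue
import threading

analyzed = queue.Queue()          # inter-thread (inter-core) hand-off

def analyze(frames, target_bits):
    spent = 0
    for frame in frames:
        # Feedback loop: adapt quality to the bit budget spent so far.
        quality = 1.0 if spent < target_bits else 0.5
        analyzed.put((frame, quality))
        spent += int(len(frame) * quality)
    analyzed.put(None)            # sentinel: no more frames

def entropy_code(out):
    while True:
        item = analyzed.get()
        if item is None:
            break
        frame, quality = item     # consumes the other thread's fresh output
        out.append(frame[: int(len(frame) * quality)])

frames = [bytes(1000) for _ in range(100)]
out = []
t1 = threading.Thread(target=analyze, args=(frames, 60_000))
t2 = threading.Thread(target=entropy_code, args=(out,))
t1.start(); t2.start(); t1.join(); t2.join()
print(f"encoded {len(out)} frames")
```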
Oh, I'd also like to say that Core 2's shared L2 cache is a great design, as I've said from the beginning. It (the L2) brings the two cores close together when working on a single process; this makes the processor *very* good for benchmarks and some desktop applications.
About the temperatures at Anand: can anyone tell which sensor temperatures they are reporting, Tjunction or Tcase?
From what I know, encoding takes a whole lot more computing power since the work done is considerably more complicated. It can be written using SIMD instructions, and this is why C2D shows such good results. Decoding an MP3 is a really simple task compared to that, and there isn't much you can parallelize there; e.g., there is no complicated signal processing that easily lends itself to SIMD-ification.
An interesting thing I noticed in the en/decoding benchmark is that there is very little speed difference between Intel- and MS-compiler-generated code; sometimes ICC even generates faster code.
In case that was unclear, the compiler comment was about code speed when running on AMD CPUs.
Any response Scientia?
intheknow
Did you actually read my article? Most of what you just said repeated what I said in my article.
"On 45nm, you seem to be forgetting that this is not just a dumb shrink. Intel has taken a big step on the power savings front with the introduction of metal gates."
However, the talk has been that 45nm at Intel is much better with power draw than 65nm.
"If this doesn't break the 3.0GHz barrier, then there is a real architectural problem. I would even go so far to say that Intel could break 3.0GHz on 65nm if it were necessary."
It might happen with 45nm as many expect, but I can't see any reason why it couldn't happen with 65nm in another revision or two if 45nm wasn't up to it at first.
"The Barcelona disappointment seems like a much bigger issue to me."
Barcelona is a disappointment which I wrote about in a previous article AMD Quietly Announces K10 Launch TimeFrame.
There is little doubt that AMD is not happy with the 2.0GHz launch speed . . . It is obvious that AMD's self-effacing statement was as much a mea culpa as if they had taken out an ad during the Super Bowl. So, AMD's rating is that they are not at the top of their game.
"Yet you seem willing to assume the best for Barcellona's potential."
I don't believe I mentioned Barcelona at all in this article. However, from my previous article where I did mention K10:
I don't know how likely it is that AMD can get the speed up 2 ½ grades from 2.0Ghz to 2.5Ghz. That is what would be required to equal the current top K8 speeds. This speed could also roughly match Clovertown. However, 2.5Ghz for dual core K10 would lag behind 2.93Ghz Conroe. Also, 2.5Ghz is only likely to match a 2.66Ghz Penryn quad core.
"Why not do the same for C2D on 45nm?"
I'm quite certain that Intel will indeed get chips out that exceed 3.0GHz. And I'm pretty certain that if Intel doesn't do it in Q4 of this year, then Q1 08 should be reasonable.
enumae
Sorry for the confusion about quad core; I corrected the article.
The only relevant rating that you have listed is:
X6800 75W Tcase is 60.4°C
which is only slightly above 60°C. I assume you must be looking at:
Q6700 130W Tcase is 64.5°C
which I'll admit is interesting, since they've gotten it up nearly 5°C. However, it doesn't change anything, since the rating for the 2.93GHz quad core is much worse than the X6800's:
QX6800 130W Tcase is 54.8°C
Scientia
Sorry for the confusion about quad core; I corrected the article.
Ok.
The only relevant...
ALT+248 = °, in case you forgot :).
I went ahead and showed them all because when I was looking for them I had to dig a little.
I assume you must be looking at...
Please don't assume; I will rephrase my question in case it was not clear...
The QX6800 (130W) 2.93GHz quad core has a lower Tcase of 54.8°C than the X6800 (75W) 2.93GHz dual core, with its Tcase of 60.4°C.
What is it that allows a higher-wattage chip with twice the cores to have a lower Tcase temp?
There must be other factors involved, such as the heat sink or the TIM (is that the word for the heat spreader on the chip?), and if that is the case, they could be applied to the dual core.
Looking at this, I have a hard time believing that Intel cannot release a dual core with a clock speed of 3.2GHz; they simply don't need it.
However, it doesn't change anything, since the rating for the 2.93GHz quad core is much worse than the X6800's
Isn't it better if they can have a 130W Quad-core work within a smaller thermal envelope than a 75W Dual-core?
ho ho
"About the temperatures at Anand, can anyone understand what sensor temperatures are they reporting? Tjunction or Tcase?"
Right; you can't tell from that article, which was an update. If you look at the previous Thermalright Ultra 120 article, you can see that they use the nTune utility to get temperatures.
NVIDIA Monitor has a drop-down pane for temperature measurement which reports CPU, System, and GPU results. Reviews at this point will concentrate primarily on CPU temperature. In addition to the real-time temperature measurement, NVIDIA Monitor also has a logging feature which can record temperature to a file in standard increments (we selected every 4 seconds).
CPU is the temperature for the whole processor and not an individual core; this would be Tcase. TAT would also show the core temperatures, which would be Tjunction.
enumae
"What is it that allow a higher wattage chip with twice the cores to have a lower Tcase temp?"
Oh, I see. You are reading as: a given chip will typically measure a maximum of Tcase when drawing maximum TDP. This is backwards. You should read it as: a given chip can safely dissipate a maximum TDP if the Tcase is kept below a maximum of Tcase.
Tcase is a cooling spec, not a measured heating spec. If this were measured Tcase then the Tcase should go up with TDP. However, it actually goes down (because a higher wattage chip has to maintain a lower average temperature to prevent local overheating).
You can clearly see this at this link for Xeon:
2.4GHz Xeon (Intel® E7501/E7500 chipsets), 533MHz FSB, 512KB cache, 65.0W, 1.5V, Tcase 74°C
2.4GHz Xeon (Intel® E7501/E7500 chipsets), 533MHz FSB, 512KB cache, 40.0W, 1.3V, Tcase 81°C
Identical except that the higher wattage chip has a lower temperature tolerance. If we take the 40 watt chip and overclock it, the tolerable Tcase will drop as the TDP increases.
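A simplified steady-state model shows why the spec has to behave this way. The numbers below are assumptions chosen only to roughly reproduce the Xeon pair above; they are not published Intel figures:

```python
# Simplified steady-state model: Tjunction = Tcase + P * theta_jc.
# Rearranged, the more watts the die dissipates, the colder the case must
# be kept to hold the junction under its limit. tj_max and theta_jc are
# assumptions tuned to roughly match the Xeon numbers above.
tj_max = 92.0      # °C, assumed maximum junction temperature
theta_jc = 0.28    # °C/W, assumed junction-to-case thermal resistance

for tdp in (40.0, 65.0):
    tcase_max = tj_max - tdp * theta_jc
    print(f"{tdp:4.0f} W -> maximum allowable Tcase {tcase_max:.1f} °C")
# 40 W -> ~80.8 °C, 65 W -> ~73.8 °C: higher wattage, lower Tcase spec,
# just like the 81 °C vs 74 °C pair in the table.
```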
Scientia
Ok, now I understand your view :)
Sorry for the misunderstanding.
Random news is starting to come out; I believe the NDA was lifted tonight (July 15th).
VR-Zone
They might need more clock speed if Barcelona speeds up.
"From what I know encoding takes a whole lot of more calculating power since the work done is considerably more complicated."
Apparently you are (again) not understanding. The difference is exactly as I described: encoding requires a feedback loop and data sharing between threads, while decoding doesn't.
"It can be written using SIMD instructions and this is why C2D shows that good results."
Decoding can be written with SIMD exactly as well as encoding can.
"Decoding an mp3 is a really simple task compared to that and there isn't too much stuff you can parallelize there"
You are mixing thread-level and instruction-level parallelization. Encoding has the same parallelism in terms of SIMD as decoding does, but more at the thread level.
"e.g there is no complicated signal processing that easily lends to SIMD-ification."
This is just wrong.
Some odd results there. I had a quick look at Tom's and found the following review
They test an X6800 against an FX-62, and it looks like they used stock cooling on both parts.
"Under heavy loads, the overclocked Core 2 Extreme reads 151° F (66° C), and power consumption registers 95 watts. Don't forget that these readings occur because the clock rate was boosted from its stock value of 2.93 GHz to an impressive 3.66 GHz. In this situation, the stock cooler fan ran at a rotational speed of 2700 RPM, easy to notice at higher clock rates."
Looking at the various benchmarks I cannot see any throttling when comparing the o/c x6800, x6800 and e6700.
While we don't know the room temperature Tom's ran the tests at, wouldn't you expect an o/c X6800 running at 3.66GHz to throttle?
As Tom's did not seem to experience the same problem as Digit-Life, did someone get lucky with the quality of their processor, or was it a dodgy/badly applied HSF?
p.s. Nice blog you have here, much more informative than certain others I have come across.
Enumae, as you can see in my third post, VR-Zone is not to be trusted.
Single Core Battle : Intel Wolfdale-L vs AMD Spica
3.7GHz processors? I don't think so... and low-end ones? Yeah, right...
Also, if Tick-Tock is really running according to plan:
Conroe-July2006
Penryn-July2007
Nehalem-July2008
slack
""Under heavy loads, the overclocked Core 2 Extreme reads 151° F (66° C)"
They don't say what they used to get a "heavy load". In the past they typically ran Prime95 which we know does not thermally load the processor.
"Looking at the various benchmarks I cannot see any throttling when comparing the o/c x6800, x6800 and e6700."
None of the listed benchmarks would thermally stress the CPU. 66°C should be just under the throttling temperature, which, by the way, is too hot. Throttling exists to prevent the chip from burning up; it does not keep the chip at a safe temperature. Running just under the throttling temperature will still damage your processor. If they are showing a Tcase of 66°C when running something like Prime95, then they are at least 15°C over spec.
"While we don't know the room temp that Tom's run the tests at, wouldn't you expect a o/c x6800 running at 3.66 to throttle ?"
The test lab is air conditioned.
"As Tom's did not seem to experience the same problem as Digit-Lifes did someone get lucky on the quality of their proc or dodgy/badly applied HSF."
In the Digit-Life article, he states that he saw no throttling at 72°F, and he only saw throttling with the CAD application.
aguia
3.16GHz for quad core 45nm with 120 watts
3.33GHz for dual core 45nm with 65 watts
Those seem like reasonable estimates to me. I think the only question is whether these will arrive in Q4 07 or Q1 08.
Intel can't ramp up brand-new tech (a tech that is, by the way, far different from anything they have used so far) in just one month. Or two months. Or three. If that roadmap has any truth in it, then I would expect those 3GHz+ CPUs in the first half of '08.
However, it does sound scary. Will AMD be comfortable selling their Phenom X4s for under $200 in Q1 2008?
I guess the talk will soon be about the Q2 07 results. The webcast of the shareholders meeting begins at 10am ET. You can get to it at AMD Investor Relations.
azmount
Which Kentsfields will be under $200 in Q1 08?
FAB 30 should be reduced to 40% capacity by then (as tooling is removed) with more than 80% of the production total coming from FAB 36.
What does TBD mean in the road map?
I think Intel will introduce a 2.13GHz quad-core CPU in Q1. I'm usually never wrong when it comes to predictions. How about we wait and see; that would be fair, wouldn't you say? If there is no Kentsfield for under $200, then I'll admit on this blog that I was wrong.
Too bad; they didn't ask any questions at all at the shareholders meeting. It looks like we'll have to wait until Thursday to get information from the earnings report at 5pm.
aguia
To Be Determined
azmount
At NewEgg:
1.6GHz Clovertown - $349
2.4GHz Kentsfield - $455
I assume you are thinking of a 2.3GHz Clovertown with the FSB speed reduced from 1333MHz to 1066MHz. $200 would be about where the 2.13GHz Conroe is now. It's possible, but wouldn't that mean that a 1.6GHz Allendale would have to displace the current single-core Celerons, including the three Conroe-L models?
azmount
Anyway, getting back to the issue. If Intel has a 2.13GHz 1066MHz Kentsfield for $200 in Q1 08, then AMD would have to match this price with its 1.6GHz X4 Phenom.
This should still leave plenty of speed grades above this for 1.8, 2.0, and 2.2GHz models. AMD is also likely to have something from 2.3 - 2.5GHz by then, although 2.6GHz is possible.
If TBD means To Be Determined, then that road map is from VR-Zone and not from Intel.
So Intel doesn't even know how many processing threads its own processors will handle, the cache sizes, and other things? Very strange.
I have a hard time believing that a 1.6GHz Phenom X4 will be able to match a 2.13GHz Core 2 Quad. Keep in mind that Hector made it pretty clear that AMD will offer a good performance/price ratio; in the months after the Core 2 release they sure kept things that way (except for the FX series).
One more thing. K8 has reached its limit. They might bump the speed to 3.2GHz, but that's about it. Thus, without a K10-based product, AMD simply can't compete in 2008. If everything, even 3.2GHz K8 desktop models, has to be sold for under $100 in Q1, how can the company stay afloat?
Scientia, in Sharikou's blog you said that AMD has 3.0GHz processors on 90nm while Intel can't do faster than 3.0GHz on 65nm.
Given that Intel's processors are 14-stage cores and AMD's are 12-stage cores, wouldn't that leave even more clock-speed headroom for Intel vs. AMD (12% at least)?
Willamette, Northwood, and Prescott have 21/31 stages, which allow the processor to clock much higher, don't they?
azmount
Some of the missing Nehalem information: it's native quad core but may go to 8 cores with MCM. It uses something similar to HyperThreading, so a quad core could handle 8 threads. It doesn't have an FSB since it uses an IMC. Clock speed is unknown.
"I have a hard time believing that a 1.6GHz Phenom X4 will be able to match a 2.13GHz Core 2 Quad."
Why? A 2.13GHz Kentsfield would be roughly equivalent to a 1.7GHz K10. However, K10's 1600MHz memory speed is faster than Kentsfield's 1066MHz. If Intel can bump this up to 1333MHz, then AMD might need to match with 1.8GHz.
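As a quick sanity check, here is the per-clock factor implied by that equivalence; the ratio is inferred from the claim itself, not from measured data:

```python
# Arithmetic implied by the "2.13GHz Kentsfield ~= 1.7GHz K10" claim above.
# The per-clock ratio is derived from the claim itself, not measured data.
kentsfield_clock = 2.13   # GHz
k10_equivalent = 1.70     # GHz
per_clock = kentsfield_clock / k10_equivalent
print(f"Implied K10 per-clock advantage: {per_clock:.2f}x (~25%)")
```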
"One more thing. K8 has reached its limit. They might bump the speed to 3.2GHz but thats about it.
Thus without K10 based product AMD simply can't compete in 2008. If all, even 3.2GHz K8 desktop models are to be sold for under 100$ in Q1 how can the company stay afloat?"
Well, do you see Intel's 2.4Ghz Conroe's selling under $100 in Q1 08? If AMD can release a 2.5Ghz X2 K10 then they'll have Intel's entire product line covered except for 3.0Ghz or faster Penryns.
aguia
"Scientia you in sharikou blog said that AMD as 3.0Ghz processors in 90nm while Intel can’t do faster than 3.0Ghz in 65nm."
That's not quite what I said. I said that Intel was unable to clock faster than 3.0Ghz on 65nm in 2006. AMD's current fastest 65nm chip is 2.7Ghz. AMD's 2.8 and 3.0Ghz chips are still 90nm.
"Do you think the fact that Intel processors are also 14 stage cores and AMD 12 stages cores, wouldn’t that leave even more clock speed difference to Intel VS AMD (12% at least)."
You can't always go by stages. Pentium Pro was originally 14 stages which was reduced to 12 in PII and then 10 in PIII with no change in pipeline speed.
K10 has the same number of stages as K8 but should be about 20% faster. This should make it the same speed as C2D.
"Willamette, Northwood and Presscott have 21/31 stages that allow the processor clock much higher, isn’t it?"
Prescott only gained about 6% by adding 10 stages. This is because most of the stages were added to remove the double clocked components that were overheating in Northwood.
One question. (I'm getting annoying, eh, Scientia?)
I read the whole X-bit review and noticed that they used Prime95 to load/burn the CPUs.
Does this application really load/burn all the CPUs out there?
Is it multi-core aware?
That's not quite what I said.
Sorry. I must have misunderstood you.
I was equating higher clock speeds with more stages, not relating it to performance.
Just clock speed.
Scientia
" In the past they typically ran Prime95 which we know does not thermally load the processor."
Do you know of any regularly used applications that load the CPU nearly as much as TAT?
aguia
"Willamette, Northwood and Presscott have 21/31 stages that allow the processor clock much higher, isn’t it?"
They are actually 20/31 stages. Also, AMD has hybrid pipelines, with 12 stages for integer and 17 for floating point; Core 2 has 14 for everything.
Anyway, 12 stages is enough to get to at least 4GHz and probably much higher. Short pipelines are not what keeps CPU clock speeds down: the K8 has been shown to work at >4GHz, Core 2 at >5.5GHz, and 30-stage Netburst at >8GHz. Of course, that is with extreme cooling.
Btw, don't the Cell SPUs have something like 7-8 stage pipelines at 3.2GHz? Doesn't Power6 also have an extremely short pipeline?
Would you be surprised if they were to put out a 3.2GHz Core 2 Duo in the near future on the 65nm process? Or would your excuse be that it's 2007, not 2006?
Couldn't it be that in 2006 they saw that their flagship chip, the X6800, was running in the lead untouched... that they had NO need of a 3.2GHz part whatsoever? Couldn't there be the slightest possibility of that?
Plus, the analogy you used on AMDZone is flawed, as you used the low-end chips, which are outperformed by higher-end chips.
Do you have a direct quote from the horse's mouth about a 3.2GHz release in 2006?
1) Your first article: did you read the last line? "Intel Corp. officials did not comment on the news-story."
So your first article is already debunked, as it was not official.
2) Next, an article from a blog? Are you serious about this? "although the roadmap did not have information about a 3.2GHz Conroe." What representative? No information whatsoever. He could have made anything up.
3) More news from a blog that wants Mel Gibson to run this country? Another joke?
E6900? Intel said no such thing. Even the roadmap they show doesn't include an E6900. Written in March 2006, when information was sparse, this article is probably the most useless of your backup.
4)"Mike's Hardware originally had this listed as well but it was later changed."
Why do you think it was changed? Perhaps Intel never said such thing?
You linked to articles from other people but yet NO OFFICIAL ANNOUNCEMENT THAT INTEL WOULD HAVE 3.2GHZ!
I'm surprised that people even went past all that to read the article. You article never had a foundation because all your resources are flawed as I mentioned above.
Can you provide me with ONE SINGLE official announcement from Intel that they even had 3.2Ghz on the roadmap? Most likely not because they never did.
The roadmaps were made up, and people bought em. 3.2Ghz in 2006 was all fud spread by selected few. Intel didn't say they would have 3.2Ghz on any of their roadmaps.
Sorry Sci, you grasping on loose straws.
Scientia said....
A 2.13GHz Kentsfield would be roughly equivalent to a 1.7GHz K10.
and then:
K10 has the same number of stages as K8 but should be about 20% faster. This should make it the same speed as C2D.
No, no, no; you are overestimating the importance of the better interconnect in K10.
It will scale better on multithreaded apps than Core 2 Quad, but not by that much. Also, I see no reason to think that K10 can perform better on single-threaded apps than Conroe.
1: K10 has 512KB of L2 at 14 clocks + 2MB of L3 with even bigger latency, vs. 4MB of L2 at 14 clocks in Conroe; the bigger size plus better hardware prefetching will favor Conroe.
2: K10 has 128KB of L1 vs. 64KB of L1 in Conroe, but K10's is 2-way associative vs. 8-way in Conroe, which in many cases will put Conroe's L1 cache on par with K10's, or close to it.
3: K10 can do 3 simple instructions per cycle, but so can Conroe.
4: K10 can process 2 128-bit SSE instructions per cycle vs. 3 128-bit in Conroe. Some other features will bring K10's SIMD performance closer to Conroe's, but not equal and not better.
5: K10 can do two 128-bit loads/stores from L1, but so can Conroe (actually one 128-bit store + one 128-bit load).
6: K10 has much better FP performance. And it will matter little on the desktop.
Unless my information is incorrect, I have to assume that K10 will fall behind Conroe in single-threaded performance.
azmount aryl
"K10 can proses 2 128bit SSE instructions per cycle vs. 3 128bit in Conroe"
What instructions are those? I always thought that Conroe does two instructions per cycle.
Also, didn't Core2 L1 cache have lower latency thn K10?
K10 has much better FP performance. And it will matter little on the desktop.
Are you sure?!
I'm pretty sure improved FP = improved SSE.
Scientia,
This seems very interesting for you (I think):
Core 2 Duo Temperature Guide
They are talking about the same things you talk about in your article, and it seems they corroborate what you have said here.
aguia
"I'm pretty sure improved FP = improved SSE."
He was talking about x87 FPU, not SSE. Even K8 has better x87 than Core2 but as you know it still gets beaten by C2D.
"They are talking about the same you are talking in your article and seams they corroborate with what you have said here."
This is because Scientia based his article on theirs.
ho ho
Core 2 Duo Temperature Guide:
Some users may not be aware that Prime95, Orthos, Everest and assorted others, may simulate loads which are intermittent, or less than TAT. These are ideal for stress testing CPU, memory and system stability over time, but aren't designed for testing the limits of CPU cooling efficiency.
Primary Test = TAT @ 100% 10 Minutes
Alternate Test = Orthos @ P9 Small FFT’s 10 Minutes
Orthos Priority 9 Small FFT’s simulates 88% of TAT ~ 5c lower.
mo
"Would you be surprised if they were to put out a 3.2Ghz Core 2 Duo in the near future on the 65nm process?"
Mo, the article covered 2006. Intel has continued to make improvements to its 65nm process which is why I said in my article that I was sure that Intel could get above 3.0Ghz on 65nm at some point.
"that they have NO need of a 3.2Ghz what so ever? Couldn't there be a slightest possibility of that?"
It's possible that they didn't feel they needed it but it is also true that they were not capable of releasing one.
"Do you have direct quote from the horse's mouth about 3.2Ghz release in 2006?"
Intel representatives just contacted DailyTech with the following information:
The Core 2 Extreme processor (Conroe based) will ship at 2.93GHz at Core 2 Duo launch. We will also have a 3.2GHz version by end of the year.
You don't like it because it is a blog? Is George Ou's the only blog you read then?
"Can you provide me with ONE SINGLE official announcement from Intel that they even had 3.2Ghz on the roadmap?"
No, just like you will not be able to find one single official AMD announcement of 2.3Ghz processors at launch. Nor will you be able to find an official statement that Intel will have 3.33Ghz Penryns at launch.
I have not seen an official roadmap from either Intel or AMD in the past 8 years. So, I guess by your logic neither company has ever intended to release anything.
In other words, it is pretty much impossible to load the cores as much as TAT does with any real-world application or use case.
1) I don't read George Ou's blog; never have. Never have I referenced George in any of my replies, and never have I replied on George Ou's blog. I don't even have the link to his blog. I did read your rant on it over at AMDZone, in which the thread took a total dive.
2) I'm not the one writing a blog entry trying to prove a theory. You are. You should be able to provide SOLID references, and sad to say, you have none. All your references have been to blogs and not-so-credible articles written well before the official release of Core 2 Duo. A lot of people were speculating about what would be released. There was no official word from Intel.
Intel never confirmed a 3.2GHz chip, and especially not for 2006. It was all guesstimates from people drawing their own roadmaps.
I'm not the one trying to prove that AMD doesn't have 2.3 or 2.6GHz Barcelonas.
Your article fell apart in the first few paragraphs because your sources are weak.
Sorry, but some guy guesstimating about an upcoming product in his blog is not really news to me.
This is a public blog, and as a public reader of your blog, I question the validity of your article. Basically, you wrote about something that never existed to begin with.
You have no concrete evidence that Intel ever had 3.2GHz on their 2006 roadmap, just like I can't say that AMD doesn't have a 2.6GHz Barcelona.
He was talking about x87 FPU, not SSE. Even K8 has better x87 than Core2 but as you know it still gets beaten by C2D.
Then he also knows for sure that:
-AMD designed the x86-64 instruction set.
-AMD removed the x87 instruction set from x86-64 (MMX got removed too).
-AMD would not improve something that they themselves removed.
scientia
"Intel representatives just contacted DailyTech with the following information ..."
I can't count how many times I've heard similar things on the Inquirer and Fudzilla, but even you should know that those things aren't always true.
aguia
"-AMD as removed x87 instruction set out of x86-64 (MMX got removed too)."
Funny thing is that both work excellently in 64-bit mode. I know that the contents of the MMX/x87 registers are not kept when switching kernel-level threads, but they work just fine in userspace code.
"-AMD would not improve something that they removed by themselves."
Can you show me where it is said to be removed? I know it really isn't removed, but amuse me :)
Funny thing is that both work excellently in 64-bit mode.
They have to, in legacy 32-bit mode or in 64-bit compatibility mode.
In native 64-bit mode, no.
Can you show me where it is said to be removed? I know it really isn't removed, but amuse me :)
Prepare for a surprise ;) hoho hoho :)
The 64-bit advantage
aguia
"Native 64 bit mode, no."
I'm quite sure we had the discussion with Scientia some time ago and there was some link to MS documents where it was said that you can use them in user level code but in kernel space the contents of those registers are not kept over a context switch.
One source is here:
Early reports claimed that the operating system scheduler would not save and restore the x87 FPU machine state across thread context switches. Observed behavior shows that this is not the case: the x87 state is saved and restored, except for kernel-mode-only threads. Nevertheless, the most recent documentation available from Microsoft states that the x87/MMX/3DNow! instructions may not be used in long mode.
Basically it is exactly as I said: they work just fine in 64-bit native mode, just not at kernel level. Of course MS says you shouldn't use those instructions, but that doesn't stop them from working. Basically, it is the same as AMD and Intel both telling you for years to use SIMD instructions instead of x87.
mo
So, if I'm understanding you, you are saying that Intel never intended to release chips faster than 3.0GHz in 2006, but that if they had wanted to, they would have been able to.
And you believe this without any official statement from Intel. So, basically, you are willing to believe good unofficial things about Intel but not bad unofficial things about Intel.
Intel Leaks 45nm Xeon Clock Frequencies
These look reasonable to me. It doesn't mention a timeframe, but I would assume Q4 07 for these since they are server chips. However, because these include half clocks, they may not all be released at once.
Scientia wrote:
So, if I'm understanding you, you are saying that Intel never intended to release chips faster than 3.0GHz in 2006, but that if they had wanted to, they would have been able to.
Dear host, you are coming to a conclusion that ignores rational behavior based on the economics of chip manufacture.
On every lot of wafers produced, regardless of manufacturer, there is a distribution of chips. It may be Gaussian, it may be Poisson, it may be something else, and it is nearly always skewed in some vector or another (fmax at the expense of yield, power at the expense of fmax, ...). Since we are discussing fmax, there are typically two options available: produce standard lots and get low bin splits (small percentages of the lot) in the highest speed bin, but good overall die yield with lots of lower speed bins; or run a skew lot, where the top bin split is much higher but die yield is likely lower, typically due to higher leakage (from CDs, implants, and a number of other process tricks that yield speed).
Economically speaking, this means that introducing a new top bin would be for one of two reasons:
1. Defensive purposes: the competitive environment requires a faster chip to compete.
2. Offensive purposes: the competitive environment can be changed by introducing a faster part. This could be viewed from the "final nail" viewpoint.
In the case of defensive purposes, there isn't a lot a company can do; if their hand is forced, it is forced. You do what you need to to compete. In the second scenario, however, there are additional considerations. If the new top bin is not required to compete, then profit/revenue-maximizing behavior should drive the release decision. If by releasing a new top bin I reduce the number of $1K parts I can sell, and take my current performance leader and whack its price in half, have I sabotaged myself? If I can get high enough yields of the top bin, this is an easy decision: crush the competition. But if I can't, and I can only generate a few percent, then why do it?
I believe the latter is the case at Intel today. If Intel HAD to release a higher bin, they would do so. Manufacturing capacity being what it is, they can afford to tank yield on some percentage of skew lots for more higher-bin parts. They can even afford to put some "better than stock" cooling on said parts for a few $$/SKU. But the marketplace isn't demanding it, because AMD is currently not executing, and therefore Intel is maximizing profit.
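A toy revenue model of that trade-off; every number below is invented purely for illustration:

```python
# Toy model of the bin-split economics above; every number is invented.
# A "skew lot" trades overall die yield for a fatter top speed bin.
def lot_revenue(good_die, bin_fractions, bin_prices):
    return sum(good_die * f * p for f, p in zip(bin_fractions, bin_prices))

prices = [999, 530, 316, 224]  # $ per chip, top bin down to slowest bin

standard = lot_revenue(900, [0.03, 0.22, 0.45, 0.30], prices)  # high yield
skew     = lot_revenue(700, [0.15, 0.35, 0.35, 0.15], prices)  # pushed for speed

print(f"standard lot: ${standard:,.0f}   skew lot: ${skew:,.0f}")
# Which lot wins depends entirely on the price ladder and the yield hit,
# and a new top bin also pushes today's $999 part down the ladder: the
# self-cannibalization described above.
```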
Thoughts?
The DailyTech chart only shows a speed of 3.33GHz for dual core on 45nm, a year later. 3.33GHz still seems very unlikely by the end of 2006.
"I'm pretty sure improved FP = improved SSE."
"He was talking about x87 FPU, not SSE. Even K8 has better x87 than Core2 but as you know it still gets beaten by C2D."
Stop this pointless guessing and ranting, please. Doesn't anyone have the technical depth to know the difference (and the similarities) between x87 and SSE anymore?
First, the facts: K10 has better FP due to 2-4x the SSE throughput. Its x87 (80-bit) capability, however, remains the same as K8's.
Apparently, Azmount was wrong when saying K10 "has much better FP performance" in the context outside of SSE. Then he suggests the better FP is useless to the desktop. Well, the better SSE FP is very useful to media applications, which desktops increasingly run.
Also, in terms of arithmetic circuits, K8's x87 isn't really better than Core 2's. However, K8 has a dedicated floating point register file that Core 2 doesn't, and that makes K8 perform much better for scientific applications that heavily use floating point instructions.
"there was some link to MS documents where it was said that you can use them in user level code but in kernel space the contents of those registers are not kept over a context switch."
There is really no need for such speculation. Read this paragraph from the AMD K8 Software Optimization Guide, p. 237:
"In general, 64-bit operating systems support the x87 and 3DNow! instructions in 32-bit threads; however, 64-bit operating systems may not support x87 and 3DNow! instructions in 64-bit threads."
Why? Because in 64-bit mode the hardware no longer manages/preserves the x87 state upon task switching. For confirmation, read the AMD64 System Programming manual, p. 321, in the Hardware Task-Management in Legacy Mode section:
"The processor does not automatically save the registers used by the media or x87 instructions. Instead, the processor sets CR0.TS to 1 during a task switch. ... System software can then save the previous state of the media and x87 registers and clear the CR0.TS bit to 0 before executing the next media/x87 instruction."
Note that "none of these features are supported in long mode (either compatibility mode or 64-bit
mode)," according to the same manual on p.318.
abi said...
Apparently, Azmount was wrong when saying K10 "has much better FP performance" in the context outside of SSE
I did mention better SSE performance earlier in my post. I explicitly noted the better FP performance because I was making the point that FP performance is not as important on the desktop as it is, say, in workstation or server environments. Do you not agree that most games/encoding benchmarks will gain very little from better FP performance (forget about SIMD for a second here)?
Also, saying that FP = SSE is kinda correct; K10 does use the same die logic to process both. So if you think that the logic there is for FP processing first and SSE comes along as a little side effect, then yeah, sure, FP = SSE. Happy now?
"Also, saying that FP = SSE is Kinda correct, K10 does use the same die logic to process both."
What you said here is correct. And you would be correct to say K10 has better FP than K8, provided the FP means SSE-format floating point calculations (which however will be quite useful for all media calculations).
Thus aguia was correct to say "improved FP = improved SSE", because the improved FP in K10 is precisely due to its improved SSE. (What a dull logic!)
OTOH, Ho Ho's comment that you were talking about x87 rather than SSE would be misleading, which lead me the conclusion that your initial statement was wrong - because K10 does not have better x87 FP than K8. x87, being another example of lousy Intel design, simply cannot utilize the instruction-level parallelism in modern processors. The additional floating-point circuits in K10 are totally useless to x87 instructions.
Thanks for your clarification.
abinstein
"However, K8 has a dedicated floating point register file that Core 2 doesn't, and that makes K8 perform much better for scientific applications that heavily use floating point instructions"
With what does Core2 share its floating point register file that K10 doesn't?
"With what does Core2 share its floating point register file that K10 doesn't?"
Oops, I meant to say instruction scheduler, not register file. Somehow I was thinking the former but typing the latter without realizing it. Thanks for pointing it out. But IIRC K8 has a larger fp register file than K10.
Oops.. again.. I meant to say K8 has larger fp register file than "Core 2", not (apparently) K10. :p
abinstein
"K8 has larger fp register file than "Core 2", not (apparently) K10. :p"
I don't know how many rename registers the CPUs have, but I'm quite sure that just having more shadow registers won't help much in terms of performance. It will help a bit, but there are lots of other things that affect performance a lot more.
Does anyone know of any Socket AM2+ motherboards besides this one?
Biostar TF560 A2+
AMD is not going to survive. I guess even the high-ranking executives know that and are probably trying to leave the boat. There is simply nothing that can be done to reverse the situation. The crows are gathering.
As much as we don't like it, it is going to happen, and I am curious to see when Scientia will admit it.
gdp
I've already debunked this rumor at AMDZone. Samsung cannot buy AMD because AMD's process technology agreements are encumbered with IBM. And Samsung cannot get process technology from IBM because IBM has agreements through 2011 with Samsung competitors Sony and Toshiba. Secondly, IBM cannot buy AMD because IBM also sells systems. With the technology encumbrances, it is also not possible to break up AMD and make a profit.
It is possible that AMD could be purchased, but it would have to be by a company that could pick up AMD's current process agreements with IBM as well as AMD's license agreements with several other vendors, including RAMBUS. However, there are reasons to think that AMD would not go bankrupt.
Truth. This is a company that can sell all of its processors for under $100 and still make money. By the way, when does the ATI acquisition stop making an impact on quarterly results? (I'm referring to that $500M dent it has been leaving lately.)
gdp77 said...
"AMD is not going to survive. I guess even the high rank executives know that and probably trying to leave the boat. There is simply nothing that can be done το reverse the situation. The crows are gathering
As much as we don't like it, it is going to happen and I am curious to see when scientia will admit it."
I think you better start worrying about Intel's finacial situation instead gdp77. 3.5 billion USD in fines by an EU anti-trust ruling is quite a bitter pill for even Intel to swallow. From what I've seen, thier pockets aren't quite so deep as they have been, or as most people like to think as well thanks to their price war with AMD.
EU Agents 'Raid' Intel Offices in Antitrust Probe, 07/12/05.
Antitrust proceedings: EU Commission nears a decision on Intel
http://translate.google.com/translate?u=http%3A%2F%2Fwww.planet3dnow.de%2Fcgi-bin%2Fnewspub%2Fviewnews.cgi%3Fid%3D1184529814&langpair=de%7Cen&hl=de&ie=UTF-8&oe=UTF-8&prev=%2Flanguage_tools> Does Intel face fines of 3.5 billion US dollars?
"In the EU antitrust proceedings against Intel, a decision is imminent, according to Wirtschaftswoche.
In the issue appearing tomorrow, Monday, a decision is considered possible before the summer break at the end of July. Philip Lowe, director-general for competition, is quoted as saying that the case will be closed in the foreseeable future.
In the course of the investigation, almost exactly two years ago, Intel offices in several EU states were searched at the instigation of the EU Commission (we reported).
If found guilty, Intel faces fines of up to 3.5 billion US dollars, as well as competition remedies.
Whether penalties will also be imposed on Media-Saturn Holding, owned by Metro, which also became a subject of the proceedings (we reported), is not yet known."
As far as AMD's situation goes, it's anyone's guess. They have been in this situation before: in the pre-K8 days, the pre-K7 days, the pre-K6 days (see a pattern here?), and it's very possible they can climb out of the hole again. Execution and delivery of the K10 family on price, performance, and distribution will ensure this. It's not going to happen as fast as we'd like, but I can look at their past market downturns like this, between processor generations, and see that they can just as easily pull themselves back up with their next-gen wares. As far as your gloom and doom opinions go, gdp77, you're welcome to them, but personally I think you've far underestimated AMD's "scrappiness" IMHO. They've been in worse financial trouble and survived, time and time again. AMD is without a doubt down right now, but nowhere near out of the game.
Scientia, I just want to add that I'm a long-time reader of your blog and your posts on AMDZone. I always appreciate your down-to-earth viewpoints and the fact that you pull no punches for AMD or Intel. Kudos, and keep up the good work, man.
Scientia, I can't keep doing your homework ;)
Intel roadmap confirms
Intel Core 2 Extreme demonstrated with 3.5 GHz
I'm sure they will help you clarify a few things.
azmount aryl
"This is the company that can sell all of their processors for under $100 and still make money"
Only big CPU company that has made profit during the last three quarters has been Intel. Are you referring to them? You know they have pretty much all the price points covered from <$30 to >$1000.
"By the way, when does ATi acquisition is to stop making impact on quarterly results?"
Since Q4 06, IIRC only a tiny bit got carried over to Q1 07
Dear God, dare Sci use Tomshardware as his backup? Articles filled with "we hear" and "sources tell us"...
I'm sure you missed this part:
"The roadmap does not list such a chip and in fact shows the dual-core Core 2 Extreme to be phasing out by Q1/Q2 of next year. "
So on all other days it can't get any worse than Tom's, but on days like these it can't get any better than Tom's, lol. Perfect.
Aguia
As per the quote above. The roadmap showed that Core 2 Extreme would be phased out in Q2 07, and it is fading out. Nowhere on the roadmap did it have a 3.2GHz chip. I don't know who or what confirmed the 3.2GHz.
You have got to be kidding me if you are going to use those TG articles as your backup. They clearly state that nowhere on the roadmap did they see a 3.2GHz chip at the end of 2006.
Great job looking for something completely useless.
"As far as AMD's situation goes, it's anyone's guess. They have been in this situation before in the pre K8 days, pre K7 days, pre K6 days (see a pattern here?) and it's very possible they can climb out of the hole again. Execution and delivery of with K10 family on it's price, performance and distribution will ensure this."
K7 performed better than anything Intel had to offer back then. K8 completely destroyed the competition. So there were reasons that allowed AMD to get back on its feet. Unfortunately, we all know very well that K10 won't even be near the competition.
My "doom and gloom" is just a logical assumption.
Intel fanboy GDP wrote:
...My "doom and gloom" is just a logical assumption.
That's because you're intel's cheerleader. =)
Scientia, no thoughts on my economic argument for not releasing higher speed bins? It seems logical, and in the absence of hard data (your inferred conclusions do not constitute hard data), just as plausible if not more so.
Remember, AMD and Intel are both businesses first, and that drives their behaviors (or at least is supposed to). From a technical perspective, Intel has demonstrated in the past that they will use more advanced cooling solutions to get higher speed bins out the door when the marketplace demanded such action. They aren't doing that now.
"My "doom and gloom" is just a logical assumption."
Your doom and gloom is all bullsh*t. How do you know K10 won't be up to competition when you haven't even seen the chips? I guarantee you K10 will have 15% better IPC, better performance per watt, and much better scalability than penryn. Wanna bet on your reputation? Come back when K10 is released.
If you think K10's problem is its low clock rate, then remember that K7's and K8's initial clock rates were low as well. The real problem of AMD is not its chip design. AMD has a better microarchitecture and a much better system architecture than Intel at this date. AMD's problem is that its 90nm SOI is not competitive with Intel's 65nm bulk Si, and its 65nm is not with Intel's 45nm.
In other words, if IBM wants AMD alive, AMD will be.
Abinstein
Being a little hypocritical, are we not?
You say "How do you know K10 won't be up to competition when you haven't even seen the chips?"
Then you say "I guarantee you K10 will have 15% better IPC, better performance per watt, and much better scalability than penryn."
Dare I go further?
Plus, what good is 15% better IPC when you could possibly be at a >50% clock disadvantage?
If you say that AMD can ramp up Barcelona, can't the same be said for Penryn?
Intel is already starting to bring Nehalem into the press. By next year they will be talking about Gesher.
Albeit, you gotta admit Intel definitely knows how to add fuel to an already blazing fire.
Don't worry Mo, it's just Abinstein's bias. Don't take him seriously.
So mo you only read what you like?
Intel today confirmed that Core 2 Extreme will launch with 2.93 GHz/FSB1066 next month and will be available with 3.2 GHz by the end of the year.
It is unclear at this time what will happen to the current Core 2 Extreme, which was confirmed by Intel to be available as a 3.2 GHz version by the end of this year.
Who was that from Intel who confirmed it? The same one who confirms stuff for Inquirer and Fudzilla?
Who was that from Intel who confirmed it?
At a presentation today in Santa Clara
according to Intel.
Intel today claims
Intel claims
Maybe the guy who wrote the article talked with someone at Intel.
It's funny how easily you guys believe anything that casts dark clouds over AMD.
Good things about Intel, you all easily believe too.
This case is also good news for Intel, but since it wasn't delivered and didn't happen, and since it supports Scientia's article, you all easily disbelieve it.
So TOMSHARDWARE, ANANDTECH, and all the others are wrong, and they all came up with the same invented story. Amazing!
aguia
"It’s funny you guys easily believe anything that says bad clouds to AMD. "
My point was that this is basically picking out the pieces you like. Whenever Tom's reports something bad about AMD, the site is labelled as pro-Intel and can never be trusted.
"So TOMSHARDWARE, ANANDTECH, and all the others are wrong, all they come up with the same invented story. Amazing!"
Sharikou seems to believe the story that everyone who shows benchmarks where C2D is ahead of X2 is a paid Intel pumper :)
Happier now, ho ho, mo, and the others:
Intel’s New Core Extreme Chip Has 4 Cores, 2.66GHz Clock-Speed.
It seems that I should be getting paid here. I'm doing Scientia's work and doing you guys' work. I will create a PayPal account where you can all pay me.
;) :)
PS: The dates are the same in both news items: Thursday, August 17, 2006.
I really don't see the point of tangling up multiple arguments to try to pretend that you have something important to say.
1.) Fact versus speculation. If all the facts were known then I wouldn't be writing an article. Engaging in analysis and speculation is common in many areas, not just microprocessors. Most people understand the basic parameters of speculation although others don't seem to. Merely pointing out that something is speculation or that a source may not be accurate is like saying the sky is blue and feeling that you've said something very profound. Refuting this piece could be done by showing that rumors had a single source or by showing that each source was definitely wrong rather than speculative. It could also be done by showing an official roadmap from Intel from the same time frame or earlier. None of these things have been done.
2.) 2006. We've seen rumors many times that have a single source. For example, the recent rumor that AMD would get rid of its FABs in 2008 came from a single source. Since the 3.2GHz rumors in 2006 did not appear to come from one source, that should be sufficient. For example, it seems unlikely that both Anandtech and TG Daily would have had the same source.
Secondly, if it were simply a matter of doing only what was necessary, then we have to ask why Intel didn't cut the L2 cache in half for most desktop chips, as this was all that would have been necessary. Really, only the quad core and dual socket chips needed that much cache. This does suggest that Intel would have released faster chips if it had been practical.
3.) K10 performance. Without extensive testing there is no way to know the specifics. However, the architecture of K10 has been improved substantially over that of K8. Barring a serious design flaw (which is still possible at this point) K10 has to be a bit faster in Integer and nearly twice as fast in SSE as K8. I'm currently expecting a 20% improvement in Integer IPC. This should be enough to match Conroe at equal clock but I don't know about Penryn. It looks like Penryn may get another 7% Integer IPC.
4.) Clock speeds. 45nm Penryn chips faster than 3.0GHz seem likely in Q4 07. I doubt K10 has any chance of exceeding 2.6GHz in Q4. However, given the initial clock speeds, 2.3 - 2.5GHz seems more likely. This doesn't seem like enough speed to make AMD competitive in any area except maybe quad core 2-way and 4-way against Intel systems using FBDIMM (see the arithmetic sketch below).
5.) Toms Hardware Guide. I'm not sure why someone would make a comment about THG in terms of information. My complaint has been about testing. I've seen sloppy testing at both THG and Anandtech for years. I've still yet to see any testing done with the PGI compiler even though it costs the same as the Intel compiler. One has to wonder why. Is it possible that none of the testing sites are aware of the PGI compiler, or could it be that they are afraid that Intel may lose some of its lead?
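To make the arithmetic behind points 3 and 4 concrete, here is a rough back-of-the-envelope sketch in Python. Every input is one of the speculative estimates above (a 20% integer IPC gain for K10 over K8, IPC parity with Conroe, ~7% more for Penryn, 2.5GHz for K10 versus a hypothetical >3.0GHz Penryn bin), not a measurement, so treat the output as illustration only.

# Rough model: relative performance ~ relative IPC * clock (GHz).
# All inputs are the speculative estimates from points 3 and 4 above.
k10_ipc = 1.20                   # assumed 20% integer IPC gain over K8 (K8 = 1.0)
conroe_ipc = k10_ipc             # assumption above: K10 matches Conroe at equal clock
penryn_ipc = conroe_ipc * 1.07   # assumed ~7% further integer IPC for Penryn

k10_perf = k10_ipc * 2.5         # optimistic 2.5GHz K10 in Q4 07
penryn_perf = penryn_ipc * 3.16  # hypothetical >3.0GHz Penryn bin

print(f"Penryn lead over K10: {penryn_perf / k10_perf - 1:.0%}")  # about 35%

Even granting K10 IPC parity with Conroe, the assumed clock deficit alone leaves a sizable gap, which is the point of item 4.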
ho ho
"Sharikou seems to believe this story that everyone who show benchmarks where C2D is ahead of X2 are paid Intel pumpers :)"
The specific testing at one site or in one review may not be that great. But, there has been enough testing in multiple reviews and at many sites. It would be ridiculous to suggest that C2D is not faster.
In fact, even with the PGI compiler I would expect C2D to be faster on most systems. Since AMD is closer in dual socket server systems, a small bump could put them in the lead there.
BTW, for those not familiar with logic, the fallacy in regard to THG is called Hasty Generalization. This is when something that only applies in a specific case is misapplied in general to every case.
scientia
"I've still yet to see any testing done with the PGA compiler even though it costs the same as the Intel compiler"
Do you remember the audio en/decoding benchmark where they compared MSVC vs ICC? There, ICC generated better code for AMD than MSVC did. Are you sure that ICC still has the kind of "protection" built in that it used to?
abinstein
"What bias?"
It is kind of offtopic here but you interpreted those MPI results in the opposite way trying to show that K8 based clusters scale better whereas in fact Core2 does, just see the links I posted on your blog.
ho ho
"Do you remember the audio en/decoding benchmark where they compared MSVC vs ICC? In there ICC generated better code for AMD than MSVC."
The scores are not unusual. ICC typically gives a small improvement to AMD. If you actually look at the scores, you see that the average encoding improvement for C2D is 10% while the average for AMD is 0%. Strangely, for decoding, where AMD is already competitive, Intel gets an 11% increase while AMD gets 7%.
" Are you sure that ICC still has that kind of "protection" built in as it used to?"
It doesn't matter whether it does or not. The point is that ICC does not produce optimal code for AMD. Portland Group claims that the PGI compiler produces faster code than the Intel compiler for both Intel and AMD. If this is true then logically this would be the compiler to use.
The reason it is not used in reviews may have to do with the fact that it benefits AMD more than it does Intel (since Intel already uses a fairly optimal compiler). As I've said before, if Portland's claims have any truth to them, code compiled with PGI could reduce Intel's Integer lead to 10-15%. I doubt this would make Intel very happy. Do you suppose this is why PGI is not used? Do you have any reasonable argument why PGI should not be tested?
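To illustrate the kind of shift being claimed, here is a toy calculation; all three percentages are hypothetical inputs chosen only to show how the arithmetic works, not measured results.

# Toy example: how a vendor-neutral compiler could shrink a benchmark lead.
# These inputs are hypothetical, not measurements.
intel_lead_icc = 0.25     # suppose Intel leads integer by 25% when both use ICC
amd_gain_pgi = 0.10       # suppose PGI speeds up the AMD binary by 10%
intel_gain_pgi = 0.00     # suppose Intel gains nothing (ICC already near-optimal)

new_lead = (1 + intel_lead_icc) * (1 + intel_gain_pgi) / (1 + amd_gain_pgi) - 1
print(f"Lead after recompiling with PGI: {new_lead:.1%}")  # about 13.6%

A compiler-side gain of 10% for AMD alone would pull a 25% lead down into the 10-15% range mentioned above, which is exactly why the choice of compiler in reviews matters.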
Edited: abinstein said...
Mo-
"Dare I go further?"
It's speculation. That's why I asked whether he wanna bet. Do you?
"Plus what good is 15% IPC when you could possibly be at a >50% Clock disadvantage."
Did you actually follow what I was commenting on at all? As I've said, and you apparently didn't read, K7 and K8 had slow initial clock rates as well. Or maybe AMD should have gone under twice already and you're living in an illusion?
"If you say that AMD can ramp up Barcelona, can't the same be said for Penryn?"
Higher clock (e.g. >3.33GHz) for Penryn will be beneficial only if FSB speed is raised higher (e.g. >1.6GT/s). If you see that coming, let me know.
Poke-
"Don't worry Mo, it's just Abinstein's bias. Don't take him seriously."
What bias? Care to spell it out rather than commenting carelessly like this?
Edited: Mo said...
Sci, you wanna talk speculations vs facts?
Let me get to this reply of yours above, I never got a chance to comment on it and would like it to be more clarified.
You Said
"It's possible that they didn't feel they needed it but it is also true that they were not capable of releasing one."
so it's a possibility that they didn't feel the need,
BUT it's a truth that they were not capable of releasing one?
Do you see something wrong with that? That's how you turned speculation into fact (truth).
What proof do you have, besides the speculation in this entry, that led you to this "fact"?
Aguia: Those articles are useless, like I said. I could attend a presentation tomorrow and write that Intel confirmed they will release a 5.33GHz Penryn next week, even though at the SAME TIME Intel showed no such chip in its presentation or roadmap.
Do you get where I'm coming from?
INQ and FUDZILLA get a lot of confirmations too; do you buy all their confirmations?
As for your last link, please do me a favor and read the last line. Let me quote it:
Intel did not comment on the news story.
Question: If Intel did not comment on the story then how did they get such confirmations?
I'm done commenting on this, you can have the last word.
Abinstein.
Let me get this straight.
You call HIS speculation nonsense but offer your own speculation on the same front?
Why is your speculation more valuable than his?
You tell him that he has not seen the chip so he can't speculate but then you turn around and speculate yourself even though YOU have not seen the chip either.
Don't go around calling his speculations nonsense merely based on "not seeing the chip" and then doing the same thing yourself.
Hypocrisy at its best.
abinstein said
"Your doom and gloom is all bullsh*t."
Sci, if you are going to censor our posts, have the decency to do it on both ends.
mo
That's why abinstein got the warning. But, I'm not going to have this escalate.
Sci
I was using the same volatile terms that were used by another member, but you felt it was necessary to censor my use of that volatile term and not his use of the same volatile term.
Gee, how unbiased and balanced of you.
You missed my point entirely. I made an assumption. You also made an assumption, though it may be supported by questionable arguments (which you made in your entry).
At the end of the day, no matter how you turn it and squeeze it, it's STILL an assumption.
You don't have the facts to turn your assumptions into truths. So don't do it. Simple as that.
I really don't understand why people still keep posting on this clown's blog. It is not even as entertaining as Sharikou's!
His whole argument is, "Intel won't release 3.0+ GHz because they would have to package a $50 heat sink!" Duh! A $50 heatsink for a $1000 part?
With a Zalman CNPS9700 my C2D B-step runs at 3.2GHz under full load (CPUBurn + Adobe) at 49 to 50 deg C. So the rest of the nonsense this joker is spewing is utter BS.
Note: as Abinstein has proven, BS is not an offending word.
I'm done here. Good luck with the blog.
mo
The editing consisted of substituting less volatile terms. If you don't want your posts edited then use more restraint with what you say.
Your arguments, though, are not valid. You suggest that speculation with supporting information (temperature limitations) is less valid than speculation with no information (Intel's motives). In fact, your assumption about Intel's motives depends on a prior assumption that they could have built faster processors (which you've never seen and have no supporting information for).
What Dr. Yield said is true however. The chips that Intel has produced up to now have not been capable of clocking above 3.0Ghz (although the G0 stepping looks promising). My guess is though that by the time the G0 stepping is able to reach higher clocks, 45nm will be ready anyway with lower TDP.
It is possible that Intel could have made something faster in 2006. However, if making something faster in 2006 resulted in unacceptably low yields, then this would amount to the same thing.
Core2Dude, I guess I'm puzzled. If my blog is such a joke then why are you reading it? Why are Mo and Poke posting here? Why does Roborat read my articles and then write parodies of them? Why have I been getting more than 650 views a month from Intel and 11,000 views a month total?
I think people always find it easier to make fun of and dismiss things they don't like but can't really refute. I think this is especially true of people who can't write articles of their own.
I dunno? Everyone needs a good laugh from time to time.
People at Intel are human too; they want to laugh like the rest of us.
11,000 views and about 5-6 people bickering in your comments. Great accomplishment, Sci.
OK, seriously, I'm done here :).
PS: No matter how much you re-arrange your replies, your intentions are still the same.
Core2Dude
"His whole argument is, "Intel won't release 3.0+ GHz because they will have to package t 50$ heat sink!" Duh!! 50$ heatsink for a 1000$ part?"
No, that has never been my argument. From what I saw of the case cooling tests at Anandtech, even the quad core parts could clock above 3.0Ghz with premium air cooling. I've said this many times (and yet you can't seem to remember this).
I'm certain Intel could release such chips but I'm guessing that the market would be so small that they just don't bother. I could see chips like that being used at Alienware or Voodoo perhaps but I'm sure you know that the vast majority of systems will be sold with stock coolers.
Most OEMs will not want to bother with oversized heat sinks. Premium air coolers weigh as much as 2 lbs. Coolers of this size cannot easily be shipped because the board mounts aren't strong enough; you would need additional bracing to keep them from breaking during shipping. I'm sure you didn't buy a system with the Zalman already installed; you had to install it yourself. This is not something that most computer buyers will do.
I don't understand why it would be a joke to point out something so obvious. Do you honestly believe that Intel wants to bother qualifying such a small market segment and that OEMs want to deal with non-standard cooling solutions?
mo
Since you feel you have a superior perspective perhaps you could write an article for roborat's blog. Or would that take more courage than you possess?
Dr. Yield
What do you make of the fact that Intel is skipping 45nm with Itanium?
If we follow the current logic then I guess someone would suggest that Intel is skipping 45nm because Itanium is so good on 65nm that it doesn't need 45nm. I think a much more likely guess is that Itanium is so late on 65nm that Intel is skipping 45nm and moving directly to 32nm. I think this decision is also influenced by IBM's Power 6 which by all accounts is going to be much faster than 65nm Itanium. Clearly, the x86 section is working much better than the EPIC section.
In your blog you say:
So, Intel did not release a 3.2Ghz processor in 2006 because they couldn't.
And in your post you say:
I'm certain Intel could release such chips but I'm guessing that the market would be so small that they just don't bother.
So... which one is it?
I'm sure you didn't buy a system with the Zalman already installed;
Please! I don't buy a system, I build it. And anyone who wants extreme either builds it or buys a special system. In any case, it shouldn't be too difficult to get a special heatsink.
Do you honestly believe that Intel wants to bother qualifying such a small market segment and that OEMs want to deal with non-standard cooling solutions?
Take a look at Dell H2C cooling solution. The CPU is factory overclocked to 3.46GHz.
Now don't tell me Dell is not an OEM.
core2dude
"So... which one is it?"
It's both. You are talking about two different processors.
In the first instance I'm saying that I don't believe that Intel could have released reasonable volumes of > 3.0Ghz chips in 2006 using stock HSFs without unacceptably low yields.
In the second instance I'm saying that Intel could have released > 3.0Ghz chips if they had required premium cooling. Again, according to the Case Cooling tests at Anandtech the premium coolers can keep even QX6800 sufficiently cool.
Scientia
Here is a video from Intel explaining why Intel is skipping 45nm with Itanium.
Fast forward to about 24 minutes; and here is a link showing the PowerPoint presentation in the background.
core2dude
"I build it. And anyone who wants extreme, either builds it, or buys a special system. In any case, shouldn't be too difficult to get a special heatsink."
Yes, I used to build systems too. People who build their own systems are in the minority. People who build extreme systems and overclock are an extreme minority.
"Take a look at Dell H2C cooling solution. The CPU is factory overclocked to 3.46GHz."
Yes, but that chip is warranted by Dell rather than Intel. Sun did the same thing with an Opteron chip. Again, the discussion is not about premium cooling which is certainly available but about Intel's (and AMD's) unwillingness to qualify chips for premium cooling only.
Do you recall the Ford Pinto with its 2.3L, 4-cylinder engine? It produced 88 HP. With just a bit of tweaking, this engine could be taken to 200 HP, and it could do this with the stock crank, stock rods, stock pistons, and stock bearings. The only necessary strength upgrade was larger rod bearing bolts.
This could be done either by using a Cosworth crossflow head or by using the stock counterflow head and turbocharging. If Ford had produced its own crossflow head with larger valves, this engine could easily have made a reliable 150HP with very little increase in price. The same was true of the 2.2L 4-cylinder Dodge engine. However, neither manufacturer was keen on this idea; both preferred V6s for this power range.
scientia
"What do you make of the fact that Intel is skipping 45nm with Itanium?"
Well, Itanium evolves rather slowly; it doesn't have two teams working on it in parallel as x86 does, so it takes about four years to come up with a new design. That means that by the time a new version is done, two technology nodes have passed. Pure logic. I didn't watch the video enumae posted, but I believe this is the reason.
"Again, according to the Case Cooling tests at Anandtech the premium coolers can keep even QX6800 sufficiently cool."
It's quite obvious as quadcores take almost twice as much power as dualcores.
core2dude,
your posts only prove Scientia's point.
Either you don't read Scientia's posts,
or you don't read your own posts.
So... which one is it? ;)
HoHo
They have two teams, just like x86.
Watch the video, it is pretty interesting.
In the video they try hard to step around the issue. They specifically mention Power 6 and that it uses a smaller process, without quite saying that they are behind. The term they use is that the design doesn't match the current process node. Then they say that by skipping 45nm they will bring Itanium up to the current process node. They carefully do not mention being behind IBM or having to catch up.
I don't see why they would need another design team for a process shrink. It wouldn't be optimal geometry but it should be an improvement in die size, yield, and power draw.
Teams. Let's see. We know they had at least three: one for Itanium, one for P4, and one for mobile in Haifa. However, I would agree that P4 would be two teams. That is what Intel said back when PIII was in development: two teams with four-year development times offset so that an update came out every two years. This would also fit what I said before about the Haifa team doing Banias, Dothan, and Yonah while a completely different team worked on C2D. Presumably the other team working on Tejas converted what they could to Tulsa and began Nehalem II.
BTW, AMD's Q2 earnings come out at 5pm today.
Scientia
In the video they try hard to step around the issue...
Whats the issue?
You asked why they were skipping 45nm and the video explained it (around minute 31).
If you don't believe Intel and want to start drawing your own conclusions, then why ask the question at all?
----------------------------------
Looks like the new stepping from Intel puts the E6850 (3.0GHz Dual-core) at 65W and a Tc of 72°C.
scientia
"They carefully do not mention being behind IBM or having to catch up."
No, but they did say several times that they have a whole year lead compared to others.
Also Power6 is not completely 65nm, it is a hybrid between 90nm and 65nm.
Also Power6 is not completely 65nm, it is a hybrid between 90nm and 65nm.
Where did you get that info ho ho?
Does the Power6 exist on 90nm?
Ho Ho -
"It is kind of offtopic here but you interpreted those MPI results in the opposite way trying to show that K8 based clusters scale better"
The bias, again, is yours. You were comparing a C2D cluster with a 10Gbps ethernet backbone (Darwin) to a K8 cluster with 1Gbps (Emerald). Please read my response there for more detail.
For your own sake, I'd also like to remind you again to learn from your mistakes, and to be less biased in favor of Intel and against AMD.
Again, what are my biases?
Mo -
"You call HIS speculation nonsense but offer your own speculation on the same front?"
He made his speculation, and I made mine. That's why (I've said this before) I asked him whether he wanna bet. Please, read before you write. Why do I always have to deal with people who don't do this basic job, and why are most of them Intelers?
"Why is your speculation more valuable than his?"
No, to an average guy my speculation is no more valuable. However, he is deriving a conclusion that AMD will go under based on his speculation. This is b******* (i.e., nonsense). Do you see me doing anything remotely like this? Or are you just being hypocritical at your best?
"Hypocrisy at it's best."
Hypocrisy or not, if you believe his gloom and doom which is based on his speculation of K10 performance, then my comments to him also completely apply to you.
poke -
Come up and be a man, and tell us what my bias is. Or do you simply not take any statement seriously when it bursts the bubble of your deliberately false claims?
enumae
"If you don't believe Intel and want to start drawing your own conclusions, then why ask the question at all?"
I'm not sure what you are talking about. I think Intel avoids mentioning that Itanium is behind Power or that they need to catch up. They mention the difference in process nodes casually. They also say that they will match process nodes at 32nm but again without any indication that it is necessary.
They do not say why they are lagging with Itanium (they do have an entire D1D testing FAB for this) nor do they say why they are not moving to 45nm now. Again, they talk about being behind on FAB processes as though it is routine.
Q2 Earnings webcast in 1 hour at 5pm ET. You can catch it here at AMD Investor Relations.
I've already stated my median estimate of a loss of $550 Million. We'll find out shortly.
Scientia
They do not say why they are lagging with Itanium (they do have an entire D1D testing FAB for this) nor do they say why they are not moving to 45nm now. Again, they talk about being behind on FAB processes as though it is routine.
If a video by Intel that clearly answers your questions cannot help you, then I can't help you.
If you watch it from about the 30:50 mark to about 33:50 they explain this.
They make all of those points very clear.
Either you don't read Scientia's posts,
or you don't read your own posts.
So... which one is it? ;)
The former...
I am not too interested in what the clown has to say
core2dude said...
I am not too interested in what the clown has to say
Hope you enjoyed your stay! Don't let the doorknob bite you in the ass on the way out!
...In other news, the earnings report was better than most expected, I imagine. At least their operating losses are starting to reverse somewhat, it seems... Here's hoping their third quarter will continue to show smaller operating losses and their revenue will continue to climb.
AMD Reports Second Quarter Results
– Microprocessor Unit Shipments Increase 22% Year-Over-Year and 38% Sequentially –
SUNNYVALE, Calif. — July 19, 2007 — AMD (NYSE: AMD) today reported financial results for the quarter ended June 30, 2007. AMD reported second quarter 2007 revenue of $1.378 billion, an operating loss of $457 million, and a net loss of $600 million, or $1.09 per share. These results include an impact of $130 million, or $0.24 per share, from ATI acquisition-related and integration charges of $78 million, employee stock-based compensation expense of $31 million, severance charges of $16 million and debt issuance charges of $5 million. In the first quarter of 2007, AMD reported revenue of $1.233 billion and an operating loss of $504 million. In the second quarter of 2006, AMD reported revenue of $1.216 billion and operating income of $102 million.
($M except percentages)                      Q2-07     Q1-07     Q2-06(1)
Revenue                                      $1,378    $1,233    $1,216
  (Change: Q2-07 vs Q1-07: +12%; Q2-07 vs Q2-06: +13%)
Operating income (loss):
  GAAP operating income (loss)               $(457)    $(504)    $102
  Acquisition-related, integration and
    severance charges                        $94       $113      NA
  Stock-based compensation expense           $31       $28       $18
  Non-GAAP operating income (loss)(2)        $(332)    $(363)    $120
1 As a result of the acquisition of ATI, 2006 financial results only include the results of the former ATI operations from October 25 through December 31, 2006. Therefore, financial results for the second quarter 2007 do not correlate directly to those for the second quarter 2006.
2 In this press release, AMD has provided non-GAAP financial measures for operating income (loss) and gross margin to reflect its financial results without acquisition-related, integration and severance charges and employee stock-based compensation expense. Management believes this non-GAAP presentation makes it easier for investors to compare current and historical period operating results.
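As a quick sanity check, the non-GAAP lines in the table above reconcile exactly with the GAAP figures plus the add-backs; the numbers below come straight from the press release, and the loss-per-dollar figure is a simple derived ratio.

# Reconcile the non-GAAP operating loss from the GAAP figures (all in $M).
print(-457 + 94 + 31)    # Q2-07: -332, matching the table
print(-504 + 113 + 28)   # Q1-07: -363, matching the table

# Net loss per dollar of revenue in Q2-07:
print(600 / 1378)        # ~0.44, i.e. about $1.44 spent per $1 of revenue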
Looking forward to your thoughts on AMD's second quarter results in your next blog post, Scientia.
Abinstein, we all know you're a rabid AMD supporter. Just look at your blog. At least Scientia has the guts to admit he is wrong when he makes a mistake, unlike "some" people.
Well, AMD's numbers were a little worse than I expected. I was looking for $550 Million and they hit $600 Million.
Lou Ceifer,
With AMD giving away their CPUs, their losses will continue. With revenue of $1.378B and a net loss of $600M, every dollar of revenue is costing AMD roughly $1.44. AMD may keep their tiny market share, but at their own expense... X2 6000+ for under $200. AMD's best = Intel's low end.
Not to mention ATI will keep dragging AMD down.
poke
Not exactly. You aren't seeing the whole picture.
Poke said...
Lou Ceifer,
With AMD giving away their CPUs, their losses will continue.
That's funny!
AMD is mounting losses because its last-generation wares are being outperformed by Intel's latest-generation wares, and because of Intel's greater number of FABs for product output and the much larger market presence and marketing it has enjoyed for the past 30-something years (thanks to its monopolistic practices at home and abroad, with a high-paid legal team to keep the company out of anti-trust proceedings and government investigations to boot).
Not because it's "giving away CPUs".
Please, show me where I can find some of these "free" AMD CPUs sir. Waiting anxiously on your answer...
With revenue of $1.378B and a net loss of $600M, every dollar of revenue is costing AMD roughly $1.44. AMD may keep their tiny market share, but at their own expense... X2 6000+ for under $200. AMD's best = Intel's low end
Well, thank you for the brilliant and enlightening analysis, *cough*, but this does not mean it's going to stay like that forever. This situation is what I like to call a "generation gap". Once K10 is released (next month) you can expect to see a reversal of this trend, albeit slowly, until K10 fully saturates the market, channels, and OEMs, and top speed bins are shipping. Then you can expect a very noticeable change in AMD's financial fortunes, especially if AMD gets the K10 dual cores to 3GHz or beyond, or the quad cores to 2.66GHz or beyond. The K10 architecture and I/O system is solid and very scalable; however, AMD's ability to scale its frequencies is the real question, and that's anyone's guess at this point (unless you work for AMD, ha!).
I will say this as well: I have been paying attention to a claimed "AMD employee" who works in the engineering division, out on the HardOCP forums ("morfinx"), and he says they are going to come out as scheduled and at the frequencies AMD has shown on the latest roadmap out there, but we'll see. Granted, time is not on AMD's side, but this is not the first time they have been in this very similar predicament.
Personally, I think I'll wait to see how K10 pans out in the months after its release before I start predicting their doom, thanks.
Not to mention ATI will keep dragging AMD down.
ATI is not doing as well as most would hope right now, particularly due to the R600 family; their AMD chipsets, on the other hand, are great IMHO and doing well as far as I've read. ATI also has XBox revenue to fall back on, as well as its other wares outside of discrete graphics, such as embedded devices and other OEM products. It could be doing better, but it's not dragging AMD down anywhere near as badly as you think it is.
The AMD/ATI merger and its benefits in the long run are going to be FAR worth it for AMD, as Scientia has already explained here quite a few times. Don't waste your time or ours with a "can't see the forest for the trees" mentality about short-term successes or failures; this industry is in some ways like a chess game. CPU/GPU technology is going to play a far bigger role as general processing units in the future, which is THE FUTURE, and something Intel is having to play catch-up with currently via its fledgling discrete graphics division, while AMD's Fusion is already well into development.
Once K10 is released (next month) you can expect to see a reversal of this trend, albeit slowly, until K10 fully saturates the market, channels, and OEMs, and top speed bins are shipping
Very slow. What did Hector say, Q2'08 or was it Q3? By that time their market share will shrink to less than 10%. No one is going to wait for AMD to ramp up their half-baked K10s when they can pick up Clovertowns/Yorkfields at all price points.
I will say this as well: I have been paying attention to a claimed "AMD employee" who works in the engineering division, out on the HardOCP forums ("morfinx"), and he says they are going to come out as scheduled and at the frequencies AMD has shown on the latest roadmap out there, but we'll see.
Oh yea, a mysterious "AMD employee." Yep, I'm supposed to trust him and you alright. /rollseyes
Poke -
"Abinstein, we all know you're a rabid AMD supporter. Just look at your blog."
You are just dodging my question. Again, can you or can you not point out where/what is my bias, from my blog or comments? If you can, then what or where is it?
Yes, my blog contains some speculation, and I made sure it is explicitly marked as such and is unbiased. People have come to challenge me with all kinds of silly questions which only show 1) they are biased, and 2) they simply didn't read.
If you can find bias there, speak up loudly. If you can't find it, stop making FUD. Ignoring my challenge does not justify FUDing, just letting you know.
Any comment to my last post Scientia?
aguia
"Where did you get that info ho ho?
Does the Power6 exist on 90nm?"
I thought it was common knowledge and that only Scientia didn't know it, but it seems lots of knowledgeable people don't know about these things. Just read this:
Dr Frank Soltis, an IBM chief scientist, said IBM had solved power leakage problems associated with high frequency by using a combination of 90nm and 65nm parts in the POWER6 design.
lou ceifer
"this does not mean it's going to stay like that forever."
What is your estimate of how long AMD can survive like that? Long enough to have enough high-performance K10's to replace its older K8's?
"Once K10 is released(next month) you can expect to see a reversal of this trend, albiet slow, until the K10 is fully saturated in the market, channels and OEMs and are shipping top speed bins"
Will that happen before H2 08? Remember, originally K10 was supposed to start ramping at Q1 08 and AMD seems to have more problems with it than thought earlier.
"The K10 architecture and I/O system is solid and very scalable"
Indeed, so it is. Though even when Intel had much worse scalability, it still stole lots of marketshare from AMD. That means scalability is not that important in order to earn profits.
"It also has XBox revenue to fall back on as well as it's other wares outside of it's discrete graphics such as embedded devices and other OEM products"
Yes, but even with XBox and Wii combined with all other embedded products, it still lost marketshare to NVidia in the embedded market this quarter. Not too good, I'd say. Not to mention that their revenue dropped to less than $200M, with over $50M in losses.
"It could be doing better, but it's not dragging AMD down as bad by any means as you think it is"
I think you can say that; per dollar it had smaller losses than the CPU part of AMD.
"The AMD/ATI merger and it's benifits in the long run are going to be FAR worth it for AMD,"
Yes but there is lots of time for the future to arrive and AMD has said it'll probably delay some things in order to save money. Last I heard Larrabee will be on market at the same time as first GPGPU-replacing products from AMD, if not sooner. If Intel is first on market it doesn't have to work too hard to become the new standard as most people will go with their products.
Scientia, do you think your prediction of AMD losing $100M less each quarter is enough for them to avoid bankruptcy? I think that with this quarter's results it is going to be quite a challenge for them, as by this formula it will take at least another $1.5B in losses before they break even at the end of next year.
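Ho Ho's $1.5B figure follows directly from that assumption: if the roughly $600M quarterly loss shrinks by $100M each quarter, the remaining losses sum as below. This is a sketch of the arithmetic, not a forecast.

# Cumulative further losses if each quarter improves by $100M (all in $M).
loss = 600      # approximate Q2 07 net loss
total = 0
quarters = 0
while loss > 100:        # stop once the loss reaches zero
    loss -= 100
    total += loss
    quarters += 1
print(quarters, total)   # 5 more quarters, $1,500M in additional losses

Five more losing quarters after Q2 07 runs to roughly the end of 2008, which is why the schedule looks so tight.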
Ho Ho -
"I thought it was common knowledge and only Scientia didn't know it but it seems as lots of knowledgeable people don't know about those things. Just read this"
You are quoting from Feb 2006 where Dr Frank Soltis said "IBM had solved these problems by using a combination of 90nm and 65nm parts in the Power design." This has nothing to do with Power6 being "mix of 90nm and 65nm," unless you simply lack good logic.
Power6 is a 65nm design. You don't do physical design with variable lambda values. I doubt there's a CAD tool that lets you do this.
Poke -
"Very slow, what did Hector say? Q2'08 or was it Q3? By that time their market will shrink down to less than 10%."
Talk about a lack of ability to read: where did you see Hector say Q2'08? Did you just make it up, like all FUDing Intelers do with bad news for AMD?
Barcelona is expected to contribute to ASP in 2H07, and Phenom in 1Q08. Search the transcript page for "Barcelona core" and learn to read the paragraph!
I once thought that Intel wouldn't like a market without AMD and wouldn't introduce Penryn, leaving a small time window for AMD in order to survive. It is clear to me now that Intel wants to completely annihilate AMD.
So, AMD has $900M before BK. I believed that the BK would come in Q1 or Q2 08; now I think that Q408 is possible. It is not a matter of "if", it is a matter of "when".
abinstein
"This has nothing to do with Power6 being "mix of 90nm and 65nm," unless you simply lack good logic."
So what does it mean? If the sentence was not about Power6, then what was it about?
"Barcelona is expected to contribute to ASP in 2H07, and Phenom in 1Q08"
Contribute how much? When will crossover take place?
"now I think that Q408 is possible."
typo : Q407
poke
"By that time their market will shrink down to less than 10%. No one is going to be waiting for AMD to ramp up their half baked K10s when they can pick up Clovertowns/Yorkfields at all price points."
This is wrong but I guess we can discuss it on the new thread.
Still no comment?
"Poke said...
Very slow, what did Hector say? Q2'08 or was it Q3? By that time their market will shrink down to less than 10%. No one is going to be waiting for AMD to ramp up their half baked K10s when they can pick up Clovertowns/Yorkfields at all price points."
Haha, right. Hey, let me borrow that crystal ball of yours sometime; I've got other predictions to make too.
"Oh yea, a mysterious "AMD employee." Yep, I'm supposed to trust him and you alright. /rollseyes"
Hey chump, believe whatever you want. I'm just throwing out my personal view, what I've read, and future possibilities; it's no different from your point of view. We all want to see "our" favorite company succeed. It's obvious you've already decided what you think about it; so be it, you're not going to break my heart, buddy. I don't expect people like you to accept any kind of second-hand news in AMD's favor anyway, much less discredit anything that goes in Intel's favor, but hey, that's alright. Believe whatever you want. We'll see how things pan out in time, won't we? =^)
"Ho Ho said...
What is your estimate of how long AMD can survive like that? Long enough to have enough high-performance K10's to replace its older K8's?"
I don't have to estimate anything... If anything, history serves as a great guide to what to expect in the future, since we as a race tend to repeat it in many things in life. So there's your answer...
Will that happen before H2 08? Remember, originally K10 was supposed to start ramping at Q1 08 and AMD seems to have more problems with it than thought earlier.
Again, I don't have to tell you this; you know as well as I do what is at stake. If AMD knows its situation well (which I'm very sure they do), they will get K10 out the door at the speeds outlined in their roadmaps and get the quirks out of their FABs' manufacturing well before that time frame, or they will have serious problems, possibly either getting bought out or facing bankruptcy. Nuff said.
"Indeed, so it is. Though even when Intel has much worse scalability it still stole lots of marketshare from AMD. That means scalability is not that important in order to earn profits."
I think the old adage is that the early bird gets the worm? However, the second mouse gets the cheese too. Intel is doing all it can right now to take back as much server market share as possible from AMD's current K8 Opterons before the quad-core K10 Opterons (Barcelona) hit the market; no surprise here. Having a much bigger market presence and marketing team for the past 30-something years helps a lot too, I would think.
Yes, but even with XBox and Wii combined with all other embedded products, it still lost marketshare to NVidia in the embedded market this quarter. Not too good, I'd say. Not to mention that their revenue dropped to less than $200M, with over $50M in losses.
There is far more going on with AMD/ATI's plans that Nvidia isn't even touching, so I'd have to say wait and see before we declare ATI worthless at this point IMHO. The things coming down the pipe for AMD/ATI in the GPU market haven't even gotten into full swing yet. Until they do, it's safe to say Nvidia is going to rule the roost in the GPU market for the time being. I have faith that ATI is not going to settle for its current position for long, though.
"Yes but there is lots of time for the future to arrive and AMD has said it'll probably delay some things in order to save money. Last I heard Larrabee will be on market at the same time as first GPGPU-replacing products from AMD, if not sooner. If Intel is first on market it doesn't have to work too hard to become the new standard as most people will go with their products.
Well, we'll see if they have to delay Fusion; so far I have high doubts they will, as I get the general impression it's a very high-priority project for AMD/ATI. As for becoming the standard, whether AMD's or Intel's GPGPU does will depend on how well Microsoft supports it (unfortunately). From what I understand, the next-gen GPGPUs will differ in their microcode/programming from typical x86 processors; hopefully both will keep some resemblance to the majority of x86 code (to retain compatibility), or perhaps it will be another evolution of x86 like x86-64 was. However, I imagine that if Intel goes in an EPIC-based direction with its GPGPU (to try to resurrect Itanium, possibly, which I feel is not out of the question), it will meet resistance from Microsoft. Really, this is all speculation at this point; it's far too early in the game IMHO, as we won't see the fruits of this kind of technology until 2009 or later.
Lou Ceifer, thanks for the refreshingly level-headed view.
As for Larrabee and the whole marketshare issue in general, we're still forgetting about the effect an emerging brand has on a very large brand name. While AMD is not a new brand, it is behaving extremely similarly to one due to how suppressed its marketshare has been (which says nothing about the legality of Intel's business practices).
Because PC makers want leverage against Intel (Intel is extremely dominant, and everybody wants leverage against somebody regardless of their product line), they will try to get AMD into as many of their product lines as possible to push Intel's prices down. If AMD even has a competent product (though I doubt things will stay that way if it happens), their market share will likely continue to increase until it's about 35%, going by the way other markets behave.
At that level, AMD should be able to command a greater degree of price control and will likely be able to adjust ASPs to give themselves enough capital to expand production, increase profitability, and very greatly increase stability.
As for Larrabee, it's highly unlikely that a brand-new product from a company that has never competed in that market, pitted against two hugely competitive and very competent companies, is going to push either of them out of the market immediately. At best (though obviously not best for the consumer), Nvidia would eventually die off if a pricing war occurred and AMD absorbed all of Nvidia's space in fabs.
But that's still unlikely.