Monday, April 16, 2007

Core 2 Duo -- The Embarrassing Secrets

Although Core 2 Duo has been impressive since its introduction last year, a veil of secrecy has prevented a true understanding of the chip's capabilities. This has been reminiscent of The Wizard of Oz, with analysts and enthusiasts insisting we ignore what's behind the curtain. However, we can now see that some of C2D's prowess is just as imaginary as the giant flaming wizard.

The two things that Intel would rather you not know about Core 2 Duo are that it has been tweaked for benchmarks rather than for real code, and that at 2.93 GHz it is exceeding its thermal limits on the 65nm process. I'm sure both of these things will come as a surprise to many, but the evidence is at Tom's Hardware Guide, Xbitlabs, and Anandtech. Although the information is very clear, no one has previously called attention to it. Core 2 Duo roughly doubles the SSE performance of K8, Core Duo, and P4D. This is no minor accomplishment, and Intel deserves every bit of credit for it. For SSE-intensive applications, C2D is a grand slam home run. However, the great majority of consumer applications depend more on integer performance than on floating point performance, and this is where the smoke and mirrors have been in full force. There is no doubt that Core 2 Duo is faster than K8 at the same clock; the problem has been finding out how much faster. Estimates have ranged from 5% to 40%. Unfortunately, most of the hardware review sites have shown no desire to narrow this range.

Determining performance requires benchmarks, but today's benchmarks have been created haphazardly, without standards or quality control. The fact that a benchmark claims to be a benchmark does not mean that it measures anything useful. When phrenology was believed to be a genuine science, one enterprising individual patented a device that automatically measured bumps on the head and produced a score that purportedly showed the person's mental characteristics. Unfortunately, many of today's benchmarks have the same lack of utility. When benchmark code is too small, or is run in an environment far simpler than a real application environment, we get an artificial sensitivity to cache size. This is particularly true of a shared cache like C2D's. Under real conditions, use of the cache by both cores tends to fragment and divide up the L2, which limits how much gain each core gets. Yet typical testing by review sites carefully runs benchmark code on only one core, without the common threads that would be running in the background. This makes these tests more of a theoretical upper limit than something actually attainable. The same testing is also misleading for comparisons because the split caches on K8 are immune to cross-core interference, which means K8 should perform better under real conditions than typical testing would indicate. The routine testing that review sites do is a bit like testing gas mileage while driving downhill, or testing the air conditioning on a 70 F day. Obviously, real driving is not always downhill, and the air conditioning is more likely to run on an 85 F day.
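For anyone who wants to try this at home, here is a rough Python sketch of the kind of loaded test I am describing: time a workload once with the other core idle and once with a background process hammering a large buffer, so a shared L2 actually sees cross-core pressure. The buffer size and the workload are placeholders of my own choosing, not a calibrated benchmark; the idle-versus-busy comparison is the point.

import multiprocessing as mp
import time

def background_load():
    # Walk a buffer repeatedly so the neighbouring core competes for the
    # shared L2 instead of sitting idle. 4MB is an assumed, adjustable size.
    buf = bytearray(4 * 1024 * 1024)
    while True:
        for i in range(0, len(buf), 64):
            buf[i] = (buf[i] + 1) & 0xFF

def workload():
    # Placeholder for whatever benchmark kernel is being timed.
    return sum(i * i for i in range(5_000_000))

if __name__ == "__main__":
    for label, loaded in (("idle neighbour", False), ("busy neighbour", True)):
        proc = None
        if loaded:
            proc = mp.Process(target=background_load, daemon=True)
            proc.start()
            time.sleep(0.5)  # give the background load time to spin up
        start = time.perf_counter()
        workload()
        print(f"{label}: {time.perf_counter() - start:.2f}s")
        if proc is not None:
            proc.terminate()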

Although C2D has always done well in the past with this type of artificial testing, more recent tests of the Celeron 400 versions of C2D, with 512K of L2 cache, give a much more realistic view of the processor's capabilities. Dailytech hides this fact as well as it can by doing very limited testing of the Celeron 440 (Conroe-L), but the large drops in performance can still be seen. Likewise, Xbitlabs doesn't make this process any easier: it puts the comparison with a stock Celeron 440 on page 3 of its Conroe-L review and the comparison with the Conroe E4300 on page 4. The charts are pictures, so they can't be copied, and the relevant information is on two separate pages, so it is necessary to transcribe both charts to find out what is going on. In terms of scaling, the Celeron 440 is good, with 94% scaling after a 50% clock increase. However, the comparison between the 2.0 GHz Celeron 440 and the 1.8 GHz E4300 is not so good. With a 10% greater clock speed, the lower cache C2D is actually 36% slower. This is a clock-for-clock difference of 42%. The tricky part is trying to figure out how much of this is due to cache and how much is due to dual core; unfortunately, none of the review sites makes any attempt to isolate the two. We can plainly see that for a small number of benchmarks, like Company of Heroes and Zip compression, the cache makes a huge difference and artificially boosts the speed by more than 25%. For Fritz, POV-Ray, and Cinebench the boost is at least 10%. However, for most benchmarks the boost is probably about 5%. Still, considering that C2D is typically only 20% faster than K8, we would have to strongly question the speed suggested by these benchmark scores. It is unfortunate that review site testing commonly avoids real world conditions. Under real conditions, C2D probably has closer to 10% greater IPC with integer operations. However, the SSE performance should still be much higher.
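For clarity, here is the arithmetic behind those percentages as a few lines of Python. The 2.0 to 3.0 GHz overclock and the 36% average deficit are transcribed from the Xbitlabs charts; the 47% performance gain is simply what 94% scaling of a 50% clock increase implies.

# Celeron 440 scaling: 2.0 GHz stock versus 3.0 GHz overclocked.
clock_gain = 3.0 / 2.0 - 1            # +50% clock
perf_gain = 0.47                      # implied by the reported 94% scaling
print(f"scaling: {perf_gain / clock_gain:.0%}")               # 94%

# Clock-for-clock: 2.0 GHz Celeron 440 versus 1.8 GHz E4300.
rel_perf = 1 - 0.36                   # Celeron 440 averages 36% slower
rel_clock = 2.0 / 1.8                 # roughly 10% clock advantage
print(f"per-clock deficit: {1 - rel_perf / rel_clock:.0%}")   # about 42%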

The rumors last year, before C2D was released, were that Intel would release a 3.2 GHz model before the end of the year. This didn't happen, and it now appears that it won't happen this year either. While some try to explain away the lack of higher clocks, they nevertheless insist that Intel could release higher clocks if it wanted to. The evidence for this is the supposed overclockability routinely reported by nearly every review site. It is now clear that this perception is wrong and that even at the stock 2.93 GHz, the X6800 is exceeding the factory's temperature limits. The factory limits and proper thermal testing procedures for C2D are spelled out quite nicely in the Core 2 Duo Temperature Guide. We will use this excellent reference to analyze Anandtech's testing of the Thermalright 120. The first important issue is what to run to thermally load the CPU. Anandtech simply looped the "Far Cry River demo for 30 minutes".

However, the Guide says that Intel "provides a test program, Thermal Analysis Tool (TAT), to simulate 100% Loads. Some users may not be aware that Prime95, Orthos, Everest and assorted others, may simulate loads which are intermittent, or less than TAT. These are ideal for stress testing CPU, memory and system stability over time, but aren't designed for testing the limits of CPU cooling efficiency." Since we know that the Anandtech testing did not reach maximum load, we have to allow a greater margin when reviewing the temperature results. The Guide says that "Orthos Priority 9 Small FFT’s simulates 88% of TAT ~ 5c lower." Since the Far Cry demo will not load the processor as much as Orthos, we'll allow an extra 2c, for 7c altogether. Next we need to know what the maximum temperature can be.

According to the Guide, "Thermal Case Temperatures of 60c is hot, 55c is warm, and 50c is safe. Tcase Load should not exceed ~ 55c with TAT @ 100% Load." So 55c is the max, and since we are allowing 7c for the less than 100% thermal loading, the maximum allowable temperature would be 48c. The second chart, Loaded CPU Temperature, lists the resulting temperatures. We note that the temperature of the X6800 at 2.93 GHz with the stock HSF (heatsink and fan) is a shocking 56c, or 8c over maximum. We can see that even a Thermalright MST-6775 is inadequate. From these temperatures we can say that the X6800 is not truly an X/EE/FX class chip. It is really a Special Edition chip, since it requires something better than stock cooling just to run at its rated clock speed. This finally explains why Intel has not released anything faster: if the thermal limits can be exceeded with the stock HSF at stock speeds, then anything faster would be even riskier. Clearly, Intel is not willing to take that risk and repeat the 1.13 GHz PIII fiasco. This explains why Intel is waiting until it has a suitable 45nm Penryn to increase clocks again. Presumably, with reduced power draw, Penryn could stay inside the factory thermal limits.
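To make the temperature arithmetic explicit, here it is as a small Python sketch. The 55c limit and the 5c Orthos margin come from the Guide; the extra 2c for the Far Cry demo is my own estimate, as noted above.

TCASE_MAX_TAT = 55    # c, Tcase limit at 100% load with TAT, per the Guide
ORTHOS_MARGIN = 5     # c, Orthos loads the chip about 5c less than TAT
FARCRY_MARGIN = 2     # c, my estimate: Far Cry loads the chip less than Orthos

limit_under_farcry = TCASE_MAX_TAT - ORTHOS_MARGIN - FARCRY_MARGIN   # 48c
measured = 56         # c, Anandtech's X6800 at 2.93 GHz with the stock HSF

print(f"limit under Far Cry load: {limit_under_farcry}c")
print(f"measured: {measured}c, which is {measured - limit_under_farcry}c over")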

We've now seen that Intel's best processor is somewhat less impressive than it has been portrayed. However, we still have the question of Intel's stance in the market. One possible way to judge is to look at Intel's past boom times. The first boom time starts with the introduction of the 440 chipset on the new ATX motherboard in 1995. Intel made great strides from this point, taking both brand value and money away from Compaq and IBM. This boom continued until Intel's PIII was surpassed by K7 in 1999. The second boom began in early 2002 when Intel released the Northwood P4, and ended in early 2004 as K8 came up strong while Intel was only able to release Prescott as a Celeron processor due to power draw. The third boom obviously began in the fourth quarter of 2006 after Intel's release of Core 2 Duo. The first two suggest that these boom times are getting shorter. If this is true and the third boom is shorter still, we may have something like:

1st boom time - 4 years.
2nd boom time - 2 years.
3rd boom time - 1 year ?

If this trend holds, Intel's third boom would end in the fourth quarter of this year. That seems possible if AMD's K10 is competitive and its DTX form factor and 600 series chipsets are popular.

122 comments:

sharikouisallwaysright said...

I was surprised how badly the new Celeron performed in the tests.
But buyers of a Celeron do not need much performance, so this is not a no-go...

Besides this, where is the need for more performance?
The few enthusiast gamers and some supercomputing cannot keep the industry alive.

I see only power saving, higher integration, and more functionality as reasons for most people to buy a new computer.

Better graphics are only for videophiles/graphics enthusiasts.

Better audio only attracts audiophiles.

More AI/physics is only for science and some gaming enthusiasts.

The biggest part of the market does not need any performance upgrade for doing daily work!

Unknown said...

I think your arbitrary choice of 7C for error correction is unjustified, seeing as you don't have a strong argument to back it. Regardless, it is interesting that Core 2s run so hot at stock, as it seems all the benchmarking sites have been praising how "cool" the chip runs.

One thing I think you'd like to see is Intel's performance results for Penryn:
http://www.hardocp.com/image.html?image=MTE3NjkwNDc2MnRoZ08zVllVNFJfMV8xX2wuanBn

I call BS on the HL2 results; as anyone can see, the QX6800 system has been gimped by Intel somehow. An increase in divide performance and a 10% increase in clock speed do not yield a 40% increase in single threaded video game performance. You should also never have a dual core machine substantially outperforming a quad core machine on an encoding task when their clocks differ so little.

The discrepancies in this data are so pathetic, I don't even understand how any news site can sleep while releasing it without commenting on how much they doubt it. Maybe this is Intel's way of showing those in the know that they are still in control, and that benchmarkers will publish whatever crap Intel pushes out the door as if it were the word of god.

Amdzoner said...

Scientia, you do know Intel is releasing the E6850, a 3 GHz, 1333 FSB part consuming 80 watts, in Q3, right?

This is still on the 65nm you speak of.

Do not degrade yourself to Sharikou's level. I always found you a more balanced person.

Intel gave us strong hints that Penryn debuts in Q3/Q4 this year. And when that happens, Intel usually debuts with an Extreme processor.

Do you really think that Intel will release a Penryn with worse specs than the one showcased at IDF (3.33 GHz, 1333 FSB)?

I think not.

Randy Allen said...

Scientia, you do know Intel is releasing the E6850, a 3 GHz, 1333 FSB part consuming 80 watts, in Q3, right?

Incorrect. It's a 65W part. Intel has plenty of headroom left in its 65nm process technology.

qurious69ss said...

WOW!!! Did sharikou steal your blog??? There is plenty of data out there, including my personal experience with Core 2, that pretty much discredits your last post.

DaSickNinja said...

The Core 2 runs too hot? Using the Celeron chips as a metric to determine the performance of the Core 2s? What the heck?

orly said...

'The two things that Intel would rather you not know about Core 2 Duo are that it has been tweaked for benchmarks rather than for real code'

Real benchmarks are real code. If you have any suggestions you could, you know, use the comment sections at those sites.

'and that at 2.93 GHz it is exceeding its thermal limits on the 65nm process. I'm sure both of these things will come as a surprise to many, but the evidence is at Tom's Hardware Guide, Xbitlabs, and Anandtech.'

http://www.xbitlabs.com/images/cpu/core2extreme-qx6700/burn.png

O RLY?

Oh snap, is that the FX-62 at 130? Isn't the max power 125? Dang.

'The fact that a benchmark claims to be a benchmark does not mean that it measures anything useful.'

So seeing how fast a CPU can, say, encode something is not a valid benchmark and doesn't measure anything useful?

'The rumors last year, before C2D was released, were that Intel would release a 3.2 GHz model before the end of the year. This didn't happen, and it now appears that it won't happen this year either.'

Have you considered the possibility that there's no need to release it?

'While some try to explain away the lack of higher clocks, they nevertheless insist that Intel could release higher clocks if it wanted to.'

Again, lack of competition.

'It is really a Special Edition chip, since it requires something better than stock cooling just to run at its rated clock speed.'

Just to run at its rated clock speed, you say? Is it underclocking itself because it's running so hot? No. You should check out how hot those FXs run.

'This finally explains why Intel has not released anything faster.'

Did you miss all the X2 6000 reviews?

Unknown said...

Wow, some of you need to look at the logic behind your reasoning. There are too many problems to point out anything specific, so here goes.

The fact that some of a can be b does not mean that all of a are b.

If the statement "not all b are a" is made, you cannot conclude that it means none of b are a.

Maybe some of you will be intelligent enough to figure out which arguments these apply to, but I highly doubt it.

abinstein said...

Scientia, regardless of all those Intel fanboi responses, I don't think C2D is tweaked just for benchmarks, either.

It is true that we see faster media encoding by C2D than K8X2. It is true that compressions run faster on C2D than on K8X2. It is true that games and 3D graphics apps run faster on C2D than K8X2.

At most, we can only say that C2D is tweaked for certain types of apps, such as media processing/compression and AI (path finding). OTOH, C2D runs slower than K8X2 for cryptography and many mathematical/scientific codes, and about the same for business applications. The point is, all of the above are real codes, not benchmarks.

So while you may say that the benchmarks that are used to measure C2D vs. K8X2 are biased, they are still valid, real-world applications.

IMO, Intel simply has a greater advantage in terms of skewing benchmark results. Software developers will optimize their code for the bigger market, which is 4:1 in Intel's favor. With more specialized instructions (SSE) and programs (media, games, etc.), program type and code optimization are more important to performance. Plus, with much larger marketing power, Intel enjoys tremendous favor just by running the (valid, real-world code) benchmarks.

This has been a fact since K5/K6. There's nothing IMO "embarrassingly secret" about Core 2 Duo.

Unknown said...

Maybe what Scientia was referring to was server apps.

C2D is not that great a contender to K8 in server environments due to FSB limitations. Server apps are the real thing, and they show how well a specific architecture can scale (performance wise).

I guess that's the reason you won't see C2Ds in HPCs and in 4/8/16 way servers. ;)

abinstein said...

Scientia, another point I want to make, contrary to yours, is about Intel's vs. AMD's process technology, which (partially) translates to their highest clock rates.

While IMO you're correct that Intel's 65nm seems to top out at around 3.2GHz, judging from the fact that Penryn will have a 3.33GHz version at initial release, Intel does have (much) better process technology and can reach much better clock rates than AMD.

One reason, I think, is AMD's use of SOI, whose biggest advantage over bulk Si is the elimination of area junction capacitance. That capacitance, however, represents a smaller portion of total capacitance as transistor size shrinks. Thus at smaller process nodes, SOI's performance lead over bulk Si wears off, while at the same time its floating body effect and variable Vt make higher clock rates more difficult even with a higher Vcc.

Thus I believe with K10 AMD is going after the low clock rate, high IPC route, and we're not going to see the processor at high clocks. A 2.9GHz Agena may still have about the same level of performance as a 3.33GHz Penryn, but until we actually benchmark them, using applications that are naturally in Intel's favor, we really don't know.

In conclusion, I think you're a bit optimistic about AMD regarding process technology, while critical of Intel on somewhat the wrong issue (i.e., IMO, it doesn't tweak for invalid benchmarks; it simply enjoys favors from the choice and optimization of benchmarks).

(Although I also think your POVs on AMD+ATi and DTX are right on. :-))

Unknown said...

abinstein wrote:
...Intel does have (much) better process technology...

Talking about now (current offerings), then yes, BUT K10 will change that.

abinstein wrote:and can reach much better clock rates than AMD.

I won't deny or confirm that, but since you're talking about supposed clock speed limitations on Agena, please take a look at this:

Agena FX overclocks to 3 GHZ

Jeff Graw said...

Which is interesting. If that report is true, AMD should be able to release 3GHz, or at least 2.8GHz, K10 quad cores. But they aren't. Is K10 really so good that AMD is choosing to release it at a much lower speed than it is capable of attaining?

abinstein said...

"Server apps are the real thing and it shows how well an specific architecture can scale (performance wise)."

Yes, I have no doubt that a K8-based server system will cost less, run more efficiently, and have better overall performance than a C2D-based one.

For one, most servers are limited by cryptography. If you need a high performance box for a busy Apache web server, you need 10x the processing power to use SSL connections. Such a server will benefit from K8 *much* more than from C2D.

If your HPC server runs lots of custom programs (as is usually the case) whose assembly code is not hand-optimized (again, usually the case) and which do lots of number crunching (e.g. numerical simulation, multiply-add, AND/OR/XOR and modular arithmetic), you'll also benefit more from K8 than from C2D.

The problem is that many people who make server purchasing decisions do not know or care about such detail. They look at the online marketing, find C2D being 20% faster than K8 (for totally different apps), and buy a bunch of C2D boxes. After the purchase, the truth is they don't care whether it is really 20% faster, or in fact 10% slower.

This (sad) situation happened to my friend, too. His office bought a few C2D boxes for complex custom programs, which I found run some 5% faster on my own cheaper K8 X2 4400+. Do you think they care about that?

What I want to say is that there is no doubt the C2D benchmark results are valid. However, with biased program choice and optimization, Intel can affect the opinions of, say, 80% of potential buyers, who do not (care to) know the details of the actual performance of their programs. In the end, it's not Intel's fault that people are ignorant.

abinstein said...

"Which is interesting. If that report is true AMD should be able to release 3GHz, or at least 2.8GHz K10 quadcores. But they aren't."

No, OC capability is different from actual releasable clock rates. We see this with C2D, too, which in many cases can be OC'd to 3.4GHz or above. Yet Intel is not releasing C2D at those high clocks, at least not under warranty. :-)

Jeff Graw said...

"No, OC capability is different from actual releasable clock rates. We see this from C2D, too, which in many cases can be OC'd to 3.4GHz or above. Yet Intel is not releasing C2D with those high clocks, at least not under warranty. :-)"

The difference is that AMD uses SOI, so they can be more liberal in testing their CPUs to the fail point.

That's why, since the introduction of the A64, all of AMD's flagships have barely OCed at all under stock conditions. Sometimes a 200MHz OC was attainable, but often not. Now all of a sudden a 500MHz OC at stock conditions? Either that's a really lucky chip or something is up.

Scientia from AMDZone said...

sharikouisallwaysright

"But byuers of a Celeron do not need any performance, so this is not a no-go..."

I never suggested that the Celeron 440 should have more power; the comparison was only between the performance of a small cache versus a large cache.

Greg

" I think your arbitrary choice of 7C for error correction is unjustified"

It's the best guess I can make. Presumably Orthos would cause more thermal stress, and according to the Guide it is 5c lower. Obviously I would prefer that the same testing were done with TAT.

Sal

"E6850, 3Ghz 1333FSB "

I don't understand; are you saying that it is impressive if Intel can raise the clock by 0.07 GHz?

Sal

"Do you really think that Intel will release a Penryn working with worse speccs than the one showcased at IDF (3,33Ghz, 1333FSB)?"

I assume your question is whether or not I think that Intel will have a 3.2 GHz Penryn in Q4 07. I would say it is possible, but so far it hasn't shown up on any roadmap. If not Q4, then surely we will see one in 2008.

axel

"using the *stock HSF*, with temperatures <60 C under load

I myself run my E6600 at 3.2 GHz at 1.35V (a bit over stock), and running Orthos I don't exceed 65 C."


That's fine, but according to the Guide you shouldn't exceed 50c while running Orthos. So you are proving that your system runs hot. This agrees with the Anandtech results.

axel

"your prediction that AMD would steadily gain unit share at 0.6% per quarter through 2007. Total reversal this quarter as the market research figures will soon show."

The latest indications are that AMD will be lucky to hold onto its volume share before K10 is released. I could see AMD losing volume share in Q1 and Q2. There is no doubt that AMD's revenue share is dropping sharply; the question is how much the volume share drops.

qurious

"Plenty of data out there including my personal experience with Core2 that pretty much discredits your last post."

The data isn't mine; it's from Anandtech. Even Axel posted data showing a hot system. If you know of some data that shows cooler operation then feel free to reference it.

sickninja

According to the data, the X6800 runs too hot with the stock HSF. It seems fine with premium cooling, but X class chips are supposed to run with stock cooling, not premium. If Intel reclassified the X6800 to indicate that it requires premium cooling, I would have no complaint.

orly

I'm sorry; I couldn't find anything in your comments to reply to. Do you have any information that shows that FX-62 is exceeding its thermal limits?

abinstein

Cache sensitivity does not indicate real world performance. And snips of real code run under artificial conditions do not show it either. There have been no standards at all for benchmarks; benchmark standards would be nice.

abinstein

"While IMO you're correct that Intel's 65nm seems to top out at around 3.2GHz, judging from the fact that Penryn will have 3.33GHz version at initial released"

I didn't say that Intel's 65nm tops out; I said that according to the Anandtech data their X6800 runs hot at 2.93 GHz with the stock HSF. It seems to be fine at higher clocks with premium cooling. I also said in my article that I expect Penryn to clock higher, even at stock.

jeff

" Is K10 really that good that AMD is choosing to release it at a much lower speed than it is capable of attaining?"

I think these arguments are completely bogus for both AMD and Intel. Either company will release what it can reasonably yield when it can.

Scientia from AMDZone said...

BTW, if I were only trying to pump up AMD, as some here think, I would be talking about this: Agena FX overclocks to 3 GHZ

However, I have no idea what the significance of this is, if any. Overclockers would probably like it, but overclockers are insignificant in the market. The only clock speeds that matter are the ones actually released.

I assume that if AMD intends to revise the clock speeds it will release, they'll mention it at the June meeting.

DaSickNinja said...

What data is this? My X6800 runs under 55C @ load even with Intel's crappy push pin HSF.

Scientia from AMDZone said...

sickninja

"What data is this? My X6800 runs under 55C @ load even with Intel's crappy push pin HSF."

Do you run TAT for your thermal testing? If you are getting 55c with lighter loading, that would be very close to the 56c that Anandtech got.

enumae said...

Scientia

I thought that was an interesting read, but one statement really stands out...

We can then see that C2D works well with carefully controlled code in a pristine testing environment, but falls apart under real world conditions.

I think what you are really talking about is heavy multitasking, right?

Anyway, I spent about two hours looking for something to back up your claims, or disprove them.

I'm glad to say you are correct.

There are only a handful of reviews that have really put the processors through multitasking situations, and even those are pretty light.

The performance gap shrinks pretty quickly.

---------------------

When you talk about Conroe and how its cache helps performance, you should probably point out that this is to cover up the latency of the FSB and the lack of an IMC, right?

Just a thought, how would AMD run compared to Conroe without an IMC?

Wouldn't AMD also need cache to hide the latencies?

---------------------

As for the temps: I ran Orthos and SpeedFan 4.3? for about 10 minutes, just to see about where I fell.

My heat score is 11; it was 60°C (core 0, core 1) under load, so I guess it's normal.

Abit AW9D-Max
E6600 3.3GHz
Vcore 1.40

Again, nice post.

Scientia from AMDZone said...

enumae

"As for the temps. I ran Orthos and Speedfan 4.3? for about 10 mins, just to see about where I fell.

My heat score is 11, it was 60°C (core 0, core 1) under load, so I guess its normal."


Normal? According to the Guide your temperature is 10c over the limit. BTW, how did you make the degree symbol?

enumae said...

lol... Hold down the Alt key, then type 248 = °. I do land surveying, so I have to do that a lot.

PS: Did you mean multitasking, or real world, in reference to my previous post?

Scientia from AMDZone said...

enumae

Well, I meant real conditions as in the machine environment that would be typical, rather than an artificial one.

Testers are going to clean up the machine as much as possible before running any tests. They'll turn off every background process they can and have nothing extra running. However, this isn't really typical; home systems are not run this way.

Secondly, the limited demos and snips of test code from real applications are also not the same as a real world machine state, because in the course of running a real application you will hit more of the code than you would in a limited test. What this means is that under real conditions the cache would tend to get more fragmented and at some point reach a typical state. Since the testing isn't actually profiling the test code itself, we don't know how closely the test matches the normal machine state. An example of this is the Lamborghini Countach, which was claimed to hit 200mph, as it might have under special conditions. But when tested in an actual driving configuration in the US, the vehicle was unable to get past 165mph.

Finally, the testing for multi-core has been particularly bad. Very little of the testing has been done with all cores stressed.

I think properly loaded testing is more common for servers but what I would really like to see are some type of standards for benchmarks and guidelines for testing so that we can be sure that something real is being tested.

For example, there are people who insist that SuperPi tests something real, but even the limited testing I have done with this toy benchmark shows that it is useless for any comparison between processors. The only thing it can test is scaling on the same processor when changing clock speed.

sharikouisallwaysright said...

First Penryn benches:
http://www.forum-3dcenter.org/vbulletin/showpost.php?p=5420503&postcount=3

Azmount Aryl said...

Question:
When they say that this processor is at 3.33, is it really at 3.33, or is it at 3.66?
Food for thought.

Aguia said...

Finally, the testing for multi-core has been particularly bad. Very little of the testing has been done with all cores stressed.

Finally. Someone saying what I have said hundreds of times.

I have a Core 2 Duo (single core); in HL2 I get 120fps.
I have a Core 2 Duo (dual core); in HL2 I get 120fps.
I have a Core 2 Quad (quad core); in HL2 I get 120fps.

Can anyone see the trend here? Now imagine that I had a sixteen core processor. HL2 would run at near light speed. ;)

Aguia said...

AMD and Intel clock speeds question.

If AMD's K8 has a 12-stage core and Intel's has a 14-stage core, doesn't that give Intel at least a 16% clock speed advantage?

What I'm saying is:
AMD K8 clocked at 3.0 GHz = 3.5 GHz for Intel?

(Willamette/Northwood) 20 stages = 66% clock advantage.
(Prescott and up) 31 stages = 158% clock advantage.

Or am I messing everything up?

Also, another point: if this is true, shouldn't Core 2 Duo already be at least 3.5 GHz, since it's at 65nm while AMD is at 3.0 GHz on 90nm?

The CPU is a 4-issue design (compared to the 3-issue cores of the Athlon 64 and Pentium 4 architectures) with a 14-stage pipeline - significantly shorter than that of NetBurst CPUs (from 20 stages in Willamette to 31 in Prescott). The shorter pipeline will ensure that Merom and its derivatives will not clock as high as Prescott, but it will likely clock as fast as or faster than the Athlon 64 - i.e. around 3GHz. However, the IPC of Merom is likely to be better than the Athlon 64's due to its 4-issue superscalar design, and vastly better than the P4's.

Unknown said...

Excellent post, Scientia. =D

abinstein said...

"If AMD K8 as a 12 stages core and Intel as a 14 stages core, doesn’t that give Intel at least 16% clock speed advantage?"

I don't think it's that simple. Clock rate is not determined by the total number of pipeline stages, but by the length of the critical path. Unless we know the longest paths in the pipelines (C2D vs. K8/K10), it's difficult to judge which has the better process technology.

Based on the fact that C2D outperforms K8, clock for clock, by more than 10% on average, I'd rather believe that C2D has a longer critical path than K8, and thus that Intel does have better process technology.
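For what it's worth, the naive stage-count model behind aguia's numbers is just this (it ignores latch overhead and uneven stage lengths, which is exactly why I say it's too simple):

# Naive model: cutting the same logic into more pipeline stages leaves less
# logic per stage, so the theoretical clock ceiling scales with stage count.
# Real designs fall well short of this upper bound.
K8_STAGES = 12
for name, stages in (("Core 2", 14), ("Willamette/Northwood", 20), ("Prescott", 31)):
    print(f"{name}: {stages / K8_STAGES - 1:+.0%} theoretical clock headroom")
# Core 2: +17%, Willamette/Northwood: +67%, Prescott: +158%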

Unknown said...

I really wonder what Jack's take on this is...

May I have permission to post this on the forum?

abinstein said...

"However, the IPC of Merom is likely to be better than the Athlon 64 due to it's 4 issue superscalar design and vastly better than the P4."

This is probably the worst misinformation on the web. It clearly shows how clueless most websites are.

The "4-issue" of C2D is different from the "3 complex decodes" of K8. The former reads 20 bytes per cycle and generate up to 4 micro ops. The latter fetches 32 bytes per cycle and decodes up to 3 x86 instructions. In sheer number, K8's 3-way x86 decoding is even better than C2D's 4-way micro op generation, but in average they should perform about the same, because 1) most x86 instructions are translated to one micro op, so K8's complex decoders have no advantage there, 2) in average programs, one x86 instructions is decoded to roughly 1.3 (p6) micro ops.

In other words, the "4-issue superiority" is merely one of the "blue crystals" that Intel uses to market its microprocessors. It's by no means better than K8's 3 complex decoders, which are not the performance bottleneck of K8 (they could be for K10, though, if it is 50% faster than Conroe per core at the same clock).
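In numbers, using the rough 1.3 micro-ops per x86 instruction average mentioned above (a rule-of-thumb figure, not a measurement):

UOPS_PER_X86 = 1.3        # rough average micro-ops per decoded x86 instruction
c2d_issue = 4.0           # C2D: up to 4 micro-ops generated per cycle
k8_decode = 3             # K8: up to 3 x86 instructions decoded per cycle
k8_issue = k8_decode * UOPS_PER_X86   # about 3.9 micro-op equivalents

print(f"C2D: {c2d_issue} uops/cycle; K8: {k8_issue:.1f} uop equivalents/cycle")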

abinstein said...

"It's by no mean better than K8's 3 complex decoders, which is not the performance bottleneck of K8 (it could be for K10, though, if it is 50% faster than Conroe per core at the same clock)."

I should've said this in three separate sentences:

1. C2D's 4-way micro-op generation is by no means better than K8's 3-way complex decoders.

2. The 3-way complex decoders are not yet the IPC bottleneck of the K8 core.

3. The 3-way complex decoders could become the IPC bottleneck of the K10 core if K10 has 50% higher IPC than C2D.

Unknown said...

Talk about your past statements coming back to bite you in the ass...

Scientia said:
https://www2.blogger.com/comment.g?blogID=32351755&postID=3002587459677493167
"Basically, I would expect AMD's margins to recover somewhat in Q1. Apparently, you expect their margins to be worse. I guess we'll see what happens in a couple of months. If AMD shows further drops in margin and loses volume share then I would also be more pessimistic about their future. But, right now, I don't believe that 2007 will be anywhere near as bad as you suggest."

Are you ready now to admit that you were 100% wrong? As it turns out, I was predicting a loss of ~$210M minimum, but even I am surprised at the size of the loss in Q1. I told you that margins would be down and sales would decrease; you said the opposite. The good news for you is that you can simply admit that you were wrong and did not understand the marketplace and save some of your credibility - or you can deny reality and become a Sharikou clone....

The choice is yours....

sharikouisallwaysright said...

I thought sales were up but income was hurt by low pricing?

Roborat, Ph.D said...

Scientia said: "Core 2 Duo are....that at 2.93 Ghz it is exceeding its thermal limits on the 65nm process."

Absolutely incorrect. The thermal dissipation rating of the CPU packaging is a function of the design of the heat spreader (IHS), the thermal interface material (TIM), and the hot zoning design of the chip. Obviously the temperature limits defined by Intel were derived from the capability of the IHS and TIM to dissipate heat at 55°C. This limit has nothing to do with the silicon's thermal limit. Nothing is stopping Intel from slapping on a thicker IHS and a more expensive TIM just to increase heat transfer (along with clock speed) while maintaining that 55°C thermal requirement.

Scientia from AMDZone said...

yomamafor2

I would be surprised if jj didn't already know about it since sickninja was here.

I would suggest, though, that you refrain from copying and pasting the entire article or large sections of it, since it is quite long. I should tell you, though, that if you do mention my article you are probably going to be compared to 9-inch, who used to link to my posts on AMDZone and was eventually banned. You will probably also be compared to MadModMike, who didn't link to my posts but who was also banned.

So, before you get into a big spat over at ForumZ, you should probably be clear on my position.

1. I'm wondering why there is such a big dropoff in performance for the Celeron version of C2D.

a. It has been suggested that the benchmarks are not cache sensitive.

b. It has been suggested that C2D has plenty of memory bandwidth.

So I'm puzzled as to what is causing the dropoff if it isn't cache. Secondly, if it is cache, then why doesn't K8 show the same dropoff? Finally, I am questioning the ability of the benchmarks to test behavior under real world conditions.

2. It is clear from the Anandtech tests and other anecdotal comments that the X6800 is running hotter than the temperature limits listed in the C2D Temperature Guide.

a. Is the temperature Guide wrong?

b. If the temperature Guide is correct then why has no one mentioned that X6800 runs too hot with the stock HSF?

c. If the X6800 cannot stay within temperature limits with the stock HSF at stock speeds, then I think it needs a special classification, since it is extremely likely to be running hot in most of the installations where it is sold.

Now, I don't want you to confuse the final point on temperature. The Anandtech tests show that the X6800 is fine with premium air cooling, and this should allow overclocking. So I'm not saying that the chip has to run hot or can't overclock; I'm saying that it seems to need more than stock cooling.

DaSickNinja said...

9-inch got banned because he was a troll. MadModMike got banned because he did not act with proper decorum and harassed other users.

People who just copy and paste without
A) knowing what they are saying or
B) sticking around to argue their positions are not thought of too highly.

If he says that he got this from somewhere else and doesn't claim it as his own, the majority of the posters will respond in a normal fashion.

This in no way means everyone is going to be a big boy about it, but with a community this big, you always have a few morons on both sides.

In conclusion... meh.

DaSickNinja said...

And for God's sake, it's either DaSickNinja or Ninja. SickNinja makes me sound like some sort of invalid. [/pedantic]

enumae said...

Scientia
Secondly, if it is cache then why doesn't K8 show the same droppoff?

Hasn't it been clearly demonstrated that the reason for the large cache on Intel chips is to cover up the high latency of the FSB, compared to AMD, which uses HTT and an IMC?

Shouldn't a latency benchmark clarify this?

Scientia from AMDZone said...

dasickninja

How about daninja?

enumae

Well, there is obviously a difference in latency but latency shouldn't cause a 40% drop. C2D's latency isn't that bad.

enumae said...

Scientia
Well, there is obviously a difference in latency but latency shouldn't cause a 40% drop. C2D's latency isn't that bad.

Conroe's, or Allendale's, latency isn't that bad, possibly due to the large caches.

If you have not seen benchmarks showing latency, then how can you rule it out and reach a conclusion?

Also, are the benchmarks that run slower memory intensive, where a latency hit would show up?

abinstein said...

"Conroe's, or Allendales's latency isn't that bad possibly due to the large caches."

enumae, again you show how poor your logic is. There are two claims in Scientia's comment:

1. No latency difference alone could result in 40% less performance.

2. C2D's memory latency isn't that bad.

You completely ignored the first point. If C2D's microarchitecture is *so* sensitive to latency, then it is designed to be cache sensitive, since the main purpose of cache is to reduce effective latency.

Second, there have been measurements of C2D's average memory latency when the cache is invalidated after every read. As Scientia said, "it's not that bad." It's probably 30% higher than K8's, but at a 90% hit rate (512kB/core) the difference in effective latency is only 3%. Now, if you think C2D with 4MB cache performs 20% faster because the large cache gives it 20% less effective latency, then the same cache would have given K8 some 17% less latency, too. Again, if this is (as is apparent from the evidence) the main reason for C2D's good performance, then the microarchitecture is designed to be cache sensitive.



The sole purpose of cache is to reduce effective latency. The fact that this reduction greatly diminishes for Conroe when the cache becomes smaller precisely indicates that the Core 2 architecture is designed to be cache sensitive.
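The arithmetic behind that 3% figure is simply this (the 90% hit rate and the 30% latency gap are the assumed inputs; the simplification is that cache hits cost about the same on both chips, so only the miss fraction sees the memory-latency gap):

miss_rate = 0.10   # assuming a 90% hit rate for a 512kB-per-core working set
mem_gap = 0.30     # C2D main memory latency roughly 30% worse than K8's

effective_gap = miss_rate * mem_gap
print(f"effective latency gap: {effective_gap:.0%}")   # about 3%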

Scientia from AMDZone said...

real

Well, I doubt I was 100% wrong. Obviously, AMD's revenues did not increase. Of course, I said I expected Intel's to increase too, and that didn't happen either.

I don't really want to get into the details of this yet because I haven't looked at AMD's numbers or seen what the new volume shares are. When I get the numbers (and you can link to volume share if you find it), I'll do an article discussing them and how they differ from my thoughts in January. Then you can pat yourself on the back for whatever you got right.

Woof Woof said...

enumae asked:
"To be sure I have not missed something, please point out which benchmarks show "With a 10% greater clock speed, the lower cache C2D is actually 36% slower.", as stated in your article."

You have to flip between the 2 pages that scientia linked to:

http://www.xbitlabs.com/articles/cpu/display/conroe-l-preview_3.html

http://www.xbitlabs.com/articles/cpu/display/conroe-l-preview_4.html

The E4300 is a 1.8GHz model.

The Celeron 440, aka Conroe-L, is a 2GHz model - aka 10% higher clock speed.

Scientia from AMDZone said...

Oh, I guess I didn't specify: 36% is the average across all the benchmarks.

Azmount Aryl said...

woof woof said...
The E4300 is a 1.8GHz model.
The Celeron 440, aka Conroe-L, is a 2GHz model - aka 10% higher clock speed.


Actually, the only time this review compares Conroe-L to the E4300, the Conroe-L is at 3.0GHz and the E4300 is at 1.8GHz.

Not to mention that this comparison is invalid, because with the overclocked FSB, Conroe-L has an unnatural advantage.

Aguia said...

Another benchmark that shows huge differences from cache:
Techreport Valve Source engine particle simulation
and Valve VRAD map compilation


The 6600 and 6700 enjoy "just" a 44% to 53% performance advantage over the 6300 and 6400.
What only 2MB of extra cache can do!
And in the second benchmark, 27%.


Another question: aren't Intel's tactics similar to AMD's?
AMD processors that enjoyed extra cache (1MB for single core, 2MB for dual core) had a price premium, and now Intel is doing the same with their 4MB parts.
Or does it mean they are having yield problems?
(Since they are disabling half of the cache in full 4MB parts.)

There is also another strange thing that backs up my claims: in the notebook line, the T7xxx models all have 4MB and the T5xxx 2MB of cache.
And on the desktop, two of the E6xxx have 4MB cache and the other two have 2MB. The E4xxx also have 2MB of cache.
There are also no quad core parts with 2MB+2MB cache instead of 4MB+4MB.

And most of the notebooks and PCs that I can find on sale are T5xxx and E6300/6400.

DaSickNinja said...

Scientia
Getting better, but still not quite there. ^_^

abinstein
So? People have a right to talk.

DaSickNinja said...

The right of free speech should only be abridged by the right to listen to said speech.

enumae said...

Scientia
Oh, I guess I didn't specify. 36% is the average of all the benchmarks.

Ok, well how much of that comes from the Celeron 440 being single core compared to the Conroe E4300, which is dual core?

abinstein said...

"So you being civil would explain why you are still continuing to emphasize that my post was lacking logic, thanks for making my point."

If I were you, and were caught making comments that object just for the purpose of objecting (i.e., with poor logic), I would have acknowledged the error and quickly moved on.

Unfortunately you just couldn't do so. You keep insisting that 1) you are new to the inner workings of processors, and 2) others are being uncivil to you. That doesn't help you or the discussion at all.

Please note that Scientia raised two questions based on pure observation, without requiring any knowledge of inner workings:

1. Why does Core 2 with a smaller cache perform so poorly?

2. Why didn't K8 with a smaller cache suffer as greatly?

The validity of these benchmarks is thus questioned. In response, you suggest memory latency makes the difference. This makes no sense, because if a larger cache (greatly) improves Core 2's effective latency, it will improve K8's as well.

At this point it already seemed to me that you were objecting simply because you wanted to object.

Then Scientia explained two things to you: 1) latency shouldn't make that much difference to overall performance by itself, and 2) Core 2's main memory latency isn't that bad. With proper logical reasoning, these two should definitely explain why latency alone couldn't affect Core 2 so much but not K8.

Your response, which again made no sense, was that 1) you think Core 2's good latency is possibly due to the large cache, and 2) you don't know / haven't seen Core 2's latency benchmarked.

Firstly, these two points contradict each other. You can't suggest Core 2's effective latency is greatly improved by cache when you haven't seen proof of it.

Secondly, Core 2's main memory latency has been benchmarked and shown to be only slightly worse than K8's. Scientia (and many others) know this for a fact, and that's why he made the comment. If you didn't know it, what were you objecting to?

enumae said...

Abinstein

Please understand I am not trying to be difficult.

If I were you, and were caught making comments that object for the purpose of objection (i.e., with poor logic), I would've acknowledged the error and quickly moved forward.

I do not object just to object.

I may not be able to fully explain the reasoning behind my opinions, and for that I apologize, as it is not my intent to frustrate you or anyone here.

1) You think Core 2's good latency is possibly due to large cache

Yes, on Conroe (4MB) and Allendale (2MB).

2) you don't know/haven't seen Core 2's latency benchmarked.

I believe this is the main cause of confusion.

I am talking about the latencies of the single core Celeron 440 with 512KB cache, not Conroe with 2MB or 4MB.

I have searched Google and found nothing, so if you have seen them, please point me in the right direction or provide a link.

If I am wrong in the way I am looking at this, I apologize.

InTheKnow said...

Scientia, while I find your blog interesting, it isn't hard to figure out where the accusations of bias come from. Just look at the titles of your posts this month.

Core 2 Duo -- The Embarrassing Secrets

Intel's chipsets - The roots of monopoly

Intel - the monopoly under siege

The trend here is pretty clear to me. If you want to be balanced, maybe you should do a few pieces taking an equally critical look at AMD.

Scientia from AMDZone said...

enumae

When abinstein said:

"a few regular, biased, childishly immature amateurs who talk about microarchitectures that they simply have no clue of"

he was talking about some of the posters on ForumZ, not you.

Scientia from AMDZone said...

azmount

"Actually the only time this review compares Conroe-L to E4300 the Conroe-L is at 3.0GHz and E4300 is at 1.8GHz."

They also list the scores for Conroe-L at 2.0 GHz, and the 2.0 GHz scores do not have an overclocked FSB. The 2.0 GHz scores are on the previous page. As I said, they sure didn't make it easy.

Scientia from AMDZone said...

aguia

"Another question, aren’t Intel tactics similar to AMD?
AMD processors that enjoyed extra cache 1MB for single core and 2MB for dual core, had a price premium, now Intel is doing the same with their 4MB parts."


I don't have an issue with the price. My article was about benchmarking and whether the larger cache on Intel chips artificially inflates the benchmark scores.

Scientia from AMDZone said...

enumae

"Ok, well how much of that comes from the Celeron 440 being single core compared to the Conroe E4300 which is dula core?"

Good question. You have to carefully compare the 2.0 GHz scores with the 3.6 GHz Cedar Mill scores. Cedar Mill is dual core. Now, for single threaded benchmarks Cedar Mill should be pretty close, and that is what we see. However, if the code were multi-threaded, the dual core Cedar Mill should leave the single core Conroe-L behind. And that doesn't happen.

Scientia from AMDZone said...

intheknow

Well, the answer to your question is a bit complicated.

One aspect is that it is common on other websites to have titles that use hyperbole for Intel. I'm sure you've seen ones like:
"Intel, the empire strikes back", "Core 2 Duo smashes K8", etc.

Another aspect is generating interest. I get about 25% of my traffic from Google searches. When someone hits a page, they are more likely to read the article if the title is a bit provocative. For example, if the title had been "Core 2 Duo -- A Review of Heat and Cache Effects", I don't think it would have had quite the same result.

Generally, I don't have to say anything bad about AMD because every other website is doing it while giving Intel a free ride. This is not an exaggeration on my part. In all honesty, AMD gets hammered by review sites, by stock analysts, by financial analysts, by industry pundits, and by companies who want better relations with Intel, as well as by lots of forum posters.

I don't know if you remember when companies were falling all over themselves to proclaim that they would support Itanium, and some companies even stated publicly that they would not use K8. I think we can see how that turned out.

Intel is a trend setter and sometimes it abuses this ability. It got it right with ATX but got it wrong with RDRAM, X86-64, BTX, and most recently FBDIMM. Conversely, AMD got it right with DDR, X86-64, DTX, and registered DDR2.

Sometimes it seems that Intel gets credit no matter how badly it messes up, and AMD gets criticism no matter what it gets right. And I hate it when people misuse numbers to pump up Intel. For example, some have lately been proclaiming that Intel has taken revenue share. Well, it is ridiculous to say that Intel took revenue share when its revenues dropped too. And then there are the people who confuse this with volume share and think that AMD's processor unit sales have fallen 30%.

Anyway, I'm expecting AMD to step up with its R600 chipset, with DTX, with Torrenza, and with K10 and be competitive.

If this doesn't happen, then I'm sure my articles will be more critical. As it is, I think AMD is still going to get hammered in the second quarter before K10 is released. Right now, I'm waiting to see numbers on volume share before I can do a Q1 article.

Scientia from AMDZone said...

enumae
"I am talking about the latencies of this single core Celeron 440 with 512KB cache not Conroe with 2MB or 4MB."

TechReports

The two charts at the bottom of the page.

Main memory latency doesn't change with cache size so it should be similar for Conroe-L. You can see that:

FX-62 - 40ns
X6800 - 60ns

enumae said...

Scientia
...3.6Ghz Cedar Mill scores. Cedar Mill is dual core.

Intel says..."The Intel® Celeron® D processors 365, 360, 356, 352, and 347 are single-core desktop processors on the 65 nm process. The processor uses Flip-Chip Land Grid Array (FCLGA6) package technology, and plugs into the LGA775 socket."

I am not sure, but shouldn't this change your conclusions, since you are comparing the Celeron 440, a single core processor, to the Core 2 Duo E4300, a dual core?

InTheKnow said...

I see the logic behind your titles, but as long as you are targeting one company over the other for negative analysis (no matter how richly deserved), I think you open yourself up to accusations of bias, whether justified or not.

Scientia said... "And I hate it when people misuse numbers to pump up Intel. For example, some have lately been proclaiming that Intel has taken revenue share. Well, it is ridiculous to say that Intel took revenue share when its revenues dropped too."

I'm not sure I follow your logic here.

Here is my (admittedly simplistic) view of it.

Let's say one week the total market is $100. Company A makes 70 dollars and Company B makes 30 dollars. So Co A has 70% of the revenue and Co B has 30%.

The next week, the total market shrinks and is only $80. Company A makes 60 dollars and Company B makes 20 dollars. Co A now has 75% of the available revenue and Co B has 25%.

Despite the reduction in the size of the pie, Co A has a bigger piece of it. Isn't revenue share just how big your piece of the pie is relative to the whole pie regardless of the total size of the pie?

Scientia from AMDZone said...

enumae

"I am not sure, but this should change your conclusions since you are comparing a Celeron 440 single core processor to a Core 2 Duo E4300 dual core, right?"

Yes, good point. We see a very similar scaling from the Celeron 365 to the Pentium D 925. The problem is that the 365 has 512K of cache while the 925 has 2MB per core. So now I have to try to find out whether the change is due to cache or to having dual cores.

Scientia from AMDZone said...

intheknow

" Isn't revenue share just how big your piece of the pie is relative to the whole pie regardless of the total size of the pie?"

Yes, but there is a difference between having share and taking share. In this case, there would also have to be a change in volume share for Intel to have taken share.

Okay, let me explain. You have a grocery store and you sell watermelons for $3.00 apiece and so does Fred's mart across the street. Let's say you both typically sell $99 a day worth of watermelons.

However, one Wednesday the sales are slow so Fred's mart puts the watermelons on sale for $2.00. At the end of the day you sold 22 melons while Fred sold 30.

You made $66 while Fred made $60.

Based on your interpretation, you could claim that you took 5% of Fred's revenue share even if your sales were the same the next day. This, of course, is nonsense.

To take share you have to keep it for more than one quarter. Also, for a real shift in share, AMD's prices would either have to stay the same or there would also have to be a shift in volume.
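Here is the watermelon example in numbers, showing why a one-day (or one-quarter) revenue share shift driven by the other side's price cut doesn't mean anyone took share:

you_units, you_price = 22, 3.00     # your store: fewer melons at full price
fred_units, fred_price = 30, 2.00   # Fred's mart: more melons on sale

you_rev = you_units * you_price               # $66
fred_rev = fred_units * fred_price            # $60
print(f"your revenue share: {you_rev / (you_rev + fred_rev):.1%}")      # 52.4%
print(f"your volume share: {you_units / (you_units + fred_units):.1%}") # 42.3%
# Revenue share rose while volume share fell; nothing durable was taken.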

InTheKnow said...

Scientia said... "To take share you have to keep it for more than one quarter."

So, just to clarify, it is only now possible to see if Intel really took revenue share in Q4 '06. Any perceived growth in revenue share by AMD could not have been real in that quarter, since they clearly suffered a decline in revenue share this quarter.

You also seem to imply that any change in revenue share without a corresponding shift in volume share is not real. Am I stating your view correctly?

InTheKnow said...

Scientia said..."So, now I have to try to find out whether the change is due to cache or having dual cores."

I think that is what the review site was trying to accomplish by comparing a dual core at 1.8 GHz to an overclocked single core at 3.0 GHz. While not entirely the same thing, at least it is an attempt to level the playing field and answer that very question.

LG said...

It's interesting to witness how some key and common posters around the net and popular forums also seem to frequent investor forums, and were so disappointed that AMD's stock never plummeted. I truly hope these shorts looking to prosper from AMD's demise end up on the street and under a bridge. ;) Sorry, had to get that off my chest. Makes me wonder if they all have something else in common.... ;)

LG said...

That says it all. This is the very embodiment of partiality and bias. If you're sticking up for AMD because they get lambasted everywhere else, fine, but don't try to claim impartiality at the same time. Admit that your articles are slanted towards showing AMD in a positive light and be proud of it!

One would think it wouldn't be that hard for someone to figure it out on their own, given the "scientia from amdzone" title. I agree, AMD is being hammered on nearly all enthusiast sites around the net, for what largely comes down to only one practical advantage in a home user environment: encoding/decoding. In everything else, AMD surpasses Intel on a price/performance ratio. And as pro-AMD as Scientia may be, it isn't even close to bringing neutrality back to the equation. This anti-AMD sentiment is infectious, and I have no doubt that Intel banked on it and pushed it through various initiatives. There are probably 10-20 posters in this blog alone trying to stifle any positive light on AMD and push any pro-Intel sentiment possible. I've no doubt what the common denominator is. It's because of underhanded dirty tactics like this that I'll never, repeat never, ever buy anything made by Intel. If by chance AMD is forced out of the CPU market, I will give up PCs altogether.

Azmount Aryl said...

Second to that. (LG post)

Unknown said...

Second to that. (LG post)

Second that too.

abinstein said...

"Second that too."

I second that too, but I would also like to add that, in addition to encoding/decoding, C2D also has better price/performance than K8 in some gaming setups (AI path finding) and SuperPi.

enumae said...

LG
...trying to stifle any positive light on AMD...

I say this only for debate!

What have you seen as a positive for AMD in the last 3-6 months, besides the purchase of ATI?

LG said...

What do you mean? Their processors are positive. Because they don't hold the current crown of performance doesn't negate the fact that their chips are still high quality items. And at all price points their processors are very competitive: either winning by a margin, losing by a margin, or dead even. How much more positive do they need to be to sell? Just because Intel's processors overclock doesn't mean everybody does it, yet that is the "deciding factor". Hell, I'd wager a guess that 95% of the C2D CPUs sold to customers haven't moved off their shipping speeds. Yet forum after forum recommends Intel's processors at any cost. It's unreasonable and ridiculous. I've seen post after post after post in many forums where somebody comes in asking for a recommendation for a CPU and the immediate response is "C2D FTW"! Sure, if you do overclock then it's a good chip for added performance, but how many people do you suppose run their chips that way for more than synthetic benchmarks? And those benchmarks get pushed over and over as the be-all, end-all of decision making. The point is, does their current line of products falter in performance to the tune of $600 million in one quarter? P4 sold better than that.
BTW, as per scientia's investigation of the traffic on his blog coming in large part from Intel itself, coupled with the vast negativity and anti-AMD sentiment, that should speak volumes.
AMD's processors are not weak, but they are certainly portrayed that way. It's too well organized and methodical to be anything but suspicious. With Intel being scrutinized by the FTC and DoJ, what better way to push your agenda and keep your monopolistic powers than to send an army of shills out along with your new line of processors to completely sway and capture public opinion.
Anyway, that's just my opinion, but I'm completely convinced of it.

enumae said...

LG
What do you mean?

A simple breakdown of AMD's current events...

1. Their finances are terrible.

2. Current products are being beaten by Intel's. I am not saying they are bad, but faster is faster, and this is at stock speeds.

If you want to bring in price, consider that AMD has lowered prices about 5 times since the launch of Conroe, and has only just recently achieved better price/performance.

If the news on the internet is true (Intel price cuts April 22nd), then it will be very short lived.

3. Lack of competition for Clovertown which has allowed Intel to dictate server prices.

4. Delays for R600 based DX10 graphics, while Nvidia is launching their mid to low end cards, and regardless of their drivers, they have shipping products.

All the while Intel is doing all the right things: making money, good prices on Conroe, market share gains (supposedly), and the supposed release of some 45nm chips later this year.

In my opinion there hasn't been much, if anything from AMD press wise or product wise, to get excited about.

Barcelona is a different story, but that is a few months away.

So, what have you seen as a positive for AMD in the last 3-6 months?

Wise lnvestor said...

enumae :
4. Delays for R600 based DX10 graphics, while Nvidia is launching their mid to low end cards, and regardless of their drivers, they have shipping products.

Wouldn't that be alarming to you?

That shows the incredible short-sightedness in their corporate mentality. They are shipping products even before they have proper drivers (proper testing).

In the end nvidia screws its partners (OEMs) and customers (end users). Sadly, they are trading off their future for profits today.

That's why I said I wasn't surprised when Dell purchased larger quantities of DX10 cards from AMD/ATI.

I used to be an NVDA shareholder, but not anymore...

Scientia from AMDZone said...

I went back and looked up quotes from Intel officials from when they first released ATX. They said they weren't concerned about heat and were concentrating on reducing cost.

Then I found a Digit-Life article from when Intel announced BTX and it was described as the "revolution of the desktop PC platform". Hmmm.

Then we go to AMD with DTX. This form factor has gotten very little attention even though it has the same cost reducing goals as the original ATX spec. We also know that half a dozen companies are going to start making these.

I see this as a double standard.

But, I don't see myself as biased. Intel was smart because they copied AMD in several ways:

1. They went to a single architecture strategy which is what AMD had been doing with K8 and K7.

2. They introduced the server chips first, which is what AMD did with K8.

3. They copied AMD's 760 MP dual FSB chipset design for Woodcrest and Clovertown.

Very smart to copy AMD here. Then, after the success of C2D (and there is no doubt that it pulled ahead of K8), Intel copied its own so-so strategy of using MCM. It didn't work well with Smithfield and was only so-so with Presler, but it definitely was a win with Clovertown. Let's face it; AMD simply has nothing in the Kentsfield class on one socket.

However, it wasn't Clovertown that boosted Intel's server sales; it was Woodcrest with the dual FSB chipset. This was the first time that Intel was able to field four cores with shared memory without a severe slowdown.

Scientia from AMDZone said...

AMD's revenues fell about half as much as the total fall in 2002. So, if AMD takes as big a drop in Q2 then this will be just as bad as 2002.

There has however been no indication of share gain by Intel. If the unit sales figures show that AMD lost volume share as well then this would be a big victory for Intel.

Now, let me make this clear. Intel is not currently doing well because of X6800 or QX6700. These chips are not that much of a factor. Intel is currently ahead at the top end of the desktop range, but the big problem is that the bottom end is being deluged by P4 chips that Intel is getting rid of.

What you need to understand is several things. First, these low prices are affecting Intel and they didn't back in 2002 and 2003. Second, these P4 chips will be gone soon and will no longer be a factor. Third, DTX is specifically designed to survive in an extremely cost sensitive environment, so I don't see any way that Intel could repeat this regardless of how good Penryn is. DTX and R600 should be a big win for AMD. This should stabilize AMD's low range. If K10 is any good then the mid to upper range should stabilize as well.

Finally, I think everyone is forgetting about the FAB situation. Today, AMD has no choice but to keep cranking 90nm chips on FAB 30 because this is all it can produce. However, by the end of this year FAB 36 will be at full capacity and producing all 65nm chips. FAB 30 will only be at about 40% capacity and ready to start 300mm wafers. This means that by Q1 08 AMD will not experience this kind of margin hit again because it will be using almost all 300mm wafers. Secondly, the cost benefits of APM kick in when both FABs are the same.

enumae said...

Wise Investor
Wouldn't that be alarming to you?

It is, and that's why I mentioned it. But Nvidia is selling products and AMD is not, just delays, and it's not due to drivers (supposedly).

Scientia from AMDZone said...

I could mention that while MCM was terrible for Smithfield and so-so for Presler, Intel was very smart to use the hybrid bus design for Tulsa. This allowed Intel to stabilize the top end of the server range even though they don't currently have a C2D solution.

So, altogether Intel was smart with:

Quad core on single socket.
Dual FSB chipset for dual socket.
Hybrid MCM design for Tulsa for 4-way.

These things, along with C2D's excellent SSE performance, are what give Intel such a good server lineup today.

However, that lineup will not be as effective later this year:

SSE loses its advantage.
Quad core on single socket has no advantage.
Quad FSB for 4-way is faster than Tulsa but it is more expensive and uses more power.
And, although dual socket is still good, it does lose some of its speed with quad core because of the slower FSB.

The point is that Penryn by itself will not give Intel the same advantages that it currently enjoys. Even with Penryn Intel's competitive position on servers will be worse in Q4 than today.

Intel's competitive position on the mid to low desktop will also be worse because of R600 and DTX.

enumae said...

Scientia
Intel's competitive position on the mid to low desktop will also be worse because of R600 and DTX.

Looking at an Ars Technica article motherboard manufacturers make a DTX motherboard for Intel as well, so why will this only benefit AMD, or weaken Intel?

What is the R600 going to do to Intel?

Are you suggesting that a discrete graphics solution is going to impact Intel's IGP business?

Or are you saying we won't be able to purchase a machine with an Intel processor and an AMD graphics card?

Scientia from AMDZone said...

enumae

"Looking at an Ars Technica article motherboard manufacturers make a DTX motherboard for Intel as well, so why will this only benefit AMD, or weaken Intel?"

You'd have to link to the article. There should be DTX motherboards by mid year. There may or may not be Intel compatible DTX. As far as I know, no one has announced any yet. Also, I doubt very seriously that Intel will start making DTX motherboards anytime soon.

"Are you suggesting that a discrete graphics solution is going to impact Intel's IGP business?"

No, I'm saying AMD's 690 integrated chipset will stabilize its commercial position. The market for integrated is far larger than the market for discrete. However, having a good discrete graphics card should also be helpful in the high end and enthusiast range.

You can't really argue that this doesn't benefit AMD and then explain why Intel is also working on a discrete graphics offering of its own.

I'm not sure how else to explain this. Intel's current advantage comes from various factors but each of these factors is countered by AMD over the next two quarters.

DTX and 690 stabilize the low end desktop.

K10 quad stabilizes both high end server and desktop.

K10 dual core competes head to head with Conroe.

By Q4, AMD's position should be stabilized all across the board from low end to high end on servers, desktop, and mobile. Discrete is still an area where Intel is lacking. This gives AMD a better position at the high end because the only other competitor is nVidia which doesn't sell processors.

enumae said...

Scientia

Sorry, I left out a few words... "Looking at an Ars Technica article motherboard manufacturers will be able to make a DTX motherboard for Intel..."

Here is the link.

My comment may not reflect the statement in the article (Although DTX is an AMD initiative, DTX boards will be able to support both AMD and Intel CPUs.), but if there is an adoption of DTX, shouldn't they make motherboards for both Intel and AMD?

No, I'm saying AMD's 690 integrated chipset will stabilize its commercial position.

Ok, but just to be clear, your prior statement was... Intel's competitive position on the mid to low desktop will also be worse because of R600 and DTX. I didn't see any reference to the new chipset, just a form factor.

You can't really argue that this doesn't benefit AMD...

No, and I wasn't. Like I said, you had not made mention of the chipset, just a form factor.

Intel's current advantage comes from various factors but each of these factors is countered by AMD over the next two quarters.

In two quarters the market will stabilize?

In your previous post, you talked about Intel and ATX and how it took 2 years to become "firmly established"; now you feel AMD can do it in 6 months?

There are other form factors that this has to compete against. I don't see how you can be so optimistic while claiming to be unbiased.

K10 quad stabilizes both high end server and desktop.

We have to see the performance first, don't we?

We also need to see where Intel's Penryn release date falls as well. Like I said before, if AMD is only marginally better than Penryn, they will be pressured by Intel's pricing, therefore no stabilization, just more of the same.

Also, to stabilize a segment you would need volume of these products. The claims from Hector Ruiz about seeing the revenue from K10 in 2008 are not very promising and don't seem to support your thoughts.

Scientia from AMDZone said...

Hmmm. They may or may not use DTX for Intel; this could be another Intel turf war.

AMD first talked about DTX in 2005. They released the spec in early 2006. So, this is actually a year past the release of the spec. That is why it will be established in six months. However, it remains to be seen when manufacturers like Dell and HP will pick it up.

Actually, DTX does not compete against other form factors. DTX will fit into a micro-ATX case. However, it is cheaper to make than micro-ATX. Mini-DTX is less certain since most likely this would be associated with VIA.

No, we don't have to see the performance for K10. We know that K10 is better than K8 and AMD currently has no quad core. K10 provides quad core and this fills in a big gap.

And, again, your perception of Penryn is incorrect. Even if Penryn is faster, Intel still loses most of its current advantages.

Intel cannot pressure AMD on price in Q4 as it has so far. Today, FAB 30's 90nm 200mm production is nearly half of AMD's output. However, by Q4 this production will be something like 20%. Intel's 45nm production will not be enough volume to have that much effect.

If you need volume to stabilize a segment then Intel's quad core is currently having no effect and will still have no effect until perhaps mid 2008. This is nonsense. Likewise if your theory is true then Penryn would have no effect at all until perhaps the end of Q1 08.

I've already tried to explain about revenue in 2007 versus volume. Let's say AMD is at 30% K10 by year's end. This would be only about 8% of the year's revenue.
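
A rough sketch of the ramp arithmetic (Python; the 30% exit figure is the estimate above, while the linear ramp through Q3 and Q4 and equal revenue per quarter are assumptions for illustration, not AMD figures):

    # Back-of-the-envelope ramp math: a chip that is 30% of production at
    # year's end is far less than 30% of the year's revenue, since it ships
    # for only part of the year. Assume a linear ramp from 0% at the start
    # of Q3 to 30% at the end of Q4, with equal revenue every quarter.
    quarterly_mix = [0.0, 0.0, 0.075, 0.225]  # average K10 fraction per quarter

    year_share = sum(quarterly_mix) / len(quarterly_mix)
    print(f"K10 share of the year's revenue: {year_share:.1%}")  # 7.5%, i.e. ~8%
    # The same model with a 20% exit rate gives ~5%, matching the other estimate.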

enumae said...

DTX is made, from my understanding, for emerging markets, and as such should have to compete with Core 2 Duo Q965 and Mini-ITX type products.

While this particular product is lacking a PCI-E x16 or x8, it should be great for emerging markets, right?

------------------------------

K10 provides quad core and this fills in a big gap.

I see your point about the gap, but like you say the gap is only a small portion of the market.

However, by Q4 this production will be something like 20%.

I see your point, but remember AMD is not going to have FAB 30 fully fitted with 300mm this year, and they still need to make X2, Sempron, and Turion processors.

I am not sure of their capacity, but can FAB 36 take care of all of that on 65nm and allow AMD to focus on K10 in FAB 30/38?

I am not sure, but again, if Hector Ruiz doesn't see revenue coming until 2008, it is likely due to volume.

If you need volume to stabilize a segment...

My comment in regards to K10 was not specifically directed towards quad cores; it applies to Dual and Quad. Again, if they don't have volume against Clovertown, Woodcrest and possibly Penryn, I don't think they will stabilize.


*edited* - When I reread my post it was not coming across as intended, so I have taken a few lines out. Sorry about that.

Scientia from AMDZone said...

enumae

"DTX is made, from my understanding, for emerging markets,"

DTX is basically the same as micro-ATX and these are common. Likewise, micro-BTX is more common than BTX.

" and as such should have to compete with Core 2 Duo Q965, Mini-ITX type products."

You are mixing up DTX with mini-DTX. Mini-DTX is slightly larger than mini-ITX.

"I see your point about the gap, but like you say the gap is only a small portion of the market."

You can't have it both ways. You can't say that quad core is important for Intel and then unimportant for AMD.

"I see your point, but remember AMD is not going to have FAB 30 fully fitted with 300mm this year, and they still need to make X2's, Sempron's and Turion processors."

FAB 30 won't be producing 300mm at all this year. However, it will still be producing 90nm chips on 200mm. Most of the production will be on FAB 36 though by the end of the year.

"I am not sure of there capacity, but can FAB 36 take care of all of that on 65nm and allow AMD to focus on K10 in FAB 30/38?"

First of all, the 45nm testing is currently being conducted on FAB 36, not FAB 38. Secondly, FAB 36 will be about 4X the capacity of FAB 30 by the end of the year.

"I am not sure, but again, if Hector Ruiz doesn't see revenue coming until 2008, it is likely due to volume."

I've tried explaining this but you don't seem to grasp the concept. If K10 is 20% of production by year's end this is only about 5% of the year's revenue. You don't understand this? K10 production will start at least 1 quarter sooner than 65nm did in 2006; I guarantee it will be at a higher volume than 65nm was.

"My comment in regards to K10 was not specifically directed towards quad cores, it applies to Dual and Quad, again if they don't have volume against Cloverton, Woodcrest and possibly Penryn, I don't think they will stabilize."

Why do you insist on having double standards? You first claim that Intel's quad core was important in 2006 with low volume and then claim that quad core is not important for AMD unless it is at high volume. Only about 5% of Intel's desktop products will be quad core by year's end. AMD will have enough quad core for this plus server.

enumae said...

Scientia
You are mixing up DTX with mini-DTX. Mini-DTX is slightly larger than mini-ITX.

Ok, but my point is, and my understanding is, DTX, Mini-DTX, Micro-ATX and Mini-ITX will all be competing for the emerging markets and small form factor design wins, in which case it will have competition, correct?

...FAB 36 will be about 4X the capacity of FAB 30 by the end of the year.

I didn't mean anything by this, I couldn't remember how much capacity FAB 36 had, but I know in the earnings report that AMD said all production is on 65nm in FAB 36.

You first claim that Intel's quad core was important in 2006 with low volume and then claim that quad core is not important for AMD unless it is at high volume. Only about 5% of Intel's desktop products will be quad core by year's end. AMD will have enough quad core for this plus server.

We have derailed... In my comments K10 applies to both Dual and Quad core processors. I think that may clear things up a bit, as it is not my intention to single out Quad cores.

AMD needs volume of the new microarchitecture (K10), that's all I am saying.

----------------------------

In regards to pricing pressure from Intel...

We obviously have different views, but can you agree that if AMD has a substantial advantage, Intel's pricing will not influence AMD's pricing?

Scientia from AMDZone said...

enumae

"Ok, but my point is, and my understanding is, DTX, Mini-DTX, Micro-ATX and Mini-ITX will all be competeing for the emerging markets"

Your understanding is completely wrong. DTX, micro-ATX, and pico-BTX are small desktop form factors. These compete in the low range in both North America and Europe. DTX is a 65 watt standard.

"We have derailed... In my comments K10 applies to both Dual and Quad core processors, I think that may clear things up a bit, as it is not my intention to single out Quad core's."

Why not? You keep singling out quad core for Intel. In terms of dual core, AMD should have enough for servers and the upper desktop range. Lower desktop will still be X2 and budget will be single core.

"AMD needs volume of the new Microarchitecture (K10), thats all I am saying."

AMD should have more volume of K10 than they had of 65nm in 2006.

"can you agree that if AMD has a substantial advantage, Intels pricing will not influence AMD's pricing? "

No. AMD's pricing is still having an effect on Intel's even though Intel has the advantage.

AMD should have less pricing pressure but also reduced costs by Q4. AMD could have less pricing pressure before then but that would depend on DTX and its chipsets rather than K10 since K10 is still going to be mostly server in Q3.

enumae said...

Scientia
Your understanding is completely wrong.

Ok.

AMD's pricing is still having an effect on Intel's even though Intel has the advantage.

It is, huh... That's why AMD had to lower prices 5 times on their desktop processors?

AMD could have less pricing pressure before then but that would depend on DTX and its chipsets rather than K10 since K10 is still going to be mostly server in Q3.

Well, we disagree here, and I guess we'll have to wait and see, but you are putting a lot of your belief in a form factor and chipset.

------------------

So, was that 36% advantage the E4300 had over the Celeron 440 because it was dual core, or because of cache? I am betting dual core... Just kidding :)

Scientia from AMDZone said...

BTW, it has been suggested that R600 has sound and physics processing capability that nVidia's 8800 doesn't have. I wonder if this will make a difference.

Unknown said...

Here's your market share numbers from Reuters.

"Intel's share of the $30 billion market for x86 processors that power most personal computers was 80.5 percent in the first quarter, according to Mercury Research, a market tracking firm whose data are closely watched by the industry.

That represented a gain of more than 6 percentage points from the 74.4 percent Intel had in the fourth quarter.

Intel's gain came at the expense of Advanced Micro Devices Inc., which saw its share fall to less than 20 percent for the first time since 2005, Mercury said.

AMD had gained share steadily since 2005, rising from 21.4 percent in the fourth quarter of that year to 25.3 percent a year later, thanks to chips that ran faster and used less energy than Intel's."

enumae said...

Scientia

One of your comments has stuck with me... "Only about 5% of Intel's desktop products will be quad core by year's end. AMD will have enough quad core for this plus server."

I did not remember seeing this article, but it does relate to what we had been talking about, specifically the Quad core to Dual core ratio for servers.

Maybe you had already seen it. Anyway, what's the next article?

anonymous said...

While in the desktop the quad core volumes are expected to be low, the server segment is just the opposite. The same sources report Clovertown @ 20% of server volume in Q107, rising to 70% by Q4. Sell the more expensive parts in the server segment, where there is less price sensitivity and more stable demand. Makes sense to me...

Scientia from AMDZone said...

bubba

Thanks but I need the complete numbers. The "less than 20%" statement is too vague and nothing is given for Intel.

Scientia from AMDZone said...

dr yield

That is true. I don't know exactly what AMD's quad core server volume is expected to be. Obviously it will be higher than the desktop volume since the server chips will be released first and these will ramp longer. There is another possible factor though. Intel's 4-way Caneland chipset is going to be both power hungry and expensive. It could be the case that Intel wants to push quad on dual socket in preference to 4-way. But, I can't really say for sure; Intel may be willing to take a loss on the chipsets.

Pop Catalin Sever said...

Intel seems to want to apply the same strategy with 45nm that it did when Conroe came out. It wants to quicken the transition to 45nm and have at least a 6 month lead, which means that by the time AMD ramps to 45nm, Intel can apply severe pricing pressure on AMD.

My opinion is that AMD shouldn't have made the transition to a platform company by buying ATI. Instead it should have focused on staying a market leader by implementing 32nm ahead of Intel when it had the money and chance to do so, and by supporting its platform business through strong industry partnerships with ATI and nVidia.

The current situation is that AMD's product line is widely perceived as inferior to Intel's. And this is no small thing. With current products, even press favoring AMD can't help much, because even benchmarks tweaked to make AMD look good would only manage to make things seem less bad, and that's all.

AMD's wings have been clipped again by the giant.

abinstein said...

"AMD shouldn't have made the transition to a platform company by buying ATI and instead it should have focused on staying market leaders by focusing on implementing .32 nm ahead of Intel when they had the money and chance to do so."

AMD has never had the ability or the chance to compete with Intel on process technology. AMD's process research is mainly done by IBM, and the money spent there over 2 years (the time to reach the next node) is probably on the same level as buying ATi ($2-5 billion USD).

Let's assume AMD doubles the investment, shortens the advance time, and gets a 6-month process technology lead over Intel. What's next? Does it guarantee better market share or profitability? IMO it's a riskier bet than the ATi purchase.

Unknown said...

The news is getting a bit less frequent as it gets closer to the Barcelona release and very accurate numbers on AMD's Q1 finances and market share. Also, 98 posts; minus 1/3 of those posts is still 66 posts. That's better than most tech bloggers out there.

enumae said...

Scientia

I am wondering if you have come to a conclusion about the Celeron 440 vs the Core 2 Duo E4300 in regards to comparing a single core to a dual core?

Looking at the E4400 vs the Celeron 440, the Celeron 440 is about 41% slower when comparing similar clock speeds.

abinstein said...

"Looking at the E4400 vs the Celeron 440, the Celeron 440 is about 41% slower when comparing similar clock speeds."

What "benchmark" are you talking about? Here's benchmarking 101 that you should take:

Always note the problem you are running.

For single-threaded applications, dual-core isn't faster than single-core. For parallel programs, dual-core can be up to 100% faster than single-core. Between those two there's a 100% range, and the 41% you mentioned is simply a meaningless dot within it.

enumae said...

Abinstein

Scientia came to a conclusion ...

"However, the comparison between the 2.0Ghz Celeron 440 and the 1.8Ghz E4300 is not so good. With a 10% greater clock speed, the lower cache C2D is actually 36% slower."

This is stated in his original post, and is the average of all the results. I have since looked up the results for the E4400 and its average is about 40% slower when using the same clock speed compared to the Celeron 440.

So my question is how can he come to this conclusion while comparing a dual core E4300 to a single core Celeron 440 and say that the 36% is due to cache?

Isn't that incorrect considering that about 20 out of the 26 benchmarks used by X-Bit were multithreaded?

If all of this is correct, shouldn't he acknowledge that he made a mistake?

abinstein said...

"Isn't that incorrect considering that about 20 out of the 26 benchmarks used by X-Bit were multithreaded?"

The "number" of the tested benchmarks is meaningless. The benchmarks are arbitrarily selected and do not represent the actual program mix of any usage environment.

The xbit labs did a really bad job comparing Conroe-L and Conroe. They should've separated single-threaded and multi-threaded benchmarks, and shown both stock and overclocked results. IMO the tests show nothing of value.


"If all of this is correct, shouldn't he acknowledge that he made a mistake?"

IMO the average of the scores has no significance. It neither proves nor disproves scientia's claim (that Conroe's main performance lead comes from its large cache).

enumae said...

Abinstein
...It neither proves nor disproves scientia's claim (that Conroe's main performance lead comes from its large cache).

Understood.

Scientia from AMDZone said...

ho ho

If you really want to post here it still requires an apology. Until that happens any comments by you are moot.

Scientia from AMDZone said...

pop catalin

"Intel seems to want to apply the same strategy using .45 nm that it did when Conroe came out. It wants is to quicken the transition to .45 and have at least a 6 month lead,"

Which is half the 1 year lead Intel had with 90nm.

"My opinon is that AMD shouldn't have made the transition to a platform company by buying ATI"

This was and still remains AMD's best strategy.

"and instead it should have focused on staying market leaders by focusing on implementing .32 nm ahead of Intel when they had the money and chance to do so, and by supporting it's platform bussiness through strong industry partnerships with ATI and nVidia."

AMD is doing this. AMD has accelerated the time to 45nm and will again accelerate the time to 32nm. This means that Intel will have no lead time over AMD at 32nm. AMD is obviously continuing its partnership with nVidia but its ownership of ATI gives it a much greater strategic advantage.

"The current situation is that AMD's product line is widely perceived as inferior to Intel's."

If this is true then it should change in just a few months.

"And this is no small thing. With curent products not even the press favoring AMD can't help much because not even benchmarks tweaked for AMD to look good would only manage to make things seem less worse and that's all."

I don't know what press or benchmarks you feel favor AMD; these have always tended to favor Intel.

"AMD's wings have been cliped again by the giant."

This is the third surge for Intel but my guess is that it will be half the length of the last one.

Scientia from AMDZone said...

enumae

"I am wondering if you have come to a conclusion about the Celeron 440 vs the Core 2 Duo E4300 in regards to comparing a single core to a dual core?"

Yes, I edited the original article. This would be the change:

We can plainly see that for a very small number of benchmarks like Company of Heroes and Zip compression the cache makes a huge difference and artificially boosts the speed by more than 50%. For Fritz, Pov-ray, and Cinebench the boost is at least 20%. However, for most benchmarks the boost is probably about 10%.

So, this would be less of a factor than I originally implied. However, I suppose since C2D is typically only 20% faster you would still wonder how much of this is accurate in the real world.

Scientia from AMDZone said...

greg

"The news is getting a bit less frequent as it gets closer to the Barcelona release and very accurate numbers on AMD's Q1 finances and market share."

Yes, I'm still waiting for volume figures. It amazes me that so many sites are clamoring about the revenue share without even knowing the volume share.

" Also, 98 posts, -1/3 of those posts is still 66 posts. That's better than most tech bloggers out there."

Actually, I'm at 29 out of 106, so 27%. Ho ho believes that he is the most important poster here, so he can't believe that I'm not begging him to come back. But he is young and apparently too proud to apologize, so this is mostly sour grapes on his part.

I would rather have a handful of good comments than 200 fanboy remarks (my processor rulz, yours sux) and personal attacks.

enumae said...

Scientia
We can plainly see that for a very small number of benchmarks like Company of Heroes and Zip compression the cache makes a huge difference and artificially boosts the speed by more than 50%...

I am not trying to drag this out, but 7-Zip (compressing) is multi-threaded as of release 4.42.

Also, Company of Heroes seems to be multi-threaded. While I am unable to find a source to back this up, looking at the review of the E6420 (here), there is only a 2.5% drop in performance when compared to the E6400.

Maybe I am wrong, but that is a 2MB difference, not 1.5MB, and it is nowhere near the 44% difference.

Again I am not meaning to drag this out, but it would seem to still be incorrect.

Scientia from AMDZone said...

enumae

"7-Zip (compressing) is multi-threaded as of release 4.42."

Yes, I assume it is. E4300 is 165% faster than Celeron 440. Perfect scaling due to multi-threading would be 100%.

"Also, Company of Heroes seems to be multi-threaded,"

That was my assumption as well. E4300 is 147% faster than Celeron 440. The maximum gain with perfect scaling due to multi-threading would be 100%.

"while I am unable to find a source to back this up, looking at the review of the E6420 (here), there is only a 2.5% drop in performance when compared to the E6400.

Maybe I am wrong, but that is a 2MB difference, not 1.5MB and it is no where near the 44% difference."


Yes, obviously 2MB makes a big difference over 512K. And, apparently it doesn't need more than 2MB at 2.13GHz.

enumae said...

Scientia
Yes, obviously 2MB makes a big difference over 512K.

You can't simply subtract 100% from the 165%, or the 147%.

Maybe I am confused, but when you apply perfect scaling to the Celeron 440 in those two tests, the gap between the Celeron 440 and the E4300 comes down to about 19% for 7-Zip and about 14% for Company of Heroes.

So it would then seem your statement is still incorrect...

"We can plainly see that for a very small number of benchmarks like Company of Heros and Zip compression the cache makes a huge difference and artificially boosts the speed by more than 50%."

Ho Ho said...

scientia
"If you really want to post here it still requires an apology"

Problem is I still don't understand what I need to apologise for. All I did was express my doubt in your unbiasedness. Many people have done that before, though you haven't bothered to harass them. So please, would you tell me exactly what I should apologise for.

Scientia from AMDZone said...

ho ho

Okay. One more time.

You implied that I wanted to use the PG compiler while knowing that it favored AMD. And, you implied that I did this while claiming impartiality. In other words, you called me a liar.

That was not and still isn't the case. To my knowledge and to anyone with common sense, the PG compiler would not be designed to favor AMD.

No one (and certainly not a boy young enough to be my son) is going to call me a liar on my own blog.

If you are still confused about what to apologize for then you can be confused elsewhere.

abinstein said...

"No one (and certainly not a boy young enough to be my son) is going to call me a liar on my own blog."

Well, it doesn't (have to) have anything to do with age, scientia. IMO, even if he's old enough to be your granddad, he should apologize for doing this on your blog.

enumae said...

Scientia

Any response to my last post?

Scientia from AMDZone said...

enumae

"You can't simply subtract 100% from the 165%, or the 147%."

Because?

"Maybe I am confused, but when you apply perfect scaling to the Celeron 440 in those two test the gap between the Celeron 440 and the E4300 comes down to about 19% for 7-Zip, and about 14% for Company of Heroes."

?? I'm sorry but I don't understand your math.

Conroe-L
Zip - 1051
COH - 46.2

E4300
Zip - 2506
COH - 102.5

2506 * 2.0 / (1051 * 1.8) = 2.649
102.5 * 2.0 / (46.2 * 1.8) = 2.465

"So it would then seem your statement is still incorrect..."

Or, it would seem that you need to brush up on your math skills.
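
Spelled out as a sketch (Python; the scores are the Xbitlabs numbers above, with the Celeron 440 at 2.0GHz and the E4300 at 1.8GHz):

    # Clock-adjusted total throughput of the dual-core E4300 versus the
    # single-core Celeron 440: divide each score by its clock speed.
    scores = {"Zip": (1051, 2506), "COH": (46.2, 102.5)}  # (Celeron 440, E4300)

    for name, (celeron, e4300) in scores.items():
        ratio = (e4300 / 1.8) / (celeron / 2.0)  # same as e4300 * 2.0 / (celeron * 1.8)
        print(f"{name}: {ratio:.3f}x, i.e. {ratio - 1:.0%} faster")
    # Zip: 2.649x (165% faster); COH: 2.465x (147% faster)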

enumae said...

Scientia

I almost feel this is intentional...

Conroe-L - Single Core
Zip - 1051
COH - 46.2

E4300 - Dual Core
Zip - 2506
COH - 102.5

We are doubling only the single core's scores with perfect scaling (100%)... If you want to use 80%, fine, but that is not what I had said.

2506 / (1051 * 2) = 1.192
102.5 / (46.2 * 2) = 1.109
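
As a sketch, the same scores with the per-core baseline (Python; the only change from the calculation above is doubling the single-core score to model perfect two-core scaling, instead of adjusting for clock):

    # Grant the Celeron 440 perfect dual-core scaling (2x its single-core
    # score) and see how much gap is left over for the E4300's extra cache.
    # No clock adjustment here -- stock speeds as tested.
    scores = {"Zip": (1051, 2506), "COH": (46.2, 102.5)}  # (Celeron 440, E4300)

    for name, (celeron, e4300) in scores.items():
        ratio = e4300 / (celeron * 2)
        print(f"{name}: {ratio:.3f}x, i.e. ~{ratio - 1:.0%} gap remaining")
    # Zip: 1.192x (~19% remaining); COH: 1.109x (~11% remaining)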

Or, it would seem that you need to brush up on your math skills.

Really...

Woof Woof said...

enumae, you didn't factor in the clock speed.

Also, you used a different base for the comparison. Scientia used the single core as 100%. You used the dual core as a reference point.

2506 * 2.0 / (1051 * 1.8) = 2.649
102.5 * 2.0 / (46.2 * 1.8) = 2.465

And in fact, I think he is being very generous in attributing a 100% scaling from single to dual core.

Aguia said...

Xbit

The Celeron 440 at 3.0GHz lost almost all tests to the E4300, a 1.8GHz CPU.

And the ones that it won were only by a very few percent.

It seems that this CPU is an easy target for the current Sempron and Athlon 64 line. If AMD gets the 65nm Athlon or Sempron to market it will easily displace it, even for those who are thinking of overclocking.

Intel always had a weak budget line. Celeron = bad. Except in mobile, but even today it is not that good.

Scientia from AMDZone said...

enumae

No, you're right. The per core increase would be more accurate than the total. This is half of the clock adjusted total. This means that the typical boost is only 5%.

enumae said...

Scientia

Sorry for dragging this out.

---------------------------

Aguia

If you are interested in seeing how the Celeron 430 and 440 compare to Sempron and Athlon single cores HKEPC has a review.

bk said...

Scientia
No, you're right. The per core increase would be more accurate than the total. This is half of the clock adjusted total. This means that the typical boost is only 5%.

How do you come to the 5%? I come up with enumae's 19% for Zip, and when you factor in the CPU speed difference it becomes 21%. For COH it's 11% and 12% respectively.

Average would be 16.5% for the two tests.

Scientia from AMDZone said...

This was the quote:

benchmarks like Company of Heroes and Zip compression the cache makes a huge difference and artificially boosts the speed by more than 50%. For Fritz, Pov-ray, and Cinebench the boost is at least 20%. However, for most benchmarks the boost is probably about 10%.

Just divide by 2 to get the per core increase.

Zip and COH 25%
Fritz, Pov-ray, Cinebench 10%
Most benchmarks 5%

James said...

There are a few things in your post that stand out to me that I'd like to address. First of all, you speak of games, compression utilities, and video editing as though they are a benchmark. This really, really isn't the case -- they are real applications. I.e., when I'm playing, I see real fps gains, real faster compression, real shortened encoding times. That's real-world performance.

Now, on my next point, I'm not entirely sure, but I've already put some thought and research into it and it seems to come out solid. With Core 2 Duos, 1MB of L2 cache per core is a necessity. That's because of the 4 IPC. The amount of core logic necessary to execute 4 instructions per clock is an order of magnitude higher than what 3 instructions per clock requires. In order to determine what can and can't be executed in the same clock, all of the instructions have to be sifted through a pretty darn big array of binary gates. This takes time. If you can't cache far enough ahead of time, you don't have all the time you need for executing the more complex core logic, and much of your IPC efficiency is wasted.

Speaking of efficiency, I'd also like to point out that Core 2 Duos are -madly- thermally efficient. There are some problems with your argument, pretty gaping ones. You are comparing their thermal efficiency against... nothing that currently exists. Never before has something like TAT existed (or at least been made publicly available) -- ask around, all current AMD and previous Intel chips have been tested using utilities like Orthos. As far as I'm concerned, that's great. Who needs TAT? My computer literally -cannot- reach the temperatures TAT records no matter what I do to it in the real world. It's a useless tool to me. I'm not even going to get into the 50C vs 55C vs 60C debate here, because, well, electromigration doesn't begin to occur until 61C... 60C is within Intel's thermal specs, so I'm going to damn well use 60C.

...Thought I'd clear that up.