Saturday, October 14, 2006

What Quads May Come

I had a technical look at AMD's K8L in AMD Unveils Barcelona Quad-Core Details. It describes improvements I had not heard about before, several of which should make a difference and boost both K8L's SSE and Integer performance. From this information it appears that 2007 will be much tougher for Intel and that it will lose most, if not all, of the ground it gained with the release of Core 2 Duo.

The K8L architecture seems to have more tweaks than I had first thought. Many of the changes pull AMD up to where Intel is now with Core 2 Duo. For example, Intel has had a dedicated stack unit since Pentium M, and AMD now matches this with its own Sideband Stack Optimizer. Intel has been ahead of AMD in out-of-order execution, and AMD moves closer with enhanced reordering and some out-of-order loading; Intel had already increased the size of its reorder buffer.

AMD now has data-dependent latency on the divide instruction, as Intel already does. AMD greatly improves SSE width and the number of SSE operations, and it doubles both prefetch bandwidth and L1 bus bandwidth. Again, these are changes Intel made in going from Yonah (Core Duo) to C2D. AMD might have a small advantage here due to twin L1 buses rather than just a single wider L1 bus.

It is also no secret that AMD's K8 architecture works better with lower-latency memory like DDR. Intel has been pushing JEDEC toward DDR2 and beyond because its larger cache tends to mask the bad effects of latency, while the higher latency hurts AMD's smaller-cache architecture more. However, AMD is now moving to a split memory controller where each half can act independently. This, along with the larger L3 cache, improved memory scheduling, a better write burst mode, and out-of-order loads, more than makes up for the higher latency of DDR2. We should see a big improvement for AMD with DDR2 and on into DDR3. This again moves AMD up to where Intel is now.

AMD's L3 is also more complex than I had thought. I had expected the L3 to be purely exclusive, but AMD has apparently decided on a hybrid: the L3 is exclusive with data but can be inclusive with instructions. This is a big improvement and would tend to negate most of Intel's advantage from its inclusive cache and its shared L2. At the same time, this multi-mode function allows AMD to retain its current cache size flexibility; it would be able to change the size of both L2 and L3 without a severe penalty in performance. Intel does not have this flexibility. Further, Intel's use of a shared L2 leaves the processor vulnerable to cache thrashing, where one thread evicts the other's lines from the L2 only to have its own lines evicted in turn, in a clock-wasting tug of war.
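
To put the capacity side of that tradeoff in rough numbers, here is a minimal sketch in Python; the cache sizes are invented for illustration and are not AMD's actual figures:

    # Hypothetical sizes, chosen only to illustrate the policy difference.
    L2_KB, L3_KB = 512, 2048

    # Exclusive: a line lives in L2 or L3, never both, so capacities add.
    effective_exclusive_KB = L2_KB + L3_KB
    # Inclusive: every L2 line is duplicated in L3, so L3 bounds unique data.
    effective_inclusive_KB = L3_KB

    print(f"exclusive: up to {effective_exclusive_KB} KB of unique data")  # 2560
    print(f"inclusive: up to {effective_inclusive_KB} KB of unique data")  # 2048
    # The hybrid described above would keep the exclusive capacity win for
    # data while letting instruction lines stay resident in L3 for sharing.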

AMD is expanding the number of FastPath instructions. This is important because, per cycle, the decoders can handle 3 FastPath instructions or 1 complex instruction; the more instructions that are FastPath, the better. This again makes up for some of the improvement that Intel gained by adding a fourth decoder and macro-ops fusion. This is also helped by the doubled prefetch, as Intel has done.
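
As a rough illustration of why the FastPath count matters, here is a toy decode model in Python; the instruction mix is invented, and real decoders are far more complicated:

    def decode_cycles(n_fastpath, n_complex):
        # Per cycle: up to 3 FastPath instructions, or 1 complex (microcoded) one.
        return n_fastpath / 3.0 + n_complex

    before = decode_cycles(n_fastpath=800, n_complex=200)  # fewer FastPath ops
    after = decode_cycles(n_fastpath=900, n_complex=100)   # 100 ops promoted
    print(f"decode cycles for 1000 instructions: {before:.0f} -> {after:.0f}")
    # Promoting 100 of 1000 instructions from complex to FastPath cuts decode
    # time from ~467 to 400 cycles in this toy model, about a 14% gain.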

Overall, AMD moves much closer to Intel's C2D with the changes in K8L. AMD will add to its already greater memory access speed and will match or beat C2D in terms of SSE. However, the changes don't quite add up to C2D's 4-instruction issue and macro-ops fusion. I would guess that AMD will remove about 3/4 of the current IPC gap. This would mean that a K8L at 3.1 GHz would match a C2D at 3.0 GHz. That is much better than today, where it would take a 3.5 GHz K8 to match the same 3.0 GHz C2D.
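
For anyone who wants to check that arithmetic, here is the calculation spelled out (Python, using the clock figures above):

    # Today a 3.5 GHz K8 matches a 3.0 GHz C2D, so C2D's IPC advantage is:
    ipc_gap = 3.5 / 3.0 - 1           # about 16.7%
    remaining_gap = ipc_gap * 0.25    # if K8L closes roughly 3/4 of the gap
    k8l_clock_needed = 3.0 * (1 + remaining_gap)
    print(f"current IPC gap: {ipc_gap:.1%}")                              # 16.7%
    print(f"K8L clock to match 3.0 GHz C2D: {k8l_clock_needed:.2f} GHz")  # 3.13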

I am very curious to see how the testing will go when K8L is released as Antares for the desktop in Q3 07. There are so many improvements that move K8L closer to C2D that it will be difficult to bias the testing in Intel's favor. I'm guessing that when Intel is faced with the threat of K8L and the imminent loss of its performance advantages, it will be sure to rush the new 45nm chips into reviewers' hands. Even these new chips, though, will probably not be enough to regain the C2D lead. I'm guessing that review sites like Tom's Hardware Guide and Anandtech will fall back on the classic cheat of comparing an overclocked Intel chip against a stock AMD chip. I can always hope that, if the loss of Intel's lead becomes unavoidable, THG and Anandtech will even do some real testing with the second core (or the other three cores on quads) loaded properly. They might even do some testing that actually stresses memory bandwidth. Right now, with the C2D lead, I think it is just too tempting for them to push the tests in Intel's favor and avoid the kind of loaded testing that would put C2D on a more even footing. It is generally expected that C2D will bog down under load more than K8. I'm still wondering how the review sites will avoid testing Barcelona (the K8L quad-core Opteron) against Clovertown, since Clovertown is expected to fare pretty badly in such a test. I guess we will see.

37 comments:

Erlindo said...

Great post, but I'm kinda worried about what you've said: that K8L needs one more speed grade to match Core 2. With Intel announcing 3.7GHz quad-core processors, I don't know how AMD will clock its processors higher.

So, is this overall performance, or are you weighting it due to the lack of integer power?

Also, I'd like to know your take on AMD's APM and Intel's Copy Exactly.

Pop Catalin Sever said...

Even though C2D has 4 decoders, 3 are simple decoders and 1 is complex, while K8 and K8L have 3 complex decoders. Also, average IPC for real-life applications is around 2.5, even for Conroe, which has 20-25% more IPC than K8. If AMD manages an IPC increase relative to Intel, they just might beat them on average IPC using only 3 decoders. Also, for the first time in history, AMD has announced that they can produce better transistors than Intel, which they will integrate using the second iteration of CTI on 65nm.

Also, Conroe is nothing more than a marketing weapon for Intel, because only the high-end Conroe processors are superior to AMD from a price/performance point of view, and they are also not very cost-effective due to the large caches. Intel really needs some other kind of core to cover the low-end and mainstream markets and gain some cash from them as well.

Scientia from AMDZone said...

No, that's my estimate for Integer. I figure K8L will match or slightly exceed C2D with SSE. Integer performance is general processing power. Most applications don't use SSE.

Well, as far as I can tell, APM is better than Copy Exactly. This should mean that AMD ramps as quickly as possible and produces a good yield. Copy Exactly slows Intel down because it takes two months to move a process to another FAB. Unfortunately, AMD doesn't really get the benefit of this until FAB 38 begins production.

Scientia from AMDZone said...

Do you have a link to AMD's announcement of better transistors? I agree that CTI will give better components as we continue through 2007.

Right now, Intel's larger cache isn't hurting them because of the smaller cell size at 65nm. It isn't clear yet whether AMD will even be able to match Intel's die size at 65nm. Personally, I would suggest that AMD switch to TTRAM for its L1 and L2 caches, and maybe Z-RAM for L3. This is probably the only way that AMD will be able to achieve a competitive cache cell size.

Erlindo said...

No, that's my estimate for Integer

That's what I thought.

Still, we have to wait and see the final product; maybe AMD will surprise us with something untold about the K8L architecture.

Azary Omega said...

I would guess that AMD will remove about 3/4 of the current IPC gap. This would mean that a K8L at 3.1 GHz would match a C2D at 3.0 GHz.....

I disagree. I think going from 3 to 4 decoders will add 15-20% in performance. But the real gravy is in the SSE and L1 bus enhancements; remember, games and compression/decompression can benefit greatly from them. So in my view the K8L (which is not its real name) ratio vs. Conroe would be more like 2.6GHz (K9) = 3.0GHz (Conroe).

Pop Catalin Sever said...

Here's some info on the new AMD transistors. I lost my original link in English, but they also conclude that we will see a 50% improvement in transistor speed compared to 90nm:
http://translate.google.com/translate?u=http%3A%2F%2Fjournal.mycom.co.jp%2Farticles%2F2005%2F12%2F07%2Fiedm4%2F&langpair=ja%7Cen&hl=en&ie=UTF8

Anonymous said...

Not meaning to nitpick, but I see the same regurgitated article across several Ziff Davis publications, so I think it's a bit unnecessary to give credit to ExtremeTech.

Wee, old hardware review site conspiracy:) Even if they do bias it in a way to make Intel look better than they should, most people should be able to just comprehend the non-overclocked results, while those interested in overclocking will also have a data set.

What about their testing makes their loading of the second/third core improper? Don't the results with multi-threaded apps show that? Please remind us of some apps that stress memory bandwidth. We see that latency hurts AMD, but not necessarily bandwidth for Intel.

http://news.com.com/2061-10791_3-6120470.html
Barcelona, if it arrives on time, should be competing with Clovertown for a few months, but Harpertown will come soon after.

"It is generally expected that C2D will bog down under load more than K8."
Explain.

Anonymous said...

"Does anyone know when the Q3 reports will be released?"

I wish you allowed anonymous posting so I would not have to look like a fool posting short answers to short questions;) And I wish AMDZone would allow discussion with an opposing viewpoint;)

Intel reports Tuesday afternoon, AMD reports Wednesday afternoon.

ashenman said...

Thank you, red, for failing to post the portion of that post on 180 that actually mattered. Ya, I dislike Intel's business practices, but I think the Core 2 is an awesome processor, and I would dearly love to have one in my rig.

To your question about the core loading: seeing as neither review site actually said what they were using, no one has any clue what they even thought they were doing, much less what they really were doing, to load down the cores. Okay, VR-Zone said they were using 3DMark06 or something like that, but the xMarkxx programs weren't meant to stress CPUs, only to measure throughput performance in a specific way. Loading the processor down with something it's not meant to handle (like running Prime95 instances at 2x the number of cores on the processor) should show some real thermals. What Scientia meant is that 3DMark06 is at best a dual-threaded application (since scalable multi-threaded applications are only a dream at the moment), so at best they're loading down 2 cores (sort of) and leaving 2 idle. Not only that, but the load on those two cores is one that probably isn't going to generate a lot of heat. Ya, they're comparing it to a Core 2, but the Kentsfield has 2 more cores to put unrelated processes on that would normally add significantly more stress to cores that were already in use.
Also, seeing as they were using a 1kW power supply, the efficiency curve is very steep at loads that much smaller than the supply's capacity, leading to the Kentsfield system looking much more efficient, percentage-wise, than the Core 2 system. (Power supply efficiency follows a sort of bell curve, with the theoretical maximum somewhere around 2/3 of the supply's capacity and a steep slope at the very beginning of the curve.)
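
To put rough numbers on that curve, here is a toy sketch in Python; the curve shape, peak efficiency, and system loads are all invented, but the distortion works the same way:

    def efficiency(load_w, capacity_w=1000.0, peak=0.85):
        # Crude parabola peaking near 2/3 of capacity, steep at low loads.
        x = load_w / capacity_w
        return peak - 0.9 * (x - 2.0 / 3.0) ** 2

    # Hypothetical DC draws for a dual-core vs. a quad-core system.
    for load in (150, 250):
        eff = efficiency(load)
        print(f"{load} W DC -> {eff:.1%} efficient -> {load / eff:.0f} W at the wall")
    # On a 1 kW supply, the lighter-loaded system sits lower on the steep part
    # of the curve, so wall-socket measurements skew the comparison.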

Scientia from AMDZone said...

Even if they do bias it in a way to make Intel look better than they should, most people should be able to just comprehend the non-overclocked results, while those interested in overclocking will also have a data set.

Curious, then why don't they ever put overclocked AMD chips up against stock Intel chips? The problem is that once a review site has shown intentional bias it is reasonable to assume that their bias extends to things that they don't publish.

What about their testing makes their loading of the second/third core improper? Don't the results with multi-threaded apps show that?

No. There are different kinds of stress. For example, running two copies of SuperPi will stress the processor in terms of SSE function; however, it will not stress the processor in terms of Integer, branching, cache, or memory bandwidth. Tom spoke about the different kinds of stress tests and his surprise at discovering that Prime95 didn't stress the processor as he had expected. He talked about this five years ago. So, either the staff of THG has been exchanged for a bunch of incompetent boobs, or THG is knowingly skewing the results.
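
To illustrate the difference, here is a minimal sketch in Python with NumPy; these are not the tests THG ran, just two loops that stress very different parts of the machine:

    import numpy as np
    import time

    def timed(f):
        t0 = time.time(); f(); return time.time() - t0

    small = np.random.rand(1000)       # fits entirely in cache
    a = np.random.rand(20_000_000)     # ~160 MB, far larger than any cache
    b = np.random.rand(20_000_000)

    compute_bound = lambda: [np.sin(small).sum() for _ in range(20000)]
    bandwidth_bound = lambda: a + 2.0 * b   # STREAM-triad-style streaming access

    print("compute-bound loop:  ", timed(compute_bound))
    print("bandwidth-bound loop:", timed(bandwidth_bound))
    # Two copies of the first kind run side by side with little interference;
    # two copies of the second kind fight over the memory bus. "Loading the
    # cores" with only the first kind tells you nothing about memory stress.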

Please remind us of some apps that stress memory bandwidth. We see that latency hurts AMD, but not necessarily bandwidth for Intel.

Even in the THG results, DivX 6.22 shows bandwidth problems. They managed to keep this down to just two results by including a lot of meaningless tests.

Barcelona, if arrives on time, should be competing with Clovertown for a few months, but Harpertown will come soon after.

If Harpertown is true quad core then it should be a bit better. I'm not sure this will be enough to make up for the Bensley platform through 2009 though.

"It is generally expected that C2D will bog down under load more than K8."
Explain.


Generally, servers are tested more thoroughly than desktop processors. K8's have always stood up well under load. P4's did not. What I've been hearing lately is that C2D doesn't either. Just don't hold your breath waiting for this on THG or Anandtech.

Scientia from AMDZone said...

I disagree. I think going from 3 to 4 decoders will add 15-20% in performance.

AMD is not going to 4 decoders. K8L will only have 3.

There is no doubt that K8L will get a boost in SSE. That is why I estimate that AMD will catch Intel in terms of SSE. It is possible that AMD could pull somewhat ahead.

Anonymous said...

Opinions can change over the course of years;) A real paper launch would be the X2 5000+, launched solely for benchmarketing purposes, with a supposed good price of $300. Oops.

Bapco? Who cares about SYSMark anyhows:)

I haven't had experience with FireWire...
http://www.pcstats.com/articleview.cfm?articleid=1104&page=2
But that could be a reason for its failure. My camcorder with USB isn't exactly a slouch when uploading to my PC.


I've had an SiS system that sucked. SiS > Via, therefore Via is total suckage that doesn't matter.

Was not the chipset shortage a result of one of the Intel guys suggesting a scale back of production ahead of unexpected demand?

I'm not aware of those other situations so no comment. Unless you could linkify them:D

As some would call it, your anti-Intel wind is obvious even if you claim to be neutral...
http://scientiasblog.blogspot.com/2006/09/idf-amd-and-intel-in-perspective.html
"I don't tend to give either AMD or Intel credit just because of their name."

http://blogs.intel.com/it/2006/10/read_ye_and_be_advised_of_the.html
I don't exactly see anything about that post that shows that he is any more open-minded than the 'tyrannical' Intel. Perhaps he's considered a radical because of his 'suicidal career behavior' and 'disrespectful oddballness'.

Why would it matter if their bias shows up in things they don't publish? What does it matter what they don't publish?

Can you give us any situations where K8's memory strength shines? If Core 2 Duo with its inferior memory bandwidth beats X2 in DivX... perhaps DivX was not optimized for 2+ cores? I still believe that anything that is irrelevant does not matter, and I will not try to figure out whether DivX will work on my DVD system. So screw those results:) Those other encoding apps showed no problems.

Bensley will supposedly live on to 2009, but there will be a new core refresh of it before then. 2006 Clovertown, 2007 Harpertown, 2008 Harpertown Nehalemized.

Your hearsay on Core 2 was not exactly the explanation I was looking for..

Scientia from AMDZone said...

Yes, I read the article about the transistors. I don't think a 50% improvement in transistor speed translates to a 50% improvement in clock. However, I would expect that it will allow a good boost over 2.9GHz.

Scientia from AMDZone said...

Bapco? Who cares about SYSMark anyhows:)

Once Intel has shown its willingness to alter benchmarks in its favor, this leaves the question of whether this has happened with any other benchmark.

therefore Via is total suckage that doesn't matter.

Which does not change the fact that it was only VIA's unwillingness to pay an FSB license fee that changed their status with Intel.

Was not the chipset shortage a result of one of the Intel guys suggesting a scale back of production ahead of unexpected demand?

Apparently you aren't familiar with Intel's on-again, off-again approach to partners. They waver back and forth between needing partners and wanting the entire market for themselves.

As some would call it, your anti-Intel wind is obvious even if you claim to be neutral...

Give me an example. What makes your statement so absurd is that in this same series of comments I've now been opposed twice to more optimistic statements about AMD. So, show me where I've been biased and then show me where the people who say so have been opposed in the same way to optimism about Intel.

Why would it matter if their bias shows up in things they don't publish? What does it matter what they don't publish?

Well, because they may be dropping tests that show Intel processors more negatively. If your doctor did this you would say that he was not fit to practice medicine. If a scientist did this you would say that his research was faked. If a prosecutor or defense attorney did this it would be grounds for disbarment. I'm puzzled why you seem to think that it would be acceptable for a review site to do it.

Can you give us any situations where K8's memory strength shines? If Core 2 Duo with its inferior memory bandwidth beats X2 in DivX... perhaps DivX was not optimized for 2+ cores?

I've already talked about this in the last article's comments. The way that THG listed the "benchmarks" you could not tell whether a given test was thread limited or whether it was exhibiting memory stalling. Interestingly, THG had done an earlier series of tests that they somehow "forgot" to include in the Kentsfield review. When you compare the earlier tests with the Kentsfield tests you can indeed tell which benchmarks are thread limited and therefore are useless in the Kentsfield review. That this was done is clearly either a sign of incompetence by THG or a knowing attempt to defraud the readers. There simply is no other choice.

Bensley will supposedly live on to 2009, but there will be a new core refresh

I was only referring to the chipset. AMD's northbridge will be upgraded in 2008 with DC 2.0. I'm wondering if Intel will be able to compete if it doesn't upgrade its chipset.

Your hearsay on Core 2 was not exactly the explanation I was looking for..

I guess you'll have to wait until they post it on THG.

Scientia from AMDZone said...

I disallowed anonymous posting when I turned off post moderation. The comments are posted immediately now.

Anonymous said...

What would it matter if Intel optimized those other Marks to their favor? Scandalous business practices:)
http://www.amd.com/us-en/Weblets/0,,7832_8366_7595~110132,00.html
Look AMD does it too!

I wouldn't want random people making money off of my infrastructure and me not getting anything out of it either.

Nope, I'm not aware of their on-again, off-again approach. Please enlighten me with some dates and events:) If given the chance, don't you think any company would want to sit fat, dumb, and happy?


Intel is evil and trying to squash competitors! Screw Intel for not trying to optimize their solution instead of adopting their rival's technology! Intel does not show CSI at IDF, let's assume it's screwed and cause mass AMD fanboi hysteria:) Let's write a huge rant to Intel. I bet they won't post it since they're so tyrannical.

It seems they tried to get Kentsfield results out as quickly as possible, before IDF and before other sites, for 'exclusive' rights. It would not have made sense to use their usual benchmark set. They also included thread-limited benches in their preview... So what's up with the conspiracy of only using apps that showed good results? And how could they have listed it so that you could tell whether it was 'thread limited' or 'memory stalling'? Why would DivX be memory starved while all those other encoders did fine? Seems like a threading problem.

Oakley is coming in Q307 with 45 nm. Nehalem in 2008 will supposedly bring IMC+CSI and therefore a new chipset.

I would like to know where your hearsay came from btw:)

ashenman said...

I'd like to say that I also visit the Intel blog regularly, because when he started talking about working at the fabs, I thought he would actually talk about the industry instead of the way internal business politics at Intel work. I did post a response to his first post, though, and it was blocked. All I said was that he'd have heavy traffic from AMD and Intel fanboys now that it was linked on HardOCP, and I then proceeded to say that I hoped he was really out to give us an inside look instead of just office jokes and hyping. I reposted with just the first part, and apparently that's more acceptable?
What Scientia is saying is that the Bapco benchmarks, or any benchmarks that are designed by a company whose products are being benchmarked by them, are pretty worthless. So any results gained by them are equally worthless, and thus grounds for discrediting in the hardware review realm.
Your assumption that Intel's new chips will come exactly when they say they will shows you're a fanboy. I don't assume AMD will release everything it says it will when it says it will. It's more likely that they will, because in recent history that's what's happened.
His assumption that Intel is behind on CSI because they didn't mention it at IDF makes way more sense than assuming it's doing just fine. Any company as large as Intel has to make a good showing, especially at its own developer conference. They need to get developers excited about technology that will be coming within the next couple of years, so they can start investing their resources in developing for it now. They shouldn't show technologies that are way far off, because then people will invest in developing for them, and when they become vaporware, they lose faith and reliance in Intel. If Intel releases something too soon after failing to mention it at a developer conference, no one has time to invest in it effectively, so they have to throw money at becoming compatible and, overall, are even more pissed than if it hadn't come around at all.
I don't understand why it wouldn't make sense to use their usual benchmark set if they were in a hurry. It's not like loading up the programs you're most used to and practiced with would take longer than the ones you're less used to. I'm less apt to cry wolf, but at the least THG is very unprofessional about how they conduct their benchmarks and write their articles. Some of the conclusions they come to are not only erroneous in and of themselves, but use erroneous data, such as Intel's wattage figures and so forth. If you actually look at it, it's all too convenient. But all I'll say is that it's very convenient, and not for me. So I won't use their data or trust in many of their results.

Pop Catalin Sever said...

"Yes, I read the article about the transistors. I don't think a 50% improvement in transistor speed translates to a 50% improvement in clock."

Actually, even Intel's current 65nm transistors allow higher frequencies than current clocks, but at the cost of higher TDP, as many overclocking tests have shown. And this means that AMD won't be able to beat Intel power-wise and performance-wise at the same time, having lower IPC than Intel but higher frequencies, unless their transistors are not only faster but also more efficient (less leakage).

Intel seems to have learned its Netburst lesson very well, because they've put a lot of work into raising IPC even though their transistors allow for higher frequencies. And they also targeted lower TDP ratings than AMD's from the ground up. Also, AMD didn't design K8L with a drastic IPC increase over K8 in mind; they relied on the transistor speed increase, focusing instead on solving the DDR2 latency problems and current K8 bottlenecks, plus adjusting the pipeline for more memory throughput in general. To re-enter the efficiency race I think AMD needs to do some more tweaking to their cores, possibly adding one more decoder and macro-ops fusion (they have had micro-ops fusion since K7).

Anonymous said...

Scientia's latest response concerns whether or not Intel rigs... gaming and encoding benchmarks? He has not publicly invalidated Bapco.

I will assume that Penryn comes on time, since it would match up with HKEPC, Intel executives, and Bearlake's arrival. I also assume that AMD should have Barcelona ready by Q2, since they seem set on that date and have shown us at least a wafer:)
How can you say you don't assume AMD will release what it will release yet say that it is likely that they will?

CSI has had its 2008/2009 date set for a while, and Scientia's conclusion based on its no-show at IDF is that... it will arrive 2008/2009. They also had 'CSI Partner Day' to inform folks that need to know about it.
How did Intel's visionary showing of photonics and teraflops affect developers? That was showing the world what their hardware was possibly capable of.

You don't have to trust THG; there are many other sites out there. And if all else fails... AMD rul3z!

Scientia from AMDZone said...

What would it matter if Intel optimized those other Marks to their favor?

I don't know. Why would it matter if a gas station changed the metering of its gas pumps? Why would it matter if a deli changed the calibration of its scales? Why can't you say that there are 16 ounces in a can if there are only 12? Why do MPG estimates and HP estimates have to be accurate? Why can't air conditioning and furnace manufacturers lie about energy efficiency?

Scandalous business practices:)
http://www.amd.com/us-en/Weblets/0,,7832_8366_7595~110132,00.html
Look AMD does it too!


You've lost me with this one. Exactly how is an announcement about 4X4 scandalous?

I wouldn't want random people making money off of my infrastructure and me not getting anything out of it either.

Intel makes money the same way as every other supplier makes money: by selling products to customers.

Nope, I'm not aware of their on-again, off-again approach.

You are joking, right?

So what's up with the conspiracy of only using apps that showed good results?

I thought you said it didn't matter what they didn't publish.

And how could they have listed it so that you could tell whether it was 'thread limited' or 'memory stalling'?

It wouldn't have been difficult.

Why would DivX be memory starved while all those other encoders did fine? Seems like a threading problem.

No. I've already told you that it wasn't a threading problem. Most of the tests were thread limited but not DivX.

Nehalem in 2008 will supposedly bring IMC+CSI and therefore a new chipset.

It does not appear that CSI will show up on an Intel X86 processor any sooner than 2009.

Scientia from AMDZone said...

The Bapco scandal was common knowledge. I don't know how Bapco rates today.

I'm certain that 45nm die shrinks of Intel processors will begin trickling out of D1D just as soon as possible; however, this is not the same as real production. Real production will require one of the new 45nm FABs, and this will be two months later at the earliest.

The TeraFlop chip was 5 years out at the earliest, and the photonics were indefinite. Perhaps it should be called the Intel Dreamers Forum.

It made no sense at all that CSI was not mentioned in combination with the related announcements. The most optimistic estimate for CSI that I've heard is end of 2008 for Itanium. I've seen no indication that CSI will arrive any earlier than 2009 for Intel X86.

I think I've already shown that neither THG nor Anandtech have any credibility left.

Anonymous said...

Why does it matter if Intel optimizes the Marks?
http://www.amd.com/us-en/Weblets/0,,7832_8366_7595~110132,00.html
Look, AMD is cheating; they're optimizing for their platform rather than Intel's?... That scandalous comment was sarcastic, btw.

Yep, they sell to consumers. They also get money from licensing. Say you were Nintendo: these companies decide to sell controllers that undercut your own, and you get nothing out of it, yet it was you who set up the infrastructure for them to plug into.


It doesn't matter what they don't publish... In this situation, they were time-constrained. They don't exactly run a full test suite when previewing previous chips either. But why are you insisting that they only published 'good' results when they also included mediocre ones?

No, not joking, and that was not a necessary response btw:) Since you won't bother, I'll just assume they never happened.

If it's not difficult... explain. I'm not liking your deflecting recently.

Excuse me then, I seem to have forgotten, but why do the other encoders not choke on the very inferior memory bandwidth but DivX 'does' [still 26% gain]?

http://www.aceshardware.com/forums/read_post.jsp?id=115161830&forumid=1
http://www.theinquirer.net/default.aspx?article=34779
"..CSI, the second of our three horsemen, comes out on X86 in Q4 2008"
Q408/Q109, same thing;)
Just like AMD is rushing 65 nm for an 06 launch, Intel looks to be doing the same with CSI in 08.

Yes, you've reminded us of your non-love of THG and Anandtech many times already;)

ashenman said...

How does some guy named Charlie on some forum somewhere constitute solid evidence that CSI will be released in 08-09? How does the Inquirer making reference to the fact that Intel has stated elsewhere that CSI will be released in 08-09 constitute solid evidence of that?
Neither of these are people that I know of as Intel engineers, and neither actually said anything about Intel reaffirming an 08 release date recently.

"How can you say you don't assume AMD will release what it will release yet say that it is likely that they will?"

Well, let's start with the first piece of logic that allows me to differentiate between the two statements. "Assume" and "believe" are different words, so we can guesstimate that they're likely to have different meanings. In this particular example, "believe" means I have some amount, though not necessarily a complete amount, of faith in the product release date. "Assume" means it's not even an issue of faith; it's a fact I know and am counting on. I guess I really don't need other arguments, because this is getting too long, but if you really want me to get into an argument about the meaning of words and their logical applications, maybe I'll e-mail you. Beyond that, I'm referring to the products that are a year or more off; I still don't assume anything more than 4 months away is going to happen exactly when they say it will.

On the subject of licensing: requiring a license to make a compatible product for your own product, or a competing product that uses aspects of your product, is like having tariffs. There are times when it makes sense, and there are times when it doesn't. Your example of Nintendo controllers is a poor one, as there is a direct link between the quality of a console's peripherals and the sales of said console. When you assume, like Intel, that licensing an interface increases direct competition with your own products that use that interface, you show how unilaterally you approach the subject. There are many levels on which chipsets can be better: some are more power-efficient, some have better performance, some have better I/O options, and some are more configurable. AMD realizes that it is neither profitable nor practical to try to compete with all the chipset manufacturers on all of those levels, and thus allows others to build for it without requiring significant investment. Maybe it's different for Intel, but their chipsets don't seem to demonstrate that. Although they are very good chipsets, and they have a wide variety that can address many of these issues, the shortage of the 800-range chipsets that may or may not have caused the sudden decrease in demand for Intel processors was an error that stemmed from this. The quality and quantity of their server chipsets also demonstrates this.

AMD has had 65 nm chugging for a while, and has had a fab that they've been letting sit as mostly storage for a good part of the year that could have been converted to 65 nm, and thus made an early and easy 65 nm ramp.
Regardless of that, Intel doesn't go quiet when it rushes something, because it can't afford to. Intel makes too much of too many products that will be using the CSI standard to be quiet about it at a developers forum. Not only that, but Intel is too large a company to rely on regaining face through demonstration of ability. They have to hype it because they have the resources for it, and because AMD makes them look stupid when they outmaneuver them quietly.
Yes, there are other review sites, and I (as well as Scientia, I hope) read them. HardOCP is an excellent one. People make fun of HardOCP because it uses really expensive hardware when it tests really expensive hardware. But that's because they're too stupid to realize that balance in a system matters more than how good the individual parts in it sound.

You're not getting Scientia's point, even when he calls it the Bapco scandal instead of the Intel scandal. His point is not that Intel is evil because it compiled and designed most if not all of Bapco's benchmarks, but that those benchmarks are worthless and discredit review sites that use them, like I just said above. If a website used benchmarks that AMD compiled or wrote, then I'd say they had little credibility as well. It doesn't matter which way you sway it; you're still giving inaccurate benches.

Anonymous said...

http://amdzone.net/index.php?name=PNphpBB2&file=viewtopic&t=10079&highlight=dimm&sid=36d4ebf82eba984ae25daa90eabfc632
Since Scientia seems not to mind theinquirer.net's validity there, I thought that another inquirer link would be fair data.

I will not try and argue grammar semantics here..
http://techreport.com/onearticle.x/4685
http://www.geek.com/news/geeknews/2006Oct/bch20061003039092.htm
http://techreport.com/onearticle.x/10111
What in recent history makes you think that it's likely that AMD will be on time?

So far, it seems that only Via was too poor (or too stubborn) to pay for a license from Intel. If they can milk others for what it's worth, why not? And I thought that the chipset shortage was a result of one of the Intel guys demanding scaled-back production when there was more supply than demand, not because Intel couldn't make enough on its own.

I don't think we saw Core 2 two years ago, and the situation looks the same for CSI. And for those that need to know about it that far in advance...
http://www.theregister.co.uk/2006/09/21/intel_open_chips/

I do not like [H] because their website sucks..
http://enthusiast.hardocp.com/article.html?art=MTEwOCwxLCxoZW50aHVzaWFzdA==
While he rambles on about how other sites don't show real world performance..
http://enthusiast.hardocp.com/article.html?art=MTA2NSw3LCxoZW50aHVzaWFzdA==
Look, 2 months earlier with AM2..
"As always, these benchmarks in no way represent real-world gameplay. They are all run at very low resolutions to try our best to remove the video card as a bottleneck."
And in the review before that, which is 16 months old?
http://enthusiast.hardocp.com/article.html?art=Nzg3LDQsLDMw
"As usual, we use low-resolution benchmarks to take the video card as far out of the equation as possible. Do not think that the below benchmarks represent a true gaming experience."
Ok, switch to 'real world' benchmarking when it makes Intel look not so hot:)


Scientia has said he does not know how Bapco rates today.. I personally am not a fan of the Marks though.

ashenman said...

Yet Kyle's conclusion is that you won't really see a difference as an end user, so staying with a DDR-based system makes sense, but skipping DDR2 if you're already upgrading to that extent doesn't make sense, because of compatibility and because it does show some increase in performance, even if it's not very noticeable.

HardOCP is more of a "get pissed off at the way everyone is doing it, so go with something that shows something different" sort of site. They did the "Core 2 doesn't affect gameplay" article because everyone was saying the Core 2 completely changed their performance. They did the AM2 one because everyone was saying that AM2 was slower than 939. This is actually a good example of why they're a good site, because they show things in a different light than everyone else was at the time, yet still reach a conclusion that makes sense. Ya, they should've had high-resolution, high-settings benchmarks in there to at least show the difference in what you should really expect out of gaming, but their conclusion wouldn't have changed, because they addressed that in their conclusion.

Of course Core 2 wasn't there 2 years ago, because Intel didn't know it needed to fix Netburst 2 years ago. Core 2 was a last-minute fix that used other technology they already had. It worked really well, and is probably the most elegant last-minute fix I've ever seen, but that's still exactly what it is. CSI, on the other hand, is something they've never worked with in any way, shape, or form, and the industry knows that.

Ya, Via was too poor to buy into the licensing scheme, because they have their own very profitable technology to worry about. Although Via might not have been that big a player in terms of Intel chipsets as of late, they've been a lot bigger, and they've had much more on their budget lately. I think this is a bad sign for Intel if Via still does well without them.

I wasn't saying that having too many chipset options makes Intel capacity constrained, I was saying that it makes managing which chipsets to make that much harder, which you proved in your counter point.

The Socket F delay was one of the reasons I said I wouldn't assume AMD would be on time. I did enjoy the history lesson on the release of the Hammer, and I will have to consider that as part of how much I trust AMD's roadmaps as well. However, I still trust them more, since they never said they'd get 4, 5, or even 10GHz out of their chips. There are tons of broken promises from Intel that I'm sure Scientia would be very capable of listing for us, if he felt like having that on his blog. I'm cutting this one off here, because my last one is way too long, and I really need to get to work on my engineering homework.

Azary Omega said...

AMD is not going to 4 decoders. K8L will only have 3.

Okay. You got the pic of that quad thingy - show me those 3 you're talking about. Hehe.

Scientia from AMDZone said...

I think I've explained several times now why reliable benchmarks are important and why it is important for review sites to be unbiased.

I think I've also shown several times that neither THG nor Anandtech have any credibility. This is why it makes no sense to draw general conclusions based on THG's testing.

If you truly believe that THG and Anandtech are not biased then you are welcome to do an article demonstrating this. If I have been wrong in my assessment then I would certainly welcome a proper demonstration of the facts.

http://www.amd.com/us-en/Weblets/0,,7832_8366_7595~110132,00.html
Look, AMD is cheating, they're optimizing for their platform rather than Intel?..That scandalous comment was sarcastic btw.


The link you gave says nothing about optimizing for AMD hardware. All it actually says is that their games are becoming multi-threaded to take advantage of more cores. This would presumably include Intel.

http://www.aceshardware.com/forums/read_post.jsp?id=115161830&forumid=1
http://www.theinquirer.net/default.aspx?article=34779
"..CSI, the second of our three horsemen, comes out on X86 in Q4 2008"


Your source is Charlie, who has been wrong more times than I can remember. This is the same person who insisted that Intel was going to have a great new processor in January 2006. It didn't happen.

Scientia from AMDZone said...

Excuse me then, I seem to have forgotten, but why do the other encoders not choke on the very inferior memory bandwidth but DivX 'does' [still 26% gain]?

At Hot Hardware:

Core 2 Extreme QX6700 Quad-Core Kentsfield Performance Preview

Pov-Ray - 99.4% scaling
3ds Max - 79.8% scaling
Sony Vegas - 64.4% scaling
DivX - 55.1% scaling

Pov-Ray shows perfect scaling because it is FP limited. The FP calculations take long enough that memory is able to keep up. However, the other three show reduced scaling due to the FSB bottleneck.
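
A toy model shows the pattern; the compute/memory split for each workload is invented here, and only the shape of the result matters:

    def scaling_pct(compute_frac, cores=4, base=2):
        # Runtime = parallel compute + a memory phase serialized on the shared FSB.
        t = lambda n: compute_frac / n + (1 - compute_frac)
        return 100.0 * (t(base) / t(cores)) / (cores / base)

    print(f"mostly FP compute (Pov-Ray-like): {scaling_pct(0.99):.1f}%")  # ~98%
    print(f"half memory-bound: {scaling_pct(0.50):.1f}%")                 # 60%
    # The more time a workload spends waiting on the shared bus, the further
    # dual-to-quad scaling falls below the ideal 100%.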

Scientia from AMDZone said...

Okay. You got the pic of that quad thingy - show me those 3 you talking about. Hehe.

AMD has just released some information on K8L. There was no mention of any additional instruction issue. If you've been looking at the pictures of K8L then you've been looking at the decoder ROMs rather than the decoders themselves.

Scientia from AMDZone said...

AMD has had 65 nm chugging for a while, and has had a fab that they've been letting sit as mostly storage for a good part of the year that could have been converted to 65 nm, and thus made an early and easy 65 nm ramp.

What FAB are you referring to? I'm not aware of any space that has been left as storage. I'm also not aware of any FAB space that could have been converted more quickly.

Scientia from AMDZone said...

http://amdzone.net/index.php?name=PNphpBB2&file=viewtopic&t=10079&highlight=dimm&sid=36d4ebf82eba984ae25daa90eabfc632
Since Scientia seems not to mind theinquirer.net's validity there


Actually, I had already seen JEDEC's contingency plan for the non-adoption of FBDIMM. And the INQ story was consistent with everything else about FBDIMM, such as the problems of latency and power draw.

However, when the INQ says things that are not consistent, such as when Charlie claimed that Intel would have a great new processor in January 2006, I disagree with them. You should be able to understand the difference.

What in recent history makes you think that it's likely that AMD will be on time?

The last time AMD was late was with the release of K8 in 2003; they have been on time since then. In contrast, Intel has not.

If they can milk others for what it's worth, why not?

Because that is not how you treat a partner. This attitude has pushed various companies that were strongly Intel towards AMD. I guess there is nothing wrong with this attitude if you don't want partners or customers. However, you should also understand that low licensing fees are what have given HT a boost.

Your assumptions about Intel's chipsets are simply incorrect. At the time, Intel was still making chipsets in one of its older 200mm FABs and was capacity constrained. Since then, Intel has moved chipset production to one of its 300mm FABs.

I don't think we saw Core 2 two years ago, and the situation looks the same for CSI. And for those that need to know about it that far in advance...

I have no idea why you are comparing CSI with Core 2 Duo.

However, the correct comparison is that AMD announced Lightning Data Transport in October of 1999. AMD then announced the change of the LDT name to HyperTransport, and a partnership with 100 companies, in February 2001. The HyperTransport Consortium was announced in July 2001. I assume you are also not aware that, even though the physical connection was different, AMD actually used the cache-coherent HyperTransport protocol with the Athlon MP.

Ok, switch to 'real world' benchmarking when it makes Intel look not so hot:)

You should be able to understand that benchmarking the GPU tells you nothing about the processor.

It makes no difference whether a company publishes phoney benchmarking results or alters a benchmarking program so that others can unknowingly publish faked results for them. This is still the same level of deception.

Scientia from AMDZone said...

The Socket F delay was one of the reasons I said I wouldn't assume

This wasn't actually a delay. AMD pushed the release date back 3 weeks to better match up with Intel's release dates.

AMD had genuine delays in 2002 when they switched to the 130nm process. AMD also had trouble with 130nm SOI. They had been partnering with Motorola and then tried to partner with UMC, but they didn't get what they needed in terms of process technology until they purchased technology from IBM.

Anand tried to suggest that Clawhammer was further delayed because he had assumed that K8 would be released on the desktop first. He was quite bitter about this, although I have never been able to find any information from AMD suggesting that K8 would be released on the desktop first. As far as I can tell, his only reason for assuming this was a comparison with Intel, whose normal pattern was desktop first and then server. He might also have assumed it because the Athlon came before the Athlon MP. However, since the Athlon MP was AMD's first server processor and could not have been released without the special 760MP dual-bus chipset, it is difficult to draw generalities from this.

ashenman said...

I was referring to fab 36, which AMD has said they've been testing 65nm production on, and which they've also said they've been using part of for storage. I'm guessing that they've been ramping it slowly because they want to decrease infrastructure spending and decided it was best to correlate the finish of its ramp with K8L production. However, I'm probably wrong, since I don't remember specifically where they said any of that.

Scientia from AMDZone said...

I was referring to fab 36, which AMD has said they've been testing 65nm production on, and which they've also said they've been using part of for storage.

You would have to show me where AMD has said that they've been using FAB 36 for storage. The only place I've seen this claim is from Charlie who wrote a ridiculous article about how supposedly AMD expanded the capacity of FAB 30 by storing things in FAB 36.

I'm guessing that they've been ramping it slowly

AMD's statement was that they've been installing tooling as fast as was physically possible.

decided it was best to correlate the finish of its ramp with K8L production.

FAB 36 won't finish ramping in capacity until mid 2008. I'm not sure what the correlation would be.

ashenman said...

I guess what I remembered was off then ;). Still, that seems a little slow for a fab ramp. Even if Intel is ahead of them in terms of fab production technology, I don't think it would take that much longer for AMD to get one to capacity, would it?

Scientia from AMDZone said...

AMD Earnings transcript

Overall, our manufacturing strategy remains on track. Fab 36 continues to ramp to plan, and we have begun to transition that factory to 65 nanometer technology on schedule.