Friday, January 12, 2007

TechWeb -- Utter Confusion And Disappointment

Recently my blog was mentioned at TechWeb, Scientia's New AMD Blog. Although I do appreciate the mention I can't say that I appreciate the commentary by Max Fomitchev. After trying and failing over and over to post a corrective comment at TechWeb I decided to handle it here. I'll look at Fomitchev's comments and see how much his opinion is actually worth.

Fomitchev:
Scientia, the force behind AMDZone.com, has started a new blog devoted to all things AMD.

First of all, this is not an AMD blog. If I wanted to promote AMD, I assume I would do what other blogs do and simply cherry-pick any news items that favored AMD or were bad for Intel and repost or link them here. I don't do that. Secondly, I'm not the driving force behind AMDZone; there are other knowledgeable people who post there. There are knowledgeable people on Tom's Hardware Guide's ForumZ as well, but too often the discussions there break down into pointless name calling. If the environment there were more civil, it too would be a good forum.

Fomitchev:
Scienta clings to his beloved company

I'm not quite sure where Mr. Fomitchev got this idea. Let's compare me with Mr. Fomitchev.

Fomitchev:
As and AMD loyalist for 15 years I am utterly disappointed and confused: I cannot wait for K8L and need to upgrade now, and given the street prices for AMD and Intel CPUs I am left with no choice but to upgrade to Core 2 Duo

So, the supposedly objective Mr. Fomitchev has been an AMD loyalist for 15 years. I bought an Athlon 550 about 15 years ago; however, my next purchase was a Pentium 4 1.8GHz. Since then I've bought an AMD notebook and a faster Pentium 4 system. So, I guess if I'm clinging to AMD I'm not doing a very good job of it with a 50/50 AMD/Intel purchase record. It looks like Mr. Fomitchev is trying to project his "utter confusion and disappointment" onto me.

Fomitchev:
and refuses to acknowledge the benchmarks that favor Core 2 Duo over Athlon X2 . . . All the benchmarks cannot be wrong . . . No matter how much you analyze and criticize the benchmarks looking for AMD-favoring scenarios that have been downplayed the tests clearly indicate that Intel does have a significant lead in most applications and in power consumption . . . we must admit that Core 2 Duo on average is better than Athlon X2.

I'm wondering now if Mr. Fomitchev has ever actually read my blog. This characterization is nothing but fiction. Again, let's compare what I say with the supposedly objective Mr. Fomitchev.

Fomitchev:
Core 2 Duo getting better power consumption and 10-20% more performance

Scientia:
I generally give Intel a 15% integer lead so I'm surprised that Intel doesn't seem to show this kind of lead for a lot of the tests . . . it seems for many tasks, Intel only appears to be 10% faster.

So, let me see if I understand this. When Mr. Fomitchev says "10-20% faster" he fancies himself to be the essence of objectivity. Yet, when I say "10-15% faster" I'm somehow in denial and refusing to acknowledge C2D's superiority. This is obviously silly. And, perhaps he should actually read my blog instead of projecting.

It is true that my articles are often in defense of AMD, however, that is not because I lack objectivity. It is simply that when I find many articles on the web that are pro-Intel or incorrectly anti-AMD I don't feel any urgency to defend Intel. However, on AMDZone I have corrected people several times when they understated Intel's processor performance or features. In contrast, I have seen erroneous information about AMD processors go uncorrected on ForumZ several times.

The only other reason I can think of that Mr. Fomitchev would speak ill of my blog is that he seems to place far more importance on the testing done at Tom's Hardware Guide than I do. For example:

Fomitchev:
After all Tom's Hardware comparative CPU benchmarks clearly show that E6600 leads Athlon X2 5000+ by a good margin on all the text except for Sandra's Floating Point/Memory bench. So much to my disgust I had to order a Core 2 Duo system.

So, indeed, he likes the Tom's Hardware Guide testing. However, I've never felt any disgust at purchasing an Intel system, so again I can only assume that he is projecting his feelings onto me. In terms of the THG testing, however, my standards are simply higher than his. At TechReports they give a lot of information about their testing procedures whereas THG tends to be more secretive. TechReports shows core loading graphs during testing whereas THG does not. In fact, THG still favors single threaded benchmarks on multi-core processors. Although some have attempted to argue that single threaded benchmarks represent what most people run and are therefore legitimate for testing, this is an empty argument. If what most people actually run is single threaded code, then the conclusion of any such review should be that there is no reason to purchase a multi-core processor; a single core would do just fine. However, there is no way that THG is going to reach such a conclusion because that would be saying that there is no reason to buy Intel's new Core 2 Duo, and THG is not going to say that. Therefore, we end up with testing that does not support the conclusions, and apparently people like Mr. Fomitchev fail to notice that.

If anyone can honestly say that there is a reason to buy a multi-core processor, then they need to do proper testing where each core is loaded, which THG does not do. THG has also shown a willingness to cheat in testing by using the Intel compiler when a neutral compiler is available (which will probably become the standard for software developers). THG also cheats by putting overclocked Intel processors up against stock AMD processors and by "tweaking" the performance of Intel processors by overclocking the FSB. I have nothing against overclocked testing, but these should be separate articles and not part of general reviews. Anandtech's recent blunder in measuring AMD's 65nm L2 cache speed, along with the huge error in DailyTech on AMD's expected earnings, also shows that Anandtech is still untrustworthy.

TechReports makes the admirable statement that all of their testing should be repeatable. I don't know of any similar claim from Anandtech or Tom's Hardware Guide. I will readily admit that C2D could be faster than I have stated, but without proper testing there is no way to know. Finally, if the web press does turn against Intel and starts publishing a lot of erroneous information, then I will have to start doing more articles in Intel's defense. I don't see AMD as an underdog fighting for truth and freedom against the evil Intel. AMD is a pretty large electronics company that is working to gain enough share of the market to be able to share in standards creation. Intel's adoption of AMD64 and the recent speculation about DDR2-1066 could suggest that this is happening.

99 comments:

Unknown said...

Scienta,

I think you misunderstood me. We are both AMD fans, and I think we could have a constructive discussion. I just cannot buy the idea that all of the published benchmarks are wrong. I would like to carry on this discussion and you are welcome to contribute and point out flaws in my thinking.

MAX

PS I am actually Dr. Fomitchev. Unlike Sharikou, whose Ph.D. is challenged by readers, mine is real, as I hold an Asst. Prof. position at Penn State.

Scientia from AMDZone said...

Fomitchev
I think you misunderstood me. We are both AMD fans,


I'm not really an AMD fan. All I try to do is make sure that AMD gets measured with the same yardstick. It seems that too often Intel gets the benefit of the doubt and AMD gets the scepticism. As I said, if the web were more against Intel then I would have to do more articles in support of Intel.

and I think we could have a constructive discussion.

Well, judging by your post here I would say that you seem pretty reasonable.

I just cannot buy the idea that all of the published benchmarks are wrong.

I have no idea if they are wrong. I just really hate unprofessional testing. As far as I can tell, Tech Reports does the most professional job of testing. And, when THG or Anandtech acts unprofessionally then I'm not going to put much faith in their reviews.

I would like to carry on this discussion and you are welcome to contribute and point out flaws in my thinking.

Well, as I said, THG needs to be less secretive and stop trying to tweak the Intel chips. Overclocking is a separate area, much like professional racing is for cars. THG needs to have a similar statement that all of their testing should be repeatable. To be honest, it is very difficult for me to imagine that a university professor wouldn't be saying the same thing. This is the cornerstone of the scientific process.

I believe your Ph.D. is real. I have a bachelor's in computer science. I have no idea if Sharikou's is real or not, but his articles are often flawed.

Also, your degree does not impress me as much as your reversal on your position about the purchase of ATI. It is clear to me that Intel is almost unchallenged in the area of integrated graphics, and the purchase of ATI gives Intel some competition that it has never faced before. Another reason this move is so sharp is that it is not dependent on having the fastest desktop processor, since corporate purchases tend to be value driven rather than driven by pure performance. This is the kind of move that doesn't tend to get any fanfare but could very well prevent Intel from taking back share.

BTW, if you keep being reasonable I'll be forced to edit my original article and make it less critical.

Unknown said...

Scientia,

My apologies for poor spelling (I am known for misspelling my own name). Perhaps THG's testing is unprofessional. But I have looked at what must be 20 benchmarks and all of them favored Core 2 Duo over AMD. I must conclude that:
- either all of the published benchmarks (except for Tech Report, which I have not yet seen) are unprofessional;
- or, unprofessional or not, they are otherwise consistent in their results.

From the first option I must conclude that, by the same token, the earlier benchmarks favoring Athlon X2 over P4 are unprofessional and thus not credible.

From the second option I conclude that even though the individual benchmarks are unprofessional and flawed, overall the statistics tell us that in volume, with or without tweaking, and in real-life scenarios, Intel is better.

ATI: I thought that Fusion is a good idea, so I stand by that claim. What deeply concerns me, however, and this is where my reversal is, is that AMD did not have the money to buy ATI. The situation is particularly bad now with the Q4'06 warning. With no money to finance the purchase the acquisition may fail regardless of how brilliant the strategy is. It is only too easy for Intel to keep squeezing AMD to death. I fear that Hector got overly ambitious and jeopardized AMD's security seeking grandeur. Money decides everything. Hector gambled, but a gamble does not guarantee a win. My 2c worth.

PS Don't you think AMD could not develop its own graphics technology cheaper than for $5B?? Oh, C'mon!

Christian H. said...

Interesting read as usual. I did notice the blurb about you "over there." I have been in IT for more than a decade and still don't understand the "almost deadly" rivalry amongst AMD and Intel "evangelists."

Both companies make CPUs for Windows and should be afforded the same respect.

AMD has shown themselves to be a worthy CPU maker.

(totally off topic)

Intel has the disadvantage in this case because they are the majority and since physics demands that "market" moves from areas of greater concentration to those of lesser concentration, they will continue to lose share until "equilibrium" is reached.

I said months and months ago on THG that Intel would be bleeding profusely because of Core 2 pricing.

The SHARP drop in Intel profits since then has borne out this opinion.

(maybe kind of on topic)

I won't get into the "benchmark conspiracy" because I don't like "clean machine" (rebooted, defragged, one app running, services turned off) benches, but I will say that in random testing with "Best Buy" PCs of the X2 and Core 2 variety Core 2 IS NOT noticeably faster though benchmarks show "superiority."

Anyway, from what I've seen running an X2 under heavy load (Outlook, Word, VS2005, SQL 2005, too many browser tabs, without rebooting for months), Agena with its improvements will be a REAL workhorse, and Vista will probably be better as MS' THIRD optimization of AMD64 (32 bit or not).

So, just like my 4400+ will last until RS690 releases for QFX (HOPEFULLY before Agena FX) and cuts some of the power to peripherals, anybody with an X2 can "tough" it out.

gdp77 said...

Scientia imo fails to acknowledge that in almost ANY real world scenario, for the SAME (+/- 20€) amount of money u can get 5-15% more performance from a C2D system @ default settings. In addition he fails to acknowledge that the system of the above example can be overclocked to the sky, while AMD's system can't. So scientia fails to see that the 5-15% advantage can become even larger for the SAME amount of money.

AMD may have a better future, superior architecture, better ideas, you name it. But the problem imo is that scientia will never admit that there is absolutely no reason for someone to buy an AMD system TODAY. I don't know about tomorrow.

gdp77 said...

I am talking about desktop/mobile of course. Server market is another thing.

Aguia said...

Scientia imo fails to acknowledge that in almost ANY real world scenario, for the SAME (+/- 20€) amount of money u can get 5-15% more performance from a C2D system @ default settings.

You talk like if the Core 2 Duo 6300 is the only processor currently selling from Intel.

In addition he fails to acknowledge that the system of the above example can be overclocked to the sky, while AMD's system can't.

And you fail to acknowledge that for that you must:
-Know what you are doing (99% or more don't know how to do it right).
-Have a proper motherboard for the proper effect, which costs some 200€ in my country.
-Rumors of a very short life for the processor. I think it is a little strange that Intel has processors that OC up to 3.6GHz but doesn't have one commercially available that achieves those clock speeds. (Maybe waiting for AMD, or waiting for themselves?)

So scientia fails to see that the 5-15% advantage can become even larger for the SAME amount of money.

And I could go buy a 3700+ (2.2GHz) for 80€, OC it to 2.8GHz easily, plus a good 100€ SLI mobo. And all this for the same money as your "marvel" processor.

AMD may have a better future, superior architecture, better ideas, you name it. But the problem imo is that scientia will never admit that there is absolutely no reason for someone to buy an AMD system TODAY.

I have just given one good reason in the previous response.
The desktop market has processors that cost from 50€ up to 1500€.
I find it strange that, according to you, only Intel products have the proper price, or that at any price Intel is the only way to go.

Aguia said...

PS Don't you think AMD could not develop its own graphics technology cheaper than for $5B?? Oh, C'mon!

Not if you are interested in GPGPU. Otherwise just SiS would have been more than enough, or Matrox, or someone else.

Unknown said...

I just don't understand the "Intel is bleeding" people.
1.) Intel reported increased ASP vs. Q3'06
2.) Intel reported an 11% increase in sales vs. AMD's 3% increase. This means Intel is taking back share.
3.) Intel's profit was up almost 50% from Q3'06

So, we are in a situation with increasing ASP, revenue, and profitability for Intel while AMD's financials are so poor that they hid the real state of the business until they could arrange for additional debt financing. As to how sustainable this is for Intel, it is trading at a forward P/E of 18.6, which is within the 12-20 range it has been trading in for years now. Will it return to bubble valuations? Probably not, but the AMD fanatics believe AMD will take 50% market share without the price war, and I guarantee you Intel's market cap would go down by $100B if that were the case.

So, not only is Intel perfectly rational, but its strategy is sustainable long term, and with AMD's dire financial straits (expect huge combined losses for DAMMIT), it looks like another 3-4 quarters of price war and AMD will have run out of its ability to add additional capacity. With Intel trying to move 1.5 generations ahead of AMD by 2H'07, I really see little reason to be optimistic about AMD's future.

I am not trying to flame here and would welcome well-thought-out responses.

Aguia said...

I just don't understand the "Intel is bleeding" people.

Did you even look at the numbers?

http://biz.yahoo.com/bw/070116/20070116006261.html?.v=1

I don't understand them very much so I can't explain them to you, but I do know what the - (minus) sign means, and it isn't good for sure.

2006 vs 2005:
Revenue -9%
Operating Income -53%
Net Income -42%
EPS -39%

Like Sharikou said, Intel is going bankrupt if these (%) numbers stay the same for the next year.

And the EU is after them too:
http://www.dailytech.com/article.aspx?newsid=5742

Things aren’t too rosy at Intel.

Aguia said...

I really see little reason to be optimistic about AMD's future.

With Vista requiring good GPUs, and ATI (AMD) providing the GPU for those people, I see great potential in the upgrade market, where ATI (AMD) will be there to sell you a GPU just for Vista.

You must see that now AMD's future is more than just one good CPU.
Now it's all about good chipsets and good GPUs.

Good platforms for every need.

From mobiles to servers.
TVs, cell phones, gaming consoles, ... coprocessors for every need, to be released soon.

So many, many markets in which AMD has the potential to grow further.

I'm very very optimistic.

Unknown said...

"I don’t understand them very much so I can’t explain them to you but I do know what the - (minus) means and isn’t good for sure."

Aguia, if you are listening to Sharikou, it is quite obvious you do not know what the numbers mean so let me explain them to you.

Apparently you were sleeping the last 12 months so let me fill you in on what has happened since Q1'06 so you can better understand the numbers.

In Q1'06,
1.) Intel announced the Core 2 architecture and showed how much better it was than the P4, thereby causing P4 demand and pricing to tank.
2.) Intel responded by slashing prices on the P4 inventory and accelerated the ramp of Core 2 into the server segment.
3.) In Q1'06, for the first time in about 2 years, Intel held its market share.

Since then, Intel's performance advantage has allowed them to pit the old P4 against the newest X2's and thereby drive AMD's pricing into the dirt while ramping Core 2 into the high end and server market segments (this is evidenced by Intel's increased ASPs, and by AMD's falling margins and cries of 'price war' while Intel's ASPs are increasing). As a result of this, Intel has taken back a large chunk of margin share from AMD (an 11% revenue increase for Intel compared to 3% for AMD vs. Q3). Further, AMD's profits have gone from record levels to a huge loss for the combined DAMMIT. This has been a combination of Intel's strategy playing out to perfection while AMD stumbled on product development and process transition.

The net effect of all of this is that Intel has reclaimed the high end of the market - with 45nm Penryn in 2H'07 vs. the 65nm X2's or K8L's (which will have lower clocks than the previous 90nm X2's), you can see that AMD is getting 'fragged' (thanks, Sharikou, for this retarded expression) by Intel, and given the losses which are expected from AMD for the foreseeable future, I find it hard to believe that bankers will continue to loan money when they are clearly falling further and further behind. Lastly, AMD's earnings being negative or fractionally positive will ensure a share price collapse, which will make raising money through stock sales a non-viable option.

In closing, although Sharikou and his followers choose to ignore what has transpired over the last 12 months and prefer to look back to 2 years ago, their closed-minded view only ensures that they will not understand the marketplace. BTW, isn't this supposed to be the 2nd quarter of Intel's consecutive massive operating losses as predicted by Sharikou? Given that profits are up nearly 50% vs. Q3'06, it seems the numbers are moving in the opposite direction to what Sharikou predicted - shocking, neh?

sharikouisallwaysright said...

I cannot see any success for Intel.
The numbers are horrible, but maybe expected, so someone counts it a successful strategy to dump prices on a superior product for no reason, make the P4 dirt cheap, and shrink profit by nearly 40%.
That's called success?

AMD's numbers aren't out yet, but if they could really sell all produced CPUs and stay capacity constrained they will have stable income, while Intel will shrink heavily for one more quarter.

Intel is a giant forced to shrink itself or die under its own expenses.

I would bet on AMD!

Unknown said...

Sharikoishigh: how is Intel shrinking when it posted an 11% increase in revenues versus Q3 and a 50% increase in profits? Versus Q3, AMD will be up 3% in revenue and tip from marginally positive into huge losses. I see that you don't want to join the reality based community yet, but when one company has kicked the crap out of the other for the last 3 quarters (since launch of Core2 and 'pricewar') and the performance advantage is clearly in Intel's court for the next 1+ year, how can you actually feel AMD is not losing unless you are high?

Lastly, on your 'selling everything they can make': AMD stated they were not capacity constrained for several quarters in a row already. Further, if they are selling everything they can make but dropping prices drastically to do it (which they obviously are, given the net profit and margin warning), then your argument is meaningless.

Christian H. said...


AMD may have a better future, superior architecture, better ideas, you name it. But the problem imo is that scientia will never admit that there is absolutely no reason for someone to buy an AMD system TODAY. I don't know about tomorrow.


SO I can then assume that you went from BLOG to BLOG and forum to forum telling people there was NO REASON TO BUY NetBurst.

If not, why? X2 still delivers all the fps that it did in May. My 4400+ is more than worth buying.

5200+ at UNDER $300 makes me wonder what you peoples' problem is.

It felt good to know that there were some CPUs beyond my price range (and I make good money), but now Intel will be hurting even more as NetBurst (artificially inflated right now) disappears and the C2D and C2Q are all that's left.

They will have a slight saving grace in that they can charge more for mobile chips, but in the end they will continue to bleed.

Now that RD600 is around and 680i is taking mobo share, that part of the business will suffer also.

Server is a dead end because 2xClovertown cannot overtake 4xOpteron in transactions (TPC-H).

Hopefully AMD will charge a nice premium on Barcelona quads and move the dual cores to the current X2 prices.

Wow, was that off topic.

sharikouisallwaysright said...

Intel's one and only hope is that K8L will be a horrible flop, so that AMD can only lower CPU prices to insanity to stay in the competition.

Unknown said...

Intel dropping P4 prices did not force AMD's prices down. AMD is not releasing numbers late because they have to fake them as you basically describe, and last I heard, bankers don't rely on stock prices to figure out if they loan a company money.

Not only that, but AMD is not "losing money" they are making less money than they did before, but they are not losing it, and they are certainly not dumb enough to assume that they can acquire a company like ATI and then depend on still increasing profits to actually afford such an acquisition (HOLY CRAP HOW MANY TIMES DO I HAVE TO POST THESE-PEOPLE-ARE-NOT-STUPID ARGUMENTS!!!).

Real, your argument on AMD's financial record seems largely pointless given aguia's (especially since you try to refute his with yours). In a market as seasonal as processors, comparing QoQ should only be done while also comparing YoY. The safest option is thus only comparing YoY. Would it make sense to say that a strawberry company is a sucky company because its profits are down in November compared to October? In the same way, neither would saying it's an awesome company because its profits shoot up in the spring.

AMD is currently capacity constrained, which is why it isn't seeing as strong of growth as Intel is this Christmas season. Intel is leaving much of its capacity dry for the time being, so it doesn't continue to have the same problem it had before with surplus. AMD sees surplus now (though to a very small extent) because of its huge dependence on Dell, HP, IBM, etc. It is not actually a bad thing, because it just means that those companies still have demand, just for different packaging than they expected, which is why these surpluses have been fluctuating.

Your demand argument is also extremely weak, because by the same logic, Intel has a problem with the fact that AMD still has x2 processors that are pretty much price/performance competitive since this is Intel's newest core and process, and the process and core that AMD has been on for years. Also, the P4 is not price/performance competitive with the A64 or x2, as I just did a build for my dad, and could not find a retail or self-built system where the Intel system was not 30-40 dollars more and did not have other significant issues in terms of performance, upgradability and the like.

If there is a share price collapse, it just shows how stupid and fickle investors are, but would probably be problematic. Regardless, if it does happen, it will not be to an extent that jeopardizes all of AMD's business. Also, since AMD's prices fluctuate to such an extent for such mundane reasons, they'd be extremely stupid to depend on it as a good source of income.

Red, how many times have we had this argument about the whole what counts as a credible review argument before? Seriously, don't bring it up again unless you actually have a new argument.

Unknown said...

greg - when AMD posts a loss as the combined DAMMIT in Q4 will you admit that you are wrong?

SweepingChange said...

aguia

Rumors of a very short life for the processor. I think it is a little strange that Intel has processors that OC up to 3.6GHz but doesn't have one commercially available that achieves those clock speeds. (Maybe waiting for AMD, or waiting for themselves?)

Intel hasn't released one because they are sticking to a desktop TDP of 65W for the Conroe top bin. The X6800 is known as a top-bin+1 product and is at a 75W TDP. But of course, if AMD tries to play the greater clock-speed game, then Intel will be forced to play it as well. But a 3.2-3.46GHz Conroe would overshadow Kentsfield quite a bit, as it is stuck at 2.66GHz with a 120W TDP. Hence, Intel is focusing on quad-core rather than clock speed.

Regarding the price wars, probably the sooner Intel gets rid of its NetBurst production, the better it will be for both companies. Getting rid of obsolete stuff at dirt-cheap prices is not healthy when there is a war on for market share.

Though we forecast doom for AMD, we tend to ignore server dynamics. In servers, AMD is still quite strong; their Opterons are holding their own and not being embarrassed by Core as much as in desktops and laptops. Intel's design decisions in Bensley, such as FB-DIMMs, are going to stand exposed when AMD's quad-core launches. It will have very good idle-power levels compared to the Clovertown platform, and both Intel and AMD will be in a raging fist fight of benchmarks. It will be a perfect time to judge the sales teams of both companies, simply because AMD will hold their customer base, and Intel will probably hold theirs, but whether AMD can nibble off more of Intel's customer base will depend on their execution and their sales team's performance.

In MP, AMD and Intel will be neck and neck again, Tigerton vs Barcelona 4-way (Deerhound). But probably the MP server market moves much slower than the 2-way market, hence you will probably just see a lot of benchmarks but not much in terms of market share movement.

AMD will probably hold good performance leadership and some market momentum in servers for at least a quarter and a half. That will probably last until around late Q4, when the first Intel 45nm dual and quad-cores come out. Then that is another battle... but not as dramatic as the NetBurst -> Core transition.

Scientia from AMDZone said...

real
- when AMD posts a loss as the combined DAMMIT in Q4 will you admit that you are wrong?


Well, calling AMD/ATI DAMMIT shows that you read the INQ a lot. That also might indicate that you are not that familiar with corporate finances.

So, I'll see if I can explain this. When you talk about a loss that is a pretty loose term. There are losses and then there are losses. All losses are not the same and the loss you are talking about is trivial.

What you need to understand is that AMD makes a lot of money, billions of dollars. And, AMD spends the bulk of this money on two things: R&D and Capital Spending. Capital spending includes the purchase of new tooling and the construction of new FABs and associated construction such as the new bump and test facility in Dresden.

The reason such a loss would be trivial is scale. If you bring in $1 Billion in one quarter then a $10 Million loss is not that big a deal. If you were only bringing in 100 Million then a 10 million loss would be a big deal.

It is not unusual to have dips or to borrow money when starting new projects and a Q4 dip in earnings would not be unusual with the ATI purchase.

Scientia from AMDZone said...

Real

There are several errors in your posts. Intel did not hold share from Q1 06. Intel lost share from Q4 05 to Q3 06. However, as you've pointed out with Intel's higher growth they may have taken back share in Q4.

AMD/ATI does not have huge losses. Having a dip in earnings or having to borrow during a major development is not unusual. AMD was also aware that Intel would pull the ATI contracts after the purchase; none of this was unexpected.

Suggesting that AMD will run out of capacity in 3-4 quarters is incorrect. AMD will grow until the end of 2009 when the tooling will all be installed in FAB 38. That would be 11 quarters. In fact, FAB 36 won't even be at full capacity until mid 2008 so that alone would be 5 quarters.

Intel is not bleeding and you certainly didn't get that from my blog. Thekhalif is incorrect. Intel shrank in 2006 but is now growing again.

I have to say that I am puzzled by your 1.5 generation ahead statement. Once K8L is released Intel is 0 generations ahead. What would put them ahead 1.5 generations? Maybe you want to rethink that.

Scientia from AMDZone said...

fomitchev
Perhaps THG's testing is unprofessional. But I have looked at what must be 20 benchmarks and all of them favored Core 2 Duo over AMD.


I'm sorry but this statement is worthless without qualification. I've already said that C2D is 10-15% faster at the same clock. This isn't in dispute.

Secondly, single threaded testing on multi-core cpu's is useless no matter who does it and no matter if they get the same results.

With no money to finance the purchase the acquisition may fail regardless of how brilliant the strategy is.

This is where you are not looking deep enough. AMD has more money than ATI. The acquisition gives ATI advantages in lead time. The cost savings balance the loan interest. Also, I'm not talking about fusion; I'm talking about motherboard integrated graphics, not GPU on the die. This should give AMD a big boost in corporate sales and this alone is worth the $5 Billion. The additional areas should be worth a lot more than 5 Billion.

It is only too easy for Intel to keep squeezing AMD to death.

Again, you have this backwards. Intel is almost totally unchallenged today in integrated graphics. Intel now faces a very tough competitor. Intel will get squeezed, not AMD.

PS Don't you think AMD could not develop its own graphics technology cheaper than for $5B??

AMD to date has developed the 760 chipset for K7, the 760MP variant with twin FSBs, and its chipset for Opteron back in 2003. Sure, they could develop their own, but money isn't the issue; time is. It would be impossible for AMD to put together a graphics team and come up with anything in less than 18 months. AMD would also have to establish foundry contracts, etc. ATI already has foundry contracts and an established team of designers. The purchase does make sense to me. Just withhold judgement until Q2 07. AMD should be established with corporate OEMs by then.

Scientia from AMDZone said...

real
I see that you don't want to join the reality based community yet, but when one company has kicked the crap out of the other for the last 3 quarters (since launch of Core2 and 'pricewar')


Real. What are you talking about? AMD made more money in 2006 than they did in 2005. Intel on the other hand is back to about 2004 revenues. Presumably, Intel will be caught up to 2005 revenues in 2007. AMD gained share during Q2 and Q3. How exactly did this constitute being kicked?

Think carefully before you answer and make sure that what you say is connected to reality in some way.

Scientia from AMDZone said...

sharikousisalwaysright
AMD's numbers aren't out yet, but if they could really sell all produced CPUs and stay capacity constrained they will have stable income, while Intel will shrink heavily for one more quarter.


I doubt Intel will shrink in Q4. The volume increase alone should mean that Intel has a boost in sales.

Scientia from AMDZone said...

red
TR had the same innocent mistake using the same commonly used tools as AT. But you can't criticize them because they're just great right?


This is called a "strawman fallacy". Basically what you did was only repeat one part of what I said so that you would have a weaker argument to attack. Show where TR mistated AMD's earnings and then read the "Anandtech Melts Down" article. When you can give me a solid argument for all of that instead of just one small piece then I'll listen.

If you look for errors, you're going to find them.

Then I challenge you to find where TR has done as bad a review as the Anandtech server comparison that I mentioned in the past article. After you've found one then you will have a point. Good luck.

Roborat, Ph.D said...

Scientia wrote:
Secondly, single threaded testing on multi-core cpu's is useless no matter who does it and no matter if they get the same results.
Why not, when most applications out there are single threaded? Doesn't this test show the CPU's capability in one type of workload, just as FP benchmarks show FPU calculations?

Aguia said...

Secondly, single threaded testing on multi-core cpu's is useless no matter who does it and no matter if they get the same results.

I dare to disagree!
You and many people fail to see that Intel Core designs have a shared cache.
Shared cache on dual core designs means that when you are running single threaded applications, all of the processor cache is available to just one of the cores.
So when you see Core 2 Duo vs AMD X2 in single threaded applications, you see one processor with 4MB of cache (or 2MB with the 6300/6400 models) VS one processor with 1MB or 512KB of L2 cache.
That is 4 or 8 times the size of the AMD L2 cache!

In other words you should see a very good performance boost in single threaded applications in Intel multi core cpus because of that.

AMD must catch up with Intel on this one because, as far as I remember, when AMD has equal or higher capacity L2 vs Intel processors, AMD normally wins.

Athlon 256KB vs Pentium 3 256KB
Athlon XP 256KB/512KB vs Pentium 4 256KB/512KB
Athlon 64 512KB/1MB vs Pentium 4 512KB/1MB
Athlon 64 X2 512KB/1MB vs Pentium D 1MB/2MB «« L2 Not shared so I cut it down in half

Athlon 64 X2 512KB/1MB vs Core 2 Duo 2MB/4MB

I think AMD is going to release higher L2 cache soon on X2 products. And Intel is going to respond with E6xxx with 4MB instead of 2MB because of that.

I think it was the Inquirer that reported K8L (the dual core version) with 2MB L2 and 2MB shared L3.

http://www.theinquirer.net/default.aspx?article=35209

sharikouisallwaysright said...

Sales numbers alone mean nothing if the result is that disappointing!

Maybe Intel has sold off a whole lot of their P4 junk pile, but that means absolutely nothing, as it is something of a last ditch effort and never a healthy business.

Am I right or wrong?

Aguia said...

Real

You have given a fantastic clarification, but you fail to address what I said.

So say I start Q1 at 0%.
In Q2 I go down by 25%.
In Q3 I go down again by 30%.
And in Q4 I go up 10%.
By your explanation I'm doing great?!
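
Working these hypothetical figures through makes the point explicit: an index that starts at 100 ends the year at

100 x 0.75 x 0.70 x 1.10 = 57.75

so even after the 10% uptick it is still down roughly 42% from where it started. A good quarter-on-quarter number by itself says nothing about the ground lost in the quarters before it.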

You talk of Intel's new products, but you fail to see that Intel has had excellent years even with the Pentium 4; having better products does help but isn't absolutely necessary.

And AMD will also have great new products:
- New top chipsets for desktop.
- New chipsets for mobile systems that they didn’t have.
- Great GPUs to run with Vista.
- GP GPU for the server market.
- Mobile phone chips.
- TV chips.
- Coprocessors to be released soon (physics cards).

A bunch of great things that the market needs and Intel isn't going to deliver, and AMD will be there, innovating again.
AMD will gain market share for sure because this time AMD has products it didn't have before; AMD now has good complements for its own products.

AMD doesn't need VIA or NVIDIA to the rescue this time. They have stable and good platforms for their products.
They can now release a good processor and don't have to wait for someone to do a good chipset for it and for some mobo maker to adopt it and have the courage to do the design without suffering repercussions from Intel.

Unknown said...

Sharikouishigh:
"Maybe Intel has sold off a whole lot of their P4 junkpile means absolutely nothing as it is something like a last ditch but never a healthy business.

I am right or wrong?"

As usual, you are wrong. Intel had an ASP INCREASE while AMD had an ASP DECREASE. This means AMD is blowing out parts as a last ditch effort. If your comment had been in regard to AMD, it would have been true.

Unknown said...

Actually, red, to people like me who already have their copies of vista, 32 bit testing is worthless. To anyone who actually cares about increasing the performance of their system and using most new applications, 32 bit testing is worthless, because they're going to be forced to move to vista. It doesn't matter if it's fair or not, it's just the way it is.

Real, while revenue share is an essential part of how operable these companies can be, it does not show which one is successful. As I have said before, AMD is sacrificing now for gains in the future, which is coming closer and closer. Aguia is talking about volume share, which, long term, is far more important than revenue share, and especially so in a case where AMD is simply fighting for the type of market recognition it has hardly ever had. Market recognition has been the barrier to AMD being able to reap the fruits of its better process, and has also been why Intel has had far greater growth and market share despite poor processes, and as someone with your training and certification, you should realize that this is something AMD has always desperately needed.

Unknown said...

BTW, red, did you just repost the argument I've told you you've already posted countless times before, and that has been argued and basically defeated in the eyes of this blog's readers countless times before?

Aguia said...

real

My source was also yahoo.

http://biz.yahoo.com/bw/070116/20070116006261.html?.v=1

Give me yours.
I can only see negative numbers there. Where are the positive ones?

The ones that show a little improvement are the ones comparing Q4/2006 to Q3/2006; Q3 is normally weak and Q4 is normally the strongest of all. Why don't you look at the numbers that compare 2005 VS 2006, or the ones that compare Q4/2006 VS Q4/2005? Those only show negative (%) numbers!

sharikouisallwaysright said...

Uhm, let's think about it...

OK, one quarter Intel cuts ASPs down to an insane level by its own hand.
The next, they tell us how nicely its ASP skyrocketed?

All the minuses in their statement make me think about how well that worked out...

Scientia from AMDZone said...

Red
Where did AT misstate the earnings?
It seems they cited the $1.85B


$1.85 Billion is incorrect. The correct number is $1.37 Billion

Fujiyama said...

I think that all the comments about rising or shrinking do not reflect the actual situation.
The whole mess, the profit warning - all of it has one reason - DELL.
This is the second quarter where it is difficult to buy selected AMD CPUs!
I assume that the reason for only a 3% rise is that:
1. AMD was mistaken in production and was not able to fill orders in the channel.
2. DELL probably postponed the order or even canceled it because of the shrinking desktop market. Maybe they asked for 1mln Turions instead of X2s and the answer was - we don't have it.

All these problems mean that AMD doesn't have extra capacity, cannot overproduce, and is not able to show flexibility in customer orders.
I do hope that 65nm capacity solves the problem; the question remains, is AMD able to produce more chips in case DELL or anybody else changes the order?

You need 300k X2 5000+?
Next week? No problemo amigo!

Scientia from AMDZone said...

Real

Stop. I am not going to have a discussion about cooked numbers here. I am tired of having people sift and strain through Intel's numbers trying to find something positive.

First of all, stop posting profits. I'm not interested in those numbers as they do not represent the actual income of the company. Intel spends a lot of money on stock buybacks which is not counted as income, and AMD spends most of its money on R&D and Cap Ex which are also not counted as income. Now, even someone with very little understanding of finances can see that Intel could spend less on stock buybacks if necessary, so only quoting the reported earnings is silly. Likewise, it is incredibly silly to suggest that AMD is losing money while it is spending half a Billion every quarter on FAB expansion.

If you want to discuss money then limit your numbers to cpu revenue only and gross margin only. That is, only the actual money earned by cpu sales after the cost of sales.

Secondly, stop trying to post cooked numbers on Intel's growth. Intel did not gain any revenue share or volume share in Q2 or Q3. Do not post the numbers that include VIA's share which artificially show a gain for Intel. If you want to post share numbers limit the numbers to Intel and AMD only. Again, I haven't looked at the numbers yet so I don't know what Intel did in Q4. Intel likely has a share gain in Q4. However, I need to know the totals because this could be a temporary gain. In other words, when AMD gains share in a falling market the gain may not be real and when Intel gains in a rising market the gain may not be real.

Again, Intel's revenues are way down from 2005 levels and are only about equal to 2004 levels. AMD's revenues are up significantly from 2005. Finally, quarter on quarter growth is useless without a baseline. If you really want to show that you understand statistics then start your baseline with Q3 05 and show revenue from there. Only show cpu gross margin revenue.

We have to wait at least another quarter to start comparing chipset sales. So, it might be interesting to look at chipset sales at the end of Q1 07. Obviously, we are not going to count flash memory sales at Intel.

Scientia from AMDZone said...

Real

Baseline does not mean quarter on quarter. I am not interested in showing the dips and rises for each quarter with a floating baseline of the previous quarter. The only way this would be relevant would be if you averaged the quarter on quarter change for, say, the past 5 years and compared with that.

Yes, I understand that you desperately want to show some improvement with Intel so if you ignore the huge drop in Q2 then you can pretend that Q3 shows a big upward trend. Now, wake up. Intel is improving but this by no means erases the massive declines of 2006. This would be just as foolish as claiming that AMD's steep gains in Q3 02 showed that it was gaining on Intel when in reality those huge gains were simply corrections from the previous huge declines.

Scientia from AMDZone said...

Red And anyone else who is still confused about this

You cannot give a review using single threaded benchmarks on multi-core processors and then recommend buying multi-cores. If the common applications are single threaded then there is no reason to buy or recommend a multi-core. These reviewers are using a very dishonest bait and switch where they suggest that single threaded benchmarks represent what people run and then push C2D anyway. If you are running single threaded apps then you don't need C2D or X2, and certainly not Kentsfield. The larger cache is also misleading because typical system activity splits up the cache more and therefore you will not see the same speed as you do while running one benchmark. This is another reason not to load the second core as this will split the cache on C2D while having no effect on X2.

If you are going to review a multi-core then load all of the cores. Red, the loading graphs are shown in each benchmark; these are the cpu activity meters which show how much cpu activity is taking place.

Aguia said...

Red And anyone else who is still confused about this

That’s why I think they removed single core processors from the reviews.

And all those 3D applications, 3DMarks, SiSoft Sandra, that show performance scaling with more cores are completely unrealistic, because most real applications I use, and everyone uses, don't take advantage of multi core.

Just look at the recent statements of Carmack and Gabe Newell; they seem not to like multicore because it's hard for them to program.

Higher clocked CPUs and higher performing GPUs, especially at the low end, seem the way to go in the years to come.
That’s why the Fusion idea may be very good; let’s hope it will not be “late”.

Fujiyama said...

Just look at the recent statements of Carmack and Gabe Newell; they seem not to like multicore because it's hard for them to program.

Aguia, if it's true, why don't AMD and Intel push faster single cores?
A 3GHz Athlon 4400+ single core would be nice.

Aguia said...

Aguia, if it's true, why don't AMD and Intel push faster single cores?
A 3GHz Athlon 4400+ single core would be nice.


http://www.xbitlabs.com/news/multimedia/display/20070116154134.html

http://www.dailytech.com/John+Carmack+Speaks+on+DX10+Vista+Xbox+360+PS3+Wii/article5665.htm

"obviously would have been better if we would have been able to get more gigahertz in a processor core"

Because of marketing, I guess. And maybe because it would be bad if one single core 2.8GHz processor (AMD or Intel) beat one 1.8GHz dual core processor (AMD or Intel) in most benchmarks/tests by a large margin.

Your example, for instance: would that processor cost more or less than an X2 4200?

For example, in mobile processors I think for most people (and the uses that mobile computers normally get) it would be better if their offerings used single core processors that used 20W or less rather than their current dual core offerings using 30W (or more).

Ho Ho said...

Power usage doesn't scale linearly with MHz. That means a 1.5GHz dual core doesn't take as much power as a 3GHz single core. When both CPUs are produced with the same technology, the dual core will use less power.
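
As a rough rule of thumb, dynamic CPU power is approximately

P ≈ C x V² x f

and since the supply voltage V usually has to rise along with the frequency f, power grows much faster than linearly with clock speed. That is why two cores at 1.5GHz, run at a lower voltage, can come in under one core at 3GHz on the same process; the exact gap depends on the process and on how far the voltage can be dropped.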

Only pumping MHz and IPC won't help for long. Moving to smaller technology nodes will make a lot more die space available, and with single cores you can't use it to speed your CPU up. With multicores you can. Multicore CPUs are inevitable; there shouldn't be any question about it. It is high time for developers to start thinking about concurrency.


Saying that most current software uses only single cores and therefore multicores are pointless is the same as saying that most current games use <=DX9 so DX10 is pointless.

Aguia said...

Saying that most current software uses only single cores and therefore multicores are pointless is the same as saying that most current games use <=DX9 so DX10 is pointless.

Well, you didn't see me saying that, did you?

The people I quoted were just two of the best game designers of all time.

I also didn't say that a 1.5GHz dual core consumes as much as one single 3.0GHz core (unless it's a dual die package).
In fact, if you look at AMD's new TDP values, single core goes to 45W and dual core goes to 65W.

And regarding DX9 vs DX10, the features/functionality you get with DX10 are not the same as what you get by sticking with DX9.
Going from single to dual core you get wasted time with more programming to gain very little to nothing added to the game.

J said...

Scientia, why are you deleting my posts? If you are casewhite, you did not address me respectfully, and I had no intentions to respond to disrespect.

They say AMD, excluding ATI, expects 1.37. They should also, for clarity, have said that these are the excluding-ATI expectations, but I don't see where they are factually incorrect.

While you deleted my post, you forgot to address why AMD, who had oversight of the review, didn't say anything about the motherboard. Maybe AMD's benchmark guys don't have any more of a clue than Anandtech then, but you haven't proven that a dual bus system would've impacted the results in any significant way.


Greg, I will care for 64 bit when it is required. Say Vista 2.0, when the new DVD standard is required (requires 64 bit now), driver support is 100%, app support is 100%. Until then, 64 bit is useless for me, and I don't think you speak for most of us when you say that 32 bit doesn't matter.

Thanks, I've seen the core loading graphs now..:) Must've been Adblocked with Firefox or something before.

You argue as if THG runs single threaded apps exclusively. Please show me how THG is more single threaded than TR. Also, I see more real world apps from THG, while TR has lots of synthetics (what's up with picCOLOR?). I do not view chips as "out of order", "dual core", "64 bit", etc. CPUs. I see them as chips with features, none of which defines them, so I still don't understand why chips that have multiple cores should only be subjected to multiple threads.

aguia, I am aware that not all things can be processed in parallel, and even if they can, they still can't scale well. And that is why single threaded apps are still important.

J said...

Clarifying myself, you never said that single threaded apps were irrelevant, but that evaluation of multi-core CPU's should be based on multi-threaded apps.. Let's see the exclusive apps (except games) that TR/THG test from their Core 2 reviews. Also, TR doesn't have core loading graphs for each individual bench so it's hard to see which ones are stressing the CPU's or not.
TR:
*WorldBench (woo, rigged *Marks[also encompassing MusicMatch perf, whatever that is, WME, Adobe Premiere, VideoWave, Photoshop, Office, Mozilla (?), ACD (photo), WinZip, Nero)
*picCOLOR - woo synthetic
*Sphinx - relevant to.. the new MTV show that relies on lie detection software?:p
*Cinebench
*POV-Ray

I'd say WorldBench is irrelevant because I don't believe in prepackaged bench suites.. And that leaves Cinebench and POV, relevant to a few 3D people. So we have 2 multithreaded apps that matter to a few.

THG:
*AVG virus - kind of irrelevant if you don't use IE, but the Wiki entry speaks for itself
*Clone DVD
*iTunes - (at this point, realize that these benches aren't prepackaged suites, but results from RW situations)
*Photoshop - the app, not a bench suite
*PDF conversion
*WinRar
*Ogg - irrelevant codec
*DivX -irrelevant codec
*MainConcept
*Adobe Premiere
*Pinnacle
*xVid - irrelevant codec
*Multitasking benches

TR is consistent, but unfortunately, I wish that half of their review didn't focus so much on WorldBench and synthetics. Since THG doesn't have core loading graphs (but they do specify whether a test is threaded or not), I prefer THG's "single threaded" RW review to TR's dependence on synthetics.

Scientia from AMDZone said...

Red

I've deleted posts from four or five different people. I wanted Dr. Fomitchev's comments at the top where they should be instead of a ridiculous discussion about the word naiive. I have no idea who or what "casewhite" is. AMD did not oversee any review at either TR or Anandtech. I don't know what motherboard issue you are talking about.

Now, for the last time, Red, I only listed multi-threaded benchmarks in my article. If there had been quality multi-threaded benchmarks available from THG I would have used those. I have nothing against THG; I just want more professional testing. As long as the benchmarks are open and professional I will always give Intel credit when they are faster.

As far as Intel's finances go they are very simple. Intel went from the best quarter in its entire history in Q4 05 to a severe drop in Q1 and Q2. Intel is now increasing again just as I expected. I further expect Intel to keep increasing through 2007. However, the increases in Q4 did not make up for the much larger decreases in Q1 and Q2.

Overall, Intel has done worse in 2006 than it did in 2005 while AMD has done better in 2006 than it did in 2005. Now, let's stop denying the facts:

Intel is doing better now.
The 2006 revenue drops also hurt AMD.
Intel could indeed start taking back share from AMD.
Intel did not take back share in Q2 or Q3. There was a very tiny shift in share in Q1 down from Q4 05. It is possible that Intel has taken back share in Q4 but I don't know until I see AMD's numbers.

Unknown said...


Please kick this jerk out! This is a very enjoyable blog where people have intelligent and civilized discussions. We don't need this!


I was 100% civilized until Scientia deleted my posts and called me a liar by saying I posted incorrect numbers.

sharikouisallwaysright said...

I agree to this:
1.) Stopped the share loss
2.) Made AMD a ZERO PROFIT company

Plus!

3.) Hurt Intel Profit as much as possible

Numbers will show, as history will, whether this strategy worked out for or against Intel.

Scientia from AMDZone said...

real
I was 100% civilized until Scientia deleted my posts and called me a liar by saying I posted incorrect numbers.


All of the numbers are correct. Every time someone tries to put a spin on Intel they always pick out some correct number.

Earnings are not a good measure. I would be happy to discuss microprocessor revenues. And, I am not denying that Intel is doing better, clearly they are. I just don't understand the notion of concentrating on the last quarter and ignoring the large declines in Q1 and Q2. BTW, I wouldn't mention those declines if Intel's microprocessor revenue were back up where it was but it isn't.

I would also not mind discussing revenue and volume share. Basically, I feel that microprocessor revenue and share along with volume share are the best indicators.

I simply do not want to get into a discussion about stock values, operating income, earnings, etc. I feel that is a waste of time.

Scientia from AMDZone said...

Real

I don't currently have numbers for AMD so I don't know what the revenue share is. The best I can do is use the 1.37 Billion number that AMD gave. Using this number, Intel would have taken back 1% of the revenue share. However, the 4th quarter was a larger percentage of total in 2006 than 2005. So, the revenue increase could be temporary.

As far as I can tell, AMD has grown by 35% in processor revenue from 2005 while Intel has declined by 11%.

I think by almost any measure this is better for AMD than Intel.

enumae said...

I am hoping someone could look over my numbers for processor revenue percentages from Q1 2004 thru Q4 2006.

Link

It took a while and if you want links I can dig most of them up.

Ho Ho said...

aguia
"The people I quote where just two of the best game designers of all time."

Programmers are like that, they whine about everything. I should know that since I'm one of them.

aguia
"Single to Dual core you get wasted time with more programming to win very little to nothing added to the game"

If done wrong, you can lose performance by going multithreaded. If done right, you can get (almost) linear performance scaling with (almost) any number of cores.
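
For reference, Amdahl's law is the usual way to pin down that "almost": with a fraction p of the work parallelizable across n cores, the best possible speedup is

speedup(n) = 1 / ((1 - p) + p/n)

so at p = 0.95 two cores give about 1.9x, close to linear, while at p = 0.5 you can never pass 2x no matter how many cores you add. The numbers here are only illustrative; the point is that "done right" mostly means keeping the serial fraction small.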

People have been programming stuff that runs on several CPUs for ages; it is not a new thing. Sure, it is more difficult than your average linear singlethreaded program, but it is not that difficult.

Btw, did you know that a 2.66GHz P4D can beat an FX-57 in Quake 4 in CPU limited scenarios? I'm sure you know that engine is largely programmed by one of the people you quoted.

Another game with SMP support is Quake3 but unfortunately that one isn't programmed that well. When I turned SMP support on with my old P4D I lost around 20% performance and had to settle with "only" ~600FPS. In the old times the link between CPU's were much bigger compared to the speed of CPU's than it is today, especially with P4D's.

aguia
"For example in mobile processors I think for the most people (and the uses that mobiles computers normally gets) it would be better if their offerings used single core processors that used 20W or less than their current dual core offerings using 30W (or more)."

Didn't the Core Duo have the same power requirements as its single core predecessor? Basically that would mean that people get "free" performance without a loss in battery time.

Also, there will be single core mobile CPUs in a while; for some reason Intel just didn't bring them to market together with its dual cores.


red
"aguia, I am aware that not all things can be processed in parellel, and even if they can, still can't scale well."

The interesting thing is that I've asked a lot of people, including fellow programmers, for algorithms that you can't run in parallel. Over the last couple of years I haven't gotten a single answer.

red
"And that is why single threaded apps are still important."

Sure they are important, but I personally wouldn't trade a dual core for a single core at any price. The main reason, besides increased performance, is a massively better user experience. Do you remember what happened when some program decided to take 100% CPU time on a single core CPU? Often that machine became uncontrollable since the UI was just way too unresponsive. I've never seen anything like that since the days of the HT capable P4, and with the two dual cores I've had, things have improved even further.

As for benchmarking single threaded programs on dual cores, I wouldn't say that is a wrong thing to do. We do have to run them once in a while, and currently shared cache CPUs have the better architecture for such things. If anything, it should be noted that when loading all cores on a shared cache machine you might not get as good performance as when running only a single program.

Scientia from AMDZone said...

kalle

I think the best description of parallelization with code is QuickSort. Anyone who is familiar with QS knows that the algorithm is only efficient down to a certain size. Below that size you switch to BubbleSort. Or conversely, BubbleSort is efficient up to a certain size and then QuickSort becomes better. Parallel code is similar. For small scale, single threaded code will always be more efficient and then at some point the parallel code will be better.

I suppose if we got technical then we could use a three tiered approach where we had BubbleSort at the bottom, then ShellSort in the middle, and QuickSort at the top. Likewise code could be single threaded, dual threaded, or N-threaded. Dual threading is the current midpoint because there are dual core processors. This could move up to quad threaded once true quad cores become available.
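To make the threshold idea concrete, here is a minimal sketch (illustrative C++; the cutoff of 16 is an arbitrary assumption, and the small-array sort could just as well be a bubble sort) of a quicksort that hands small partitions to a simple quadratic sort:

#include <utility>
#include <vector>

const long CUTOFF = 16; // arbitrary; real implementations tune this per platform

// Simple quadratic sort used once a partition is small enough.
void small_sort(std::vector<int>& a, long lo, long hi)
{
    for (long i = lo + 1; i <= hi; ++i)
        for (long j = i; j > lo && a[j] < a[j - 1]; --j)
            std::swap(a[j], a[j - 1]);
}

// Quicksort over a[lo..hi] that switches strategy below CUTOFF elements.
void hybrid_sort(std::vector<int>& a, long lo, long hi)
{
    if (hi - lo + 1 <= CUTOFF) { small_sort(a, lo, hi); return; }

    int pivot = a[lo + (hi - lo) / 2];
    long i = lo, j = hi;
    while (i <= j) {                       // classic two-index partition
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) { std::swap(a[i], a[j]); ++i; --j; }
    }
    if (lo < j) hybrid_sort(a, lo, j);
    if (i < hi) hybrid_sort(a, i, hi);
}

A call like hybrid_sort(v, 0, (long)v.size() - 1) sorts the whole vector; the same "switch strategies below a certain size" shape is what a single threaded fallback inside parallel code looks like.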

hyc said...

Actually we switch to Insertion Sort below a particular size. IIRC unless your array size is only 1 or 2 elements, Bubble Sort always loses.

Database lookups using tree-oriented data structures cannot be meaningfully parallelized. You can run multiple lookups concurrently, but you cannot accelerate a single lookup by parallelizing it. There are quite a lot of problems in the commercial world that can only be coded sequentially; I'd say that most parallelizable problems only exist in scientific computing.
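To illustrate the distinction, here is a rough sketch (illustrative modern C++; the map and key types are made-up stand-ins): each individual lookup is still a sequential walk, but a batch of independent lookups can be spread across cores, which buys throughput rather than a faster single lookup:

#include <future>
#include <map>
#include <string>
#include <vector>

// Each individual find is a sequential O(log N) descent; what gets spread
// across cores is a *batch* of independent lookups. Concurrent read-only
// access to a const std::map is safe as long as nothing modifies it.
std::vector<bool> batch_lookup(const std::map<int, std::string>& index,
                               const std::vector<int>& keys)
{
    std::vector<std::future<bool>> jobs;
    for (int k : keys)
        jobs.push_back(std::async(std::launch::async,
                                  [&index, k] { return index.count(k) > 0; }));

    std::vector<bool> found;
    for (auto& j : jobs) found.push_back(j.get());
    return found;
}

A real system would use a fixed worker pool rather than one task per key, but the shape is the same.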

hyc said...

Sorry, forgot to state the main point: whether a problem can be parallelized is not the same as whether it can be multithreaded. The two are very different constraints, but both can benefit from multiple cores. It's the difference between SIMD and MIMD - a vector or array processor can accelerate SIMD work - that's parallelization. It cannot benefit MIMD work. Multiple cores can benefit both.

Aguia said...

If done wrong you can lose performance by going multithreaded. If done right you can get (almost) linear performance scaling with (almost) any number of cores.

But not with everything. If you look closely, besides encoding applications there are almost zero multithreaded applications in the mobile/desktop software market. On servers you see benefits because server software is normally doing more than one thing at a time (running several programs) or doing simultaneous repetitive tasks.

People have been programming stuff that runs on several CPU’s for ages, it is not a new thing. Sure, it is more difficult than your average linear singlethreaded program but it is not that difficult.

No? The multithreaded software that I know of normally consists of very simple programs (simple code). Besides the benchmarks and encoding applications I have just mentioned, I haven't seen real benefits yet.

Btw, did you know that 2.66GHz P4D can beat FX57 in Quake4 in CPU limited scenarios? I'm sure you know that engine is largely programmed by one of the people you quoted.

Then why did he whine about it? I'm sure I read somewhere that Quake4 was not his game (the Doom3 engine is his), and that it was further optimized by some other company (maybe it was Intel, but I'm not sure).

Didn't CoreDuo have the same power requirements as its single core predecessor? Basically that would mean that people get "free" performance without a loss in battery time.

No. The predecessor was manufactured at 90nm; the Core Duo is 65nm. Not exactly the same. And since Intel doesn't have a single core CPU designed at 65nm, we can't know that. A dual core CPU with one core disabled is not the same thing as a native single core CPU. The same goes for a double dual core not being the same thing as a native quad core. It works like the real thing, but it isn't the real thing.

I've never seen anything like that since the day of HT capable P4, with the two dualcores I've had things improved even further.

For just that, the second core could be a 486DX33.

As for benchmarking single threaded programs on dualcores I wouldn't say that is wrong thing to do. We do have to run them once in a while and currently shared cache CPU's have better architecture for such things. If anything it should be noted that when loading all cores on the shared cache machine you might not get as good performance than running only a single program.

Exactly.

J said...

Maybe they didn't "oversee" the review... But with TR's latest server review, where they had to fight with AMD for a Socket F machine, and the AT server review, where they referenced AMD for "assistance", I'd say they did more than throw in their 2 cents.

Motherboard issue: The AMD Opteron system, however, used the MSI K8N Master2-FAR. This choice of a motherboard for AMD shows either extreme incompetence or an outright attempt to cheat in Intel's favor.

I have nothing against THG
Heh..;)

If there had been quality multi-threaded benchmarks available from THG I would have used those.
Well then, I'd like to see a review with more "quality" benches (sorry, not a fan of WorldBench or picCOLOR).

bk said...

I have been programming for 20+ years. Everything from dBase, C, Delphi, and real time machine control. In my opinion there are not any programs that could not take advantage of multiple cores. Yes, it is a little more difficult to program this way, but once a pattern is established the extra effort is minimal.

For example, I am currently working on a very large database application that has hundreds of tables and hundreds of business objects with hundreds of thousands of lines of business rules. We have performance bottlenecks when particular objects are used that have many business rules to process. It would be very beneficial to run these business rules in multiple threads. Most business rules are data validation code that could be run in parallel.

I wish my current IDE compiler would use multiple threads to compile my 1.5 million lines of code.

I'm sure Microsoft Word could utilize a second thread for spelling and grammar checking. Maybe it already does.

Browsers could certainly take advantage of multiple threads, especially with JavaScript and AJAX becoming more popular.

Very few programs (except for scientific programs) spend much time processing one particular algorithm. A quick sort on 10,000 items already loaded in memory is done in the blink of an eye on today's processors. I haven't tried it, but you could probably sort a million items in under a second. My point is that most applications that run on a user's machine contain many different pieces of code that run in serial to perform a particular task. Many of these pieces of code could run in parallel. The programmer just needs to organize the code to run these pieces in multiple threads. Yes, it is more work, but it is certainly doable. I suspect it will take a few years before average applications start to utilize this power, but it is there for the taking.
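As a rough sketch of that validation pattern (illustrative modern C++; Record and Rule are made-up stand-ins, not from any real product): if each rule only reads the record, the rules can be fanned out with std::async and the failures collected at the end:

#include <functional>
#include <future>
#include <string>
#include <vector>

struct Record { /* fields of the business object */ };

// A validation rule reads the record and returns an error message, or "" if it passes.
using Rule = std::function<std::string(const Record&)>;

// Run every rule against the record in parallel and collect the failures.
// This only works if the rules are read-only (no shared state is modified).
std::vector<std::string> validate(const Record& r, const std::vector<Rule>& rules)
{
    std::vector<std::future<std::string>> jobs;
    for (const Rule& rule : rules)
        jobs.push_back(std::async(std::launch::async, rule, std::cref(r)));

    std::vector<std::string> errors;
    for (auto& j : jobs) {
        std::string msg = j.get();
        if (!msg.empty()) errors.push_back(msg);
    }
    return errors;
}

The "pattern established once" part is exactly this wrapper; individual rules don't need to know anything about threads.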

Ho Ho said...

scientia, funny that you brought up sorting. Here is something interesting for you to read: A Minicourse on Dynamic Multithreaded Algorithms
Fast Parallel Sorting under LogP: Experience with the CM-5

Also, did you know that there are sorting algorithms that can run on a GPU?
In short, it is possible to efficiently sort things in parallel.


scientia
"For small scale, single threaded code will always be more efficient and then at some point the parallel code will be better."

Yes, I know that there is no point in parallelizing things that run in just a few milliseconds*. There is too little gain in that. As you said yourself, you could parallelize things at a higher level to get more performance out of it.

*) Technically it is possible, it's just that thread creation overhead will kill the performance. That is mostly a problem with the OSes and libraries used. E.g. in tests, NPTL succeeded in starting 100,000 threads on an IA-32 machine in two seconds. In comparison, this test under a kernel without NPTL would have taken around 15 minutes. I dare you to do a similar test on Windows. You'd be lucky to get 10-30k threads running without the machine crashing; I won't even bother talking about speed :)
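For anyone who wants to get a feel for that overhead on their own machine, a crude timing sketch looks like this (illustrative modern C++; the count of 10,000 is an arbitrary assumption, and absolute numbers will vary wildly between OSes and threading libraries):

#include <chrono>
#include <iostream>
#include <thread>

// Crude measurement of how long it takes to create and join N short-lived
// threads, one after another. The per-thread cost is what kills very
// fine-grained parallelism.
int main()
{
    const int N = 10000; // arbitrary; adjust for your machine
    auto start = std::chrono::steady_clock::now();

    for (int i = 0; i < N; ++i) {
        std::thread t([] { /* do nothing */ });
        t.join();
    }

    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << N << " threads created and joined in "
              << elapsed.count() << " ms\n";
}

Thread pools exist precisely so that this cost is paid once rather than per work item.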


hyc
"Database lookups using tree-oriented data structures cannot be meaningfully parallelized."

Yes, without redesigning your application data model you can't parallelize these things. But as someone later said, it is possible to run several of those lookups in parallel to keep the CPUs busy. Compared to multithreading a single lookup you'd lose on latency but gain on throughput.


aguia
"If you look well, besides encoding applications there are almost near to 0 (zero) in the mobile/desktop software market"

What kind of programs do you think would benefit from parallelizing? An IM client, a web browser, a text editor? What parts of them are too slow, and why do you think it is impossible or too hard to parallelize those parts?


aguia
"Then why did he whine about?"

He is a whiner :)
He also whined about DX10 not being important and Vista being only a marketing trick, not to mention about almost all the consoles he has programmed for. Still, he works with all of those things. Nobody likes changes that (at first) make their lives harder, but in the end they just suck it up and move on.

aguia
"No. The predecessor was manufactured in 90nm the Core Duo is 65nm. Not exactly the same"

I was talking about power requirements, not processor technology. Of course, if you meant that Intel could have provided longer battery life with 65nm single cores, then of course you are right. My point was that going dual core didn't shorten battery life compared to the old single cores. Also, the new dual core ULV CPUs needed quite little juice compared to the old ones, IIRC.


aguia
"For just that the second core could be one 486DX33."

It could but for some reason they don't make x86 CPU's like that.


bk
"I have been programming for 20+ years. Everything from dBase, C, Delphi, and real time machine control. In my opinion there are not any programs that could not take advantage of multiple cores"

At last, another fellow programmer who has actually done these things personally and doesn't just repeat what he has heard or read :)


bk
"I wish my current IDE compiler would use multiple threads to compile my 1.5 million lines of code."

Well, that's probably a problem with your build scripts. Compiling scales excellently with added processors.

The exception is when you try to compile a single file, since (all) the compilers have been designed to be singlethreaded. To gain from multithreading there you would have to redesign them from scratch. It would be an awful lot of work to make a "parallel compiler", but it is certainly not impossible.



Here is another important point:
making a single threaded app use multiple CPUs is hard. Not because multithreaded programming is hard, but because you have to take the application apart and redesign most of it from scratch.

It would be a lot easier to design the application with parallel processing in mind from the start than to first write it with only a single thread in mind and later port it to multiple threads.

Unknown said...

I have the feeling that for the above reasons, we'll see a pretty huge explosion in multi-threaded applications a little while after quad core comes out (at least in development). While I can't comment on whether companies are being cowardly or lazy about switching to multi-threading, they will eventually start tearing things apart and rebuilding them. What will be interesting is when modular cores become mainstream, and applications begin to be optimized for that.

Unknown said...

Red, it would be really pointless not to use 64-bit Vista, as pretty much every component with updated drivers will have 64-bit drivers by then. If they don't, then you should probably buy another product. People complain about Creative, but that's just because they made the unfortunate choice of only buying Creative sound cards, despite the fact that other choices, which are simply harder to find, exist.

Also, all 64bit processors and systems gain from moving to 64bit code, the question is simply by how much, so again, it would be pointless not to use it.

As such, I'll say again that 32 bit doesn't matter. You don't buy a machine based only on how it performs now, you buy it based on how well it will perform in 3-4 months, so that you don't have to upgrade as soon.

sharikouisallwaysright said...

Vista 32-bit should never have been released.
It only slows down the transition to a 64-bit OS and lets programs stay limited to a maximum of 2 GB of RAM.
Not to speak of the drivers, which are surely optimized for use with 32-bit Vista.
Bad decision, MS!

J said...

http://www.theinquirer.net/default.aspx?article=36938
Again, incompatibilities should fizzle out by the time Vista 2 comes around, and that will be more than just 3-4 months out. Your other point, that 64 bit is faster... Sorry, I don't see it.

Ho Ho said...

greg
"Red, that would be really pointless to not use 64bit vista, as pretty much every component with updated drivers will have 64bit drivers by then"

Did I miss something, or will there be a miracle and all the programs and drivers will have 64-bit versions in a couple of weeks?

greg
"Also, all 64bit processors and systems gain from moving to 64bit code"

Clearly, you have no idea what you are talking about. Did you know there are programs that lose performance significantly when moving to 64-bit?

sharikouisallwaysright
"Vista 32 Bit should have never be relaesed.
It only slows down the transition to a 64-Bit-OS"

MS might be suicidal at times but it isn't stupid. Without a 32-bit version it would lock itself out of the market for years.

sharikouisallwaysright
"let the programms stay for use with max. 2 Gig Ram."

Did you know that with Linux I can use up to 64 gigs of RAM in 32-bit mode with anything since the Pentium Pro? It should also be possible on some versions of Windows; it's just that MS would rather force you to "upgrade" to Vista.

Pop Catalin Sever said...

"I have been programming for 20+ years. Everything from dBase, C, Delphi, and real time machine control. In my opinion there are not any programs that could not take advantage of multiple cores. Yes, it is a little more difficult to program this way, but once a pattern is established the extra effort is minimal."

It's not just a problem of making parallel algorithms or, say, splitting a problem into smaller bits. That is how it looks from the surface, but there is a whole lot more to it. Threads in general don't see each other. No thread sees what the other one is doing; they blindly exchange data at specific points, and to do that you need volatile memory reads and writes (which use special CPU instructions), locks, semaphores, mutexes, signaling events, etc. Locks, the only generally available mechanism to synchronize thread access, have global state, which means you can't make any guarantees about them, and there is no enforcement mechanism to make you take a lock before accessing shared memory. This might not be a problem for a single application that has all the resources under its sole control, but for composite applications this is actually a very serious issue. Add to all of this subtle bugs that are sometimes impossible to debug (Vista is the first OS to have deadlock detection mechanisms for threads), lots of legacy non-multithreaded libraries, and performance issues that can arise out of nowhere, and the picture isn't so bright anymore...
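To show the shape of that enforcement gap, here is a minimal illustrative C++ fragment (the names are made up): nothing stops the second function from compiling even though it skips the lock:

#include <mutex>
#include <vector>

std::vector<int> shared_queue;   // state shared between threads
std::mutex       queue_mutex;    // the lock that is *supposed* to guard it

void safe_push(int value)
{
    std::lock_guard<std::mutex> guard(queue_mutex); // correct: lock, then touch shared data
    shared_queue.push_back(value);
}

void unsafe_push(int value)
{
    // Compiles and usually "works", but races with safe_push: nothing in the
    // language ties shared_queue to queue_mutex, so forgetting the lock is easy.
    shared_queue.push_back(value);
}

The discipline of "always take this lock before touching that data" lives entirely in the programmer's head, which is why composite applications get this wrong so often.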

"For example I currently am working on a very large database application that has hundreds of tables and hundreds of business objects with hundreds of thousands of lines of business rules. We have bottle necks in performance when particular objects are used that have many business rules to process. It would be very beneficial to run these business rules in multiple threads. Most business rules are data validation code that could be run in parallel."

It would definitely help, but it isn't going to be easy... I mean, can you prove that all of the rules are pure functions (that they alter no global state)? This is where things start to get complicated...

"I wish my current IDE compiler would use multiple threads to compile my 1.5 million lines of code."

My IDE uses multiple cores :). Also, for projects of this size, build farms can be employed for the build process.

"I'm sure Microsoft Word could utilize a second thread for spelling and grammer checking. Maybe it already does."

The current version of Word uses additional threads from a thread pool, besides the main thread, to do background tasks, which may include spelling or grammar checking.

"Browsers could certainly take advantage of multiple threads. Especially with JavaScript and AJAX becomming more popular."

Browsers DO take advantage of multiple threads and use more threads to render a page, and also use separate threads to run various plugins like Flash for example. My Firefox has 15 threads open this moment.

"Very few programs (except for scientific programs) spend much time processing one particular algorithm. A quick sort on 10,000 items already loaded in memory is done in a blink of an eye on todays processors. I haven't tried it but you could probably sort a million items in under a second."

Well, my K6-III 450 MHz was able to sort 1,000,000 numbers in under a second using quicksort :). I'm 100% sure of this because at that time (high school) I was solving a lot of problems (algorithms) and was timing them...

"My point is that most applications that run on a users machine contain many different pieces of code that run in serial to perform a particular task. Many of these pieces of code could run in parallel. The programmer just needs to organize the code to run these pieces in multiple threads. Yes, it is more work, but it is certainly doable. I suspect it will take a few years before average applications start to utilize this power, but it is there for the taking."

Yes, it's doable, but the implications can sometimes be far-reaching, and it certainly isn't as simple as you make it sound.

Ho Ho said...

pop catalin sever
"Locks, the only available mechanism available to synchronize thread access"

I assume you haven't heard of lock-free and wait-free algorithms. Also, there is always shared memory for communicating between different processes, and threads of one process can see each other's memory anyway.
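As a minimal illustration of the compare-and-swap style those are built on (illustrative modern C++ atomics, sketch only), here is a counter that retries instead of ever blocking on a lock:

#include <atomic>

std::atomic<long> hits{0};

// Lock-free increment: read the current value, try to publish current + 1,
// and retry if another thread got there first. No thread ever blocks on a lock.
void record_hit()
{
    long current = hits.load();
    while (!hits.compare_exchange_weak(current, current + 1)) {
        // compare_exchange_weak reloaded 'current' with the observed value; just retry.
    }
}

A plain fetch_add would do the same job here; the retry loop is shown because it is the building block that more general lock-free structures are made from.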

pop catalin sever
"Yes it's doable but the implications sometime can be very high and it certainly isn't as simple as you said it."

Simple? Of course not. Doable? Yes, of course. Meaningful? Not always.

Not every piece of code needs to use every available processing unit to get its job done in a reasonable time. Platforms that really need every bit of processing power (embedded stuff) don't generally have several distinct CPUs, at least not yet.

Jeff Graw said...

Regarding load testing:

Seems like there are two opinions here, the first being that since dual core processors are being tested, every test should be properly loaded across all cores. The second being that since most software is still single threaded, it is more realistic to test with single threaded software. Both of these viewpoints are flawed.

You can't just test single threaded software, as that artificially inflates the C2D scores due to shared cache. Likewise, you can't just test completely loaded as this deflates C2D scores and punishes the shared cache.

Scientia does make a good point that most reviews still almost strictly use single threaded tests, but he also claims that he would like to see every test fully load each core. This is not right. That would be like saying "Since car X and car Y both have 5-speed transmissions, we're only going to test them in 5th gear." No, you need to do a broad gradient of testing across the product's capabilities. This means, in the case of dual cores, you should test single threaded, fully loaded, and as much in between as possible to get the best understanding of the product.

J said...

Scientia does make a good point that most reviews still almost strictly use single threaded tests

Most? Can anyone point me to a review that almost strictly uses single threaded tests? I wouldn't say that image editing, file compression, video editing, 3D rendering, multimedia encoding, multitasking, etc. are single threaded, judging by the gains from initial reviews of dual core CPU's. How threaded do these reviews have to be?

enumae said...

To all who don't like single threaded benchmarks run on Dual cores...

The advantage of Core 2 Duo's cache in single threaded tests should be, and is, irrelevant.

This is Intel's design, and based on this design and the current software available (primarily single threaded), it has an advantage over K8 in single threaded and multi threaded applications.

For any of this debate to have any merit and/or credibility, please show links regarding your dislike of dual core processors in single threaded tests prior to Core 2 Duo; otherwise it looks like an anti-Intel debate.

Thanks.

Pop Catalin Sever said...

"I assume you haven't heard of Lock-free and wait-free algorithms. Also there is always shared memory for communicating between different processes and threads of one process can see each others memory anyway."

Yes, I've heard of them, and I actually use CompareExchange and atomic Increment/Decrement whenever I have the chance, to avoid locking. But this is all you get when writing generic lock-free multithreading algorithms; otherwise you'll have to get into very architecture specific details, which isn't very practical or doable. Also, even for lock-free algorithms there are no enforcement mechanisms to guarantee correct memory access; they have the same issues as lock based multithreaded algorithms concerning safety and/or reliability.

There isn't always shared memory for communicating between processes. And when there is, synchronization mechanisms are usually employed.

Simply writing a basic lock mechanism for threads will get you into 30 or more issues and special case scenarios that must be handled on multicore machines. That's why it's a bad idea to use your own locks or custom multithreading primitives.

sharikouisallwaysright said...

"MS might be suicidal at times but it isn't stupid. Without 32bit version it would lock itself out for years."

Win XP is running fine and so far there is no need to upgrade to another 32-bit OS!

OK, I agree, MS will sell Vista 32 like Win XP before it, because it is bundled with every new PC and overhyped.

Christian H. said...

Intel is not bleeding and you certainly didn't get that from my blog. Thekhalif is incorrect. Intel shrank in 2006 but is now growing again.

I have to say that I am puzzled by your 1.5 generation ahead statement. Once K8L is released Intel is 0 generations ahead. What would put them ahead 1.5 generations? Maybe you want to rethink that.


No, I'm not incorrect. When your profits shrink by up to 50%, that means you are bleeding. With the price structure Intel currently has, and the fact that 45nm will only segment their inventory even more (while not providing immediate ROI), the growth you speak of is industry growth. Yearly growth in the market is around 10% (approximately their increase in revenue).


AMD is still gaining share because Lenovo ramped up in Q4 in the Chinese market. But share is also subjective as the industry grows. You can gain share, but it may in fact be relative to the gains of the competition if the level of orders remains the same relative to growth.

Profits will be down for at least 2 more quarters. AMD, on the other hand, will get a MAJOR restructuring of their costs by ramping both FAB36 and FAB7, which will give them at least 3x more chips per wafer start.

They have also released multiple platforms which will appeal to the HTPC crowd. This will allow them to keep the "innovative" edge they have established.

HTPC designs can even spill into corporate because of energy-efficiency which AMD is also known for.
And even though QFX is supposedly a bad platform, the parts keep selling out.

Christian H. said...

If you want to discuss money then limit your numbers to cpu revenue only and gross margin only. That is, only the actual money earned by cpu sales after the cost of sales.

Yet another area where Intel is bleeding. Otellini reports that they don't expect to get back above 50% margins before Q3, which confirms my opinion that these lost profits and operating income will CONTINUE.

Unknown said...

Very good points khalif, and they do dispel quite a bit of the "AMD is bleeding" argument. However, they don't point to an "Intel is bleeding" conclusion. Intel could be a lot worse off than it is, and their corporate restructuring should help them remain viable. Yes, 32-bit Vista will continue to sell to people who don't care that much about speed.

Please point out which programs slow down in 64-bit, and make sure they're ones that are actually relevant to the gamer/server/media/scientific crowd, as those are the only people who actually need to care about performance. Yes, 64-bit drivers will explode into existence after Vista is launched, because that's what people will demand, and hardware makers will thus be forced to supply them.

I know not everyone will be getting Vista 64 bit, but 64 bit as a feature has been hyped too much for those who care to miss out on it.

J said...

http://www.lostcircuits.com/cpu/amd_quadfx/17.shtml
At this point, it seems that the probably best operating system for the Quad FX platform may be Windows XP-64, which unfortunately is already at the brink of obsolescence. Vista-64 is fine and dandy but if only half of the features of the motherboard can be used, i.e., no support for SLI or nV RAID, then that's not a real option either.
I'll steer clear of the incompatibilities. Greg, can you show me a review using RW apps that shows off the power of 64 bit? I'm having a hard time seeing what you are seeing, besides support for excessive amounts of memory.
http://blogs.adobe.com/scottbyer/2006/12/64_bitswhen.html
I reiterate. 64 bit is not for the mainstream.

Christian H. said...

http://www.lostcircuits.com/cpu/amd_quadfx/17.shtml
At this point, it seems that the probably best operating system for the Quad FX platform may be Windows XP-64, which unfortunately is already at the brink of obsolescence. Vista-64 is fine and dandy but if only half of the features of the motherboard can be used, i.e., no support for SLI or nV RAID, then that's not a real option either.
I'll steer clear of the incompatibilities. Greg, can you show me a review using RW apps that shows off the power of 64 bit? I'm having a hard time seeing what you are seeing, besides support for excessive amounts of memory.
http://blogs.adobe.com/scottbyer/2006/12/64_bitswhen.html
I reiterate. 64 bit is not for the mainstream.



You obviously have never used XP X64. I think it is at least 3X more responsive than XP.

The biggest advantage of X64 is support for more than 3.25GB RAM. Vista will make it apparent that you need to address at least 4GB. Any motherboard nowadays supports 8GB.

Pop Catalin Sever said...

"http://blogs.adobe.com/scottbyer/2006/12/64_bitswhen.html
I reiterate. 64 bit is not for the mainstream."

Yes it is! And the best and most direct benefit of all is the fact that all 64-bit processors have a minimum instruction set of SSE2, so you can compile everything from the beginning with the G7 compiler option (P4 or above), and for applications that have high data density and are computationally intensive this usually means around 30% more performance. You can say that 32-bit applications can also be optimized, which is true, and games usually are, but being able to do this with every arbitrary OS library or application out there can make a little difference.
Also the fact that there are more registers, and that they are twice as large, means that the code can be optimized more easily compared to 32-bit, where register pressure is higher.
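As a small illustration of what a guaranteed SSE2 baseline buys (sketch only; the function is a made-up example, n is assumed to be even, and on 32-bit you would need an -msse2 or /arch:SSE2 style switch, while on x86-64 SSE2 is always available), here is a loop that adds two arrays two doubles at a time:

#include <emmintrin.h>  // SSE2 intrinsics, guaranteed present on any x86-64 CPU

// c[i] = a[i] + b[i], processed two doubles per instruction.
void add_arrays(const double* a, const double* b, double* c, int n)
{
    for (int i = 0; i < n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);   // load two doubles (unaligned-safe)
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(c + i, _mm_add_pd(va, vb));
    }
}

On a 32-bit build targeting plain x86 the compiler has to assume SSE2 might be missing; with the 64-bit baseline it never does.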

There are some downsides to 64 bit like double pointer sizes which make applications larger and caches less effective, but this is entirely offset by the above mentioned facts and many others.

And about the financial situation of AMD and Intel will someone take a look at a direct comparison here:

http://moneycentral.msn.com/investor/research/wizards/srwcompare.asp?company2=INTC&Symbol=AMD&nobuttons=1

... because, not being a specialist in finances, I don't know if I'm getting the correct picture ...
what does "Income Growth 483.50%" (over the past 12 months) for AMD actually mean?

Scientia from AMDZone said...

enumae
For any of this debate to have any merit and/or credibility, please show links regarding your dislike of dual core processors in single threaded tests prior to Core 2 Duo; otherwise it looks like an anti-Intel debate.


My objections predate dual cores. I was discussing this back when Intel first introduced HyperThreading. At that time, none of the review sites were sophisticated enough to do proper testing. Nevertheless, there seemed to be plenty of benchmarks which showed improvement with hyperthreading.

jeff graw
Scientia does make a good point that most reviews still almost strictly use single threaded tests, but also claims that he would like to see every test fully load each core.


I've never said that all testing should load all cores. However, since only the loaded testing justifies buying a dual or higher processor it can't be less than half of the testing. There is nothing wrong with also showing single threaded benches but this cannot be the main thrust of the testing.

The earnings numbers won't be released until Jan 23rd by AMD so I can't go into any more detail on this. I'll write a detailed analysis when this is available.

pop
K6III in high school? Well, when I was in college the second time the math lab had brand new 386DX40's. I can think of several different methods of synchronizing threads, exchanging information, and avoiding deadlocking.

bk
I was doing assembler and real time control programming 25 years ago.

hyc
Database lookups using tree-oriented data structures cannot be meaningfully parallelized. You can run multiple lookups concurrently, but you cannot accelerate a single lookup by parallelizing it.


I'm familiar with all of the sort algorithms including HeapSort and MergeSort which were not mentioned. Insertion Sort is better? I wouldn't bet money on that.

So, trees cannot be parallelized? This is definitely not true. I can easily speed up a depth first or breadth first search with parallel code. This is even more the case if you have greater than 2 dimensional trees.
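A rough sketch of what I mean (illustrative C++; the node layout and the split depth are made-up assumptions): the top levels of an unordered tree are searched on separate tasks, and below a small depth each task continues sequentially so task overhead stays bounded:

#include <future>

struct Node {
    int   value;
    Node* left;
    Node* right;
};

// Depth-first search for 'target' in an unordered binary tree. The top
// 'split_depth' levels are split across async tasks; below that each task
// searches sequentially.
bool contains(const Node* n, int target, int split_depth)
{
    if (n == nullptr) return false;
    if (n->value == target) return true;
    if (split_depth <= 0)
        return contains(n->left, target, 0) || contains(n->right, target, 0);

    std::future<bool> left = std::async(std::launch::async, contains,
                                        n->left, target, split_depth - 1);
    bool in_right = contains(n->right, target, split_depth - 1);
    return left.get() || in_right;
}

// usage: bool hit = contains(root, 42, 2);

The depth cutoff keeps the number of spawned tasks small while still splitting the bulk of the work.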

Well, again the 64-bit argument stumbles and falls on its face as someone tries to push benchmarks over real code. It is possible to recompile a given benchmark as 64-bit and have it run slower. However, on a real system this would be incredibly unlikely since other code sections would be sharing the registers. Likewise, no one bothers to see what happens on C2D when real system code causes the large cache to fragment. Since when was dumbing down the testing considered professional?

Red

I'll say this again and you probably still won't get it. I have nothing against THG. They used to do quality testing and they used to be more professional. If they were again I would be happy to use their results. Your standards are obviously lower than mine but I'm not sure how that would make you more objective.

Ho Ho said...

thekhalif
"You obviously have never used XP X64. I think it is at least 3X more responsive than XP."

Being 64-bit alone cannot be the reason. If anything it should be less responsive, since pointers take twice as much memory to move around. For how long had those OSes been installed on your PC when you compared them, and on what HDDs were they installed? For how long had those HDDs gone (un)defragmented?


thekhalif
"The biggest advantage of X64 is support for more than 3.25GB RAM."

Any CPU since Pentium Pro can address up to 64GiB of RAM in 32bit.

thekhalif
"Vista will make it apparent that you will need to address at least 4GB."

Are you saying that it sucks so much memory that having <4G of it is bad?


pop catalin sever
"so you can from beginning compile everything with G7 compiler option"

Just a little remark: MSVC C++ compilers plain suck at generating fast code. Switching to GCC can easily bring you a 30% speed increase, especially with SIMD.

Also, all current compilers are bad at generating SIMD code (aka autovectorizing). They are way behind a programmer who knows how to use intrinsics.

pop catalin sever
"You can say that 32 bit applications can be optimized also, which is true and games usually are, but to be able to do this with every arbitrary os library or application out there can make a little difference."

I agree, it makes a little difference. I'd say it is easier to find things you can multithread in a program than it is to find things you can SIMDify.

pop catalin sever
"There are some downsides to 64 bit like double pointer sizes which make applications larger and caches less effective, but this is entirely offset by the above mentioned facts and many others."

Depends on the application and the workload.
Are 64-bit Binaries Really Slower than 32-bit Binaries?
Xeon vs Opteron Database Benchmarking MSSQL and Oracle

For me personally the only good reason to move to 64-bit is that GCC is around 10% faster in 64-bit. Then again, most applications are either just as fast as in 32-bit or a bit slower, and they all take up more disk space and memory, so I might not move to 64-bit that fast.

Aguia said...

Greg, can you show me a review using RW apps that shows off the power of 64 bit?

Techreport

Pcstats

After seeing this old review I guess we may get more performance out of 64-bit than out of dual core processors.
But since most of those applications are just benchmarks, I could be wrong.

And again, as with multi core, the encoding applications seem to like 64-bit as well. Does anyone still have any doubt about what I said in the above posts?

By the way, wasn't the Athlon 64's lead over the P4 bigger than Conroe's lead over the Athlon 64?

enumae said...

Scientia...
"My objections predate dual cores..."

Thanks.

Jeff Graw said...

Scientia

I've never said that all testing should load all cores. However, since only the loaded testing justifies buying a dual or higher processor it can't be less than half of the testing. There is nothing wrong with also showing single threaded benches but this cannot be the main thrust of the testing.

Ok. You may want to be careful with your wording in the future then. I got the distinct impression that you thought any single threaded test is worthless for testing dual cores.

J said...

However, since only the loaded testing justifies buying a dual or higher processor it can't be less than half of the testing. There is nothing wrong with also showing single threaded benches but this cannot be the main thrust of the testing.

So I showed how THG uses current, threaded apps, while TR uses a 4+ year old synthetic suite and a few 3D apps thrown in.
http://www.tomshardware.com/2006/07/14/core2_duo_knocks_out_athlon_64/
And is that the review where single threaded tests are the "main" thrust of the testing? Care to give examples of how THG is any more single threaded than others?

J said...

TR:
All of our gaming tests showed very little performance delta between WinXP and WinXP x64, and the same was generally true for other apps.

Aguia, the only encoder I see is DivX, and that is an irrelevant codec, so the gains are pointless to discuss anyhow. Even if it were another codec, I don't think saving 2 seconds off of a 10 second bench justifies subjecting oneself to the incompatibilities.

Unknown said...

Red, you realize 2/10=20% right?

Ya, I noticed the 4th quarter results kinda show scientia was right on the money.

Aguia said...

Scientia, I'm starting to believe you may be right.

It seems that too often Intel gets the benefit of the doubt and AMD gets the scepticism.

TH title:
AMD loses $574 million in Q4

J said...

20% in an irrelevant bench with an OS with constant incompatibilities. No thanks. Again, a RW review?

A while ago: From all of these disadvantages it is clear that Intel is not playing its own game.
Now: Intel is doing better now.

What exactly was on the money btw? And still waiting for Scientia to justify THG being more single threaded than others.

Unknown said...

I've been reading through the dozens of articles that are on the net about AMD's Q4 postings, and I've concluded a few things.

1: No one likes change, because it costs them. This is why analysts are "angry" which seems pretty retarded since they are supposed to analyze, not empathize.

2: AMD is now trying to transition to a more stable business model, but to do so it has to, quite obviously, maintain a period of very large spending with less income. If you look at a lot of companies' histories, though, this tends to really pay off (despite the fact that it pisses everyone off until it's done).

3: AMD saw Core coming a ways back, and decided that it couldn't reliably develop a new chip in time, and thus felt its ability to continue to compete in a stable manner was threatened too much to take the path of slow and easy. This is why they're buying new fabs like crazy, and why they're acquiring the world's second largest discrete graphics card company.

4: AMD has decided that as long as they're going to be aggressive, they're going to be very aggressive, and force the market to readjust painfully to them and Intel, in order to counter the losses they are currently sustaining. We see this in the LCD industry right now, with the glut of LCD monitors driving prices below manufacturing cost, which will soon force many of the lower-quality vendors out of the market. By creating, with their new fabs and Intel's new fabs, the ability to flood the market, AMD is potentially hoping it can make enough floor space sit unused at Intel that Intel's operating costs increase, and that with Intel's new restructuring, Intel will attempt to trim some of the fat, and with it, its ability to control the market the way it once did. They may even force Intel to readjust the way it treats fabs, as disposable items. AMD will obviously continue to see profits lower than those of early 2006 if this is indeed their strategy, but they have fared far worse than that before and survived to tell the tale. What is important is that Intel will be forced to adjust to a new situation that AMD arrived already adjusted to. Adjustments of any sort, no matter how intangible they may be, are costly in the business world, as we can all see from little spats like Home Depot's and HP's managerial drama.

Obviously I leave this to you guys to pick apart, as I'm in no way trained enough to make perfectly accurate conjecture.

enumae said...

Before I say anything, I would like your opinions of SiSoft Sandra Mandelbrot.

Does it show any link to the computational power of processors?

Is it valid as a source of comparison between AMD and Intel?

Now the rest is up to you, based on your answers to the previous questions...

I was looking at this article, in which Randy Allen, AMD's corporate vice president for server and workstation products, claims that in floating point calculations Barcelona will be 3.6x faster than an equally clocked Opteron.

Looking at the numbers, and yes I understand I am equating Server numbers to Desktop parts for a general idea and not factoring platforms, but I want to know if any of you think this is going to be enough to hold Intel in check for the second half of the year.

Looking at Tech Report's article on Intel's Core 2 Extreme QX6700 processor, and at page 14, these are the numbers for my reference.

Scientia, and I believe some of you other readers here like TR, hence my choice in reviews.

Now, if you read the article featuring Randy Allen's quotes, there is a statement about the performance coming at equal clocks compared to Opterons, so I will use Intel's QX6700 (2.66 GHz) vs AMD's Athlon 64 X2 5000+ (2.60GHz) for the following.

SiSoft Sandra Multimedia Floating Point x8

AMD = 60,046 Mandelbrot iterations/second

38,842 * 3.6 = 228,174

Intel = 212,862 Mandelbrot iterations/second

212,862 * 100 / 228,174 ≈ 93%, i.e. about a 7% advantage for AMD on floating point.

Now, I am not sure if there is any connection between this test and the real world, and any details explaining that would be great.

If there is a direct connection, then AMD could be in trouble if the rumors about Intel's 45nm power and clock speed increases are true.

Your opinions would help me understand this more.

Thanks

Unknown said...

Ya, but that's just floating point. Integer operations matter much more to most users, and they haven't even mentioned the performance increase for those yet.

Aguia said...

enumae, maybe you should have looked at the values in those tests and the processors that got them.

If you see what I see, a Pentium XE 965 in front of an AMD FX-62... and the 950 beating the whole X2 line... I must say it is a very unreliable test.

And also according to TR
The one of interest to us is the "multimedia" benchmark, intended to show off the benefits of "multimedia" extensions like MMX and SSE/2

It’s not an FPU benchmark.

One more proof that those synthetic benchmarks are complete flops.

Ho Ho said...

aguia
"One more proof those synthetic benchmarks are complete flops."

Does anyone have any good benchmarks that test server applications? E.g. some J2EE stuff, a database, or something similar. The FPU is almost never needed in those things.

Aguia said...

enumae just to finish this.

This example:
Sharky Extreme

Sisoft Sandra 2007 FPU benchmark:
AMD 16035 * 3.6 = 57726
Intel 33802

Intel would reach only about 58% of AMD's projected floating point score.


Everest Ultimate Edition 2006 FPU:
AMD 6938 * 3.6 = 24976
Intel 15074

Intel would reach only about 60% of AMD's projected floating point score.

I have heard AMD talking about a 70% performance improvement, but that is very easy to say. If they mean compared to the original K7 design, that's another story.

Aguia said...


SharkyExtreme working link

enumae said...

Thank you Aguia, that helps me understand the relation to the real world, and how much synthetic tests may distort that relation.

hyc said...


Database lookups using tree-oriented data structures cannot be meaningfully parallelized. You can run multiple lookups concurrently, but you cannot accelerate a single lookup by parallelizing it.

I'm familiar with all of the sort algorithms including HeapSort and MergeSort which were not mentioned. Insertion Sort is better? I wouldn't bet money on that.

So, trees cannot be parallelized? This is definitely not true. I can easily speed up a depth first or breadth first search with parallel code. This is even more the case if you have greater than 2 dimensional trees.

For small lists Insertion sort has the lowest linear factor, etc.
http://en.wikipedia.org/wiki/Insertion_sort

For a binary search tree there are only log(N) comparisons to find a single node. Note that I said lookup, not traversal. Breadth-first and depth-first are traversal/scanning patterns for visiting every node in the tree, but they are not relevant for lookups.

If you have an unsorted tree where you must visit every node to locate an item, then sure, searching it in parallel is faster than a single search, but when lookups are more frequent than changes, just keeping the tree sorted and balanced wins over any parallel lookup.
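For reference, the insertion sort in question is only a few lines (illustrative C++): each new element is slid left into the already-sorted prefix with shifts rather than pairwise swaps, which is where the low constant factor and the O(n) behavior on nearly-sorted input come from:

#include <cstddef>
#include <vector>

// Classic insertion sort: grow a sorted prefix one element at a time.
// On an already-sorted array the inner loop never runs, so the cost is O(n);
// on random input it is O(n^2), which is why it only wins for small arrays.
void insertion_sort(std::vector<int>& a)
{
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];   // shift larger elements right, no pairwise swaps
            --j;
        }
        a[j] = key;
    }
}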

Pop Catalin Sever said...

"It have been a long time that we don't see Scientia.

I guess he's bussy writing another article regarding AMD's fourth quarter results."

There are some silly myths circulating around about some strange fellows who are supposed to have a real life outside the internet ... but I don't really believe that ...

Scientia from AMDZone said...

hyc
For small lists Insertion sort has the lowest linear factor, etc.
http://en.wikipedia.org/wiki/Insertion_sort


You need to read that again. For a completely sorted array Insertion Sort runs in O(n) time and so does Bubble Sort. For a random array, Insertion Sort runs in O(n^2) time and so does Bubble Sort. The main advantage of Insertion Sort is that it works well with linked lists. However, for array based sorting, Insertion Sort is not necessarily better than Bubble Sort and can be slower. Generally either is good for a small array, but one or the other could be faster depending on the system and compiler.

For a binary search tree there are only log(N) comparisons to find a single node.

A binary search of any kind, whether on a tree or an array, is already the theoretical minimum effort and therefore cannot be sped up. I thought this was obvious.

If you have an unsorted tree where you must visit every node to locate an item,

Obviously, it is possible to have one element in a binary tree sorted into perfect order. However, it is not unusual at all to have associated data which is not sorted into any order and this requires a full search unless every element is also part of its own sorted and linked tree.