Friday, December 15, 2006

Intel Versus AMD: The Truth Unfolds

It has not been easy to find out the real story of how Intel hardware compares with AMD hardware. The reviews from places like Anandtech and Toms Hardware Guide have been so sloppy and unprofessional that no useful information could be obtained. However, after a second review at Tech Report there is finally enough accumulated data to draw some conclusions.

Often when I mention the horrible state of hardware testing today I get some flak from Intel fans who really want to believe that any published test results that show Intel favorably must be true. A number of people have suggested that I am somehow denying reality by not embracing these results. Of course, these people don't understand how I think at all. Poor testing doesn't mean that AMD is better; it just means that you don't know. With proper testing, the results could show that Intel is even better than what was shown with poor testing.

I suppose since I'm now talking about testing done at Tech Report some must be wondering if I am endorsing Tech Report as a good fact source. Well, not exactly. The problem is that it has taken two reviews (and sometimes a third) to get the facts at Tech Report. If the testing had been truly professional then we would have been able to get what we needed from just one review. But this is not the case; it is necessary to look at two or more reviews to find out how reliable the tests themselves are. What this means is that some of the data published by Tech Report does not fully show what is happening with the hardware and therefore this data can be misleading (and the author's comments can be incorrect as well). However, by using more than one set of tests there is finally enough information to draw some conclusions. I suppose this is at least better than most other review sites where the testing still shows essentially nothing even with two reviews in hand.

One of the issues that keeps getting tossed out is the idea that Intel processors now draw a lot less power than AMD's. This seems to be due in part to a desire to throw off the very bad reputation that the Prescott-cored processors had. Yet, there is no indication of any big advantage for Intel. The highest clocked chips from both companies are rated at 120 watts and the second tier from both companies runs about 80 watts. You have to add a bit to the Intel numbers because their TDP isn't a maximum like AMD's, and you also have to add the power draw for the northbridge, which is included in the AMD numbers. However, Intel's C2D processors have much better power management than AMD's current Revision F and 65nm Revision G processors, so this tends to even out. The sole advantage that Intel currently has in any class is with its quad-core processors, and this is only because AMD doesn't have a quad core out yet. You could however argue that if you rate processors by SSE performance per watt then Intel is better. This is true if you use SSE most of the time such as for heavy duty rendering or scientific or engineering calculations. But, for most areas Intel and AMD are about equal. This shouldn't change until late 2007 when Intel releases 45nm chips.

The first benchmark is Cinebench which renders a graphic scene. Surprisingly, Intel and AMD are almost dead even on this benchmark. This is not due to AMD's greater memory bandwidth because this test is not memory intensive. I would have expected Intel to be faster but this is not the case. At the same clock speed the two are tied. This benchmark also does not scale very well; perhaps the code will be better by the time AMD has its own quads out.

The next is Pov-Ray and once again the results are very close; Intel trails by about 4% at the same clock. Either Pov-Ray does not make use of SSE or AMD is gaining more from the 64 bit code. This application is not memory intensive but it does show good scaling even with 8 threads. A dual socket Clovertown Xeon 5355 system should deliver about the same performance as a quad Opteron 8218 system and a single socket Kentsfield QX6700 should be about the same as a dual socket Opteron 2218 system.

With 3DS Max scanline rendering there are two factors. First of all, for a comparable system, Intel is 25% faster at the same clock than AMD. This is probably due to better SSE. Secondly, this code does not scale very well. Intel shows a 22% drop-off when scaling from 2 threads to 4 while AMD shows a 29% drop-off. This looks suspiciously like poor NUMA support. Finally, in contrast to the scanline rendering, 3DS Max's ray tracing code is so poor for multi-threading that it isn't useful at all for comparison beyond 1 core.

In the Quad FX review the Valve Source engine particle simulation numbers are all over the place. I've tried to see if anything can be gleaned from this information but unfortunately there are no matching numbers in the Clovertown review. This is a good example of a benchmark that less knowledgeable fans might point to as an Intel win, but these numbers are far too erratic to tell anything. The Valve VRAD map build, however, shows Intel 26% faster at the same clock. There really isn't any way to tell whether this is due to cache, integer IPC, or SSE, but Intel is clearly faster. This will be a good benchmark to repeat with K8L. Since K8L should increase cache, integer IPC, and SSE the results should be much closer.

3DMark is not in the Clovertown review, but after comparing with an older review that included a C2D E6300 it is clear that 3DMark 06 CPU Test 1 is a good test and is not simply gaining from larger cache; Intel is about 10% faster.

The MyriMatch test is interesting. It is clear that this code is terrible with multi-threading and quickly loses efficiency as threads increase. However, the numbers drop off not only with more threads but with faster clock speeds as well. Worse still, the Intel numbers show variances of 5% when increasing clock by 15%, and the AMD variances are even greater at about 8%. This leads to a very unreliable spread showing Intel somewhere between 3% and 27% faster. Since I did not see this variance in the other tests, and there would be no reason for Tech Report to have reconfigured the systems during this test, I would have to say that the code is at fault and the test is therefore useless.

The Euler 3D computational fluid dynamics test is a good example of how misleading the data can be and how equally far off the author's interpretation can be. He claims that the code is both floating point and memory intensive. Yet, when comparing the two sets, there is no evidence of any drop-off due to memory. With double the memory bandwidth, the dual socket Woodcrest system is actually slower because of the higher latency FBDIMMs. There is no gain at all from the higher memory bandwidth; therefore the code is not memory intensive in the sense of saturating the memory bandwidth. Intel seems to be about 18% faster. However, some of the scores are far worse than 18% and this again brings up suspicions of NUMA problems. Looking more closely, we see that in spite of claims that the Quad FX tests were done using a NUMA compatible OS, there is clear proof of the opposite. Quad FX and the dual socket Opteron 2218 system are very similar as both are dual core on dual socket. And, their scores are almost identical even though the FX is clocked 15% faster and uses faster memory. Apparently, the interleaving that was used with the Opteron is making up for the faster speed of FX, but this could only happen if the FX were running without NUMA support. This means that for any code in the FX review that uses large working sets the FX processor will show an anomalous drop-off. This is why one review was not enough; you can't see this without comparing with the second review. So, Intel is 18% faster rather than the 73% that the FX numbers would suggest. The interleaving hack alone would have cut that very poor number roughly in half, and proper NUMA support would have cut it in half again.
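
For anyone unclear on what "no NUMA support," "interleaving," and "proper NUMA support" actually look like to software, here is a minimal sketch. It uses Linux's libnuma purely for illustration (the reviews themselves ran Windows, and the 64 MB buffer size is an arbitrary choice); the point is only where the memory for a large working set ends up relative to the socket that uses it.

    /* Illustrative sketch only: compile with gcc file.c -lnuma */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define WORKING_SET (64UL * 1024 * 1024)   /* arbitrary 64 MB working set */

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "No NUMA support on this system\n");
            return 1;
        }

        /* Proper NUMA support: the buffer lives on node 0, so a thread on
         * node 0 never crosses the HyperTransport link to reach it. */
        void *local = numa_alloc_onnode(WORKING_SET, 0);

        /* Interleaving: pages alternate across nodes, so roughly half the
         * accesses are remote, but at least both memory controllers share
         * the load. This is the "interleaving hack" mentioned above. */
        void *interleaved = numa_alloc_interleaved(WORKING_SET);

        /* No NUMA support at all (the Quad FX case): a plain malloc lets
         * the OS place the pages wherever it likes, so one socket can end
         * up doing nearly all of its work through the other's controller. */
        void *unaware = malloc(WORKING_SET);

        numa_free(local, WORKING_SET);
        numa_free(interleaved, WORKING_SET);
        free(unaware);
        return 0;
    }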

The Folding @ Home series is interesting. But, there are errors in the totals for both reviews. The actual totals are pretty close and would give Intel about 6% more speed. However, what we really want is the best score rather than totals or averages, and this would give Intel about 25% more speed. It is stated that the same code on AMD uses 3DNow! rather than SSE, so the actual difference could be less, but I think 25% due to better SSE is reasonable. This is a benchmark to watch because it should show an increase with K8L unless the code is not actually making use of the AMD hardware.

The Sandra benchmarks are both good and bad. The Multimedia Integer tests are way off. I'm not sure what they are testing but whatever it is does not show anything real about the processors. If these scores were taken at face value Intel would be 253% faster in integer. Or, to put this a different way, 1 C2D core would be 89% as fast as 4 AMD cores. This is yet another published score that unknowledgeable fans might tend to mention as a sign of the phenomenal speed of C2D. However, this is obviously impossible theoretically, architecturally, and from the fact that no other test shows speed like this. Therefore, we have to ignore this test. However, the Multimedia Floating Point test looks good. Intel is 79% faster. This is reasonable since Intel can easily do 100% faster with SSE and could theoretically, with perfectly matched data, actually hit 4X AMD's speed; however, that is not likely to occur. This is another benchmark that should become about equal with K8L. Perhaps by the time AMD releases K8L we will know what is wrong with the Integer test.

The Windows Media Encoder shows a significant drop with 4 threads, but it drops about the same for Intel and AMD. With the original code Intel had a 23% lead, however with the newer 64 bit code this lead drops to only 15%. This is true for both 2 and 4 threads.

PicColor contains some anomalous scores, which lessens confidence in the set's overall accuracy. The extremely low CPU usage graphs add to the uncertainty. The best I can say is that for most tasks Intel seems to be about 12% faster but is about 60% faster on floating point.

Panorama Factory is not memory intensive. It shows poor scaling with 4 threads and truly horrible scaling with 8 threads. However, the scores appear consistent. Intel is 7% faster. The only odd score is the FX-74 result, which is slower than expected (Intel leads by 12% instead of 7%). This seems to be due to NUMA problems with the large datasets. Apparently, interleaving is enough for this task, but FX-74, without even that, shows a drop-off.

Tech Report has used three different methods of power consumption testing with the three reviews I referenced. This somewhat diminishes the value of the power consumption tests. For example, the Quad FX review does show some advantage for Intel, however the load application is Cinebench which has poor scaling. In the Clovertown review the load application is Pov-Ray which scales well and the power consumption is very similar. Realistically, Intel only seems to gain with lower power draw with its Kentsfield QX6700 processor. There is no real difference with dual core/single socket between Intel and AMD nor with dual socket.

I generally give Intel a 15% integer lead so I'm surprised that Intel doesn't seem to show this kind of lead in a lot of the tests. For example, I usually match X2 4200+ with E6300, 4600+ with E6400, and 5000+ with E6600. Yet, it seems for many tasks Intel only appears to be 10% faster. This would mean that 3800+ would match with E6300, 4200+ with E6400, and 4600+ with E6600. This would make 5000+ equal to E6700 but AMD would still have no processor to match X6800. This seems to be about right if we exclude SSE. However, if SSE is needed then Intel is the current bargain.

28 comments:

neso said...

Why can't you test them yourself?
It doesn't need to be the Opteron flavour; it can be a 6300 and a 4400 put through different scenarios. I too miss the in-depth analysis from Anand, like in the past, but why isn't there any other test site that does something like that? The suspicious thing is that when the Athlon 64 came out, it was tested in every aspect you could think of. The Conroe, on the other hand, is "excellent" (by the reviews) in any aspect you could think of. You can fly on it and then some more. There were times when I knew the pros and cons of any CPU architecture, but now it seems that Conroe RULES and that is that.
Well, enjoy it, people, I say, because the HDTV is something you will greatly enjoy when it arrives.
Sorry for the language and width.

bk said...

Thanks Scientia for the article. Could you elaborate as to what type of applications would use SSE. Is SSE 8 bit wide integer operations? How many SSE operations can be processed in parallel on a single clock cycle?

J said...

"This is true if you use SSE most of the time such as for heavy duty rendering or scientific or engineering calculations."

Well, geez, aren't those the benches people pay most attention to? What exactly are the 'most areas' where both are equal?

There is no point in discussing synthetics such as SuperPi, Sandra, Cinebench, POV, particle simulation, etc. That is all:)

And I'd like to know how you are able to determine what is memory intensive or not. You seemed quite sure about that about DivX and we all know how that turned out.

I agree that wattage is negligible until it is not justified..;D

Quite a reversal for you to now rate a 5000+ with an E6700 when in the past you equated the better specced FX62 with the lower specced E6600. I don't know what 'many' tasks with just 10% you're talking about btw.

Ho Ho said...

"The first benchmark is Cinebench which renders a graphic scene. Surprisingly, Intel and AMD are almost dead even on this benchmark."

I must say that one is one of the worst ray tracers I know. It doesn't scale over multiple CPUs and it doesn't use any SIMD. My best guess is that it is using only the scalar FPU to do its things.

"The next is Pov-Ray and once again the results are very close"

AFAIK, PovRay doesn't use SIMD either.

"With 3DS Max scanline rendering there are three different stories. First of all, for a comparable system, Intel is 25% faster at the same clock than AMD. This is probably due to better SSE."

Quite likely

"This looks suspiciously like poor NUMA support"

That isn't really a surprise. On Intel there is no difference whether a program supports NUMA well or not. On AMD, a program with bad NUMA support will stress the memory system a lot more than on Intel.

Also I don't think only changing the OS would help a lot with NUMA. When a program has one set of data being worked on by several CPUs/sockets then there is no way to optimize that. A better OS will only help when you have several datasets or entirely different programs running.


" Could you elaborate as to what type of applications would use SSE."

I'm not Scientia but I think I can answer that :)
Lots of rendering, encoding/decoding and scientific programs use it, even some games.

"Is SSE 8 bit wide integer operations?"

SSE works on 128-bit datasets. With SSE2 and newer, that data can be 16x 8-bit integers, 8x 16-bit integers, 4x 32-bit integers, 2x 64-bit integers, 4x 32-bit floating point or 2x 64-bit floating point. On C2 these instructions take one clock cycle to execute; on anything else they take two. That means with one SSE instruction you can make four 32-bit floating point calculations.
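
To make that concrete, here is a minimal sketch of a packed single-precision add using the SSE intrinsics (C, compiled with -msse; the function and array names are just examples, not from any of the benchmarks discussed):

    #include <xmmintrin.h>   /* SSE intrinsics */

    /* Adds two float arrays; n is assumed to be a multiple of 4. */
    void add_arrays(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);   /* load 4 packed floats */
            __m128 vb = _mm_loadu_ps(b + i);
            __m128 vsum = _mm_add_ps(va, vb);  /* one instruction, 4 additions */
            _mm_storeu_ps(out + i, vsum);      /* store 4 results */
        }
    }

A scalar version would need four separate additions for every pass through that loop; the claim above is that C2 retires the packed add in a single cycle while K8 splits it into two 64-bit halves.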


One thing I can conclude from that analysis is that rasterizing takes a lot of memory bandwidth and ray tracing uses a lot less. As FPU power is becoming cheaper and cheaper and I see no radical improvement in memory bandwidth, I would assume that real-time ray tracing is becoming a reality quite soon.

There are already some enthusiasts working on it; with rather simple scenes (~76k triangles) I got ~25FPS at 600x400 on my 6300@3.1GHz. A similarly clocked X2 gets about half that, mostly due to half-speed SSE.

J said...

Poor testing doesn't mean that AMD is better; it just means that you don't know.
That makes no sense.

I guess the point of your article is to say that Core 2 is not as good as all of the review sites show? I think that getting FX62 performance in the E6600, for yesterday's price of AMD's slowest dual core, is pretty good:)

I think you should drop the synthetics from your entry. It would make it easier for us to see the point that you are trying to get across, especially when you talk about real things. And add pretty pictures too. You don't actually think that your hundreds of readers are going to actually look up one by one what you are referring to?

Scientia from AMDZone said...

K8 can do two 64 bit SSE instructions per clock or one 128 bit SSE instruction.

C2D is twice as fast with SSE and can do two 128 bit instructions per clock. It could theoretically do four if the data were perfect which is unlikely.
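
As a rough back-of-the-envelope illustration (peak rates only; the 2.4GHz clock is just an example, not a tested configuration): with packed 32-bit floating point data, a C2D core retiring two 128-bit SSE operations per clock does 8 floating point operations per cycle, or about 19.2 GFLOPS at 2.4GHz, while a K8 core retiring the equivalent of one 128-bit operation per clock peaks at 4 per cycle, or about 9.6 GFLOPS at the same clock. Real code never sustains these peaks, which is why the measured SSE advantage is usually well under 2X.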

To answer your question, Red, I can tell which are bandwidth intensive by comparing QX6700 with dual Xeon 5160. You can also tell by comparing QX6700 with dual Xeon 5355 since 5355 has a faster FSB.

I would say that the reason that Intel is downgraded from 15% to 10% is due to 64 bit code in the test samples and perhaps from software that uses FP rather than SSE. I'm sure that Intel could still claim a better lead if all the testing were done using single threaded code (which doesn't make much sense for dual and quad core processors).

J said...

Server review: "Using 3ds max's default scanline renderer, we rendered a single frame of this scene at 1920x1080 resolution."

Desktop review: "Using 3ds max's default scanline renderer, we first rendered frames 0 to 10 of the scene at 500x300 resolution. The renderer's "Use SSE" option was enabled."

:)..? Rather than ignoring what I just showed you because I corrected you, I would like you to address it so that I know that you've seen it:)

I don't know why you base your view of Intel's dual core advantage on recent quad tests. That is all:)

3D rendering, multimedia encoding, image processing, etc. aren't threaded enough for you? What benches are these reviewers lacking? And that's a problem that Octo FX will have to face, lack of threaded software.

Scientia from AMDZone said...

Yes, you are correct. The differences between the two testing methods prevent them from being directly compared. I have edited the article to remove the statement that Intel is showing a drop-off due to bus contention. Yes, I wish they would stick to one testing method.

Unknown said...

"Poor testing doesn't mean that AMD is better; it just means that you don't know.
That makes no sense."

Red, that makes perfect sense. If the tests are bad enough you can't really know who is how much better than the other. If someone drove a Shelby at 30 mph, took the time it took to drive 10 miles, and did the same thing with a Magnum at 60 mph, and then tried to compare the times, it wouldn't matter what they said because the tests are too inconsistent.

Changing operating systems would drastically affect NUMA performance, as one operating system that's not optimized for it would choose a new core each time it needed to do a new operation, thus switching cores constantly, completely saturating the cache, and most likely saturating memory bandwidth due to cache reloading. An operating system that was NUMA-aware would not do this, as it would keep each task on one specific core.
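
As a rough sketch of what "keeping a task on one specific core" means in practice (Windows API, purely illustrative; the function name and core index are just examples), an application can also pin a thread itself when the scheduler won't:

    #include <windows.h>

    /* Pins the calling thread to a single logical CPU.
     * Returns nonzero on success, 0 on failure. */
    int pin_current_thread_to_core(int core)
    {
        DWORD_PTR mask = (DWORD_PTR)1 << core;   /* one bit per logical CPU */
        return SetThreadAffinityMask(GetCurrentThread(), mask) != 0;
    }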

Ho Ho said...

"Changing operating systems would drastically effect NUMA performance, as one operating system that's not optimized for it would choose a new core each time it needed to do a new operation, thus switching cores constantly, completely saturating the cache, and most likely saturating memory bandwidth due to cache recalling. An operating system that was numa aware would not do this, as it would keep each task on one specific core."

Take a dual-core CPU and run one CPU-intensive program on it. Open up your favourite CPU monitor (top, task manager) and see how often that process jumps between cores. I bet it is rather rare. Sure, XP is quite awful, but even there processes don't just jump around constantly, and quite certainly not often enough to make a big impact.

Scientia from AMDZone said...

As I said in article, the worst case with any benchmark due to no NUMA support gave Intel a 75% advantage. With interleaving this dropped to a 35% advantage for Intel but was only 18% with the NUMA problem. I'd say that 75% is pretty drastic since this would be a 43% drop-off.

gdp77 said...

"And I'd like to know how you are able to determine what is memory intensive or not. You seemed quite sure about that about DivX and we all know how that turned out."

I would like an answer from scientia about that. It is true that scientia argued over K8's bandwidth advantage using DivX as an example in his rhetoric. However, as red implied, scientia was dead wrong.

Ho Ho said...

"as one operating system that's not optimized for [NUMA] would choose a new core each time it needed to do a new operation, thus switching cores constantly, completely saturating the cache"

Wouldn't that cache thrashing also happen on Intel where caches are not shared (2P, dual die)? If it does then it should be much worse for Intel since, as many people claim, most of its performance comes from its large cache. Also, as it has roughly twice the latency and less memory bandwidth, it should be affected quite a bit, even though it is not NUMA.


"Apparently, the interleaving that was used with the Opteron is making up for the faster speed of FX, but this could only happen if the FX were running without NUMA support"

What I would like to know is how you turn off NUMA support on Windows. I know that Linux has that as a kernel flag, but as I haven't used any Windows version for years I have no idea if/how that is possible on them. Or were those benchmarks run on *nix, and with FX there was no NUMA support while with Opteron there was? I would check it myself but unfortunately I have no idea what reviews you are commenting on. Next time at least provide some links.

Making a wild guess here, but I would think that when one CPU in an AMD 2P machine has to access the memory bank connected to the other CPU it has roughly the same latency and bandwidth as Intel has over the FSB and external memory controller. The only problem might be that the other CPU's memory controller will be doing twice the work, or about the same as the NB in an Intel machine. Is this theory about right?

J said...

I said it made no sense because I don't see how you can say that just because poor testing shows something, people "don't know". I'd like a rephrasing:)

I don't think it's an apples to apples comparison to compare QX6700 vs 2x5160s because the 5160s have a 12.8% clock advantage, different RAM, etc.
http://www.gamepc.com/labs/view_content.asp?id=5160vs6800&page=1
In this review, in benches where you think the Xeon would win, it doesn't.

And what is all of this talk about Quad FX possibly performing better with an OS better at NUMA? Windows, with its bad NUMA support, is here now, Quad FX is targeted at Windows-using gamers, and Quad FX performs not so well now.

Unknown said...

Well, I'm sorry red, because I don't see how you could object to that understanding.

A good comparison does not necessarily have to be apples to apples (in terms of the hardware), however it must be apples to apples in terms of the tests and the platform. Even then, you have to know the eccentricities of those tests and that platform in order to give a qualified conclusion or know when not to include them.

Vista will be here in months, and is an essential operating system for gamers (both because Windows is force-requiring it for its games, and because of the performance increases inherent in using the operating system). 4x4 will also be more easily available in months, which is when the benchmarks will actually matter. So not testing it on Vista is like testing the Core 2 on games in Windows ME (since 2000 and XP are too much alike).

Switching cores will happen like crazy on multi-threaded apps because XP no longer tracks which core it's using and can't control which set of memory and cache is being used due to the non-uniformity of NUMA (at least that's how I understand it). Intel does not use NUMA, as its memory controller uses a uniform architecture, so Windows knows how to deal with it, which is why they've never had problems with it.

Also, the reason xp would not show the core jumps would be because it is numa aware. How would it know it was screwing something up if it would have no way of knowing what could be screwed up?

Unknown said...

*edit* the reason xp would not show core jumps is because it is not numa aware.

Sorry if that was confusing.

J said...

Scientia says poor tests aren't indicative of true performance? That just because it may seem like Intel won on one site, they haven't? Well I think that the many poor tests from dozens of sites show pretty well that Core 2 is ahead of X2 and QX6700 is ahead of Quad FX.

You would think that the higher FSB powered Xeon would beat the X6800.. That is all:)

http://www.hwupgrade.com/articles/cpu/10/quad-fx-the-first-quad-core-amd-platform_index.html
I'm sure that other sites will review with Vista when it arrives in retail.

Scientia from AMDZone said...

Red, you've now brought up DivX so many times I've lost count. Let's just say that I was wrong and that DivX is not bandwidth intensive. Is your theory that if I was wrong about one thing then nothing I write is correct? I'm certain I will make mistakes from time to time and I will correct them when they are pointed out just as I corrected the error in this article about memory dropoff on the Cinebench test.

Links:
Xeon Vs. Opteron
Quad FX

You have to consider the results of poor testing as random. There is no way to interpolate random results. You can only ignore them.

Yes, it is true that other "reviews" have "shown" that C2D is faster. However, based on these "reviews" some have claimed that E6300 is faster than FX-62 which is definitely not the case except for SSE. Even in the Tech Report reviews there were several results that had to be thrown out. Tech Report is much better than Toms Hardware Guide because they are far more open about their testing methods. THG often leaves out details about how they test. Worse still, THG has a bad habit of fudging the results by changing the hardware configuration, for example, overclocking the FSB or overclocking the northbridge. THG also has a habit of using higher latency memory when lower latency is commonly available. I simply have no confidence in any objective testing from THG. Maybe someday they will get their professionalism back.

Sorry, there was a typo in my comment above about NUMA. What it should say is that in one test Intel has an 18% lead when NUMA is not a factor (single socket). This increases to a 35% lead when dual sockets are used with interleaving. However, with dual sockets and no interleaving Intel is ahead by 75%. Windows 2000 claims to be NUMA-aware but obviously isn't if it performs worse than interleaving. Interleaving can be turned off and on. Yes, Vista is supposed to be better at NUMA. We'll see.

No, when AMD processor A is trying to manipulate data in processor B's memory space this is not at all comparable to Intel's shared memory latency. Typically, this causes a 30% drop-off from both processors. In the test mentioned above we see a 28% difference.

J said...

I didn't bring up DivX in my last post; I brought it up earlier to ask how you are able to determine whether apps are memory intensive. In my last post I showed a bench where you would think that a higher FSB would score better, and possibly so, but at least it shows that you can't compare the 5160 and QX6700. It kind of throws away your method when there are huge discrepancies between the 5160 and the QX6700. I would think a better method is to compare different clocks until at a certain point there are no gains, or until the Intels slow down and the AMDs keep going.

And there are some that say that X2's are faster.. So what if you once saw a few claiming so? If you are saying that some individuals have a skewed opinion of the E6300 based on some reviews.. What reviews are you referring to?

Unknown said...

Red, quit being speculative and start arguing objectively, because right now all it sounds like is that you're just trying to make up arguments that we may never have to address.

Maybe the test suite you brought up is also erroneous and unusable? You should really analyze it a lot more before you just accept the results as worthwhile.

Also, it would be nice if they tested megatasking well, as the vista testing link you provided hardly tested any specific scenarios at all.

J said...

If you're going to repudiate the review, you should at least point out something wrong with it. Back to the GamePC review: Half-Life 2, WME, and file compression seem to be greatly affected by the FBDs..
http://xbitlabs.com/images/cpu/amd-quad-fx/charts/multitasking.png
And according to this bench, eventually the QX6700 does crack, which would support the seemingly erroneous file compression benches where the X6800 whips the 5160.

The first scene of megatasking has all four of the following applications running at the same time:
* Mainconcept H.264 Encode: Conversion from DV to H.264 High definition video ECS factory tour - DV 16,836 frames;
* DVD Shrink 3.2: backup of film Fahrenheit 9/11; compress to 2000 Mbytes;
* Sony Vegas+DVD 7.0b: Video and Audio Benchmarking;
* POV-Ray 3.7: CPU rendering benchmark

And in the 2nd test, they launch the apps sequentially as opposed to simultaneously. How much more megataskable could they have gotten?

Scientia from AMDZone said...

Red said ...
http://www.gamepc.com/labs/view_content.asp?id=5160vs6800&page=1
In this review, in benches where you think the Xeon would win, it doesn't.


Yes, Red, you have pointed out an anomaly in the test results. However, you aren't picking up on the reason, which is on page 7:

Sandra 2007 - Memory Bandwidth

The Xeon 5160 system was configured incorrectly by putting both FBDIMMs in the same memory channel instead of one FBDIMM in each memory channel. This is why the slightly slower X6800 beats the 5160 by such a big margin in the compression benches on page 12.

x64 said...

Thanks for that overview Scientia!

But I can't understand why we have to deal with matters like SSE performance at all. Intel's microarchitecture designs are based on a totally wrong concept, I think. At minimum, "SSE #" is only an extension (or extensions). Mixing SIMD and the classical ALU together (per core) is something like using a highway simultaneously as a parking lot and a speedway?! Unfortunately AMD is in a similar situation. I think that the right way is to design separate SIMD core(s), and here AMD has an advantage.

J said...

Oops:p But I think we can at least conclude that some games, inefficient encoders, and all file compressors are more memory intensive than most? If you're on the lookout for that FSB choking..:)

Unknown said...

x64, that would be really awesome, but AMD basically got pushed into doing the same dance as Intel in order to remain competitive.

Red, while launching the apps separately is also another way of showing megataskability, they should've had an entire 9 pages devoted to that if they were going to do any testing at all in my opinion. They should've done several tests where they only ran one type of test, but many instances of it, and then a smaller number of tests with different mixes of the tests they had done separately earlier. They should have also done single tests where they tested to see how many instances of an app they could run before there were greater than negligible losses in the single app's performance. While not all of this would necessarily be useful to most readers, generally the only thing that would be useful to most people is the conclusion, anyway.

enumae said...

Hey Scientia, I am wondering if you would be interested in posting your blog on a forum I am working on.

I will give you moderator privileges, and I will continue to pay for the site.

It is a way to bring a lot of blogs to one place.

If you would like to look at the forum the link is here...

http://www.cpu-gpu-forum.com

It would be great if you would let me, and if not I completely understand.

Scientia from AMDZone said...

Well, theoretically, I will eventually earn some small income from the advertising here. Or, at least I will if I find an advertiser that isn't a scam like AdSense.

Apparently, AdSense sets their payment threshold quite high so that they have time to get free advertising on small websites before they create some pretense to cancel the accounts. Apparently, they do this routinely. I'm trying Bidvertisements now. Maybe they aren't a scam.

Scientia from AMDZone said...

Okay, Adsense reinstated my account so I guess I can't say that they are a scam. However, their filtering software is a bit hamfisted and it took more than a week to get it corrected.