Last year's agreement between Intel and AMD attracted a lot of attention on the IT scene. With a history of cooperation and conflict between these companies spanning decades, many were curious about the kind of agreement the two had reached. We spoke to Rick Bergman, Senior Vice President and General Manager of AMD, who shared a lot of interesting information with us.


InsideHW: Mr Bergman, our readers are especially interested in the information concerning last year's "agreement" between Intel and AMD. Could you tell us a bit more about it?

Rick Bergman: You’re referring to the settlement?

IHW: Yes. Why opt for a settlement? Did you estimate that the trial would take too long, or that the compensation awarded in the event of a verdict in your favour would be lower than the one offered? Or was the non-financial part of the settlement more interesting to AMD than the money itself?

RB: At one point, the negotiations reached a stage where we felt our goals had been fulfilled. There are many details involved. Basically, we wanted to ensure fair market competition, taking patent licences as well as manufacturing licences into consideration, in order to better define the "rules of the game". We were satisfied with the results achieved and, of course, with securing compensation for the losses that previous business practices had caused AMD. It was our estimate that the time was right for such a move.

IHW: When you say patent licences, are you referring to the right to produce x86-compatible CPUs, or is that not part of this agreement?

RB: There are two parts to that agreement. The first involves the exchange of licence rights between AMD and Intel, whereas in the second, Intel grants licence rights to GlobalFoundries. Both parts are very important to AMD, since they enable us to operate as a completely fabless company, while giving GlobalFoundries the opportunity to be an independent manufacturer of a wide range of semiconductor products.

IHW: Many rumours have circulated on the internet claiming that the compilers Intel provides for its CPUs are built in such a way that they deliver much worse performance on non-Intel CPUs. Does the settlement address that topic as well?

RB: The settlement contains no specific provisions on that, since it concerns business dealings in general. Nothing specific has been agreed upon as far as compilers are concerned. We know that it's part of the FTC charges against Intel, which I cannot comment on at the moment.

IHW: But you will basically have to provide evidence on the subject to the FTC if requested to?

RB: Should the FTC send a formal request, we will certainly provide any information required. We do business in full accordance with the law, and will therefore forward any information requested by the FTC or the European Commission.


IHW: Intel currently has some very interesting products, in both the desktop and portable segments. For now at least, their approach seems to be unifying the CPU and GPU in a single package, not on a single chip, whereas one chip seems to be AMD's approach with the future Fusion platform. As far as we know, Fusion should appear on the market next year (2011). What exactly is the difference between the Westmere and Fusion approaches?

RB: The differences are obvious. Unlike Westmere, our Fusion project will make full use of APUs (Accelerated Processing Units), with everything situated on the same piece of silicon. Owing to that, we will be able to provide a good balance of performance and power consumption. Westmere is a compromise solution, not all that different from having the GPU integrated on the motherboard. Our strategy is to create a full range of single-chip products offering high performance with acceptable power consumption. It will be quite different from anything Intel has done so far.

IHW: According to preliminary tests, the performance improvement Intel managed to achieve is around 50% compared to the previous solution. On the other hand, it's clear that AMD's approach requires an entirely different manufacturing process, and since today's GPUs offer very high performance, will Fusion have anything to offer in the mid-range, or perhaps even the high end, as far as GPU performance goes?

RB: There are all sorts of synthetic tests, but we are interested in the performance gains we can achieve in actual applications; in fact, that is exactly the segment where our goal is to achieve high performance. On the other hand, there is a point at which the financial side ceases to matter, especially with powerful workstations and enthusiasts. In such cases, GPU performance is what counts, which means a discrete graphics card offering higher performance will still be needed. Naturally, we're planning separate CPU and GPU products as well, since discrete graphics cards will still be necessary to achieve maximum performance.

IHW: That's good news, at least as far as companies in the GPU business are concerned. Could you comment on the delay of the Larrabee project? AMD predicted on multiple occasions that Intel would run into problems and thus be unable to develop Larrabee within the intended period. According to Intel, instead of becoming available to end users, Larrabee is phasing into a development platform. Do you expect Larrabee to be developed further, or was it a dead end?

RB: I was working at ATI when it was acquired by AMD, and before that on the development of x86-compatible CPUs. About twenty years ago, the common opinion was that a GPU was easy to make, which may have been true until 10-15 years ago; at the time, CPU design was much more complex. As the years went by, GPUs became very complex and very powerful processors indeed, able to do many things. It's therefore no surprise that Intel "tripped" in its attempt to combine the two. Apart from the fact that neither a good CPU nor a good GPU is easy to create, they are also drastically different in design, and a fundamentally different way of thinking is needed for such a project to succeed.

IHW: Speaking of GPU manufacturing, I remember that around the launch of the 4000 series of graphics cards, AMD commented on Nvidia's approach and expected them to switch to a multi-core architecture as early as the next generation, instead of a single-core architecture running at higher clocks. It's easy to draw a parallel with the current situation in the CPU field: four years ago, frequency was the main benchmark of performance, but we seem to have been stuck with CPUs running at around 3 GHz for some time, only with more and more cores. The best graphics cards on the market at the moment also have multiple (two) cores running at a lower clock. Nvidia's Fermi isn't on the market yet, but they're advertising it as something that will overshadow everything else with its performance. Which approach do you expect them to take: one massive chip or a multi-core solution?

RB: I believe we'll have to wait a bit longer for Nvidia to come forward with its final solution. AMD has had a clear strategy for years, which we like to call the "sweet spot": first we bring out a product with a fantastic price-to-performance ratio, and then we place two of those on a single board for maximum performance. If you attempt something like that with a larger and more powerful GPU, it'll be difficult, you'll be late, and so will your cards. That's why, at a time when Nvidia still has no DirectX 11-compatible cards available, we're selling millions of ours that support the standard. Clock frequency is not that important with GPUs. With CPUs, which perform calculations serially and where lower latencies are a critical benefit, clocks matter; GPUs are oriented towards parallel computing, so multiple cores at a lower frequency do a great job. I'd also add that our products don't run at lower clocks than the solutions offered by our "friends" at Nvidia. They have a small part of the chip running at a higher clock while the rest of it runs at a lower one, whereas our GPUs run as a whole at a single, intermediate clock, so to speak. If you take a look at and compare our products, you'll see that "performance per watt" and "performance per dollar" are on our side.
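
[Editor's note: a minimal sketch of the serial-versus-parallel trade-off Bergman describes; the C++ below is our own illustration, not AMD code. A large sum is split across however many hardware threads are available, so lower per-core clocks are offset by core count. GPUs push the same idea to hundreds of simpler cores.]

```cpp
// Partitioned parallel sum: each thread reduces its own slice of the
// array, so there are no shared writes and no locks. With K threads,
// each core does roughly 1/K of the serial work.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1 << 22, 1.0);  // 4M elements, all 1.0
    const unsigned k = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> partial(k, 0.0);     // one result slot per thread
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / k;

    for (unsigned i = 0; i < k; ++i) {
        const std::size_t lo = i * chunk;
        const std::size_t hi = (i + 1 == k) ? data.size() : lo + chunk;
        workers.emplace_back([&, i, lo, hi] {
            partial[i] = std::accumulate(data.begin() + lo,
                                         data.begin() + hi, 0.0);
        });
    }
    for (auto& t : workers) t.join();

    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << " using " << k << " threads\n";
}
```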

IHW: Nvidia is currently the leader in 3D display support in games, and they seem to be focused on that technology. On our way here, we saw announcements by AMD concerning support for 3D display and acceleration of Blu-ray films. Are there any plans in the works for 3D display in games?

RB: We will, of course, fully support 3D display technology in games as soon as an established standard appears, and we're already cooperating with multiple companies in that field. However, we don't think there is much demand for such products on the market at the moment. I believe that Eyefinity and three monitors are a much better investment than spending the same amount of money on a pair of 3D glasses and a special 3D monitor.

IHW: We agree completely that the investment required for 3D Vision is quite costly at the moment. On the other hand, we were particularly impressed by the demonstration of Eyefinity on eight Samsung monitors and the sensation of watching BattleForge run on them. It's also clear that most users still aren't aware of just how much Eyefinity, as a multi-screen technology, can increase their productivity. Still, speaking of gamers, we're not sure how many of them actually have the space to put three monitors on a single desk. What do you think about that?

RB: As far as that's concerned, the rule is simple: seeing is believing. Of course it's a challenge to "convince" home users to opt for such a thing, since they probably haven't had the chance to see how it works. Journalists and people from the industry have a much easier time, as IT fairs such as CES offer an opportunity to actually try such things out. In this internet-oriented age, it's a tough task to reach ordinary users with such radical ideas, but that can be overcome as well. We organised a presentation during a massive gaming conference and the gamers' reactions were very positive. It's also important to say that we've reached a stage where one can buy quite a few different models of HD monitors and combine them nicely, creating a good three-monitor system for a mere $400, which is a revelatory experience in games such as DiRT 2.

IHW: Ever since the launch of the 2000 series, we've been hearing about tessellation, but so far that technology has only been used in a tech demo, while we're still waiting for games that use it. Unfortunately, when the demo was actually run, we noticed a major performance drop with tessellation on, with only the explanation that the drop depends on the implementation, i.e. the amount of computation the tessellation process requires. We had the impression that tessellation would be very important for the visual quality of upcoming games, but that the performance drop wouldn't be as drastic. Is that possible with the current generation of DirectX 11 GPUs?

RB: Tessellation is a postprocessing effect which should depend on the implementation, but my technical knowledge of the subject isn't at the level required to give you a clear answer. It's a good standard that's been used on the Xbox 360 for years, and now that it's an integral part of DirectX 11, it will only advance and see more and more use.


IHW: Intel's plans for this year are rather interesting, especially since we're expecting their first six-core CPU for the desktop platform in the next couple of months. What's your response to this move?

RB: We are very glad that we were the first to present a native six-core CPU. It's obvious that we're transferring our experience from the server segment down to the desktop segment, since more and more multi-threaded applications are appearing there as well, and they make good use of the extra cores. Our own six-core CPU for desktop platforms should appear on the market in the first half of the year.

IHW: How much effort does it take for an average developer to implement multi-core support in an application? We've had multi-core CPUs for five years now, but most applications are still only optimised for single-core CPUs. Are software modifications for parallel processing really that hard, or are application developers simply too lazy?

RB: That question is probably better directed at the people doing the actual programming. I really wouldn't say that laziness is the issue; programming for multi-core processors is genuinely hard, and the problem has been gradually tackled for decades. Over time the tools have progressed as well, and the task is easier nowadays, but it's still far from smooth. Graphics lends itself much more readily to multi-core processing, and recent driver revisions certainly show progress in that field as well.
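
[Editor's note: a minimal sketch of why the retrofit Bergman mentions is hard; the C++ below is our own illustration, not something discussed in the interview. Two threads incrementing a plain counter silently lose updates to a data race; the correct version needs an atomic, a concern a serial program never had.]

```cpp
// Data race demonstration: 'racy' is deliberately buggy, 'safe' is correct.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    int racy = 0;                 // plain int: concurrent ++ is a data race
    std::atomic<int> safe{0};     // atomic int: concurrent increments are safe

    auto work = [&] {
        for (int i = 0; i < 1000000; ++i) {
            ++racy;                                         // lost updates
            safe.fetch_add(1, std::memory_order_relaxed);   // always counted
        }
    };

    std::thread a(work), b(work);
    a.join();
    b.join();

    // 'racy' typically prints less than 2000000; 'safe' always prints 2000000.
    std::cout << "racy = " << racy << ", safe = " << safe << '\n';
}
```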

IHW: We are very positive about the improvements brought by DirectX 11, as well as by Windows 7, which is indeed a great OS. The problem is that, at the moment, not even DirectX 10 is fully exploited by developers, and 10.1 fares even worse. Do you expect DirectX 11 to be a bigger success than DirectX 10 and 10.1 were?

RB: Definitely. Titles boasting DirectX 11 support are appearing at a much faster rate than they did back when DirectX 10 launched. The problem with DirectX 10 was that it was bound to Windows Vista, while DirectX 11 is supported by both Windows 7 and Windows Vista, which significantly increases the user base, the number-one factor for developers. Another thing is that, when switching from XP and its DirectX 9 to Vista and its DirectX 10, users experienced a significant drop in performance. Now the situation is the opposite: by switching to the new OS and the new version of DirectX, users see a performance increase. Since there are already around 20 DirectX 11 titles slated for this year, I expect DirectX 11 and Windows 7 to be far more successful than their predecessors.

IHW: At one point, it seemed that the PC as a gaming platform was fading away, but the future of PC gaming looks brighter now.

RB: Objectively speaking, there are fantastic gaming consoles on the market, but it's the PC gaming market that gets the industry going. There are always enthusiasts after the latest hardware and newest technologies that create a new gaming experience. The PC is exactly the platform where things like Eyefinity and DirectX 11 appear first, and that's what keeps the industry pushing forward.

IHW: While travelling to the fair, we also saw announcements by Lenovo regarding products based on AMD's Vision platform. Last year, you weren't exactly well represented on the notebook and netbook market. Is there something new planned for this year, or just further development and advancement of the existing platform?

RB: One can expect those two categories to slowly merge into one, which makes the segment of ultrathin notebooks extremely interesting. As you're aware, last autumn we launched the second generation of our ultrathin platform, and we're preparing the "Nile" platform for this year, due for presentation in a few months. The moment we present Fusion, we'll have a remarkable product that's just perfect for this market segment.

IHW: Since more and more functions are being transferred from the motherboard to the CPU, motherboards are becoming progressively simpler and cheaper to manufacture. How do motherboard manufacturers, who already have a hard time differentiating their products from their competitors', view that fact?

RB: That depends on which segment we're talking about. With notebook motherboards, reducing the number of components is a good thing, since power consumption, heat and the space needed for all the components are reduced. As for the desktop segment, well, the market seems to have gone that way by itself already. As I stated before, Fusion was not designed with enthusiasts and users who assemble their own PCs in mind. At the lower end, fewer components mean a lower price, which is the chief parameter for those products, and they will become cheaper once Fusion technology is implemented.

IHW: With your extensive experience on both the GPU and the CPU side of the business, which is more difficult to manufacture, a top-performance CPU or GPU, and, in some detail, what kind of effort does it take?

RB: You should probably ask our friends at GlobalFoundries and TSMC; they can provide more specifics than I can on what's involved. Both products truly have their own unique complexities, and I'll leave it to the manufacturing experts to address that question. What I can say is that GPU product cycles tend to run faster than those on the CPU side. AMD is working to incorporate the GPU's pace of innovation and execution across all of our products; we call this AMD Velocity.

IHW: Thank you for your time.