Intel Developer Forum, Fall 2004
Paul Otellini
President and Chief Operating Officer
San Francisco, Calif.
September 7, 2004
Keynote by Paul Otellini
ANNOUNCER: Ladies and gentlemen, please welcome Paul Otellini.
(Applause.)
PAUL OTELLINI: Well, good morning. And let me add my welcome to all of you here at IDF.
I wanted to point out something about that video. It's not the usual Intel opening video; there were not a lot of rock bands and things flying around. It was intended to be a bit more provocative, to plant a seed in your mind about where we are going and what's next.
And I think "what's next" is a very interesting metaphor for Intel Developer Forums. It's about us coming together as an industry, trying to figure out the best meld of technologies, the best implementation of technologies, and how we can bring things to the marketplace to serve the information technology community that all of us work in every day.
So today I really wanted to focus on three things in my talk. First, I wanted to put things in a bit of a perspective around the industry. We all remember the Internet bubble. But I think now what we're seeing after the bubble is the beginning of a surge, a surge that has come about not by accident, but by some of the work we've been doing together on convergence, on creating a digital effect, and on mining a new market for our products in the emerging markets of the world.
And the second thing I wanted to talk about is platforms and how Intel is focusing its development efforts, its marketing efforts, its research efforts at the platform level, being defined as an integration of technologies tightly coupled to be able to bring new use models into the marketplace.
And then I wanted to wrap up with what I think are two new inflection points that we are facing as an industry that I think are very large opportunities for all of us in terms of expanding our markets and making our overall product portfolios better. And those two inflection points are quite simply the pervasive wireless broadband through WiMAX and the emergence of pervasive parallelism through dual-core and multi-core processors from many companies in the industry.
So let's go back a bit and talk about the surge after the bubble.
This is a chart that looks at the total microprocessor market as a surrogate for the computing market in billions of dollars. And it looks at it from 1995 through a forecast of 2005. And you can certainly see the dip there after the 2000 bubble.
But what's not so obvious is that after three long, hard years, we've seen growth come back into the marketplace. And, in fact, if these projections are to be believed, 2004 will actually be the new peak for microprocessor sales on a worldwide basis. I think that's a very interesting data point, one that's indicative of where the industry is going in terms of recovering.
But it's not just in computing. In the communications industry, if you look at all of the communications silicon sales from all vendors around the world, you can see a similar pattern where it dropped after 2000, stayed flat for a couple of years, and then, actually, in 2003, eclipsed the prior peak, and 2004 sets another new record.
So in both communications silicon and in microprocessors or computing silicon, we are seeing the surge after the bubble. And I wanted to talk a little bit about why we think that's happening.
There have been, in my mind, three primary growth drivers making this happen. The first is that convergence happened. We talked about this for IDF after IDF after IDF for many years. It's finally happened. Products are shipping in the marketplace, and I'll show you some examples of that.
The second one is a digital effect. The compound effect of all this data around the world, both entertainment and business data, going digital and what's happening around that.
And the third and perhaps most significant factor that is impacting our markets is this new marketplace that's being developed as a result of the emerging economies of the world coming online and buying products that are just like the kinds of products that the more mature markets have been purchasing for a number of years. The next 3 billion users.
Let's talk a little bit about convergence first. Last year, 2003, was the year that we introduced Centrino. We introduced it in the Spring and started shipping it over the course of the year.
If you look at a snapshot for all notebooks last year, only 10 percent of the notebooks that were shipped had Wi-Fi built into them.
After Centrino, the world has changed. In 2004 we expect 65 percent of all notebooks to have Wi-Fi, and next year and the year after, essentially going above 90 percent where it becomes pervasive.
The consequence -- where we work, how we work, the way we communicate on the Internet -- is never going to be the same again.
But this didn't just happen in notebooks. It's happening in handsets and phones as well.
In 2004, the chart on the left shows that this year is the first year that data-enabled phones will outship voice-only phones in terms of handsets around the world. I think this is a huge inflection point.
It says that PC-like applications and services are coming to handsets for the first time, and people are starting to find revenue streams around building those services and delivering them into the marketplace.
The chart on the right is equally interesting. It looks at the total number of bits of traffic that those networks are carrying. And you can see that voice -- the blue line on the bottom -- is approximately flat from 2001 to 2006.
The line that is already the larger of the two and the one that is growing at 56 percent compound annual growth rate is data traffic.
If you extrapolate this and assume that revenue follows bits over time, that essentially makes voice free and makes data services traffic the critical linchpin for network profitability going forward.
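As a rough worked example of that extrapolation (the 56 percent rate is from the chart; the 2001-to-2006 span is an assumption for illustration):

\[
\text{data traffic}_{2006} \approx \text{data traffic}_{2001} \times (1 + 0.56)^{5} \approx 9.2 \times \text{data traffic}_{2001}
\]

With voice traffic roughly flat over the same span, data dwarfs voice, which is the whole argument about where network profitability has to come from.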
This is all happening because we're integrating these same capabilities into the devices that will communicate and compute at the same time.
Now, I think there's an interesting corollary to the digitization of these devices and that's one that actually tracks Moore's Law in terms of its rate of growth. And I've picked two data points to demonstrate that.
The one on the left is the total amount of data at Intel Corporation, inside the walls of Intel, as measured in terabytes. And we look at 2001 to 2004. In those four years, the amount of data inside the company doubled every 18 months, from 500 terabytes to 3.4 thousand terabytes in 2004.
It's moving at the rate of Moore's Law, and the chart on the right suggests that the Internet is also moving at the rate of Moore's Law in terms of a good surrogate for that, which is static HTML pages, also doubling every 18 months.
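As a quick sanity check on that doubling rate (treating the 2001-to-2004 span as roughly 48 months, an approximation):

\[
500~\text{TB} \times 2^{48/18} \approx 500 \times 6.35 \approx 3{,}200~\text{TB},
\]

which lines up with the 3.4 thousand terabytes cited, i.e., a doubling roughly every 18 months.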
So as computers, handsets and data-enabled devices get deployed, we're seeing the data behind them, the data that people will take advantage of, not just business data but increasingly entertainment data, also grow at the rate that Moore's Law has projected.
So what? What does that mean?
It means that you can start thinking about new thresholds that are out there for various industries. And I picked one here today, which is the movie industry.
Titanic is the most viewed film of all time. There were 700 million viewers of Titanic in seven years. A big number. Very high return.
If you start thinking about what we said last year at IDF, when we projected that by the year 2008 there would be 1.5 billion broadband-enabled PCs in the world, you can start thinking of a different kind of threshold for the entertainment industry.
And I think we're on a cusp. As the entertainment industry moves to take advantage of these broadband PCs, you can potentially get to the point where a new first-run movie has a billion viewers in its first year.
This is a huge revenue opportunity for the industry at a much lower cost of distribution than they've ever seen before.
It's these kinds of scale economics that are driving the entertainment industry to embrace Internet technology and move their high-premium content onto the Internet, to take advantage of the scale that we're deploying. I'll show you some examples of that later on in the speech.
The third driving factor for growth is the next 3 billion users.
The chart on the left looks at Internet users by major geography of the world. At the highest level, Internet use has gone from 150 million users in 1998 to a projected 1.5 billion in 2010. This happens to be a Morgan Stanley data set.
But what you can see on there is that while every geography is growing, the bulk of the growth is in Asia and in what's called rest of world, Eastern Europe, Latin America, other parts of the world. And in fact, Asia alone is growing from 17 percent of Internet users in 1998 to 37 percent in 2010.
So a third of the users of our products are going to live in Asia-Pacific. How do we deal with that? How do we market to them? What kind of product would they need that may be different from the products we sell in Western Europe?
A consequence of this is that the growth cannot be taken for granted. We have to make sure that it happens.
One of the ways Intel helps make sure the growth happens, for us and for everyone in the industry, is by seeding efforts on the street in cities in emerging countries. The chart on the right shows how far we've gone: a 10X increase in our coverage of cities in the last four years, to over 1,200 tier-three and tier-four cities in emerging markets.
Now, what are we doing there? Of course we're working on distribution systems. Of course we're working on marketing collateral and building brand. But more importantly for you, we're seeding the markets for your products over time.
We are educating individual students, through schools, in how to use computers. And we've trained over 1 million teachers around the world on integrating computer technology into their curriculum.
We are working with the communications industries in these countries to make sure that the broadband deployment is there for these computers as they're deployed and so forth.
Essentially, we're ensuring that every impediment to this growth is taken out of the way, so we can capitalize on this 10X growth in users.
Now, through all of this, all this growth in my mind is still driven by one fundamental thing, and that's Moore's Law. It is the fundamental enabler of our growth. And you may have noticed last week there was a press release from Intel about our first 65 nanometer products, and I wanted to show you the wafer that was used last week in that announcement.
And what this is is a collection of very high-capacity SRAMs, static RAMs: Intel's first fully functional product on our fully integrated 65-nanometer technology.
What this product does is demonstrate the capability of the process, so we can begin production deployment in 2005.
None of our competitors, to our knowledge, have yet demonstrated the kind of process and product level integration that is implicit in that technology.
We continue to develop and introduce new technology generations on target every two years. And each new generation of our technology doubles the transistor density and improves transistor performance generation to generation.
At Intel, Moore's Law is alive and well.
But the manifestation of Moore's Law has changed, in the way we look at it and, I think, in the way our end customers look at it.
For two decades or so in the computer industry, performance had a surrogate measure called megahertz. And then it became gigahertz.
If you go back in time and look at the magazine covers of a decade ago (some of these go back to the 90 MHz days), they all talk about the kind of performance that was being delivered into the marketplace by adding one more megahertz.
Starting at the fall IDF two years ago, we started talking about performance characteristics in a different fashion, and you're starting to see that reflected now in the way even magazines write about new products and new technologies. They talk about form factors, they talk about wireless, they talk about ease of use, and increasingly they talk about things like security and manageability.
And it's these transistors, it's how we use our silicon budget to bring these kinds of features into the marketplace that I think are increasingly what Intel is going to be about and what our end customers are all going to demand.
For us, this demands a fundamental shift inside of Intel in terms of the way we have looked at technology and think about technology. And at the highest level, it's simply looking at our product output as platforms. Not necessarily selling end-user products. We don't do that, as you know, but thinking about the products we design as a collection of technologies that have to be integrated into end-user products in a seamless fashion.
What you can think about is what you're seeing us go through is what I'll call, to use poor English, the platformization of Intel happening before your eyes.
On one vector you'll see us continue to drive pure performance, gigahertz over time, and you've seen that generation after generation after generation.
You've also seen us add increasing capabilities inside the microprocessors, like MMX, like hyper-threading over time. And we'll continue to do that.
But increasingly you'll see us add other features outside the microprocessor as well, in the chipset, where we have things like PCI Express or high definition audio now shipping into the marketplace.
And most recently in the last year you've seen us augment this overall set of platform capabilities with communication silicon and software that manages that communications protocol as part of our overall offering.
But we took it one step further last year. We took the collection of the most highly integrated products for one form factor, the notebook, and created the Centrino Mobile Technology brand.
So we were able to move from marketing processor generation after processor generation to marketing a new usage model for computing. And I believe it's been very, very successful. Certainly the product has been very successful for us, and I think those of you who are shipping notebooks based on this product would agree it's also been very successful for you.
This is a better way to think about our products. It's a better way to market our products. It's a better way to explain our technologies to end users over time, by integrating them and marketing at a level they can understand and appreciate.
We will not stop with the notebook. The next platform that you'll see from Intel is around the Digital Home: an integration of technologies, hardware and software, working with vendors and other partners in the industry, that will enable much of what we've described around the Digital Home over the last couple of IDFs.
Digital Home is moving into reality. The Digital Living Network Alliance Specification 1.0 was delivered in June. That was on target. It took us a year to do that with the industry. And you'll see products in the marketplace this holiday season.
The EPC, or entertainment PC, which we first showed at the Consumer Electronics Show in January of this year, is also being delivered into the marketplace starting now; some of our customers are shipping them today. And you'll see the volume grow over the second half of the year as we approach the holiday season as well.
And then the third key element of the Digital Home is really how we get protected content around the home in a seamless fashion. And the technology that we have been driving for the last couple of years for this is something which has the catchy name of DTCP-IP or Digital Transmission Content Protection over Internet protocol.
This is a very, very important technology. And I'm happy to say that products will be in the marketplace to enable this in the second half of this year and on the shelves for Christmas.
Now, we introduced this last year for premium content. And I thought it would be useful to bring up Kevin Corbett, the Desktop Product Group's Chief Technology Officer, to give you a more in-depth update on where we are with DTCP-IP.
Kevin.
KEVIN CORBETT: Hi. Thank you, Paul.
Well, we've made great progress with the teams on DTCP Over IP, as Paul just described. Last year, we announced the specification. And this year, at CES, seven studios endorsed DTCP Over IP as appropriate for their premium content on home networks. Today I'm really excited about a breakthrough in the Digital Home.
We have our first product here from Netgear up on stage that's implemented a DTCP Over IP-based solution to stream premium content from the PC to the Digital Media Adapter.
We've worked together with six different companies to do this; you can see them on this foil here. Digital 5 worked together with Netgear to build this product and make it DTCP Over IP-enabled. The premium services you see here, Movielink and STARZ!, are RealNetworks-based services that download content to the PC in the Helix DRM and play that content on the Digital Media Adapter. And Sony Pictures has approved this overall solution as appropriate for protecting their content end to end.
So let me show you a little bit about this product.
So today, up here, we have the entertainment PC. This was developed with Intel and Tatung. It's a small, slim-form-factor, CE-like box, but it has all of the power and performance of a personal computer. It's got an Intel processor with Hyper-Threading Technology and an Intel 915G chipset, and it delivers high-definition video, high-definition audio, and wireless for easy home networking.
But today, what I want to do is I want to show you the content experience.
So running on this entertainment PC is Media Center Edition; many of us have seen that. But today, integrated into Media Center Edition, is the STARZ! movie service, STARZ! Ticket. Because this has been implemented with a 10-foot user interface, I'm able to navigate with a remote control through an easy-to-read interface that I can read from here. I can go down and select a movie I want to see, so I'll select "Maid in Manhattan," and I can go over here and begin to download it and play it. While that's queuing up and downloading (this is progressive download), you'll see it start to play here in a second. Okay, good, it's loading up. What that does is allow me to start watching the movie right away while the rest of the file downloads. Now, while that's loading up and going through the credits, let me tell you a bit about the STARZ! movie service.
STARZ! Ticket is an Internet on-demand service that allows you to access 150 or so of the 700 movies that are in the STARZ! library. But you get to watch them when you want to on your entertainment PC, not when it's available on the seven or eight channels on your cable or satellite system. That's pretty compelling. I can now download and have the flexibility to pick whatever movie I want and watch it when I want, not when it's programmed.
Now, if this was last year, I'd have to stop because I could only play it on the entertainment PC or take it on a notebook.
But today, because I have the Digital Media Adapter that's been enabled with DTCP Over IP, I'm able to take the premium content here and bring it over to the Digital Media Adapter.
So I'll let this movie keep playing, because it's "Maid in Manhattan" and maybe I could play this movie over there if I want, but I really don't want to watch it. I'll let my wife watch it and I'll come on over here and watch a different movie.
So over here is the adapter I talked about, the Netgear adapter. It implements DTCP Over IP, which allows it to take movies that are downloaded to the PC and encrypted in the Helix DRM from RealNetworks; the Digital 5 software runs on the PC and on the adapter to translate that into DTCP Over IP and send it to this adapter. So the content is protected all the way from the PC to the adapter.
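A minimal sketch of the idea behind that translation step: content that arrives under the DRM is re-encrypted under a session key for the hop to the adapter, so it is never in the clear on the wire. This is illustrative only; the cipher choice, key handling, and function names below are assumptions, not the DTCP-IP specification.

```python
# Illustrative only: link protection for a media stream, in the spirit of
# DTCP-IP. The cipher, key handling, and framing are assumptions, NOT the spec.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect_for_link(cleartext_chunk: bytes, session_key: bytes, nonce: bytes) -> bytes:
    """Encrypt one chunk of content under the negotiated session key."""
    cipher = Cipher(algorithms.AES(session_key), modes.CTR(nonce))
    return cipher.encryptor().update(cleartext_chunk)

def recover_on_adapter(protected_chunk: bytes, session_key: bytes, nonce: bytes) -> bytes:
    """The adapter holds the same session key and decrypts for playback."""
    cipher = Cipher(algorithms.AES(session_key), modes.CTR(nonce))
    return cipher.decryptor().update(protected_chunk)

# In a real system the session key comes out of a device-authentication and
# key-exchange handshake; here we just generate one for the demo.
session_key, nonce = os.urandom(16), os.urandom(16)
chunk = b"...one chunk of content, already released by the DRM..."
assert recover_on_adapter(protect_for_link(chunk, session_key, nonce),
                          session_key, nonce) == chunk
```

The point of the design is that the DRM protects storage on the PC while the link cipher protects transit, so the content is handed from one protected domain to the other without ever being exposed.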
So then I'm able to bring up the Digital 5 interface here, and I can go into my movies. I could select the STARZ! movie I was just watching, or, in this case, I've also downloaded some Movielink movies. Let me scroll through those, and I'll go to "50 First Dates" and play that. This is another Sony Pictures movie. Notice how the "Maid in Manhattan" movie is still playing nice and cleanly over there. And now this new premium movie is loaded here.
What this is is the power of the Intel CPU giving you seamless performance in the Digital Home, playing multiple streams across wireless technology, protected by DTCP Over IP.
So that's a great opportunity, great opportunity for all of you to implement this technology in any of your CE devices to take advantage of these services.
But you know what? We don't want to stop there; we have even more progress to make. What we're going to do now is not only keep working with RealNetworks, but also go work with Microsoft and Sony OMG, and maybe even Apple, to get their DRM technologies to be able to use DTCP Over IP.
And so I'm very happy today to announce that Microsoft is going to support DTCP Over IP in a future version of Windows Media Player, and we're working very closely together with them to do that. What that means is that if you implement DTCP Over IP, you'll be able to take advantage of the services I showed you today from RealNetworks, take advantage of the Microsoft services, and still support the Cardea services that Microsoft supports today: both Cardea and DTCP-IP, a rich experience. And as we move on to these other providers, we'll have even more progress.
So tomorrow, Bill Siu will show you a lot more details on this and tell you exactly how this all worked up here. So thank you, Paul.
PAUL OTELLINI: Kevin, that's great progress in the year, and I'm really happy to see that Microsoft has joined the support, and even happier to think that Apple may join the support. Maybe you can have them help to get a better name for this while you're at it.
KEVIN CORBETT: Always want more, don't you, Paul.
(Applause.)
PAUL OTELLINI: Now, while it's true that we are increasingly, as a company, focused on the platform, and putting platform level thinking and design and integration into everything we do, and you'll see this in the Digital Home, you'll see it in phones over time, you'll see it in the enterprise over time, at the end of the day, the most important thing that we do is invent and integrate new technologies at the core level of our products into the microprocessors and the core chipsets around them.
And this process of invention that you've heard about up to this point is really very evolutionary. So we thought what we would do today is show a video in which some of our senior fellows, the Intel Fellows, talk about what they think is revolutionary in the future, in terms of the technologies they want to bring into the marketplace.
Let's roll that video, please.
(Video plays.)
PAUL OTELLINI: What that video does is describe what we do at Intel: embedding these ideas into silicon.
And what I want to do in the next section of the talk is focus on these technologies, the kinds of things we're bringing into the microprocessors and into the platform on one vector. And on another vector, talk about a more classical view of performance improvement in terms of cache memory.
The first technology I wanted to talk about is something called hyper-threading. This is part of a very long-planned and extensive rollout of moving our processors and the software that supports our processors into increasing degrees of parallelism day in and day out.
We conceived of hyper-threading as a seed vehicle to move to parallelism in 1999. We announced it for the first time publicly in the fall of 2001 at IDF, and we introduced it in November of 2002.
If you look at the chart that's up here now, you can see that in 2002, even though we shipped it that year, we shipped very few units; they shipped in the latter part of the year. But in 2004, as we exit this year, 100 percent of our 32-bit server products are threaded, and about half of all of our performance-segment client products will be enabled with hyper-threading. What this is doing is providing not just a performance boost in the marketplace; it has created a very large installed base of products that are thread-ready for all of you software developers out there.
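What "thread-ready" means for software is work structured so it can be split across logical processors. Here is a minimal sketch in Python, using process-based workers so a CPU-bound job really spreads across the hardware threads; the prime-counting kernel is just a stand-in workload:

```python
# A minimal sketch of a "thread-ready" workload: split an embarrassingly
# parallel job across however many logical processors the machine exposes.
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Stand-in CPU-bound kernel: count primes in [lo, hi)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    workers = os.cpu_count() or 2   # logical processors; 2x cores with hyper-threading
    step = 200_000 // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"{total} primes found using {workers} logical processors")
```

The same decomposition works whether the logical processors come from hyper-threading, multiple cores, or multiple sockets, which is why building the installed base first matters.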
Other technologies that we've been describing have to do with memory addressability. EM64T, extending our 32-bit line to 64-bit addressability, is shipping now for servers and workstations on both Windows and Linux. We will make it available broadly on our client silicon with the Windows 64-bit client when that operating system ships.
In terms of other technologies, there is LaGrande Technology for security, and Vanderpool Technology, our virtualization capability, for reliability. The combination of those two really gets enabled with Longhorn in 2006. We have continued to develop this technology, and you'll see a demo of it in a few seconds, but in order to be mainstreamed, it needs a robust, reliable operating system environment, which we believe will be coincident with the Longhorn deployment in '06.
Last fall, Louis Burns and I showed the first public demonstration of VT or Vanderpool Technology for virtualization. And that was a Digital Home scenario. We were running games and movies and so forth in this scenario.
As we look at this technology, though, I think it actually has value not just in the consumer environment, but also, increasingly, in the IT environment. And to help explain why that is so, I'd like to bring out Stacy Smith, who is our co-CIO at Intel, and he'll explain to you what Vanderpool does for him and his people over time.
STACY SMITH: Thanks, Paul.
PAUL OTELLINI: Good morning, Stacy.
(Applause.)
STACY SMITH: We're very excited about the opportunity for Intel's Vanderpool Technology to reduce our costs and improve the reliability and resilience that we see inside the enterprise.
One of the challenges that we face in IT is that while all computers start out with a standard load, those pesky users like to add things to that load. They'll add productivity applications, they'll download things off the Internet, and over time we end up with a very nonstandard environment.
So we've come up with a concept, the office of the future, that Art is going to help me demonstrate here where we've actually crammed four different PCs in his office, and you'll see it's a very small office just like what we have at Intel, but we're able to keep all of the different usage models separate this way.
So if you look at this red system here, this is Art's corporate applications. This is a standard load; everyone in the company would have this series of applications. And this particular box has to have the highest levels of security so that we can keep all of our proprietary information secure.
The yellow box here is Art's personal applications. This could be things like voice over IP or video conferencing, so he can keep in contact with friends and family, but it could also be some truly personal applications. For instance, I have Madden 2005 on my computer, so when Paul's staff meetings start to drag on a little bit, I can keep myself occupied.
The green system here is Art's Linux operating system where he's doing some of his high-end design engineering activities, running a Cadence CAD application.
We have thousands of engineers inside of Intel today who have two computers in their office so that they can do their high-end processing using this Linux box and they can keep contact with the corporate system.
The blue box here is the one that information technology would own and operate. These are the IT manageability operations. So this would make sure that at any given point in time, Art has the latest security settings, is patched, and has the right virus protection.
For instance, last night, Art was on the Internet, probably going someplace he shouldn't have gone, and he downloaded an application that's brought a virus into the environment.
So what you should be seeing on the screen is that as he connected this computer today, the IT manageability system detected that there was a virus, immediately isolated that system, and notified Art that he needed to run a cleanup application on it. And in the meantime, Art was still able to continue doing work, which is what we pay him for, on his applications system.
So we kind of like this model of four PCs in the office from an IT perspective, but we realize that it's not terribly practical.
In the future, instead of having these four PCs in his office, Art would be able to have one PC running Intel's Vanderpool Technology where all of this stuff would be partitioned using a single box and hardware. And I'm really pleased to show you that that's exactly what we're demonstrating today.
All of those applications that you saw running were running off of one system. So on one system, he was running his corporate applications; he was running his personal applications in a different partition; he was running a Linux operating system running the Cadence CAD application; and we were running a management capability system, all from a single system.
So, Paul, we're really excited about this.
(Applause.)
STACY SMITH: Thank you.
We think this can eliminate 10,000 calls a year to the TAC for security issues, and we believe this will save us millions of dollars, in support costs and in the fact that we can eliminate that second system on design engineers' desks.
PAUL OTELLINI: Thank you for the demo, Stacy, and thanks, Art, for running it.
These technologies are going to address the kinds of problems that many of us running enterprises have which is security, reliability, affordability, those kinds of things.
There's another "ity" that needs to be addressed and it's manageability. And at IDF this week you'll see a number of announcements from Intel that have to do with what we're driving in terms of manageability.
You can think of this as another platform technology, under the label of IAMT, or Intel Active Management Technology.
What is this and what problem are we trying to solve?
If you look at manageability as a cost, 80 percent of IT spending today on average is associated with keeping the business running. Essentially, the task of manageability for support and those kinds of things.
Gartner has a very interesting view on how you deal with this 80 percent. They suggest that you can take it to what they call zero management, or zero-cost management, by insourcing it to silicon. And, no surprise, we agree.
And what we're announcing at IDF this week are really three large initiatives. The first is the Intel cross-platform manageability program, a broad program for manageability of platforms from notebooks up through servers.
The second is our first product in this area, IAMT, or Intel Active Management Technology. It allows for out-of-band troubleshooting, and it provides a protected, secure environment that can keep certain types of data for reboot or diagnostics virus-free and available out of band for IT managers to access.
Now, this is available today on very high-end cards and some high-end servers, but what we want to do is make this part of every platform that has Intel architecture inside of it, make it very cheap, make it pervasive, and make it cross industry.
In order to do that, we recognize the need for industry collaboration. So we're also announcing this week the formation of a group to produce a public specification, which we expect to have out by the spring IDF next year. There will be a draft process; it is meant to be a very inclusive process, and we would encourage those of you who are interested to join the various threads in the sessions this week, really get into this, and help drive it to reality.
The other area of performance I thought I would talk about that we haven't really discussed much in quite some time is cache. Cache is kind of boring. But it's also kind of important.
Cache memory is a great way to increase performance on the old-style applications, the integer-based applications and the floating-point applications that need access to that memory; their performance goes up when there is a larger cache.
You can see on this plot since the time we first integrated cache into our product lines with the 486, we've been driving more and more cache memory into the product line.
This year we began shipping two megabytes of cache inside one processor, the Dothan processor, which is the current processor in the Centrino mobile lineup. And before the year is out, we'll also ship two megabytes of cache on our desktop processor into the marketplace, as a performance-sensitive SKU for those who need it.
Now, if you think about this, two megabytes of cache was the entire memory configuration of the PC in 1990, and now we're driving it through Moore's Law onto individual chips.
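A small experiment makes the cache argument concrete: sum the same array twice, once walking memory sequentially and once with a large stride, so the second pass keeps missing in cache. The array size below is an arbitrary choice, picked only to dwarf a 2 MB cache:

```python
# Demonstrates cache locality: the same 8,000 x 8,000 array is summed twice,
# once walking memory sequentially and once with a large stride.
import time
import numpy as np

n = 8_000
a = np.random.rand(n, n)          # ~512 MB, far larger than a 2 MB cache

start = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(n))   # rows are contiguous in memory
row_time = time.perf_counter() - start

start = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(n))   # stride of n * 8 bytes per step
col_time = time.perf_counter() - start

# Same arithmetic, very different memory behavior.
print(f"row-major: {row_time:.2f}s   column-major: {col_time:.2f}s")
```

The arithmetic is identical in both passes; only the memory access pattern changes, and the strided pass pays for its cache misses. A larger on-die cache shrinks exactly that penalty.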
So that's where the technology is going in an evolutionary fashion. What I want to close with is talking about two large, I think, inflection points in technology deployment.
The first, as I said earlier, is pervasive wireless broadband and the second is the move towards parallelism. Both of these move on top of Moore's Law, and in fact, they need Moore's Law in order to be implemented.
Let's look first at connectivity. What I picked here are three data points: today, 2004, in the middle; four years ago, 2000; and four years from now, 2008.
In 2000, you can call that the era of narrowband. Dialup represented 90 percent of the connections on the Internet. Only 10 percent had some kind of rudimentary broadband.
Today, we've seen a flipover. 51 percent of the Internet connections today are fixed broadband. Only 49 percent are dialup.
Interestingly enough, Wi-Fi now makes up about 8 percent of fixed broadband connections, as a means of accessing networks in a friendlier, multiple-user fashion.
So what's next? Well, I think we're about to see an era of broader band. Dialup will continue to decline, to 22 percent of Internet connections, but fixed broadband, we think, will be 70 percent. Wi-Fi will grow to 40 percent of those connections, and a new technology called WiMAX will start to become prevalent in the next four years.
Let's talk about why we think that's going to happen.
If you go back and look at the last three deployments of broadband technology, ISDN, then cable and DSL together, and then Wi-Fi, the orange bar, what you see on this chart is that in the first five years of deployment, Wi-Fi blows the others away in terms of millions of users, reaching 40 million users five years in.
Why did that happen? It's because it's a standard. It's because it enabled a new usage model, mobility, which the others didn't have before. And because, quite simply, it was cheap: high integration and low cost made it much more pervasive.
Earlier, I showed the rate at which Wi-Fi was moving into notebooks. This chart takes that and does a projection out to 2008. And it's in units this time as opposed to percent, but it shows the same thing. Wi-Fi is essentially in almost every notebook in that time frame.
Starting in 2006, though, we have made a commitment that we will integrate, as an option, WiMAX silicon into the Centrino Mobile Technology platform, and we believe, as plotted by those orange bars, that you'll see the same kind of viral growth for WiMAX that we saw for Wi-Fi happen as a result of it taking off in a large number of PCs.
Now, beyond our projections, what's happening?
Well, a year ago there wasn't much. There was a WiMAX forum. There were ten members, including Intel Corporation. Standards weren't yet in place, and there were some pre-WiMAX trials, pre-standard trials happening, but really not a lot of momentum beyond that.
If you fast forward to today, it really has changed quite a bit. The WiMAX Forum has 140 companies, not just silicon companies but telcos and equipment manufacturers as well. The IEEE has approved the 802.16 spec; it's in the marketplace, and people are scrambling to deliver product around it. There are now 40 trials under way around the world, in China, Europe, Brazil, and the United States.
So if you fast forward again to what I think is going to happen in the next four or five years, I think it's not inconceivable that we are on the cusp of a WiMAX era. And in fact, I think that WiMAX could be to DSL and cable what cellular was to land line not that long ago. That is a disruptive, more convenient, lower cost technology that brings about pervasive utilization.
Now, we're not just talking about WiMAX; we're also driving product. I'm happy to show you our first 802.16 chip today. This is a development card, and the chip in the middle is our 802.16 silicon. It is now sampling to our customers, and they are feverishly working on CPE, or customer premise equipment, like this rack of products over here to my left, for 802.16 WiMAX deployment starting in 2005. We're very, very excited about this. This product will be known as the Intel PRO/Wireless 5116 broadband interface.
Let me shift to the second inflection point, which is parallelism. And to put parallelism in perspective, let me take you back a bit and look at the history of the PC, starting from the first IBM PC.
In 1990, Windows came out and brought the graphical user interface with it. In 1997, with the Pentium processor and the integration of MMX, you saw the first multimedia PC. And in 2003, the first PCs with Wi-Fi brought a convergence aspect to the use model.
Each of these use model changes, interestingly enough, was associated with a 10X factor improvement in the base capability that was shipped at the time.
For the graphical user interface of Windows to be useful, we needed to integrate floating point and other technologies like the cache we talked about earlier into our microprocessors to give it a 10X performance increase from the first PC. And that's why you saw the adoption of a reasonable graphical user interface.
The multimedia MMX instructions were another 10X factor in terms of our ability to handle multimedia content and data streams in a processor, in a PC, and that drove another use-model change.
And without being specific, I think that you could easily argue that Wi-Fi is a 10X factor as well in terms of making PCs, notebooks, per se, that much more usable.
At the same time we're seeing these 10X factors happen on the PC, there is a history of parallelism on MP servers and workstations that is required for the kind of capabilities implicit in servers and workstations today. So we've seen parallelism move down to the workstation, certainly.
What I think we're about to see as we move to multi-core and dual-core processors is another 10X factor, this time, a real 10X factor in terms of performance as measured by things like gigaflops. It will usher in an era of personal parallel computing, the ability for all of us as computer users to have access to parallel processing capabilities.
Now, you might ask, why do I need that? Why parallelism?
Fundamentally, we still can't solve all of the everyday problems that our customers want to solve with their computers. Answering questions like "what if," "where is," and "how do I find," or running simulations ("for example," or "how would this look?"): those kinds of things are what I believe computer users really want to do with their machines, to take advantage of the power that we're driving.
Now, this is not science fiction, in my mind. And I'll show you two examples, one for the Digital Home and one for the office. But they both revolve around a common set of capabilities that parallelism best addresses. And they are recognition, they are mining, and they are synthesis.
These are all things you can imagine without a large stretch of the imagination. Think of it as Googling with video capability on your desktop or on your notebook.
To do this, we need a 10X improvement in the kinds of compute power that's available today.
It's also, I think, equally true in the office. The same three aspects of recognition, of mining, and synthesis are very much required here.
These kinds of things, interest-rate modeling, for example, require 50 gigaflops of local compute power, things that people do today on servers and some high-end workstations.
Today's PC has five to seven gigaflops of performance. We need a 10X factor to be able to bring this kind of capability to everyone's computer.
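The arithmetic behind that 10X claim, using the figures just quoted:

\[
\frac{50~\text{GFLOPS required}}{5 \text{ to } 7~\text{GFLOPS in today's PC}} \approx 7 \text{ to } 10\times
\]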
So what are we doing about it? Intel has decided that we are going to drive parallelism. In fact, I believe that what we're looking at is moving from an era of how many chips in a computer to how many computers in a chip. Think about that in terms of the way you're going to deploy products going forward.
As I said, we began this in terms of the programming model with hyper-threading. I talked about the data a few minutes ago on that. In 2005, Intel will ship dual-core products into every one of our key segments of the marketplace: Desktop, servers, and mobile products in production next year.
In 2006, that rate of shipments starting in '05 will grow much like it did with hyper-threading, driving pervasiveness throughout the product line. Exiting 2006, we believe that over 40 percent of the desktop product shipments will be dual core, over 80 percent of our server products will be multi or dual core, and over 70 percent of our mobile products will be dual core as well. We are dedicating all of our future product designs to multi-core environments. We have bet on this in terms of our software environment, our ecosystem development, and our Intel Capital infrastructure around it. We believe this is a key inflection point for the industry.
Last spring, at our analyst meeting, I showed the first dual-core product wafer at Intel. It was a Montecito wafer, the world's first billion-plus-transistor product.
Today, we want to give you a demo of Montecito. We're still on A.0 silicon. And this is the product (holding wafer). And to talk about the product and give you a demo, let me bring out Abhi Talwalkar, who is the vice president and general manager of the Enterprise Platforms Group.
ABHI TALWALKAR: Good morning. I don't know if you told the audience that there's actually 1.72 billion transistors on that.
PAUL OTELLINI: Did you count them all?
ABHI TALWALKAR: I didn't. But my finance controller upset me one day and I had him count them.
In all seriousness, we're driving tremendous performance for microprocessors, and with Montecito, we're going to use every one of those transistors to enhance performance. The significant changes with Montecito from an architectural standpoint are dual core, as well as hyper-threading in Itanium. That piece of silicon in your hand has two cores and four threads overall, four logical processors.
If you put four of those into a four-socket Itanium platform today, you will have a total of 16 processor threads or 16 logical --
PAUL OTELLINI: 4 equals 16.
ABHI TALWALKAR: 4 times 4 equals 16. You can see that up above you, with a basic utility in the Windows environment that shows the number of logical processors. Pretty impressive in terms of benefits to multi-threaded applications. And you can imagine the benefits for virtualization as well.
Let's talk about a real-world application. What I have here is the latest SGI Altix server, probably one of the most scalable server architectures in the world. It will scale from 64 processors on up, and you can have a number of these systems interconnected to deploy thousands and thousands of Itanium processors.
Now, NASA is utilizing several of these systems today for a host of applications, but I wanted to share one that is seeing a tremendous amount of activity. Most of you have been watching all of the hurricanes hitting Florida. NASA utilizes this system to collect data and analyze weather so they can effectively predict it over the next two to five days.
You can imagine the benefits that has, not only in saving cost through prevention of damage but, more importantly, in saving lives, especially if we can predict these things two to five days in advance.
Montecito is going to deliver big benefits. Just out of the chute without any software recompile, Montecito will deliver 1.5 to 2X performance gains. And I think what's even more exciting, Paul, is that we are demonstrating this functionality today on first silicon of Montecito.
PAUL OTELLINI: That's fantastic. It's great to see how these new products are coming out. I want to thank you very much for showing us that demo.
It's no accident that Abhi was showing a NASA application. It just so happens that there are two gentlemen here from NASA today I'd like to bring out and tell you a little bit more about how NASA intends to use this technology.
Let me introduce Walt Brooks and Ken Cameron.
(Applause.)
PAUL OTELLINI: Hi, Walt. I want to welcome you guys here today.
Walt is the head of supercomputing for NASA, and Ken is an astronaut and Space Shuttle commander for the agency as well. So we've got the agency well represented here. Walt, I have to ask you: when I think about NASA, I think about astronauts, I don't think about supercomputer CIOs. How does your job compare to an astronaut's?
WALT BROOKS: Like most people who joined NASA, I thought about being an astronaut. I didn't make the cut, though.
These days building Project Columbia, which is going to be one of the world's largest supercomputers, I feel like I'm contributing in a major way. So it's close. But I still want to go to space.
PAUL OTELLINI: That is the new supercomputer cluster that's built upon Itanium 2 and the SGI Altix system.
WALT BROOKS: Absolutely. It's built from 512-processor SGI Altix systems based on Itanium 2 processors, running a Linux operating system. We're going to deploy 10,000-plus of those processors, and we're doing it in record time. And we need to, because NASA's facing some of the most challenging problems out there.
Just to give you an example: in aeronautics, we will be using this to do stability and control and high lift, some things Boeing really worries about in the very competitive environment we have in airframes.
In space, we will be working on modeling the sun, working on weather and climate.
And then the area of exploration, as you've probably heard, we're headed on to the Moon and Mars, and we're working on the crew exploration vehicle and some scenarios.
But probably the most important thing we're working on, and Ken can probably speak to this, is getting the Shuttle flying again and getting it flying safely.
KEN CAMERON: That's right. As you know, the Space Shuttle astronauts depend on operational computers during the conduct of their mission. We can't get off the ground without them. But what's less obvious is that we depend on the engineering and research computational capability in order to prove out the technology before we fly.
We never actually fly a physical launch without having a lot of simulations to back it up. And the kind of simulations that this capability will bring forth will help ensure safety and mission success for future space fliers.
PAUL OTELLINI: That's exciting, Ken. But I'd be concerned, because we want to get you up there as soon as possible, and some of these supercomputer deployments take a long time. What is the deployment schedule for Project Columbia?
WALT BROOKS: It is incredibly fast. Part of it is because, working with you last year, we really proved out that building block. Some of these systems have taken two years to deploy, and probably more importantly, it's another year before the users figure out how to get their uses ported effectively. We are using this right now; some of the things you just showed are being done in real time on the system, and we're working on the Space Shuttle right now, on the return-to-flight problems that Ken just mentioned. What we're talking about is that maybe in four months we'll have all 10,000 processors deployed and working.
For us, it's about working on real problems, not just having a computer in a room that runs by itself with a bunch of computer geeks; we want the application guys on board with us. And at that point, we should have on the order of 60 teraflops.
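For what it's worth, those two numbers are consistent with simple peak-rate arithmetic, assuming on the order of 6 gigaflops of peak throughput per Itanium 2 processor (an assumption here, roughly four floating-point operations per cycle at 1.5 GHz):

\[
10{,}000~\text{processors} \times 6~\text{GFLOPS} \approx 60~\text{TFLOPS}.
\]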
That's good news for the astronauts, because the sooner we can get Columbia running, the sooner we can get the Space Shuttle flying and back up into space.
PAUL OTELLINI: I am glad Intel has had some part in that, but 60 teraflops is incredible. Where would that rank in terms of the world's fastest supercomputers?
WALT BROOKS: That's 50 percent faster than the fastest supercomputer on earth right now, which doesn't happen to be in the United States.
PAUL OTELLINI: That's fantastic, Moore's Law in action. We're honored to have you here today.
KEN CAMERON: Paul, we and the astronauts are very pleased to have Intel on our team.
PAUL OTELLINI: I'm glad we could be there. Thank you, guys.
(Applause.)
PAUL OTELLINI: I've shown you this chart at the last couple of IDFs, and I update it every time to show you how we are maintaining our commitment to bring base-level capabilities into the marketplace in very high volume, very quickly. And this tracks what is now a 15-year history of technologies coming in and how fast they become essentially pervasive in the Intel product offerings.
The reason I wanted to update it was to show you that we expect to do the same kind of thing with the 10X inflection-point technologies I talked about, WiMAX and parallelism, in the product line, and give you the kind of square-wave change you need in order to have a guaranteed installed base for your development products, both hardware and software.
What we talked about this morning, really, was how we can all surf that surge. I postulated, and showed some data, that after the bubble the industry is growing again, and you can see that in the overall volume.
I think there are really two fundamental points I'd like to leave you with this morning, in terms of what Intel would like to see you do to take advantage of the technology that's coming down the road.
The first is that, increasingly, all of us have to think about and design and market towards an increasingly digital planet. Our home markets are no longer sufficient, and increasingly, people want these products everywhere on earth.
We very often don't think of those markets first. I think it's important that we start turning our thinking around in terms of development, design, and marketing: the market requirements, and maybe the local market presence, that all of us need in order to address the needs of these markets. They are different from our home markets here in the United States.
The second is the two 10X factors that I believe are coming. They are pervasive, and I think they are very, very fundamental in changing connectivity and changing the kinds of problems computers will solve, in terms of wireless broadband and parallelism.
And that's really what IDF is all about. It's allowing all of us to work together to take those next steps and deliver better technology into the market to grow our collective businesses.
Thank you very much and enjoy the rest of the week.
(Applause.)
* Other names and brands may be claimed as the property of others.