Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.


Saturday, April 09, 2011

New models for Industrial Research (in reply to: The death of Intel Labs and what it means for industrial research)

Matt Welsh's post on "The death of Intel Labs and what it means for industrial research" must have struck a nerve with me, because I have spent a morning writing a long response.

BRIEF:

Intel's lablets have been shut down, not the labs. I helped start Intel's labs, but not the lablets. It's not clear how effective the lablets ever were. Same for the labs. I discuss models for research, including

(1) Academia far-out, and industry close-in (nice if it were true)

(2) Google's 20%

(3) IBM's and HP's business-group-motivated research labs

(4) Some of my experience from Intel, in both product groups and research labs

(5) Open Source (if I ever manage to retire, I would work on Open Source. But I have not yet managed to find a job that allowed me to work on Open Source.)

I think my overall point is that each of these models works, sometimes - and each is subject to herd mentality, deference to power, etc. Perhaps there is room for new ways of doing research, invention, and innovation - a new business model.

Finally, I briefly mention, providing links and quotes, Intellectual Ventures' website. With a disclaimer saying that I don't speak for IV, although obviously I have hope for its potential, since I left Intel to join IV.


DETAIL:

Intel recently announced that it is closing down its three "lablets"
in Berkeley, Seattle, and Pittsburgh.


So it goes. This might be unfortunate. None of the lablet work in my
field, computer architecture, has caught my eye, although I did enjoy
interacting with Todd Mowry's group in Pittsburgh on Log Based
Architecture (I had come up with Log Based Microarchitecture at
Wisconsin).

However, it is wrong to say that the closing of the lablets reflects the death of
Intel Labs. I was involved with the creation of Intel Labs inside
Intel, circa 1995.

This was historically a hard sell, since Intel had been *created* by
refugees from the research labs of other companies. It was a
touchstone of Intel culture that Intel would never do ivory tower
research not relevant to product groups.

E.g. while campaigning for the creation of Intel Labs I created a
slideset that said "Intel must start doing our own research in
computer architecture, now that we have copied all of the ideas from
older companies." I am not sure, but it seems like Craig Barrett may
have seen these slides when he was quoted in the Wall Street Journal:
"Now we're at the head of the class, and there is nothing left to
copy," Mr. Barrett said.


(Ironically, DEC used this to justify their patent infringement
lawsuit against Intel circa 1997 -- but when I created these slides I
had IBM in mind, not DEC, since I freely admit that much of my work at
Intel was built upon a foundation of IBM work on RISC and Tomasulo
out-of-order. Not DEC Alpha. Perhaps I should never have created
those slides, but they put the case pithily, and they helped justify the
creation of Intel Labs. And *I* did not quote them to the WSJ.)

Ref: http://query.nytimes.com/gst/fullpage.html?res=9F02E3D61F39F937A25756C0A961958260&pagewanted=2

Matt says "Before the Labs opened, Intel Research was consistently
ranked one of the lowest amongst all major technology companies in
terms of research stature and output." Well, yes and no. The lablets
opened in 2001. MRL, the Microprocessor Research Lab I helped start,
opened in 1995, as did some of the other labs. When I was at AMD in
2002-2004 my AMD coworkers were already sating to me that Intel's MRL
work was the most interesting work being published in computer
architecture conferences like ISCA, HPCA and Micro. I.e. I think MRL
was picking up steam well before the lablets were created.

Actually, the lablets were part of a trend to "academify" Intel
Labs. E.g. around that time my old lab, MRL, was taken over by a famous
professor imported from academia, who proceeded to do short-term work
on the Itanium - and over the next few years most of the senior
researchers who did not agree with Itanium left or were forced out.
Ironically, the academic created a much shorter-term focus at MRL by
betting on VLIW - and then ultimately he moved out of Intel.

Now, don't get me wrong: the guys left over at the lab formerly known as
MRL do good work. Chris Wilkerson has published lots of good
papers. Jared Stark accomplished the most successful technology
transfer I am aware of, of branch prediction to SNB (Sandy Bridge).
Chris and Jared are largely the guys whose work my former AMD coworkers admired.

But, such work at Intel is largely incremental, evolutionary. I
mentioned that the famous professor in charge of my old research lab
tried to play politics by favoring Itanium, even though his
researchers were opposed.

Annoyingly, from when I joined Intel in 1991 to when Intel Labs
started in 1995, computer architecture work inside Intel was pretty
much 10 years ahead of academia. Out-of-order execution like the P6's did
not come from mainstream academia, which was busy following the fads of
RISC and in-order execution. (OOO came from academics who were not then
mainstream, like Yale Patt and Wen-Mei Hwu, but they became mainstream
as OOO became successful.)

But companies like Intel rest on their laurels. Having defeated RISC,
Intel did not need to do any serious computer architecture work for,
what, 10 years? 16 years now?

Matt says: "I am very concerned about what happens if we don't have
enough long-range research. One model that could evolve is that
universities do the far-out stuff and industry focuses on the shorter
term. It is hard to justify the Bell Labs model in today's world,
though no doubt it had tremendous impact."


I share your concern. But my experience is that universities are not
necessarily good at doing the far-out stuff.

About my experience: I'm not an academic, although perhaps I should
have been one. I failed to complete my Ph.D. twice (first marriage,
then when my daughter was born). I've never had an academic paper
published (although I had a few rejected that were later built into
successful products). But I made some useful contributions to the
form of OOO that is in most modern computers. You are almost 100%
likely to have used some of my stuff, probably in the computer
you are reading this on. At one time I was Intel's most prolific
inventor. I helped start Intel Labs.

I wasted too many years of my life on what I think is the next major
step forward in computer architecture to improve single threaded
execution - speculative multithreading (SpMT). I say "wasted" not
because I think that SpMT is a bad idea, but because I spent far too
many of those years seeking funding and approval, rather than just
doing the work. The actual work was only a few intense months,
embedded in years of PowerPoint and politics. But even though SpMT has
not proven a success yet, a spin-off idea, Multi-cluster
Multithreading (MCMT), the substrate that I wanted to build SpMT on,
is the heart of AMD's next flagship processor, Bulldozer. 7 years
after I left AMD in 2004. 13+ years after I came up with the idea of
MCMT, at Wisconsin during my second failed attempt to get a PhD.

My last major project at Intel, 2005-2009, has not yet seen the light of day,
but news blurbs such as "Intel developing security 'game-changer':
Intel CTO says new technology will stop zero-day attacks in their
tracks" suggest that it may.

Source: http://www.computerworld.com/s/article/9206366/Intel_developing_security_game_changer_

But in my last year at Intel, this major project, and a couple of
minor projects, were cancelled out from under me, until I was forced to work on
Larrabee, a project that I was not quite so opposed to as I had been to
Itanium. Enough being enough, I left.

So: I am not an academic, but I have worked at, and hope to remain
working at, the leading edge of technology. I have tried to create
organizations and teams that do leading edge research, like MRL, but I
am more interested in doing the work myself than in being a manager.

The question remains: where do we get the ideas for the future? How
do we fund research, invention, and innovation?

Matt says: "One model that could evolve is that universities do the
far-out stuff and industry focuses on the shorter term."

"Could evolve"? Believe me, this is what every academic research
grant proposal I saw said, back when I sat on an Intel committee giving
out research grants. It usually doesn't work - although once in a while it does.

Matt says: "Google takes a very different approach, one in which
there is no division between 'research' and 'engineering'." This is
an interesting approach. Myself, I am not a very good multi-tasker - I
tend to work intensely on a problem for weeks at a time. I don't know
how well I could manage 20% time, one day a week, for new projects.
(Although I am supposedly doing something like this for my current
job, the 90% main job tends to expand to fill all the time available to
it.) But it may work for some people.

Somebody else posts about industry research: "IBM Research and HP
Labs don't really have an academic research mindset and haven't for a
long time thanks to business unit based funding." He then goes on to say:
"Even within Intel Research, successful researchers (in terms of
promotion beyond a certain key point) also had to have some kind of
significant internal impact." Which is, I suspect, why the famous
professor who ran MRL after I left emphasized the VLIW dead-end, over
the objections of his senior researchers.

My own vision for Intel's Microprocessor Research Labs was that the best
technology transfer is by transferring people. You can't throw
research over the wall and expect a product group to use it. Instead,
I wanted to have people flow back and forth between product groups and
research. In part I wanted to use the labs as an R&R stop for smart
people in the product groups - give them a place to recharge their
batteries after an exhausting 5 to 7 year product implementation. A
place to create their own new ideas, and/or borrow ideas from the
academics they might interact with in the labs. And then go back to a
product group, perhaps dragging a few of the academics along with
them, when they align with the start of a new project.

"Align with the start of a new project" - this is important. Sometimes a
project finishes, and there is no new project for the smart guys coming off
the old project to join, because of the vagaries of project schedules.
All too often people jump ship off an old project too early, because
they want to get onto the sexy new project at the right time for their
career growth. By providing such a scheduling buffer, this thrashing
may be avoided - as may the even worse happenstance, when a smart guy
leaves the company because there is no new project for him at his
current employer, while there is one at a new company. And, finally,
once in a while a new project flows out of the lab.

I am particularly sympathetic to the Anonymous poster who said "How
about a totally different alternative model?" and then talks about
memes popular in the Open Source community, such as "People do not
need to spend half of their life in formal schooling to start doing
cutting edge work." But then he says:

"Most academic research outside of the top 5-10 schools in any field
is not useful, even by academic standards."

I go further: MOST
academic work at MOST schools, even the top 5-10 schools, is not
useful. But often the best academic work is at some little-known
third or fourth tier school, and has trouble getting published.

"Code is (far) more useful than papers." I am very sympathetic to
this. But (a) most engineers and programmers are not free to publish
either code or papers, limited by their employment agreements. And
occasionally (b) the papers are a useful summary of the good ideas.

I look forward to the day when we can have Open Source computer
hardware. I don't say this facetiously: some of my best friends are
working on it. I would too, if I did not need an income.

"Many of the people capable of contributing at a high level in
academia have the ability to start significant companies and create
genuine wealth." Many, but not most. Not every good technical person
is a good business person.

Which leads into my closing point: Not every good technical person is
a good business person. Not every inventor is capable of building a
company around his inventions. Many of the most useful inventions
cannot justify a completely new and independent company: they need the
ecosystem of an existing product line, and the support of a larger
organization.

For many years my resume said that my goal was to re-create Thomas
Edison's Invention Factory in the modern world. In 2009 I left Intel
for the second time, and joined Intellectual Ventures. (With whom I
had earlier worked on some inventions I had made in 2004, in the short
time between my leaving AMD and rejoining Intel, the only time in my
career that my ideas belonged to me, and not my employer.)

I'm not authorized to speak for Intellectual Ventures, but I can refer
you to some of the things on the IV website,
http://www.intellectualventures.com:

“An industry dedicated to financing inventors and monetizing their
creations could transform the world.” Nathan Myhrvold, Founder and
CEO

    We believe ideas are valuable. At Intellectual Ventures, we invest both expertise and capital in the development of inventions. We collaborate with leading inventors and partner with pioneering companies. ... We are creating an active market for invention... We do this by:
    * Employing talented inventors here at Intellectual Ventures who work on new inventions to help solve some of the world's biggest problems.
    * Purchasing inventions from individual inventors and businesses ...
    * Partnering with our international network of more than 3,000 inventors and helping them to monetize their inventions.

Most of the posters on this topic, the original blogger and the authors of
the comments, are interested in promoting research, invention, and
innovation. Sometimes you need to create a new economic or business
model. I hope it works.

Matt says: "It is hard to justify the Bell Labs model in today's
world, though no doubt it had tremendous impact."


Somebody else once said to me that Bell Labs could have been kept running, avoiding its decline, based solely on its patent royalties.

Would that have been worth it? These were the people who gave us the
transistor. Information Theory. UNIX.

Finally, I must make the following disclaimer:

The content of this message is my personal opinion only. Although I am
an employee (currently of Quantum Intellectual Property Services,
working for Intellectual Ventures; in the past of companies such as
Intel, AMD, Motorola, and Gould), I reveal this only so that the
reader may account for any possible bias I may have towards my
employer's products. The statements I make here in no way represent my
employer's position on the issue, nor am I authorized to speak on
behalf of my employer.