Thursday, August 26, 2004

Graphics

In response to a previous post, Chandra asks:
Turing awards are typically given for work that induces a paradigm shift. Not being very knowledgeable about graphics, I would like to know whether there have been any recent paradigm shifts in graphics.
Hmm. I am not an expert, but here are some things:
  • The switch from vector to raster graphics
  • The use of specialized hardware (the graphics card) for rendering operations
  • (and this is far too soon to tell) the advent of programmability in graphics cards; a rough sketch of what this means follows the next paragraph.
All of these are fairly recent. It would be fair to say that much of the modern state of graphics reflects innovation that happened from the late 80s onwards, and considering that the most recent Turing Award was given for something invented in the late 70s....
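
To be concrete about what "programmability" means in that last item (and this is just my own sketch, with made-up names and a toy lighting model): instead of a fixed-function pipeline, you supply a small program that the card runs, in parallel, once for every pixel it draws. In C-like terms, it is something like this:

  #include <math.h>
  #include <stdio.h>

  /* Made-up types for illustration: a color, and the per-pixel inputs
     the rasterizer hands to the shader. */
  typedef struct { float r, g, b; } Color;
  typedef struct { float nx, ny, nz;      /* surface normal at this pixel */
                   Color base; } Fragment;

  /* A tiny "fragment shader": simple diffuse (Lambertian) lighting.
     A programmable card runs a user-supplied routine like this for every
     pixel it rasterizes, instead of a single hard-wired formula. */
  Color shade(Fragment f, float lx, float ly, float lz)
  {
      /* normalize the light direction */
      float len = sqrtf(lx * lx + ly * ly + lz * lz);
      lx /= len; ly /= len; lz /= len;

      /* dot product of normal and light direction, clamped to [0,1] */
      float d = f.nx * lx + f.ny * ly + f.nz * lz;
      if (d < 0.0f) d = 0.0f;

      Color out = { f.base.r * d, f.base.g * d, f.base.b * d };
      return out;
  }

  int main(void)
  {
      Fragment f = { 0.0f, 0.0f, 1.0f, { 1.0f, 0.5f, 0.2f } };
      Color c = shade(f, 0.0f, 0.0f, 1.0f);   /* light pointing along +z */
      printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
      return 0;
  }

The point is only that the per-pixel computation is arbitrary user-written code, rather than a fixed menu of operations wired into the hardware.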



As an aside, wouldn't it be nice to have RSS feeds for comments? I don't have any illusions that people are falling over themselves to hear what I have to say, but the only easy way for me to respond to comments so that the commenter actually sees the reply is to reply to them directly (not always possible) or to post a new entry.

Haloscan used to do this, but Blogger doesn't... yet....

3 comments:

  1. I am not sure the things you mention are paradigm shifts caused by innovation in graphics; they have more to do with Moore's Law and innovations in computer architecture.

    Chandra

  2. This, in my view, is debatable. Graphics hardware is very specialized, and it was definitely not obvious that a dedicated processor for graphics made sense. UNC had been pushing this idea for a while with Pixel-Planes in the late 80s. Many of the features we take for granted in modern graphics cards, things like the depth buffer, the stencil buffer, etc., are the result of innovation and graphics design from the early 90s.

    Keep in mind that the entire language of graphics as we see it today, expressed in terms of vertices, model transforms, rasterizers, texturing, etc., was developed over the past 20 years or so, and many of these ideas came from a small group. Now it is possible (and a graphics expert might be able to shed more light here) that no one person (or small number of people) can be associated with some of the major foundational elements underlying the modern graphics card, but the fundamentals of the field are more than just a byproduct of Moore's Law.

  3. But is there any important contribution from graphics to the regular computer user? You know, my computer would be just as useful to me without a 3D accelerator card. I would argue that the important breakthroughs in CS were in networking, operating systems, and databases (i.e., search and retrieval), and considerably less so in graphics.

    Also, using a dedicated computer for drawing graphics is a natural result of Moore's Law, as mentioned before. Anyway, the idea of using dedicated hardware for particular tasks is as old as computers, and the idea of trying to use specialized hardware to do general-purpose computing is also very old.

    I would argue (as the devil's advocate) that graphics is dead. If you look at the SIGGRAPH proceedings from the last few years, it seems to be mainly incremental and uninteresting work.

