Greyskye, Donating Member (1000+ posts), Wed May-26-10 08:21 PM
Response to Reply #10
11. Back up and re-read the article.
:banghead:

If you truly believe this stochastic approach is unworkable, then you are advocating giving up the push to stay on track with Moore's Law. This is not something limited to Intel. It is a physical limit that comes from etching and doing lithography at the nanometer scale in silicon. Every semiconductor maker on the planet is working under the same physical laws, and they are all in the same boat. Intel is in the news simply because they are usually one to two generations ahead of everyone else in production, and this is one of the solutions they are evidently looking into.

The chip 'errors' the linked article talks about are practically random variables at the scale we're dealing with. One of the ways you reduce them is by following the DFM (design-for-manufacturability) rules I mentioned earlier. Intel is looking into ways to reduce these errors further, in this case using stochastic programming.
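
To put a rough number on how "random" these failures are: assume every column in an array independently has some small chance of coming out bad, and you can work out how often a die survives with a given number of spare columns. The toy C program below does that arithmetic; the array size and failure probability are made up purely for illustration.

#include <math.h>
#include <stdio.h>

/* C(n,k) * p^k * (1-p)^(n-k), computed via lgamma to avoid overflow */
static double binom_pmf(int n, int k, double p)
{
    double log_choose = lgamma(n + 1.0) - lgamma(k + 1.0) - lgamma(n - k + 1.0);
    return exp(log_choose + k * log(p) + (n - k) * log(1.0 - p));
}

int main(void)
{
    const int    columns = 1024;    /* columns in the array (illustrative)    */
    const double p_fail  = 0.002;   /* chance any one column is bad (made up) */

    for (int spares = 0; spares <= 8; spares += 2) {
        double yield = 0.0;
        /* The die is still usable if the number of bad columns <= spares. */
        for (int bad = 0; bad <= spares; bad++)
            yield += binom_pmf(columns, bad, p_fail);
        printf("%d spare columns -> %.2f%% of dies usable\n",
               spares, 100.0 * yield);
    }
    return 0;
}

Even with a per-column failure chance of only 0.2%, a 1024-column array with no spares would be scrap more often than not; a handful of spares pushes the usable fraction to well over 99%.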

Do you own a cell phone? Chances are good that I was on the team that developed the flash memory chip your cell phone uses. And you know what? We built redundancy into the flash memory array, so that when one, two, or ten rows or columns in that array fail due to process issues in the fab, the failure is detected, those rows or columns are blocked off, and part of the redundant memory array structure kicks in instead. And you don't lose data, which is what you care about in the end: an inexpensive, reliable product.
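
Conceptually, the repair logic is not much more complicated than the little C sketch below: keep a map from failed columns to spare columns, program it after test, and route every access through it. The names and sizes are invented for the example; this is not the actual design of anything I worked on.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_COLUMNS 1024u   /* main array columns (illustrative) */
#define NUM_SPARES    16u   /* redundant spare columns           */
#define UNMAPPED  0xFFFFu

/* Repair map: bad column -> spare column. In real silicon this is
   typically burned into fuses after wafer test flags the bad columns. */
static uint16_t repair_map[NUM_COLUMNS];

static void repair_init(void)
{
    for (uint32_t c = 0; c < NUM_COLUMNS; c++)
        repair_map[c] = UNMAPPED;
}

/* Steer a failed column to the next free spare. Returns false once the
   spares are exhausted; only then is the die actually scrap. */
static bool repair_column(uint16_t bad_col)
{
    static uint16_t next_spare = 0;
    if (next_spare >= NUM_SPARES)
        return false;
    repair_map[bad_col] = next_spare++;
    return true;
}

/* Every access goes through the map, so nothing above this layer
   ever sees the bad columns at all. */
static uint16_t resolve_column(uint16_t col)
{
    if (repair_map[col] == UNMAPPED)
        return col;
    return (uint16_t)(NUM_COLUMNS + repair_map[col]);
}

int main(void)
{
    repair_init();
    repair_column(42);   /* wafer test found column 42 bad */
    printf("column 41 -> %u\n", (unsigned)resolve_column(41));   /* untouched */
    printf("column 42 -> %u\n", (unsigned)resolve_column(42));   /* remapped  */
    return 0;
}

The same trick works for rows, and with a bit more bookkeeping it works for whole blocks.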

That is the sort of thing they're talking about, except on a larger, smarter scale. And if it can't be done, either through this method or some alternative, microprocessors are going to stop getting smaller, faster, and cheaper at the rate we've become accustomed to over the last 20 years. Sure, they'll keep getting faster and more powerful by hook or by crook; but if you still want them cheap, we've got to think outside the box. This is one of those out-of-the-box shots. If it doesn't work, then they will either find another way around this particular issue, or we're going to have to abandon silicon for something else, or someone will come up with an entirely new computing paradigm that doesn't have the failure rates associated with nanometer-scale technology.
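
At the processor level, the same detect-and-recover idea could, in spirit, look like the sketch below: compute, check the result, retry on a mismatch, and fall back to a slow-but-safe path if the error persists. This is only my own illustration of the general principle, not Intel's actual mechanism.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint32_t value;   /* result of the computation            */
    uint32_t check;   /* duplicate copy used to detect faults */
} result_t;

/* Stand-in for hardware that occasionally gets an answer wrong.
   The "fault" is injected with rand() purely for the demo. */
static result_t unreliable_double(uint32_t x)
{
    result_t r = { x * 2u, x * 2u };
    if (rand() % 10 == 0)      /* roughly 1 run in 10: corrupt one copy */
        r.value ^= 1u;
    return r;
}

/* Detect-and-recover wrapper: compare the two copies, retry on a
   mismatch, and drop to a (hypothetical) slow-but-safe path if the
   error will not go away. */
static uint32_t reliable_double(uint32_t x)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        result_t r = unreliable_double(x);
        if (r.value == r.check)
            return r.value;
    }
    return x + x;              /* placeholder for the known-good fallback */
}

int main(void)
{
    for (uint32_t i = 0; i < 5; i++)
        printf("%u -> %u\n", (unsigned)i, (unsigned)reliable_double(i));
    return 0;
}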

They aren't idiots, as you call them. I think this is an interesting approach that has a shot at success. But what the hell do I know; I've only given presentations at international semiconductor conferences. The last one I presented at was the International Cadence Users Group conference in Santa Clara. Do you have any credentials in this field?

