


System could be useful for mission-critical applications, such as combat robotics

Professor Peter Bentley of University College London and his colleague Christos Sakellariou aren't impressed with everyday computers, which aren't very fault tolerant and can only multitask by rapidly switching their cores between the sequential instruction streams of each program.

As he describes in an interview with New Scientist, "Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program. Nature isn't like that. Its processes are distributed, decentralised and probabilistic. And they are fault tolerant, able to heal themselves. A computer should be able to do that."

So the pair set out to build new hardware and a new operating system capable of handling tasks differently from current machines, which process instructions sequentially even when they are nominally "parallel".

The new machine pairs instructions with the data they act on, specifying what to do when a certain set of data is encountered. These instruction-data pairs are then sent to multiple "systems", chosen at random, to produce results. Each system has its own redundant copy of the instructions, so if one gets corrupted, others can finish the work. And each system has its own memory and storage, so "crashes" due to memory/storage errors are eliminated.
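To make the idea concrete, here is a minimal Python sketch of the redundancy scheme as the article describes it. The pool, the random choice of system, and all the names here are illustrative, not the researchers' actual design: each "system" carries its own copy of an instruction and its data, and a result emerges from whichever healthy copy runs.

```python
import random

def make_system(instruction, data, corrupted=False):
    """A 'system' bundles an instruction with its own copy of the data.
    A corrupted system raises instead of returning a result."""
    def run():
        if corrupted:
            raise RuntimeError("system corrupted")
        return instruction(data)
    return run

def compute(pool):
    """Poll the pool in random order; the result emerges from the first
    healthy system that finishes. Corrupted copies are simply skipped."""
    for system in random.sample(pool, len(pool)):
        try:
            return system()
        except RuntimeError:
            continue  # a redundant copy picks up the work
    raise RuntimeError("all systems failed")

square = lambda x: x * x
# Three redundant systems for the same job; one is corrupted.
pool = [make_system(square, 7, corrupted=(i == 0)) for i in range(3)]
print(compute(pool))  # 49 despite the corrupted copy
```

The point of the sketch is only the failure mode: because every system holds its own instructions and data, losing one copy never loses the computation.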

Comments Prof. Bentley, "The pool of systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions."

The results will be presented at an April conference in Singapore. 

The team is currently working on coding the machine so that it can reprogram its own instructions to respond to changes in the environment.  That self-learning, combined with the redundant, pseudorandom nature of the system would make it quite a bit more similar to a human brain than a traditional computer.

Potential applications for such a system include military robotics, swarm robotics, and mission critical servers.  For example, if an unmanned aerial vehicle sustained damage or was hacked, it might be able to reprogram itself and escape errors thanks to the redundancy, allowing it to fly home.

The computer is somewhat similar to so-called "probabilistic" chip designs, which are being researched at other universities.

Source: New Scientist





Except
By Ammohunt on 2/15/2013 2:45:30 PM , Rating: 5
When the problem it solves happens to be that humans are error prone and should be eliminated.




RE: Except
By Jeffk464 on 2/15/2013 3:16:00 PM , Rating: 2
I'm sure if you could read the shirt, the boob picture would relate to the article; as of now it just seems like a random boob picture.


RE: Except
By augiem on 2/15/2013 4:55:43 PM , Rating: 4
Never heard of BSOD?


RE: Except
By Motoman on 2/15/2013 5:24:23 PM , Rating: 3
Boob Shirt Of Death?


RE: Except
By Regected on 2/15/2013 9:32:57 PM , Rating: 2
Boob Showing Off Dailytech!


RE: Except
By Samus on 2/15/2013 11:53:09 PM , Rating: 2
IBM demonstrated 'uncrashable' computing with OS/2 by multi-threading the kernel into separate memory spaces. As soon as one became inconsistent it was eliminated, the memory space was recovered, and a kernel thread that passed integrity was re-loaded. Obviously performance was low, but it eliminated the need for ECC memory and other hardware checks since most integrity was done in software.

It was considered uncrashable and many airports and mission-critical operations worldwide still run it 20 years later.

I remember running OS/2 Warp and OS/2 Merlin server in the 90's and they were damn smooth OS's. Microsoft didn't license WIN32 compatibility, killing WINOS/2 when Windows 95 and Windows NT emerged, so most non-corporate environments lost interest and IBM canned the project. It was a big F#(^-you to IBM from Microsoft even though IBM licensed Microsoft's operating system a decade earlier.
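The pattern Samus describes can be illustrated with a toy Python snippet. This is not OS/2's actual mechanism, just the general integrity-check-and-reload idea: keep a known-good image, checksum each running copy, and reload any copy that has gone inconsistent.

```python
import hashlib

GOOD_IMAGE = b"kernel-thread-code"
GOOD_SUM = hashlib.sha256(GOOD_IMAGE).hexdigest()

def check_and_reload(copies):
    """Replace any copy whose checksum no longer matches the good image;
    return how many copies were reloaded."""
    reloaded = 0
    for i, copy in enumerate(copies):
        if hashlib.sha256(copy).hexdigest() != GOOD_SUM:
            copies[i] = GOOD_IMAGE  # recover the space, reload a good copy
            reloaded += 1
    return reloaded

copies = [GOOD_IMAGE, b"kernel-thread-c0de", GOOD_IMAGE]  # one corrupted
print(check_and_reload(copies))  # 1
print(all(c == GOOD_IMAGE for c in copies))  # True
```

As the comment notes, doing this in software trades performance for the ability to survive corruption without hardware checks like ECC.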


RE: Except
By Motoman on 2/16/2013 8:05:22 PM , Rating: 1
When I was in college our main lab was OS/2. And the place I worked at right out of college, a major insurance company, was all OS/2.

The problem was that many things had to be run in Windows emulation, even back then. OS/2 was excellent in many ways, but it was too late, and they'd ceded the market to Windows.

If IBM hadn't let Microsoft do Windows first, OS/2 would be the one ruling the world right now.


RE: Except
By UpSpin on 2/15/2013 6:05:55 PM , Rating: 2
So it never ever happened to you that a smartphone app crashed? That your browser on your computer crashed? That a game crashed?

Operating systems are pretty stable nowadays, but they still crash, and BSODs aren't totally gone. In the past they often crashed as soon as a program stopped working. Nowadays Windows crashes if a driver is not working properly, devices don't like each other and cause issues (all software-related issues), or, as described in this article, hardware (like memory) malfunctions.

Most crashes get caused by software errors. Your smartphone apps crash because of bad coding, not because of the OS or hardware.
The idea mentioned in this article is to solve hardware-related issues which occur in systems that have to run for a very long time.


RE: Except
By Flunk on 2/15/2013 4:03:38 PM , Rating: 2
So who writes the computer software?

Seeing as almost all crashes are caused by software glitches I can't see this being important for most people. In scientific research, this would be great. For the rest of us, there is little point.


RE: Except
By dgingerich on 2/15/2013 4:18:29 PM , Rating: 2
Most crashes happen because software developers make things overly complicated instead of using the simplest method, thus increasing overhead and unintended consequences. They're taught to be this way in college. (I remember my English Comp 2 class and the professor's "wallow in complexity" lesson. I was thinking "how absolutely stupid" the whole time, and dropped the class shortly afterward.) In fact, most of the problems with society can be traced to two sources: our mass media companies and our educational methods.

I don't watch TV, and I taught myself most of what I know about computers, math, and science, with the help of books, magazines, reputable internet sources, and forums. School was always of little use to me, and US TV has always been stupid and boring. So, I can see these things from the outside.


RE: Except
By UpSpin on 2/15/2013 5:24:00 PM , Rating: 2
Most crashes happen because software developers make their lives too easy and don't think past their own program. Instead of checking the validity of inputs (user input, website parsing, ...), they just assume everything will be fine. Instead of including error handlers, they assume that nothing will go wrong. Instead of a consistent coding style and good commenting, they are lazy and paste parts from other projects together without thinking about it. Errors happen, but with a proper coding style they don't have to cause a crash.
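The defensive habits described here can be shown in a few lines of Python. The `parse_age` function is a hypothetical example: validate untrusted input and handle errors rather than assuming everything will be fine.

```python
def parse_age(raw):
    """Return a plausible age parsed from untrusted input,
    or None instead of crashing on bad data."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        return None          # reject non-numeric or missing input
    if not 0 <= age <= 150:
        return None          # reject out-of-range values
    return age

print(parse_age("42"))       # 42
print(parse_age("forty"))    # None
print(parse_age(None))       # None
print(parse_age("200"))      # None
```

The error path is explicit and local, so a malformed input degrades into a handled case instead of an unhandled exception several layers up.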

The issues mentioned in this article are hardware related. Nothing that will be a problem for the majority of so-called 'programmers'. Such hardware issues are relevant in scientific or embedded applications, in which the processor runs for years without any interruption.

I don't see any relation between how you (hopefully) learned proper coding, your opinion about school and TV, and this article. Just as you can watch poor TV channels you can read poor books, attend poor lectures, ..., but well, this has nothing to do with the article.


RE: Except
By augiem on 2/15/2013 6:07:43 PM , Rating: 3
quote:
US TV has always been stupid and boring


As opposed to? Over dramatic anime and wacky gameshows (Japan)? Women who are half silicone and half makeup by weight screaming at each other over some guy (Mexico)? Old rich men crashing cars and blowing things up (England)? US TV certainly does not have a monopoly on stupidity.


RE: Except
By dgingerich on 2/16/2013 11:06:37 AM , Rating: 4
As opposed to time travel adventures with some sense of a moral lesson behind them (British), old rich men crashing cars and blowing things up (British, that show is fun, stupid at times, yes, but massive fun, and the US version just plain sucks), and scifi space travel adventures with some sense of a moral lesson (Canadian).

US TV has cancelled some of the best shows they've ever had, like Firefly, or refused to follow up on many that were good and popular, like Star Trek. Yet they feed the selfish idiots with so-called 'reality' shows, promoting selfishness, which is the seed, root, and trunk of all society's problems, and laziness, the ground in which such things grow. Or they push shows that openly belittle the smartest and most valuable traits people can have, like Big Bang Theory and King of the Nerds. Plus we have "critically acclaimed" shows that demonstrate incredible levels of selfishness, like Breaking Bad and Grey's Anatomy. They actively push the idea that each individual is the most important person in the world, which is so incredibly WRONG.

It's not so much a monopoly that US TV has on stupid shows, it's the active development of stupidity and selfishness and active denial of morality, thoughtfulness, and hard work. There simply isn't anything on these days, or for the past several years, that has any redeeming value. This selfish attitude the media keeps pushing puts us all in conflict, playing the wants of individuals against each other and pushing the idea of taking away from someone else to give yourself more, instead of pushing the idea that by working together we can all move forward and advance a common good, making all of us far better off. What's worse is the public education system is actively teaching these same 'values', corrupting what responsible parents are trying to teach their kids.

We're all better off playing video games like Star Trek Online and World of Warcraft that promote working together to conquer common threats and think on a level of the common good.

OK, there are some shows on US cable channels, like Burn Notice, White Collar, and Mythbusters, that have some redeeming value, but the major networks haven't had any redeeming value in decades.

Also, Japanese anime may be overdramatic at times, but they include important life lessons that the people of the US need to learn that simply aren't taught here, things that have taught me how to be much more effective as a person and satisfied with my life. They teach things that are integral to the Japanese culture, like "do your best" and "it can't be helped" and "make your enemies into your friends" that US children simply don't learn. Our country suffers for that lack.

I thank God that there are still some people in this country that know these things and teach their kids these things, but they are being overwhelmed by people who believe they have to take away from others to make themselves comfortable and the government has to meet their needs instead of actually WORKING FOR IT, all because of the media and educational indoctrination.


nice picture
By MadMan007 on 2/15/2013 3:37:27 PM , Rating: 2
New acronym for BSOD = Boob Screen Of Death




RE: nice picture
By Scratches16 on 2/15/2013 3:55:26 PM , Rating: 5
Hey, if I was shown boobs every time my PC crashed, maybe I wouldn't be so annoyed by it... lol


RE: nice picture
By Totally on 2/15/2013 10:16:18 PM , Rating: 2
Actually, if that happened I'd put together some malicious code that I could execute whenever I would want to see them.


wait a minute
By DanNeely on 2/15/2013 2:49:12 PM , Rating: 2
Proving software will never crash is equivalent to solving the halting problem, which is impossible to do on a Turing machine (i.e. any general-purpose computer). Unless this computer is using some simpler computational model (in which case it won't be able to do a huge chunk of what modern computers do), that funny smell is organic fertilizer from the local cattle ranch.




RE: wait a minute
By mik123 on 2/15/2013 4:12:33 PM , Rating: 2
They only claim that "crashes due to memory/storage errors are eliminated."

Oh, btw, Turing Machines don't "crash". They are ideal computational devices with unlimited resources.


Yeah, right.
By danjw1 on 2/15/2013 5:38:30 PM , Rating: 2
I remember several years ago when a Professor said he had created a provably correct program. Within hours others had shown bugs in it. I will believe it when most people not associated with the project actually agree that they have done as claimed.




RE: Yeah, right.
By vol7ron on 2/17/2013 2:24:16 PM , Rating: 2
Isn't that how the scientific community usually works?


Parallelism limited by number of 'systems'?
By UpSpin on 2/15/2013 5:51:47 PM , Rating: 2
So they say that the normal computer runs in a loop and polls the inputs and then processes, if necessary, the required task as fast as possible.
They now use something similar to well-known interrupts, which have been used in computers for ages. But instead of halting the current calculation to process the interrupt (the current method), they send it to several systems (redundant) which do the processing at a random time (to mimic nature). There's no 'main task', only several systems which interact with each other and build a working machine.

But the following remains unclear: What are those systems? Individual processors? That would mean the number of tasks the computer can handle is limited by the number of processors available. If the computer needs to do more, they won't be able to handle it. So this approach looks great in theory but is impracticable. Or they could use a more traditional way and store those systems in memory (just as in the article) and use a pseudorandom generator to select a system to get processed. But then the systems don't get processed in parallel either, just one after another in a random order. Even worse, they won't be able to process time-critical inputs, because the systems get processed at a random time, which means, in the worst case, after too long a time.

So in short: I don't get it :-) It's redundant, it's independent of a main task, but how do they solve the above-mentioned physical limitations? And if the random number generator crashes, the redundant systems all crash at the same time, and if there's a software error the computer will crash, too ^^




By Fritzr on 2/17/2013 10:05:53 AM , Rating: 2
Instead of "totally" random job selection, each processor uses a queue.

When a job is completed, notice is sent to all that received that job and abort or delete from queue is done as appropriate.

Time sensitive jobs get a priority code attached and go to the front of the line.

Multiple priorities get processed in the order received. Do or Die priority can be a separate code and be processed on receipt. Multiples can be processed by timeslicing unless they are "realtime" in which case additional Do or Die processes wait for the processor to be freed up.

Just some quick thoughts with about 3 minutes thought. I am sure the designers of this system have put at least 5 minutes of time into resolving the issues you mention. OS/2 could certainly handle your problems.
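Fritzr's queue-plus-priority scheme can be sketched in a few lines of Python. This is a toy illustration only; `JobQueue` and the priority values are invented for the example. Lower numbers are more urgent, and a tie-breaking counter keeps jobs of equal priority in arrival order.

```python
import heapq
import itertools

class JobQueue:
    """Per-processor queue: urgent jobs jump the line, ties run FIFO."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker preserves arrival order

    def submit(self, name, priority=0):
        # Lower number = more urgent; a "Do or Die" job could use -1.
        heapq.heappush(self._heap, (priority, next(self._order), name))

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = JobQueue()
q.submit("telemetry", priority=1)
q.submit("logging", priority=2)
q.submit("collision-avoidance", priority=0)  # time sensitive: front of line
q.submit("diagnostics", priority=1)

print([q.next_job() for _ in range(4)])
# ['collision-avoidance', 'telemetry', 'diagnostics', 'logging']
```

The completion-notice and abort logic from the comment would sit on top of this: when one processor finishes a job, the duplicates are removed from the other queues.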


Sounds Familiar
By vol7ron on 2/15/2013 2:50:12 PM , Rating: 2
Seems similar to Google's MapReduce, which data platforms like Hadoop and Greenplum use to parallelize processing of large datasets (petabytes).

Essentially this sounds like it's making a cluster on the same motherboard, or a localized server farm. Nothing new in concept, but nice if it makes its way into servers.
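For readers unfamiliar with the comparison, here is a toy word count in the MapReduce style (illustrative only, not Google's implementation): a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase combines each group, all of which can run on independent workers.

```python
from collections import defaultdict

def map_phase(docs):
    """Emit a (word, 1) pair for every word in every document."""
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Group emitted values by key, as the framework would between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Combine each key's values into a final count."""
    return {key: sum(values) for key, values in grouped.items()}

docs = ["crash free computing", "crash tolerant computing"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'crash': 2, 'free': 1, 'computing': 2, 'tolerant': 1}
```

The parallel to the article is loose but real: both spread independent units of work across many workers and let the final result emerge from combining their outputs.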




Not rocket science
By Beenthere on 2/16/2013 2:11:27 AM , Rating: 2
Current day computers are a poor excuse for what they could be primarily due to apathy. The proof is in the totally defective Windoze O/S's that have been forced on consumers thru illegal means. How could you ever expect a computer which is logic based to be reliable when forced to run a defective O/S that has millions of documented defects? Making the hardware more reliable is also possible if hardware makers actually cared and did proper validation.

In reality PC sales, like all electronics, are about making money, not about delivering reliable computers. The only reason current computers are unreliable is because consumers will buy this crap. Why waste time and money making a proper computer when you can sell crap for windfall profits? Sure, better microcode can improve performance and it should be the basis for all computers, but don't expect to see it any time soon for commercial use as it's less profitable than selling crapware.

This ain't rocket science and has been known for 30+ years.




"When an individual makes a copy of a song for himself, I suppose we can say he stole a song." -- Sony BMG attorney Jennifer Pariser














Copyright 2013 DailyTech LLC.