Wiki Inventor Sticks a Fork in His Baby

Ward Cunningham, the creator of the wiki. Image: WikiCommons

Ward Cunningham, the creator of the wiki, is proud of his invention. “We changed the world together,” he says of those who contributed to his software development site C2, which spawned the online collaboration software that underpins Wikipedia and countless other services across the net.

But there is one thing about the wiki that he regrets. “I always felt bad that I owned all those pages,” he says. The central idea of a wiki — whether it’s driving Wikipedia or C2 — is that anyone can add or edit a page, but those pages all live on servers that someone else owns and controls. Cunningham now believes that no one should have that sort of central control, so he has built something called the federated wiki.

This new creation taps into the communal ethos fostered by GitHub, a place where software developers can not only collaborate on software projects but also instantly “fork” these projects, spawning entirely new collaborations.

Over the years, developers have written over 35,000 pages of content on C2, all of which reside on Cunningham’s server instead of on servers controlled by their authors. When you contribute to someone else’s wiki, you risk losing all your changes if that site goes down. It also means you have to play by someone else’s edit rules.

There’s nothing stopping you from copying and pasting your contributions from a wiki, or starting your own wiki if you don’t like someone else’s edits. But it can be hard to attract an audience. Cunningham says that in the early days of the wiki, many other people tried to start software development wikis, but most of them didn’t get much traction. People wanted to contribute to C2, because that’s where the readers were.

The federated wiki is an attempt to solve these problems. As a starting point, he has built a new piece of software — dubbed The Smallest Federated Wiki — to demonstrate the concept. The radical idea of the wiki was to put an edit button on every page. The radical idea of the federated wiki is to put a “fork” button on every page.

Cunningham’s vision is that you will have your own wiki, perhaps several wikis. When you see a page on someone else’s federated wiki that you want to edit, you can click “fork,” and the page is copied into your own wiki where you can edit it. The owner of the original wiki can then decide whether to merge your changes into the original page.
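
Cunningham describes forking in terms of buttons and pages, but the mechanics are easy to picture. Below is a minimal Python sketch of the idea, assuming pages are served as JSON and stored as flat files; the URL layout and field names are illustrative guesses, not necessarily how The Smallest Federated Wiki actually works.

```python
import json
import os
import urllib.request

def fork_page(origin_host, slug, local_dir="my-wiki/pages"):
    """Copy a page from someone else's wiki into a wiki you control."""
    # Fetch the page as structured data from the origin wiki.
    url = "http://{}/{}.json".format(origin_host, slug)
    with urllib.request.urlopen(url) as resp:
        page = json.load(resp)

    # Record where the copy came from, so readers can trace the fork and
    # the original owner can decide whether to merge changes back.
    page.setdefault("journal", []).append({"type": "fork", "site": origin_host})

    # The forked copy now lives on a server the editor controls.
    os.makedirs(local_dir, exist_ok=True)
    with open(os.path.join(local_dir, slug + ".json"), "w") as f:
        json.dump(page, f, indent=2)
    return page
```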

Readers can see a list of forks, so even if your changes aren’t accepted, they can still find your version of a page. The federated wiki concept does more than just help editors own their own data, Max Ogden, a Code for America fellow who has advised Cunningham, tells Wired. It enables dissent.

“Wikipedia forces you to give up your own perspective,” Ogden says. There are issues that no one will agree on, but with the federated wiki model, everyone can have their own version of controversial pages. “And they’re all linked together, so you can still explore them like a wiki.”

The similarities between the federated wiki and GitHub are not coincidental. “The radical code sharing that’s implicit to GitHub was an inspiration,” Cunningham says.

But is it too nerdy to catch on? To run The Smallest Federated Wiki, you’ll need your own web server, which Cunningham considers an important part of the project. He credits this philosophy to Network Startup Resource Center founder Randy Bush, who helped him set up the server that C2 ran on when it first launched in 1994.

When Cunningham met him in 1994, Bush was helping universities in developing nations set up web servers and connect them to the internet. It was Bush who first introduced Cunningham to the idea that everyone should have their own web server. Cunningham doesn’t think you necessarily need to have your own physical server under your desk anymore, but thinks it makes sense for people to control their own servers.

“If people don’t control their own infrastructure, they get needy,” he says. They’re at the mercy of service providers who can disappear, impose rules that constrain creativity, and/or make it difficult to back up content that you’ve created. “It’s good to simplify things, but they shouldn’t be simplified in such a way as to make the user helpless.”

To make the federated wiki easier to adopt, there’s a one-click installer that deploys a server to Amazon Web Services. Today, with even many of the most tech-savvy operators choosing to post their content to walled gardens like Facebook, Tumblr, and Google+, the federated wiki may be facing an uphill battle. But Cunningham is undaunted.

“The assumption is that we won’t be creative, but Facebook proves that everyone wants to have their own page, their own stream,” Cunningham says.

Amazon Blames Generators for Blackout That Crushed Netflix

Amazon said a backup generator failure caused its weekend outage. Photo: Flickr/4nitsirk

Amazon has published a more detailed explanation about the outage that knocked out a number of popular websites on Friday night, including Netflix, Instagram, and Pinterest. The culprit: a 20-minute power outage at a single Northern Virginia data center.

Problems started at 7:24 p.m. PDT when there was a “large voltage spike” on the grid used by two of Amazon’s data centers. When technicians tried to move to backup power, the diesel-powered generators just didn’t work properly at one of the data centers. “The generators started successfully,” Amazon now says, “but each generator independently failed to provide stable voltage as they were brought into service.”

Judging from Amazon’s explanation, the generators may have been powering up, but the switching equipment at the data center didn’t think they were ready for a switchover.

Then, to confuse matters more, the power went back on for a few minutes and then failed again, just three minutes before 8 p.m. Seven minutes later, the data center’s battery backups started to fail.

Then the data center went dark.


Google Shaman Explains Mysteries of ‘Compute Engine’

The Google Compute Engine was only announced on Thursday. But at least one software developer has already added the logo to a latte. Image: yukop/Flickr

Google started work on the Google Compute Engine over a year and a half ago, and it was all Peter Magnusson could do to keep his mouth shut.

Magnusson is the director of engineering for Compute Engine’s sister service, Google App Engine, and over the past 18 months, as he spoke at conferences and chatted with software developers about Google’s place in the world of cloud computing, he couldn’t quite explain how serious the company was about competing with Amazon’s massively popular Elastic Compute Cloud and other commercial services that seek to reinvent the way online applications are built and operated.

Google entered the cloud computing game back in 2008, when it unveiled Google App Engine, a service that lets outside software developers build and host applications atop the same sweeping infrastructure that runs Google’s own web services, such as Google Search and Gmail. Like Amazon’s cloud, this is a way of running online applications without setting up your own data center infrastructure. But it was difficult to tell whether the service was just one of those half-hearted Google experiments that would one day fall by the wayside. Though the service let you automatically accommodate an infinite amount of traffic — or thereabouts — it put tight restrictions on what programmers could and couldn’t do, and this seemed to limit its appeal.
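
For a sense of what that model looked like in practice, here is roughly the shape of a minimal application in App Engine’s Python runtime of that era, built on the webapp2 framework the platform supplied. You hand Google code like this plus a small configuration file, and running and scaling it is Google’s problem; the details below are a from-memory sketch rather than official sample code.

```python
# A bare-bones App Engine (Python runtime) handler, circa 2012. Deployed with
# the App Engine SDK, it runs entirely on Google's infrastructure -- which is
# also where the platform's restrictions on what your code can do come from.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello from App Engine')

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
```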

Last fall, Google signaled its intent when it removed the “beta test” tag from Google App Engine and launched Google Cloud Storage, a separate service dedicated to housing large amounts of data. But all the pieces fell into place last week when the company uncloaked Compute Engine, a service that gives developers access to hundreds of thousands of raw virtual machines at a moment’s notice.

“Google Compute Engine gives you Linux virtual machines at Google-scale. You can spin up two VMs or 10,000 VMs,” said Urs Hölzle, the man who oversees Google’s vast infrastructure. “You benefit from the efficiency of Google’s data centers and our decade of experience running them.”

What this means is that developers and businesses can grab a vast amount of processing power and apply it to almost any task they want. Google is not only offering App Engine — a service that lets you build applications without having to worry about raw storage and processing power — it’s also giving you, well, raw storage and processing power. In other words, it’s going head-to-head with Amazon, the undisputed king of commercial cloud services that has long offered such raw resources as well as “higher level” services for building and running massive applications.

“We’re pairing Compute Engine with App Engine,” says Peter Magnusson. “But, increasingly, they will be able to work together.”

Google pioneered the art of the “cloud” infrastructure. But Amazon beat it to the idea of sharing such an infrastructure with the rest of the world. Six years after Amazon first offered its web services to outside developers and businesses, Google is still playing catch-up. But it’s intent on making up that lost ground.


The Inside Story of the Extra Second That Crashed the Web

The leap second glitch that brought down several web operations on Saturday night can be traced to the Linux kernel. And it was fixed months ago. Sort of. Photo: foxgrrl/Flickr

When Saturday night’s leap second glitch hit Reddit, Jason Harvey didn’t realize it was the leap second glitch. He thought it was some sort of internet slowdown related to the massive Amazon cloud outage that brought down some of the web’s most popular services less than 24 hours earlier.

“It looked like the network was just moving really poorly,” says Harvey, one of the system administrators who oversee the operation of Reddit, the popular news aggregation and discussion site. “With Amazon going down, a network problem just made sense.”

But after about half an hour, Harvey and his team traced the problem to a group of their own machines running the open source Linux operating system. These servers had almost ground to a halt after failing to properly accommodate the “leap second” that was added to the world’s atomic clocks on Saturday night, as June turned into July.

Depending on how quickly the earth is spinning, the planet’s official timekeepers periodically add an extra second to these clocks to keep them in sync with the planet’s rotation. This keeps us from drifting away to a place where sunsets happen in the morning, but it can cause problems with computing systems that plug into these clocks but aren’t quite agile enough to deal with that extra second.
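
Part of what makes the extra second so awkward is that Unix time, which most of these systems run on, has no way to represent it: the timeline jumps straight from 23:59:59 to 00:00:00 UTC, and the operating system has to absorb the inserted second on its own. A quick Python illustration of the arithmetic (nothing to do with Reddit’s code):

```python
import calendar

# Unix timestamps for the last second of June 2012 and the first second of
# July 2012. There is no slot in between for the leap second (23:59:60 UTC),
# even though an extra real second elapsed between these two moments.
before = calendar.timegm((2012, 6, 30, 23, 59, 59, 0, 0, 0))
after = calendar.timegm((2012, 7, 1, 0, 0, 0, 0, 0, 0))

print(before, after)   # 1341100799 1341100800
print(after - before)  # 1 -- the kernel has to hide the extra second itself
```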

In Reddit’s case, the problem could be traced to a glitch in the Linux kernel, the core of the open source operating system. A Linux subsystem called “hrtimer” — short for high-res timer — got confused by the time change, and suddenly sparked some hyperactivity on those servers, which locked up the machines’ CPUs.

Reddit was just one of several web outfits that were hit by leap second glitches just after midnight Greenwich Mean Time on Saturday, including Gawker Media and Mozilla, and these sorts of problems tend to pop up every time there’s a leap second adjustment. In January 2009, for instance, the leap second reportedly caused problems with Sun Microsystems’ Solaris operating system and an Oracle software package.

“Almost every time we have a leap second, we find something,” Linux’s creator, Linus Torvalds, tells Wired. “It’s really annoying, because it’s a classic case of code that is basically never run, and thus not tested by users under their normal conditions.”


Dell Continues Quest for Software Domination

Michael Dell is intent on pushing his company even further into the world of software. Credit: Dell/Flickr

Dell has agreed to purchase Quest Software — an Aliso Viejo, California-based outfit offering tools for managing a wide range of business software — as it continues to shift its operation toward the world of software.

The purchase — for approximately $2.4 billion — was announced on Monday morning.

Quest is something of an IT generalist, providing tools that help manage and monitor everything from databases, applications, and operating systems to virtualization software. Today, it claims over 100,000 customers, and Dell says the purchase will be particularly useful in serving its own mid-sized customers.

Dell began life as a PC maker, but over the past several years, it has morphed into a company that deals with a wide range of business hardware and, yes, software. In February of this year, John Swainson — a former IBM and CA exec — joined Dell to run its new software group, and shortly thereafter, the company purchased SonicWall, a network security and data protection outfit, and AppAssure, which provides security for cloud services and other virtualized operations.

In April, the company bought three different software vendors in four days, including Wyse Technology, a desktop virtualization outfit, and Make Technologies, which helps businesses update legacy applications for use today.

“Given Dell’s thirst toward becoming a full scale IT company, if you’re heading in that direction — and we clearly are — then applications become very very key,” Suresh Vaswani, chairman of Dell India and executive vice president of the company’s services division, recently told Wired.

A Dell spokesman tells Wired that Quest will be a “strong strategic fit” with Dell’s software group and provide additional tools for systems management, security, data protection, and workspace management. Quest also develops a portfolio of software for managing Microsoft’s Windows Server operating system, seeking to automate tasks and organize data.

“Quest’s suite of industry-leading software products, highly-talented team members and unique intellectual property will position us well in the largest and fastest growing areas of the software industry,” read a canned statement from Swainson.

Quest has a sales staff of 1,500 and employs about 1,300 software developers, and it pulls in almost $1 billion in annual revenue.

Last year, fellow IT giant HP tried to completely shift the core of its business from hardware to software, but this backfired, resulting in the departure of its CEO, other high-level defections, and all sorts of other turmoil. Dell has taken a very different route, choosing to gradually augment its existing business with a wide range of software acquisitions.

Update: This story was updated to include the purchase terms.