By Maureen O'Gara
February 20, 2009 10:30 PM EST
Researchers from the San Diego Supercomputer Center (SDSC) at the University of California, San Diego have received a two-year, $450,000 grant from the National Science Foundation (NSF) to explore new ways for academic researchers to provision and manage extremely large data sets on clouds, using the Google-IBM Cluster Exploratory (CluE) cluster and the open source Apache Hadoop programming environment. Seems the ever-increasing volume of scientific data is starting to overwhelm current approaches to data management. IBM and Google are picking up the tab for running CluE.
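For readers unfamiliar with the Hadoop programming environment the grant centers on, the sketch below shows the shape of a minimal MapReduce job in Java. This is not the SDSC project's code, just the canonical word-count pattern: a mapper emits (key, 1) pairs, a reducer sums them, and the framework spreads the work across the cluster's nodes and data. Class names and the input/output paths are illustrative.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Minimal Hadoop MapReduce job: counts word occurrences across a large input data set.
public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");  // Job.getInstance(conf, ...) in later Hadoop versions
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory on HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The appeal for scientific workloads is that the same small program scales from a laptop to a cluster like CluE: Hadoop handles splitting the input, scheduling tasks near the data, and re-running failed tasks, which is exactly the provisioning-and-management burden the grant aims to study.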