Expanding Your Window of Opportunity

Joe Austin


Petabyte Explosion: How Caltech Manages to Manage Billions of Files


Managing billions of small files effectively requires a clear understanding of data flows and a system based on common Lego-like building blocks that provide services to application owners. This was the message at the September 29th, 2009 Peer Incite Research Meeting, where an industry practitioner, Eugene Hacopians, Senior Systems Engineer at the California Institute of Technology (Caltech), addressed the Wikibon community.

Caltech is the academic home of NASA’s Jet Propulsion Laboratory. As such, it runs the downlink for the Spitzer Space Telescope, NASA's infrared space telescope, as well as for 13 other missions, processes the raw data into images, and supports the needs of scientists visiting from locations worldwide.

The focus of this discussion was the activities of the Infrared Processing and Analysis Center (IPAC), which has evolved into the national archive for infrared data from space telescope missions. Caltech’s needs are at the extreme edge: the organization is the steward for more than 2.3 petabytes of data created by its 14 currently active missions. Caltech captures data from these missions and performs intensive analysis in what it calls its ‘Sandbox,’ a server and storage infrastructure that supports the scientific applications that analyze the data. Once ‘crunched,’ the data is moved to an archive using homegrown data-movement software.
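The capture-crunch-archive flow described above can be sketched roughly as follows. This is a minimal illustration only; the function names, tier names, and data shapes are assumptions made here, not Caltech's actual homegrown software:

```python
# Illustrative sketch of the mission data flow described in the article:
# raw downlink data lands in the Sandbox tier, is "crunched" into science
# products, then migrated off the Sandbox into the long-term archive.
# All names below are assumptions for illustration.

def capture(mission, raw_frames):
    """Land raw downlink data in the Sandbox tier."""
    return {"mission": mission, "tier": "sandbox", "frames": list(raw_frames)}

def crunch(dataset):
    """Process raw frames into science products (stand-in transformation)."""
    dataset["products"] = [f"{frame}.img" for frame in dataset["frames"]]
    return dataset

def archive(dataset):
    """Move finished products off the Sandbox into the archive tier."""
    dataset["tier"] = "archive"
    dataset.pop("frames", None)  # raw frames no longer held on Sandbox storage
    return dataset

dataset = archive(crunch(capture("spitzer", ["frame001", "frame002"])))
print(dataset["tier"], dataset["products"])
```

The point of the sketch is the one-way flow: data spends as little time as possible in the Sandbox before being handed to the archive.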

The team at Caltech had to design a cost-effective means of providing reliable access to all this scientific data. In addition, the projects supported by Caltech had to be completely walled off from one another for accounting purposes. Rather than implement a shared SAN infrastructure with onerous chargeback mechanisms, Caltech decided to use a common set of technologies to support each project.

The technological building blocks are:

  • A Sun Solaris server running the ZFS file system
  • A QLogic 5602 FC switch
  • One to three Nexsan SATABeast arrays

Caltech uses Nexsan’s AutoMAID spin-down capabilities in its archive to reduce energy costs, using Level 1 (slowing the rotation speed of the disk) and Level 2 (parking the heads after sufficient inactivity). It does not put the drives into sleep mode (Level 3) and has never had reliability problems associated with spinning down drives.

Caltech uses SAIT tape for long-term archiving and last-resort off-site disaster recovery. However, its own tests indicate that, because of the huge number of small files involved, recovery from tape would take weeks or longer. This building-block approach has allowed Caltech to use common configurations across its infrastructure.
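A back-of-the-envelope calculation shows why per-file overhead, rather than raw bandwidth, dominates tape restores at this scale. The 2-billion-file count and the 5 ms per-file overhead below are illustrative assumptions, not Caltech's measured figures:

```python
# Rough model of a tape restore dominated by per-file overhead
# (metadata lookup, positioning, open/close) rather than transfer speed.
# Both numbers are illustrative assumptions.

files = 2_000_000_000          # billions of small files
per_file_overhead_s = 0.005    # assume 5 ms of per-file cost

total_seconds = files * per_file_overhead_s
days = total_seconds / 86_400
print(f"~{days:.0f} days")     # before counting any actual data transfer
```

Even with a generously small per-file cost, the restore runs to months, which is consistent with Caltech's finding that tape recovery would take weeks or longer.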

Caltech derives four main benefits from this strategy:

  1. The infrastructure is architected for fast, simple, safe recovery from failure or data loss.
  2. The approach scales in step with Caltech’s data growth, which arrives in large chunks of hundreds of terabytes and billions of files at a time.
  3. It streamlines staff training.
  4. The "Lego" building-block method allows Caltech to reuse infrastructure when it comes off maintenance, providing a large pool of spares and saving money.

Caltech uses a cascading refresh approach when new infrastructure is purchased, placing the newest equipment in support of the most critical parts of the infrastructure and migrating older equipment to less mission-critical areas. The archive is the most critical tier, both because it houses massive numbers of files that scientists access for their research and because it is regarded as a national archive that should be kept indefinitely. The Sandbox infrastructure is the least critical because data is quickly migrated off it into the archive.


More Stories By Joe Austin

Joe Austin is Vice President, Client Relations, at Ventana Public Relations. He joined Ventana PR in 2006 with more than 14 years of experience in high-tech strategic communications. His media relations experience spans both broadcast and print, and he maintains longstanding relationships with editors and reporters at business, IT, channel, and vertical publications. His media relationships include marquee outlets such as CNN, BusinessWeek, USA Today, Bloomberg, and the Associated Press, serving clients ranging from startups to billion-dollar enterprises. His experience includes working with Maxell, McDATA (acquired by Brocade), Center for Internet Security, Securent (acquired by Cisco), Intrepidus Group/PhishMe, FireEye, Mimosa Systems, Xiotech, MOLI.com, EMC/Rainfinity, Spinnaker Networks (acquired by NetApp), ONStor, Nexsan, Asigra, Avamar (acquired by EMC), BakBone Software, Dot Hill, SANRAD, Open-E, and others. With more than a decade of strategic planning, media tours, press conferences, and media/analyst relations for companies in the data storage, security, server virtualization, IT outsourcing, and networking arenas, Austin's domain expertise helps position clients for leadership. Austin was recently recognized as a “Top Tech Communicator” for the second year in a row by PRSourceCode. The editorial community – represented by more than 300 participating IT journalists – rated each winner on best overall performance and recognized those who added the most value to their editorial processes in terms of responsiveness, reliability, and overall understanding of editorial needs.