GRIDS Center Software Suite:
Providing Tools and Services for
Integration of Distributed Resources
The Grid Research, Integration, Deployment
and Support Center (GRIDS) was formed in late 2001 as part of the
National Science Foundation
Middleware Initiative (NMI), with the goal of creating a stable
middleware infrastructure to permit seamless resource sharing across
virtual organizations. NMI and GRIDS grew out of workshops and white
papers that identified the need for production-quality software
based on open-source and open-standard approaches.
The GRIDS Center Software Suite includes
components chosen for their combined benefits, working together so
that the whole is greater than the sum of its parts. Through NMI,
GRIDS releases occur regularly in April and October of each year.
Stable NSF funding of NMI means users can rely on this software for
timeliness, quality, and a high degree of interoperability, making
the most of on-line resources at their institutions and beyond. GRIDS is the
preferred distribution for
major e-science projects such as GriPhyN, NEESgrid, TeraGrid, and
others. The current version can be downloaded as part of
NMI Release 5.1.
Since the inception of GRIDS, an emerging "cyberinfrastructure"
has begun to take shape. A blue ribbon panel commissioned by
NSF issued a
report in 2003 that called for substantial
investment in a new generation of collaborative tools for science
and engineering research. The committee's
chair, Dan Atkins of the University of Michigan, has said that "Grid
middleware is a very critical component. NMI and GRIDS address
important needs not just by providing stable tools, but also by
defining processes for the collaborative development of software for
science and engineering." The panel's 14 months of inquiry showed
that prior ad hoc efforts to develop infrastructure had been in
danger of becoming "balkanized," according to Atkins, with many
differing research communities developing independent -- and often
incompatible -- solutions to similar problems of interoperability
and resource sharing.
Corporate IT vendors have at the same time begun
to champion the Grid. IBM, Oracle, Sun Microsystems, Platform
Computing, Avaki and others are committed to developing products and
services based on the open standards embodied by GRIDS components.
Just as the World Wide Web was
initially the sole province of researchers, Grid technologies are on
the verge of moving beyond science and engineering to have
widespread influence on businesses and mainstream computing users.
For examples of real-world Grid applications
and links to project web sites, see the GRIDS Center web site.
GRIDS Center Software Suite
As part of NMI, GRIDS develops and supports standard components and mechanisms for:
- Authentication, authorization, and policy
- Resource discovery and directory services
- Remote access to computers and data
GRIDS also promotes integration of these
components with end-user tools (conferencing, data analysis, data
sharing, distributed computing, etc.), with campus infrastructures,
and with commercial technologies. The center's goals are to help
define, develop, deploy, and support integrated software supporting
21st Century science and engineering applications. The result will
be a national infrastructure that can be used by application
communities to explore full-scale, meaningful Grid applications.
At the core of GRIDS is packaged,
open-source software aimed at the national research and education
community:
Globus Toolkit®. The de facto
standard for Grid computing, this open-source software is a
modular "bag of technologies" that simplifies collaboration across
dynamic, multi-institutional virtual organizations. It includes
tools for authentication, scheduling, file transfer, and resource
management.
Condor. Whereas high performance
computing (HPC) is often measured in operations per second, Condor emphasizes high throughput computing (HTC) to deliver
processing capacity over longer periods of time -- days, weeks,
months and beyond. The GRIDS suite includes Condor-G, an enhanced version of the core
Condor software that is optimized to work with Globus Toolkit for
managing Grid jobs.
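As a sketch of how a Condor-G job is typically described, the submit file below routes work through a Globus gatekeeper. The hostname and jobmanager are hypothetical, and exact keywords vary by Condor version; this is an illustration, not a tested configuration:

```
# Hypothetical Condor-G submit description
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager-pbs
executable      = analyze
arguments       = input.dat
output          = analyze.out
error           = analyze.err
log             = analyze.log
queue
```

Condor-G then tracks the remote job in its local queue, so users monitor Grid jobs with the same tools they use for local Condor jobs.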
Network Weather Service (NWS).
This distributed system periodically monitors and dynamically
forecasts the performance that various network and computational
resources can deliver over a given time interval, using a
distributed set of performance sensors for instantaneous readings.
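The forecasting idea can be illustrated with a toy sketch (this is not the NWS code): maintain several simple predictors, score each against past measurements, and forecast with whichever has been most accurate so far.

```python
# Toy illustration of prediction-by-competition, in the spirit of NWS:
# several cheap predictors run over the measurement history, and the one
# with the lowest cumulative squared error makes the next forecast.

def last_value(history):
    return history[-1]

def running_mean(history):
    return sum(history) / len(history)

def forecast(measurements, predictors=(last_value, running_mean)):
    """Score each predictor on past data, then forecast with the best."""
    errors = {p: 0.0 for p in predictors}
    for i in range(1, len(measurements)):
        for p in predictors:
            errors[p] += (p(measurements[:i]) - measurements[i]) ** 2
    best = min(predictors, key=lambda p: errors[p])
    return best(measurements)

# A noisy but stable bandwidth series: the running mean wins here.
readings = [10.0, 12.0, 8.0, 11.0, 9.0, 10.0]
print(forecast(readings))  # → 10.0
```

The real NWS applies the same principle with a larger family of adaptive predictors and distributes the sensors across the network.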
GSI-OpenSSH. This modified version
of OpenSSH adds support for Grid Security Infrastructure (GSI)
authentication. GSI-OpenSSH provides a single sign-on remote login
capability for the Grid.
Grid Packaging Tools (GPT).
This collection of packaging tools is built around an XML-based
packaging data format that provides a straightforward way to
define complex dependency and compatibility relationships between
software packages. GPT was used to create all of the GRIDS Center
Software Suite bundles and is a prerequisite for installing them.
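The kind of dependency metadata GPT encodes can be illustrated with a toy sketch (package names are hypothetical, and this is neither GPT's format nor its code): once dependencies are declared, a topological sort yields a safe installation order.

```python
# Toy dependency resolution: install each package only after the
# packages it depends on. (Illustrative; not GPT itself.)
from graphlib import TopologicalSorter

# package -> set of packages it depends on (hypothetical names)
deps = {
    "globus-common": set(),
    "globus-gsi": {"globus-common"},
    "gsi-openssh": {"globus-gsi"},
    "condor-g": {"globus-gsi"},
}

install_order = list(TopologicalSorter(deps).static_order())
print(install_order)  # "globus-common" comes first
```

GPT's XML format additionally expresses version compatibility constraints, which a sketch like this omits.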
MyProxy. This credential repository lets Grid
users retrieve a proxy credential on demand, without worrying
about managing private key and certificate files. MyProxy can
improve security and flexibility, so job submissions will not fail
due to expired credentials.
MPICH-G2. This Grid-enabled implementation of the
Message Passing Interface (MPI) standard is based on the popular MPICH
library. It works with the Globus Toolkit to link multiple
machines running MPI applications, possibly with different
architectures.
GridConfig. These tools
manage the configuration of GRIDS software components. GridConfig provides
an easy way to generate and regenerate configuration files in
native formats, and to ensure configuration consistency.
GridSolve. This program uses the remote
procedure call (RPC) protocol to create a client/agent/server system for remote
access to Grid-enabled hardware and software.
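The client/agent/server pattern rests on RPC. As a generic illustration of the mechanism (using Python's standard xmlrpc modules, not the GridSolve API), a client can invoke a function registered on a remote server as if it were local:

```python
# Minimal RPC demo: a server exposes a function, a client calls it
# over the network. (Generic illustration, not GridSolve code.)
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda x, y: x + y, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy(f"http://localhost:{port}")
result = client.add(2, 3)  # executed on the server
print(result)  # → 5
server.shutdown()
```

In GridSolve the agent sits between client and servers, choosing which Grid-enabled server should execute each call.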
PyGlobus. This tool permits users
to access the Globus Toolkit from Python, a high-level scripting language.
UberFTP. This is an interactive client
for GridFTP, which is part of the Globus Toolkit.
GridPort. This toolkit enables development of portals and applications on top of distributed and Grid computing infrastructure, facilitating computational science. GridPort provides a comprehensive set of capabilities for using distributed resources via a consistent API, with streamlined access to back-end Grid services from diverse Grid technologies.
(New in NMI-R5)
DataCutter. This framework is designed to provide support for
processing of large scientific datasets in heterogeneous
environments. It supports a filter-stream programming model for
executing application-specific data processing, enabling
combined use of task- and data-parallelism. (New in NMI-R5)
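The filter-stream model can be sketched with Python generators (illustrative only; DataCutter's own filters are distributed components, not generators): each filter consumes a stream of records and emits a transformed stream, so filters chain into a pipeline.

```python
# Toy filter-stream pipeline in the spirit of DataCutter's model:
# a source filter, a selection filter, and an aggregating sink.

def source(records):
    """Produce the raw data stream."""
    yield from records

def select(stream, threshold):
    """Filter: keep only records above a threshold."""
    for r in stream:
        if r > threshold:
            yield r

def aggregate(stream):
    """Sink: reduce the stream to a single result."""
    return sum(stream)

result = aggregate(select(source([3, 9, 1, 7, 5]), threshold=4))
print(result)  # 9 + 7 + 5 → 21
```

In DataCutter proper, each filter can run on a different host and be replicated, which is how the framework combines task- and data-parallelism.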
STORM DataCutter. This services-based middleware is
designed to support data selection and transfer operations on large,
distributed scientific datasets. The objective of STORM is to enable execution
of select queries on datasets stored in files distributed across a network. (New
in NMI-R5)
AppLeS Parameter Sweep Template. APST is a tool that
schedules and deploys parameter sweep applications on the Computational Grid.
Common examples include Monte Carlo simulations
and parameter-space searches. (New in NMI-R5)
INCA. This generic framework automates testing,
verification, and monitoring of functionality common to a set of Grid systems.
(New in NMI-R5)
SRB Client. The SDSC Storage Resource Broker (SRB) is
client-server middleware that provides a uniform interface for connecting to
heterogeneous data resources over a network and accessing replicated data sets.
SRB, in conjunction with the Metadata Catalog (MCAT), provides a way to access
data sets and resources based on their attributes and/or logical names rather
than their physical names or locations. (New in NMI-R5)
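The logical-naming idea can be sketched as follows (hypothetical names and paths; this is not the SRB API): a catalog maps each logical dataset name to one or more physical replicas, and clients resolve the name instead of hard-coding a host and path.

```python
# Toy MCAT-style catalog: logical name -> physical replicas.
# (Conceptual illustration only; hostnames and paths are made up.)
catalog = {
    "survey/run42": [
        {"host": "srb.sdsc.edu", "path": "/vault/a/run42.dat"},
        {"host": "mirror.uiuc.edu", "path": "/data/run42.dat"},
    ],
}

def resolve(logical_name, preferred_host=None):
    """Return one physical replica for a logical name."""
    replicas = catalog[logical_name]
    for r in replicas:
        if r["host"] == preferred_host:
            return r
    return replicas[0]  # default to the first replica

print(resolve("survey/run42", preferred_host="mirror.uiuc.edu")["path"])
```

Because clients see only logical names, replicas can be added or moved without breaking applications.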
Also packaged with the GRIDS software is a
tool from another NMI project team, called "EDIT" (Enterprise and
Desktop Integration Technologies):
KX.509 and KCA. Together these provide a
bridge between Kerberos and PKI infrastructures. They are included to enable the PKI-based security infrastructure of the
Globus Toolkit to integrate with Kerberos-based authentication
implemented at university campuses.
These GRIDS Center Software Suite
components were chosen by the NMI leadership for their collective
value in creating and managing computational Grids that facilitate
use of powerful on-line resources. For technical assistance, use the
GRIDS Center federated Bugzilla. For more information, visit the
GRIDS Center web site.
Primary funding for GRIDS is from the National Science Foundation
(NSF) Middleware Initiative program 03-513. GRIDS software
developers also wish to acknowledge support from:
- NSF Directorate for Computer and Information Science and Engineering (for Globus Toolkit, Condor-G, NWS, MyProxy, GSI-OpenSSH, GPT and GridConfig)
- U.S. Department of Energy (for Globus Toolkit, Condor-G, NWS and MPICH-G2)
- Defense Advanced Research Projects Agency (for Globus Toolkit and Condor-G)
- NASA (for Globus Toolkit, MyProxy and GSI-OpenSSH)
- e-Science Program (for Globus Toolkit)
- Royal Institute of Technology (for Globus Toolkit)
- IBM (for Globus Toolkit and Condor-G)