iSGTW - International Science Grid This Week

Home > iSGTW - 21 April 2010 > Feature: Autonomic ecosystems

The rise of autonomic computing

The instrumented oil field, seen above, can be thought of as a “cyber-ecosystem,” in which feedback loops are used to optimize oil extraction. Image courtesy Manish Parashar

How do you maximize the amount of oil you can extract from an oil field?

One way is to use the Instrumented Oil Field, or IOF, an application coordinated by the Center for Subsurface Modeling at the University of Texas and used by a consortium of academic and industry researchers. It uses a network of sensors embedded underground to monitor the state of the reservoir while the oil is being extracted.

The IOF calculates which deposits of oil can be safely and economically extracted, classifying the parts that cannot be reached as “bypassed oil.” However, if the model relies purely on fixed initial conditions, up to 60% of the oil can be inaccurately deemed unreachable.

One way around this problem is autonomic computing, whose goal is to build systems and applications that manage themselves by responding to the data. They configure and adapt themselves in real time, analogous to a self-regulating biological ecosystem.
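The self-managing cycle described above can be sketched as a simple monitor-analyze-plan-execute loop. This is an illustrative toy, not the actual IOF software; the class and variable names (`ReservoirModel`, `readings`, the drift threshold) are invented for the example.

```python
# A minimal sketch of an autonomic feedback loop: monitor incoming sensor
# readings, analyze how far they drift from the current model, and adapt
# the model when the drift exceeds a tolerance. All names are illustrative.

class ReservoirModel:
    def __init__(self, estimate, tolerance=5.0):
        self.estimate = estimate      # current model of the reservoir state
        self.tolerance = tolerance    # allowed drift before adapting

    def drift(self, reading):
        return abs(reading - self.estimate)

    def refit(self, reading):
        # Plan step: pull the model halfway toward the observed data
        self.estimate = (self.estimate + reading) / 2


def autonomic_loop(model, readings):
    """Run one monitor-analyze-plan-execute pass over a stream of readings."""
    adaptations = 0
    for reading in readings:                        # Monitor
        if model.drift(reading) > model.tolerance:  # Analyze
            model.refit(reading)                    # Plan
            adaptations += 1                        # Execute (stand-in)
    return adaptations


# Simulated sensor stream: the model adapts only when observations
# stray too far from its current estimate.
model = ReservoirModel(estimate=100.0)
count = autonomic_loop(model, [101.0, 112.0, 106.0, 103.0])
```

The point of the loop is that the adaptation policy lives in the system itself rather than in an operator watching dashboards; the same skeleton underlies far more elaborate autonomic frameworks.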

The IOF is just one example of autonomics applied successfully. In a lecture given during the EGEE 5th User Forum last week, Manish Parashar, the founding director of the Center for Autonomic Computing and The Applied Software Systems Laboratory at Rutgers University, said that this approach can also be applied to grid infrastructures, which are becoming so intricate that they are not achieving their full potential.

When cloud and multicore are added to the mix, the resulting complexity can even hamper rather than help scientists’ efforts to build their experiments.

And as science becomes more and more data-driven, the computer systems and infrastructures being implemented are increasingly complex. While distributed computer networks are being used to investigate biology, treating grids as cyber-ecosystems in their own right — both at the level of software and hardware — could change how scientific applications are developed and even how science itself is done.

An example of autonomics in use is the modeling of the dynamics within a combustion simulation. As the simulation progresses, the computational requirements change over time because the areas of particular interest become more clearly defined. Autonomic computing helps refine the application while the experiment runs, and the middleware can reconfigure the timing and level of partitioning within the system. Image courtesy Manish Parashar

Best use

Parashar stresses that autonomics is more than just adaptive coding. Rather than simply optimizing code and dealing with failures as they occur, systems built on an autonomic framework can react to failures and adapt to changing requirements. For example, you may normally want your grid to optimize for performance, but at other times you may want to prioritize reliability or security.

By automating the process based on human-directed policies, the cyber-ecosystem can manage its own infrastructure, making the whole process far more flexible.
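A hedged sketch of what "human-directed policies" might look like in practice: the operator states a high-level goal, and the framework translates it into concrete settings. The policy names and settings below are invented for illustration and are not from any real autonomic framework.

```python
# Illustrative policy table: a human picks the goal, the system derives
# the configuration. Settings (replicas, encryption, batch_size) are
# made up to show the idea.

POLICIES = {
    "performance": {"replicas": 1, "encryption": False, "batch_size": 64},
    "reliability": {"replicas": 3, "encryption": False, "batch_size": 16},
    "security":    {"replicas": 2, "encryption": True,  "batch_size": 16},
}


def configure(policy):
    """Translate a high-level, human-directed policy into system settings."""
    if policy not in POLICIES:
        raise ValueError(f"unknown policy: {policy}")
    return POLICIES[policy]


# The same infrastructure reconfigures itself when the stated goal changes.
settings = configure("security")
```

Keeping the policy separate from the mechanism is what makes the approach flexible: switching from performance to security is a one-word change for the human, while the system works out the rest.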

Parashar suggests three practical motivations for autonomics. First, the volume and complexity of data produced within grid structures can be time-consuming for administrators to absorb and react to. Second, the costs of hardware and power requirements are growing. Third, as we come to rely on our e-infrastructures more heavily, even small failures can have drastic effects.

Autonomics also offers a potential way for grids and clouds to be combined effectively, because it can help decide what to run where, when and how. For example, clouds can be used as an accelerator within the computing infrastructure, reducing runtime for applications based on the user's budget requirements. Alternatively, clouds can serve as an automated failsafe mechanism in the event of a failure within the grid infrastructure.
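The two roles for the cloud described here, accelerator and failsafe, can be captured in a tiny decision rule. This is a sketch of the idea, not Parashar's actual scheduler; the parameters (ETA, deadline, budget) are assumptions for illustration.

```python
# An illustrative placement rule for a hybrid grid/cloud infrastructure:
# burst to the cloud when the grid cannot meet the deadline and the
# budget allows it, and fall back to the cloud if the grid is down.

def place_job(grid_up, grid_eta, deadline, cloud_cost, budget):
    """Decide whether a job runs on the grid or in the cloud."""
    if not grid_up:
        return "cloud"    # failsafe: grid outage
    if grid_eta > deadline and cloud_cost <= budget:
        return "cloud"    # accelerator: pay to finish on time
    return "grid"         # default: use the grid's own cycles
```

A real autonomic scheduler would evaluate rules like this continuously, so the infrastructure reshuffles work on its own as conditions and user objectives change.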

For Parashar, e-infrastructures like EGI and TeraGrid provide an ecosystem that offers scientists new mechanisms for building experiments. “It’s about decreasing the gap between the user and the tools we build,” says Parashar.

“To me it just seems pragmatic and obvious.”

—Seth Bell, for iSGTW

