
Feature - Burning down the house (with FireGrid)


Fire at a warehouse. Image courtesy London Fire Brigade

At about noon on 29 October 2008, a fire broke out in a small apartment in the town of Watford, UK.

The fire had been deliberately started in a sofa in the living room. With no intervention, the sofa soon started to burn fiercely, allowing the fire to spread to a nearby table and TV.

The temperature within the room increased to such a high level that the walls and other furnishings burst into flames, a phenomenon known as “flash-over” that would have been potentially fatal to anyone in the apartment. As the flash-over occurred, a ball of fire coursed down the hallway, creating a plume of flame that curled around the front door.

At this point, the fire was calmly extinguished.

The fire was actually a test, run in a state-of-the-art “burn hall facility” (essentially a large hangar in which the apartment rig was assembled) at the Building Research Establishment (BRE) in Watford. The facility has an elevated viewing area for the audience, and an experienced team of fire engineers coordinated the setup and execution of the experiment in a predictable and safe manner.

The fire had burned for a total of just under an hour. But it had taken only a couple of minutes for the temperature to increase dramatically, from a level tolerable for humans to over 700 degrees C — the point of flash-over. It was a vivid demonstration of the hazards faced by firefighters in their daily work.

On hand were observers from Edinburgh’s supercomputing center (EPCC), who were gathering information as part of the FireGrid project. The project’s goal is to monitor such tests, gather data, use the information to predict what would happen next, and relay this valuable information to emergency services. The aim of this particular experiment was to demonstrate the FireGrid architecture to project partners and stakeholders.

The FireGrid architecture consists of three layers:
• a data acquisition and storage layer, for capturing and storing live sensor data;
• an agent-based command-and-control layer, allowing fire responders to interact with data, computing resources and the grid to perform high-level decision-making;
• an HPC resource layer, for deploying computational models.

These three layers are connected together by grid middleware.

Faster than real time

For the experiment, the entire apartment rig bristled with sensors, measuring quantities such as temperature, smoke levels, air flow and gas concentrations. The readings from these sensors were aggregated into a central database, from where they could be accessed by the different agents within the FireGrid system.
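To make the data flow concrete, here is a minimal sketch in Python of this kind of aggregation step, using SQLite as a stand-in for the central database; the article does not name the actual database technology, and the table schema and sensor names here are hypothetical.

```python
import sqlite3
import time

# Hypothetical schema; the article does not describe FireGrid's
# actual database layout.
conn = sqlite3.connect("firegrid_sensors.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS readings (
           sensor_id TEXT,
           quantity  TEXT,   -- e.g. 'temperature', 'smoke', 'co'
           value     REAL,
           unit      TEXT,
           timestamp REAL    -- seconds since the epoch
       )"""
)

def store_reading(sensor_id: str, quantity: str, value: float, unit: str) -> None:
    """Append one live sensor reading to the central store."""
    conn.execute(
        "INSERT INTO readings VALUES (?, ?, ?, ?, ?)",
        (sensor_id, quantity, value, unit, time.time()),
    )
    conn.commit()

# Example: a thermocouple in the living room reports 412 degrees C.
store_reading("TC-livingroom-03", "temperature", 412.0, "degC")
```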

A key element of FireGrid is its predictive capability, and this functionality was delivered using a fire/structure/egress code called K-CRISP, which ran faster than real time (or what its developers called “super-real time”) on a remote high performance computing (HPC) system, and was accessed over the Internet using grid protocols.

A user interacted with FireGrid via a purpose-designed interface called a Command, Control, Communication and Intelligence (C3I) computer. C3I gave access to the information generated by a combination of live data and predictions from K-CRISP, and was also able to accommodate user requests for specific information. The C3I included a fire alarm agent, which automatically launched the remote K-CRISP simulation when a fire was detected.
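The article does not describe the alarm agent's internals, but a minimal sketch of how such an agent might be structured could look like the following; the detection rule, threshold and polling interval are illustrative assumptions, and the job launch is reduced to a log message.

```python
import time
from collections.abc import Callable

FLASHOVER_RISK_TEMP = 200.0  # hypothetical trigger threshold, degrees C

def fire_detected(latest_temps: dict[str, float]) -> bool:
    """Crude detection rule (an assumption, not the real FireGrid logic):
    any thermocouple reading above the threshold counts as a fire."""
    return any(t > FLASHOVER_RISK_TEMP for t in latest_temps.values())

def launch_simulation() -> None:
    """Stand-in for submitting the remote K-CRISP job over grid
    protocols; here the launch is reduced to a log message."""
    print("fire detected: launching K-CRISP in super-real-time mode")

def alarm_agent(poll_readings: Callable[[], dict[str, float]]) -> None:
    """Poll the latest sensor readings until a fire is detected,
    then trigger the simulation exactly once."""
    while True:
        if fire_detected(poll_readings()):
            launch_simulation()
            return
        time.sleep(1.0)  # illustrative polling interval
```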

Another C3I agent provided this simulation with the latest sensor readings every 30 seconds. The C3I condensed this mass of data into a simple graphical form for each room, displaying current and predicted hazard levels as colored blocks, looking ahead over a 15-minute window. These hazards included smoke, structural collapse and flash-over.
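As an illustration of how this condensation step might work, here is a small Python sketch that maps a room's predicted temperatures over the 15-minute window onto a single colored hazard block; the thresholds and traffic-light scheme are assumptions, not the actual C3I rules.

```python
from enum import Enum

class Hazard(Enum):
    """Traffic-light hazard levels (the actual C3I color scheme is
    not described in the article)."""
    LOW = "green"
    ELEVATED = "amber"
    SEVERE = "red"

def hazard_for_room(predicted_temps: list[float]) -> Hazard:
    """Condense a room's predicted temperatures over the next 15 minutes
    (e.g. one value per 30-second update, 30 values in all) into a
    single hazard block. The thresholds are illustrative."""
    flashover_temp = 700.0  # degrees C, as observed in the test
    peak = max(predicted_temps)
    if peak >= flashover_temp:
        return Hazard.SEVERE
    if peak >= 0.5 * flashover_temp:
        return Hazard.ELEVATED
    return Hazard.LOW

# Example: a room predicted to reach 750 degrees C within the window.
print(hazard_for_room([320.0, 510.0, 750.0]).value)  # -> "red"
```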

K-CRISP was employed to predict both structural collapse and flash-over. To improve the reliability of model predictions, real-time data was assimilated into the computation at regular intervals. To ensure that the simulations completed rapidly enough to be useful to firefighters in predicting what would happen next, K-CRISP was parallelized following the Task Farm paradigm: each slave process ran a serial simulation of the fire, with interprocess communication handled via the file system, as in the sketch below.
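A minimal sketch of such a task-farm worker, with all interprocess communication going through the file system, might look as follows in Python; the directory layout, file format and the serial model stub are assumptions, and K-CRISP itself is not shown.

```python
import json
from pathlib import Path

TASKS = Path("tasks")      # the master drops one input file per scenario here
RESULTS = Path("results")  # slaves write their outputs here

def run_serial_fire_model(scenario: dict) -> dict:
    """Stub for one serial fire simulation (K-CRISP itself is not
    public code); returns a dummy prediction."""
    return {"scenario": scenario.get("id"), "flashover_in_s": 120}

def slave(rank: int) -> None:
    """One task-farm worker. All master/slave communication goes through
    the file system: renaming a task file is the atomic claim that stops
    two slaves from simulating the same scenario."""
    RESULTS.mkdir(exist_ok=True)
    for task_file in sorted(TASKS.glob("*.json")):
        claimed = task_file.with_suffix(f".claimed{rank}")
        try:
            task_file.rename(claimed)  # atomic claim on POSIX systems
        except OSError:
            continue                   # another slave got there first
        scenario = json.loads(claimed.read_text())
        result = run_serial_fire_model(scenario)
        (RESULTS / f"{claimed.stem}.json").write_text(json.dumps(result))
```

Renaming the task file serves as the claim operation because a rename is atomic on POSIX file systems, so no two slaves can pick up the same scenario.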

For redundancy, K-CRISP was ported to two HPC systems: ECDF, the University of Edinburgh's research computing cluster, and HPCx, one of the UK's two national supercomputing facilities. Access to HPC resources in urgent computing mode is particularly important to FireGrid, ensuring a simulation can be launched as quickly as possible and generate information about the likely evolution of a fire incident in a timely manner.

Observing the experiment was Paul Jenkins of the London Fire Brigade Fire Engineering Group, who said the demonstration proved that grid-based sensors and fire models can be used together predictively, and that simple, accurate predictions of fire performance can be made dynamically. While there is still a very long way to go, Jenkins added, the demonstration showed the potential for further development of intelligent, interactive environments and expert systems; in time, FireGrid may become part of an emergency response.

Gavin J. Pringle and Mark Beckett, EPCC. This article previously appeared in EPCC News.
