March 2015, Vol. 242, No. 3
Pipeline Integrity: It's An Imperative, Not A Choice
The late 19th century’s most brilliant businessman, J.D. Rockefeller, was an oil tycoon who discovered that the best way to take advantage of his country’s growing thirst for oil was to control distribution. Although the pipeline that Rockefeller built in the 1870s wouldn’t look like much compared to today’s sophisticated pipeline networks, it was an engineering feat that helped his company ensure oil got to the clients who needed it most.
Rockefeller knew that distributing oil was perhaps more important than even drilling for it. And every inch of that first pipeline of the Standard Oil Co. could be fraught with problems, including saboteurs who didn’t like the company’s huge share of the oil market. At one point Rockefeller had to hire armed Pinkerton guards to watch over the many miles of his pipeline. He couldn’t hire a guard to cover every inch of the pipeline, just the parts he thought were most susceptible to sabotage. Perhaps this was the first example of predictive maintenance in the oil industry.
A history lesson, to be sure. But the essential issues that faced the nascent oil and gas industry in 1870 are just as relevant and pressing today: how do oil enterprises minimize risk, allocate resources most effectively, and remain mobile? Tonight, plenty of engineers will have the same kind of sleepless night their forebears had in the 1870s, wondering whether they've thought about every risk that could affect their oil company's operations. (They never get credit when bad things don't happen.)
Modeling Meets Automation
Until recently, these troubleshooters gathered and analyzed data to monitor their pipeline networks. Sometimes it was a lot of data, but they couldn't analyze all of it, so they developed predictive models to keep tabs on their assets. Because it took considerable effort to bring together all the relevant data and to get organizational divisions to collaborate, the predictive models tended to be limited in their coverage of the pipeline networks and contributing parameters. Today, for instance, a typical integrity engineer responsible for maintaining a network spends 80% of the time collecting information and making it usable, leaving less than 20% for analyzing that information and making decisions.
Consider how time-consuming it is to create predictive models, and how small a proportion of the pipeline network they end up covering. The opportunity for true integrity management – risk-based integrity management processes – can be severely limited. It therefore becomes critical to automate data organization and the risk-based integrity model so that users can focus their energies on analysis and corrective action.
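To make the idea concrete, here is a minimal sketch of what such automation might look like: segment records flow in from source systems, and a script scores and ranks them with no manual spreadsheet work. The field names, weights, and thresholds are hypothetical illustrations, not industry-standard values.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One pipeline segment record (hypothetical fields)."""
    segment_id: str
    wall_loss_pct: float        # worst metal loss from the last ILI run, %
    pressure_cycles: int        # fatigue cycles since last inspection
    years_since_inspection: float
    near_populated_area: bool

def likelihood(seg: Segment) -> float:
    """Crude likelihood-of-failure score in [0, 1] (illustrative weights)."""
    score = (0.5 * min(seg.wall_loss_pct / 80.0, 1.0)
             + 0.3 * min(seg.pressure_cycles / 10_000, 1.0)
             + 0.2 * min(seg.years_since_inspection / 10.0, 1.0))
    return min(score, 1.0)

def consequence(seg: Segment) -> float:
    """Consequence weight: higher where a failure would do more harm."""
    return 1.0 if seg.near_populated_area else 0.4

def rank_segments(segments: list[Segment]) -> list[tuple[str, float]]:
    """Risk = likelihood x consequence; return segments riskiest-first."""
    scored = [(s.segment_id, likelihood(s) * consequence(s)) for s in segments]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Once a pipeline like this runs on a schedule against live source data, the engineer's day starts with a ranked worklist instead of a data-collection chore.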
Information technology is revolutionizing the oil and gas industry. Oil enterprises are now leveraging a combination of data standardization, technology, and automation. With more automated predictive and risk-based models, integrated with the required data sources and captured in standardized data formats, oil companies can scale their solutions to cover a higher percentage of their pipeline assets.
Addressing All That Compliance
Oil and gas enterprises now operate in an environment of stringent regulations. After a spate of high-impact incidents and spills in recent years, regulators all over the world – the Pipeline and Hazardous Materials Safety Administration (PHMSA) in the U.S., the National Energy Board in Canada, the European Commission's energy directorate in the European Union, and agencies in other oil-producing countries – are moving toward more stringent, prescriptive regulations (such as 49 CFR 192 and 195 in the U.S.), compared with an earlier philosophy of self-regulation aligned to a broad set of guidelines.
Regulators expect enterprises to prove their compliance with hard data and to preserve and verify records of changes in operating conditions and the reasoning behind decisions. Traceable, verifiable, and accurate recordkeeping helps everyone respond effectively in an emergency and gives a more accurate view of the infrastructure.
The regulatory agencies are adding more pipeline inspectors for better enforcement and investing in educating the public on how to contribute to overall safety. A key challenge for most enterprises is to use a limited maintenance budget effectively while complying with increasing regulation. Sometimes more stringent internal process requirements are in play, creating an incentive to invest in predicting failures across an entire pipeline network. There are also priority areas where proactive corrective actions may minimize risk across the overall network.
The right solutions enable a structured approach toward scalability and minimizing risk. They give an enterprise the ability to cover all its data and analyze it holistically, which reduces risk in the overall network, allows more fact-based decisions, and translates into a more efficient organization. Rather than asking how much data is available, enterprises should ask how much of the available data they actually leverage for predictive modeling.
Unless you organize and automate, you cannot use all of it. Enterprises should focus on managing more granular data with automation and software technologies. These solutions can help enterprises move toward strict compliance with regulations and internal process requirements, and toward effective risk management, by delivering scalable and configurable predictive and risk models.
The PIM Advantage
We advocate creating a robust pipeline integrity management (or PIM) program. It’s a combination of a structured and integrated approach, the right kind of risk mitigation, and plenty of data-enabled maintenance capabilities.
Why the imperative? In the United States alone, we face aging infrastructure, delayed maintenance, and a hyperactive mergers-and-acquisitions scene, all of which truncate response timeframes and leadership perspectives and heighten the risk profiles of assets. A spate of recent incidents has forced the entire oil and gas industry to call its risk-mitigation systems into question. How does an enterprise ensure that the 100 riskiest places predicted by its model are indeed the 100 riskiest places along the pipeline?
According to PHMSA, 10,537 incidents over the course of 20 years have caused 398 fatalities, 1,640 injuries, property damage totaling $5 billion and 2.67 million spilled barrels of oil – an environmental nightmare. Every one of the offending enterprises had an active risk-management program that failed to identify and correct the issues leading to these incidents.
Companies must use these advanced solutions while ensuring their modeling includes more data: more granular asset data and additional dimensions of risk. The models are also becoming more quantitative. Most risk models and engineering and algorithmic calculations focus on structural failures. It is critical for enterprises to also include more detailed parameters around commercial risks – potential order delays, associated revenue loss and penalties – along with health and safety risks and the monetary and brand impact of incidents.
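As an illustration of what a more quantitative, multi-dimensional model can look like, the sketch below folds commercial and health-and-safety consequences into a simple expected-loss figure. The function and every dollar figure are hypothetical planning inputs, not industry-standard parameters; real models are far more detailed.

```python
def expected_loss(prob_failure: float,
                  repair_cost: float,
                  daily_revenue_loss: float,
                  outage_days: float,
                  regulatory_penalty: float,
                  hse_impact_cost: float) -> float:
    """Expected annual loss in dollars: failure probability times the
    total consequence across structural, commercial, and HSE dimensions."""
    consequence = (repair_cost
                   + daily_revenue_loss * outage_days
                   + regulatory_penalty
                   + hse_impact_cost)
    return prob_failure * consequence

# Example: a 2% annual failure probability on a segment whose failure would
# cost $500k to repair, interrupt $200k/day of deliveries for 10 days, and
# risk a $1M penalty plus $3M in health/safety and brand exposure:
loss = expected_loss(0.02, 500_000, 200_000, 10, 1_000_000, 3_000_000)
print(f"Expected annual loss: ${loss:,.0f}")  # -> Expected annual loss: $130,000
```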
It is also important to consider detailed parameters around operational risk and the enterprise's ability to address a failure or incident of a particular kind, and to know how to integrate effectively with SCADA systems to leverage operational data.
The good news on the solutions front is that the industry is adopting standard data models such as PODS (the Pipeline Open Data Standard) and PODS Spatial to help standardize data capture for the integrity management system and meet a host of regulatory maintenance requirements. Rules engines help enterprises model the various regulations and conditions to be monitored and, if need be, trigger action. All that data then reaches the enterprise through powerful, easy-to-use visualization tools.
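A rules engine of this kind can be as simple as a table of named conditions and the actions they trigger. The sketch below is a minimal illustration; the thresholds are invented, not values drawn from 49 CFR 192 or 195.

```python
from typing import Callable

# Each rule pairs a monitored condition with the action to trigger when
# it fires. Rule = (name, condition on a segment record, action).
Rule = tuple[str, Callable[[dict], bool], str]

RULES: list[Rule] = [
    ("reinspection overdue",
     lambda seg: seg["years_since_inspection"] > 7,
     "schedule in-line inspection"),
    ("metal loss above limit",
     lambda seg: seg["wall_loss_pct"] >= 40,
     "open repair work order"),
    ("pressure exceeds MAOP",
     lambda seg: seg["operating_pressure"] > seg["maop"],
     "alert control room immediately"),
]

def evaluate(segment: dict) -> list[str]:
    """Return the actions triggered for one segment record."""
    return [action for name, condition, action in RULES if condition(segment)]

actions = evaluate({"years_since_inspection": 8.5, "wall_loss_pct": 12,
                    "operating_pressure": 900, "maop": 1000})
print(actions)  # -> ['schedule in-line inspection']
```

The appeal of this structure is that compliance staff can review the rule table directly, and adding a new regulatory condition means adding a row, not rewriting the system.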
Personalized dashboards for each role help users focus on the data relevant to their decision-making. Mobile solutions don't just capture data in the field; they also enforce compliance with procedures by walking users through the process steps and performing computations that alert them on site. A trip to a job site doesn't mean missing even a byte of computing power.
An effective, modern PIM framework can overcome an array of industry issues. For example, ask yourself if your company’s framework is driven by standardized business processes. Is it based on the very best integrity management processes, providing the scale to increase coverage of assets on inspection and predictive modeling? Does the enterprise have a budget to investigate evolving technologies like drones?
Better integration among the integrity management system, inspection systems, the SCADA system, and work and asset management systems means less latency in data capture and more energy spent on analysis and fact-based decision-making.
Guidelines For Creating Your Program
Every company needs a program that addresses not only today’s specific challenges, but long-term business priorities. When customizing your company’s program, here are some guidelines to keep in mind.
Digitization of data captured in the field should always be a top priority because it forms the first stage of enhancing and streamlining your recordkeeping.
Information also comes to your enterprise from a host of other sources: the pipe book, design specifications, failure and repair history, the work and asset management system, SCADA systems, the field crew's training and certification data, and delivery schedules. This dizzying array of sources needs to be integrated into a unified data architecture such as the aforementioned PODS.
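To suggest what such integration looks like in practice, here is a minimal sketch that folds one segment's rows from three source systems into a single unified record. The flat record and all field names are simplifications invented for illustration; the actual PODS model is a full relational schema, not this structure.

```python
from dataclasses import dataclass

@dataclass
class UnifiedSegmentRecord:
    """A simplified stand-in for a PODS-style unified segment record."""
    segment_id: str
    begin_station: float                        # linear-reference stationing, feet
    end_station: float
    design_pressure: float | None = None        # from design specifications
    last_repair_date: str | None = None         # from work/asset mgmt system
    latest_scada_pressure: float | None = None  # from the SCADA historian

def merge_sources(pipe_book: dict, repairs: dict, scada: dict) -> UnifiedSegmentRecord:
    """Fold one segment's rows from three source systems into one record.

    The input dicts use hypothetical field names; a real integration maps
    each source system's schema explicitly.
    """
    return UnifiedSegmentRecord(
        segment_id=pipe_book["id"],
        begin_station=pipe_book["begin_sta"],
        end_station=pipe_book["end_sta"],
        design_pressure=pipe_book.get("design_psi"),
        last_repair_date=repairs.get("last_repair"),
        latest_scada_pressure=scada.get("pressure_psi"),
    )
```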
Although this seems a daunting effort, tools, technologies, and services are available to accomplish it in a cost-effective and phased manner. It needs to be done to provide a solid information platform for an enterprise planning to grow its pipeline asset base and to prepare for the impending explosion – of data, that is.
Once all of that data is presented in a seamless architecture, the next stage is understanding the impending risks at various aggregated levels of the infrastructure. The enterprise needs powerful analytics on top of the data architecture to extract measurable insights and key performance indicators regarding pipeline integrity. Analytics can also help the user assess the overall state of the infrastructure and identify and plan mitigation.
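The key performance indicators themselves can start simply. The sketch below rolls hypothetical segment records up into a few network-level indicators; the 0.7 risk threshold and the five-year inspection window are invented examples, not regulatory figures.

```python
def integrity_kpis(records: list[dict]) -> dict:
    """Roll segment records up into network-level KPIs (illustrative set)."""
    total_miles = sum(r["miles"] for r in records)
    high_risk = [r for r in records if r["risk_score"] >= 0.7]
    high_risk_miles = sum(r["miles"] for r in high_risk)
    inspected_recent = sum(r["miles"] for r in high_risk
                           if r["years_since_inspection"] <= 5)
    return {
        "network_miles": total_miles,
        "high_risk_miles": high_risk_miles,
        # Share of high-risk mileage inspected within the last five years.
        "pct_high_risk_inspected_5yr": (
            100 * inspected_recent / high_risk_miles if high_risk_miles else 100.0),
    }
```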
Scale up now. That's the imperative. The technology to automate data gathering and integration and to develop sophisticated predictive and risk-based models is only getting better as operators and solution providers invest in this area.
Finally, effective PIM strategies must include an interactive visualization of the data related to the pipeline, whether on a GIS map or through a virtual environment of the complete infrastructure built with 3D and augmented reality. It's particularly convenient when many types of information can be overlaid onto this visualization layer: risks, inspection history, tracking of ongoing scheduled inspections, incident history, anomalies, delivery schedules and operational feeds from SCADA.
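As a taste of the GIS-map option, the sketch below uses the open-source folium Python library to draw pipeline segments on a web map, colored by risk score. The coordinates, segment IDs, scores, and the 0.7 color threshold are made up for illustration.

```python
import folium

# Hypothetical segments: a list of coordinate pairs plus a risk score each.
segments = [
    {"id": "SEG-001", "coords": [(29.95, -95.40), (29.98, -95.30)], "risk": 0.82},
    {"id": "SEG-002", "coords": [(29.98, -95.30), (30.04, -95.21)], "risk": 0.35},
]

m = folium.Map(location=[29.99, -95.30], zoom_start=11)
for seg in segments:
    color = "red" if seg["risk"] >= 0.7 else "green"
    folium.PolyLine(
        locations=seg["coords"],   # lat/lon vertices along the segment
        color=color,
        weight=6,
        tooltip=f'{seg["id"]}: risk {seg["risk"]:.2f}',
    ).add_to(m)
m.save("pipeline_risk_map.html")  # open in a browser to explore the overlay
```

Additional layers – incident history, scheduled inspections, SCADA feeds – would each become another overlay on the same map.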
Advances in information technology allow us to view oil and gas pipelines as potent business assets that enhance the overall economic viability of the companies that own them. The pipelines might carry toxic fluids, but in the end these materials are the lifeblood of our society. That’s why enterprises need mature business strategies that leverage the enormous amounts of information and communication emerging from the field. Such solutions are enabling tremendous opportunities for pipeline operators to innovate and focus on business value creation – all while complying with a slew of complicated regulations.
A free-flowing pipeline giving off the right data means more seamless and efficient operations in the decades to come.
Authors: G.V. Ganesh is an industry principal in the energy practice at Infosys and head of the energy products and solutions team, where he is responsible for offerings addressing data management and integrity challenges for upstream and midstream operators. He has led the business consulting practice in supply chain, sourcing and procurement, led multiple client engagements, and conceptualized and managed multiple supply chain collaboration and analytics products.
Preeti Pisupati is a research analyst at Infosys with over 12 years' experience in the oil and gas vertical and related analytics and inspection domains. She focuses on solutions that meet safety, regulatory compliance, risk assessment, and integrity management plan and process needs. She has worked with multiple NDT inspection technologies and pipeline inspection data and has developed advanced business functionality using integrated work and asset management systems, GIS, mobile and 3D technologies to make them more relevant to pipeline integrity engineers and managers.