Name:
Shiju Nair, Sr. Engineer, Digital IA Technologies, Audit and Assurance
Ahmed Abujarad, SVP, Audit and Assurance
Darsan Krishnan, Manager, Group Quality Assurance and Excellence, Audit and Assurance
Ankur Ranka, Manager, Group Companies Audit, Audit and Assurance
Niladri Das, Head, Group Specialized Audit Division, Audit and Assurance
Abhishek Raghavendra, Senior Auditor, Audit & Assurance
Mohammed K Al Mansoori, Manager, Group Digital Commercial & Financial Solutions
Antonio Rivas, Senior Specialist, Group Digital Advanced Analytics Solutions
Country: United Arab Emirates
Organization: Abu Dhabi National Oil Company (ADNOC)
Established in 1971, Abu Dhabi National Oil Company (ADNOC) is a diversified group of energy and petrochemical companies that employs more than 50,000 people and is a major contributor to the GDP of the United Arab Emirates (UAE). ADNOC operates across the entire hydrocarbon value chain, with fully integrated Group Companies providing services in the fields of exploration and production; oil refining and gas processing; chemicals and petrochemicals; refined products and distribution; maritime transportation; and support services including sales and marketing, human capital, legal, finance, and IT. ADNOC has been named the UAE’s most valuable brand for a second consecutive year as of 2019, with a 28.6% increase in brand value over the previous year and a 145% increase since the launch of its transformation strategy in 2017, making it the fastest-growing brand in the Middle East and the first UAE brand to surpass $10 billion in value.
Awards Categories:
The Audit Analytics team at ADNOC HQ works across a wide range of auditing services for 16+ group companies, supporting internal auditors by augmenting critical audits with data analytics that have the potential to unlock value through the analysis of large data sets. One such large-scale audit analytics and predictive modeling exercise was performed on oil production and allocation across reservoirs of ADNOC Operating Companies.
Key objectives of this audit were mainly to identify key risks and to review the adequacy and effectiveness of the existing controls that mitigate risks in the established processes for the measurement and back allocation of crude oil production. In this context, the scope included, but was not limited to, the following:
Some of the analytics to be performed on the data were related to the following:
The analytics team, along with internal audit subject matter experts, jointly analyzed the OPCO production data for the last two years to test the effectiveness of the associated controls. All transactions in the system related to oil production had to be extracted and transformed into the required format for analytics, in order to identify control gaps and recommend improvements.
Around 10 million source transaction records from the past two years had to be extracted and processed to perform the analytics, which would have been an effort-intensive and time-consuming exercise had we not used Dataiku. Through data engineering and advanced analytics using Dataiku’s built-in clustering and predictive algorithms, critical insights into cost optimization and revenue enhancement potential were identified and presented to management. Customized statistical models were coded using Python recipes. Without Dataiku and the linked BI visualization tools, it would not have been possible to deliver the engagement with the achieved depth and quality.
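To illustrate how such transformations were implemented, a minimal sketch of a Dataiku Python recipe is shown below; the dataset and column names (oil_production_transactions_prepared, well_string_id, produced_volume, etc.) are hypothetical placeholders rather than the actual engagement artifacts.

```python
# Minimal sketch of a Dataiku Python recipe (dataset/column names are hypothetical)
import dataiku
import pandas as pd

# Read the prepared production transactions from an input dataset of the Flow
input_ds = dataiku.Dataset("oil_production_transactions_prepared")
df = input_ds.get_dataframe()

# Example transformation: normalize the measurement timestamp and
# aggregate produced volume per well string per day
df["measurement_date"] = pd.to_datetime(df["measurement_date"])
daily = (
    df.groupby(["well_string_id", df["measurement_date"].dt.date])["produced_volume"]
      .sum()
      .reset_index(name="daily_volume")
)

# Write the result to an output dataset consumed by downstream recipes and BI tools
output_ds = dataiku.Dataset("daily_production_by_string")
output_ds.write_with_schema(daily)
```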
More than 100 recipes, 100+ data transformations, and 10+ clustering, forecasting, and time series models were designed. Dataiku recipes and DSS steps are easily configurable, which allowed the team to quickly and effectively understand the patterns and anomalies in the data. By scheduling the workflows on the automation node, all tests were packaged and executed over 1-2 months to assess the accuracy of the models. The control and revenue optimization opportunities were proactively managed with business counterparts to provide assurance over the entire audit exercise.
Analytics 1:
Inactive well aging analysis was performed by sorting and inspecting the inactive wells according to their downtime. Complete visibility of inactive strings enabled the production function to make proactive efforts to restore each string in the shortest possible time, based on activation prioritization.
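As an illustration only, the core of this aging analysis can be expressed as a short pandas routine; the column names (well_string_id, last_production_date) and aging bands below are hypothetical, not taken from the actual engagement.

```python
import pandas as pd

# Hypothetical snapshot of inactive strings; column names and values are illustrative only.
wells = pd.DataFrame({
    "well_string_id": ["W-101", "W-205", "W-317"],
    "last_production_date": pd.to_datetime(["2020-11-02", "2021-03-15", "2020-06-20"]),
})

# Downtime = days since last production, measured at a reporting cut-off date
as_of = pd.Timestamp("2021-06-30")
wells["downtime_days"] = (as_of - wells["last_production_date"]).dt.days

# Bucket strings into aging bands and sort so the longest-inactive strings
# surface first for activation prioritization.
bands = [0, 90, 180, 365, float("inf")]
labels = ["<3 months", "3-6 months", "6-12 months", ">12 months"]
wells["aging_band"] = pd.cut(wells["downtime_days"], bins=bands, labels=labels)

print(wells.sort_values("downtime_days", ascending=False))
```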
Analytics 2:
A machine learning model-driven analysis was performed to predict the mean time between failures (MTBF) for inherent failures of strings. MTBF, calculated as the arithmetic mean (average) time between failures of a system over the past two years, was used to train the model and predict the probability of a string becoming inactive. The MTBF of strings should be monitored on a periodic basis to analyze string performance and to take appropriate action on issues impacting string availability.
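As a purely illustrative sketch, the MTBF feature behind such a model can be derived from a failure log as follows; the column names and sample dates are hypothetical.

```python
import pandas as pd

# Hypothetical failure log per well string; column names and dates are illustrative.
failures = pd.DataFrame({
    "well_string_id": ["W-101", "W-101", "W-101", "W-205", "W-205"],
    "failure_date": pd.to_datetime([
        "2019-05-01", "2019-11-20", "2020-08-02",
        "2019-07-10", "2020-10-01",
    ]),
})

# MTBF per string = arithmetic mean of the gaps between consecutive failures
failures = failures.sort_values(["well_string_id", "failure_date"])
gaps = failures.groupby("well_string_id")["failure_date"].diff().dt.days
mtbf = gaps.groupby(failures["well_string_id"]).mean().rename("mtbf_days")

print(mtbf)
```

The resulting MTBF values, together with other engineered features, can then be used to train a classifier that estimates the probability of a string becoming inactive.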
The benefits are multi-faceted; a few of them are:
Dataiku, with its wide range of features, contributed significantly to the project journey. Notable benefits are highlighted below:
Quick data preparation - Dataiku processed data transactions of 1 million rows with 75 data fields twice as fast while preparing and designing multiple analytics workflows. Visual recipes are a great feature, and non-coders can also easily understand and use the tool for any data processing activity.
GUI-based data wrangling - Dataiku's data wrangling capabilities helped in cleansing, validating, and enriching the data before processing. The AI-driven visual preview is a great feature that helps the team verify that the results of the processing steps applied to data fields are as intended.
Visual statistics and processing - Exploratory data analysis on categorical and numerical fields, including the distribution of values, top values, outliers, invalids, and overall statistics, was of great support during initial data analysis with Dataiku, and it helped reduce statistical analysis and coding time by ~450 man-hours.
AutoML of machine learning models - Dataiku's AutoML feature, combined with hyperparameter selection, feature engineering, and result interpretation using the various charts produced to evaluate dependencies, allowed us to identify the best-fit algorithm while building the inactive well prediction model. The auto-retrain feature and versioning of trained models are being used for continuous optimization of the models (an illustrative sketch of this kind of algorithm comparison follows this list).
End-to-end automation - Dataiku's ability to automate and schedule data pipelines with managed dependencies has enabled 16 business teams (each consisting of 20-25 people) to consume the results in the BI platform, quickly understand the exceptions, and take preventive measures. Around 15 scenarios were scheduled at different set frequencies for end-to-end processing of the data (a sketch of a scenario step follows this list).
Centralized data repository with shared access - With the ability to connect to multiple data sources, the team was able to quickly connect to and ingest data from five diverse source systems into the central repository.
Dataiku training - In coordination with Dataiku, we were able to train 14 auditors (one in each Operating Company) to use Dataiku to build data pipelines and make effective use of it during audit activities.
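Regarding the AutoML benefit mentioned above, the kind of algorithm and hyperparameter comparison that Dataiku automates can be approximated outside the platform with scikit-learn; the sketch below is a simplified stand-in using synthetic data, not the actual inactive well prediction model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for engineered well-string features (MTBF, downtime, etc.)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
}

# Grid-search each candidate and keep the best scorer, mirroring the algorithm
# and hyperparameter selection that AutoML performs automatically.
best_name, best_model, best_score = None, None, -1.0
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid, cv=5, scoring="roc_auc")
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_name, best_model, best_score = name, search.best_estimator_, search.best_score_

print(best_name, round(best_score, 3), "holdout accuracy:", round(best_model.score(X_test, y_test), 3))
```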
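For the end-to-end automation benefit, a custom Python step inside a Dataiku scenario can rebuild the datasets consumed by the BI platform. This is a minimal sketch with hypothetical dataset names; the actual run frequency is configured on the scenario's triggers in the automation node.

```python
# Minimal sketch of a custom Python step in a Dataiku scenario
# (dataset names are hypothetical placeholders)
from dataiku.scenario import Scenario

scenario = Scenario()

# Rebuild the outputs consumed by the BI platform, respecting Flow dependencies
scenario.build_dataset("daily_production_by_string")
scenario.build_dataset("inactive_well_aging")
```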