Pre-conference workshops will be offered at AdCONIP 2022. Please see the list below.
- Workshop 1: Process Data Analytics and Network or Flowsheet Reconstruction
- Workshop 2: Making reinforcement learning a practical technology for industrial control
Workshop 1: Process Data Analytics and Network or Flowsheet Reconstruction
The following topics will be discussed in this workshop. Each topic will be accompanied by one or more industrial case studies to convey the practical value of learning, discovery and diagnosis from process data.
- Overview of the broad analytics area with emphasis on its use in the process industry. Basic definitions and introduction to supervised and unsupervised learning: simple regression, classification and clustering; Data visualization methods (in the temporal as well as the spectral domains).
- Multivariate methods for data analysis: Principal Component Analysis (PCA) / Singular Value Decomposition (SVD) and its variants for steady-state model identification and reconstruction of conservation networks.
- Alarm data analysis: Detection and removal of nuisance alarms; root-cause analysis of alarms and alarm floods.
- Causal discovery and network reconstruction: Causality concepts and definitions; Methods for detecting cause-effect links and reconstructing graphical / network models from data.
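As a small illustration of the multivariate methods above, the following sketch (a hypothetical example using NumPy and synthetic flow data, not workshop material) shows how PCA computed via the SVD can recover a conservation constraint from steady-state data:

```python
import numpy as np

# Hypothetical illustration: steady-state flow data for a splitter
# obeying the conservation law F1 = F2 + F3, observed with noise.
rng = np.random.default_rng(0)
F2 = rng.uniform(1.0, 2.0, 200)
F3 = rng.uniform(0.5, 1.5, 200)
F1 = F2 + F3
X = np.column_stack([F1, F2, F3]) + 0.01 * rng.standard_normal((200, 3))

# PCA via SVD on mean-centred data: near-zero singular values expose
# linear (conservation) constraints among the measured variables.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# The right singular vector of the smallest singular value estimates
# the constraint coefficients, proportional to [1, -1, -1] here.
a = Vt[-1] / Vt[-1, 0]
print(s)  # the last singular value is close to zero
print(a)  # approximately [1, -1, -1]
```

The number of near-zero singular values indicates how many independent conservation relations the data obey, which is the starting point for reconstructing a conservation network.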
We are currently at the cusp of the fourth industrial revolution (4IR), or Industry 4.0, which is poised to reshape all sectors of the economy and society with unprecedented depth and breadth. Emerging technologies, including complex organizations and systems, smart sensing, industrial robotics, industrial wireless communications, the industrial Internet-of-Things (IIoT), the Internet-of-Moving-Things (IoMT), the industrial cloud, industrial big data and cyber-physical systems (CPS), have become hotspots of research and innovation globally.
Process data analytic methods rely on the notion of sensor fusion, whereby data from many sensors and alarm tags are combined with process information, such as the physical connectivity of process units, to give a holistic picture of the health of an integrated plant. The fusion of information from such disparate sources of data is the key step in devising smart strategies for process data analytics.
In the context of applying analytics in the process industry, the objective of this workshop is to introduce participants to tools, techniques and a framework for the seamless integration of information from process and alarm databases, complemented with process connectivity information. The information discovered from such diverse and complex data sources can subsequently be used for process and performance monitoring, including alarm rationalization, root cause diagnosis of process faults, hazard and operability (Hazop) analysis, and safe and optimal process operation. Such multivariate process data analytics involves extracting information from routine process data, which is typically non-categorical (as in numerical process data from sensors), together with categorical (non-numerical, qualitative or binary) data from Alarm and Event (A&E) logs, combined with process connectivity or topology information that can be inferred from the data through causality analysis or obtained from piping and instrumentation diagrams of a process. The latter refers to the capture of material flow streams between process units as well as information flow-paths in the process due to control loops.
Highly interconnected process plants are now common, and the analysis of root causes of process abnormality, including predictive risk analysis, is non-trivial. The extraction of information from the fusion of process data, alarm and event data and process connectivity should form the backbone of a viable process data analytics strategy, and this will be the main focus of this workshop. Representing process behaviour using networks is visually appealing and easy to understand. Process flowsheets and first-principles knowledge have been used to represent the interconnectivity among different unit operations and in process simulation and optimization. Analogously, other forms of networks derived from measured data are useful in applications such as fault diagnosis, monitoring and control. Finally, for efficient and informative analytics, data analysis is ideally carried out in the temporal as well as the spectral domain, on a multitude of sensor signal time-trends rather than a single one, to detect process abnormality, ideally in a predictive mode.
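As a simple flavour of data-driven connectivity analysis, the sketch below (a hypothetical example using NumPy and synthetic signals; the workshop covers far more rigorous causality methods) infers the delay of a directional link between two sensor trends from lagged cross-correlation:

```python
import numpy as np

# Hypothetical illustration: sensor y follows sensor x with a
# 3-sample transport delay, suggesting a directional link x -> y.
rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = 0.8 * np.roll(x, 3) + 0.1 * rng.standard_normal(500)

def lagged_corr(a, b, lag):
    """Correlation between a[t] and b[t + lag] for lag >= 0."""
    if lag == 0:
        return np.corrcoef(a, b)[0, 1]
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

corrs = [lagged_corr(x, y, k) for k in range(10)]
best_lag = int(np.argmax(np.abs(corrs)))
print(best_lag)  # the correlation peaks at lag 3
```

A significant peak at a positive lag is (weak) evidence that x influences y with that delay; edges found this way across many tag pairs can be assembled into a network representation of the plant.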
The emphasis in this workshop will be on tools and techniques that help in understanding data and discovering information, leading to predictive monitoring, the reconstruction of network representations from data and the diagnosis of process faults.
Typical process data analytic methods require the execution of the following steps:
- Data quality assessment including outlier detection and noise filtering
- Data visualization and segmentation
- Process and performance monitoring including root cause detection of faults
- Alarm data analysis
- Data-based process topology discovery and validation
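The first of these steps can be sketched as follows (a minimal, hypothetical example using NumPy: robust outlier detection followed by moving-average noise filtering on a synthetic sensor trend):

```python
import numpy as np

# Hypothetical illustration of the data-quality step: flag gross
# outliers with a robust 3-sigma rule, then smooth the cleaned trend.
rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 500)
y = np.sin(t) + 0.1 * rng.standard_normal(500)
y[[50, 200, 350]] += 5.0  # inject three gross outliers

med = np.median(y)
mad = 1.4826 * np.median(np.abs(y - med))  # robust sigma estimate
outliers = np.abs(y - med) > 3 * mad
y_clean = y.copy()
y_clean[outliers] = med  # simple replacement; interpolation is also common

# Noise filtering: a 5-point moving average.
y_filt = np.convolve(y_clean, np.ones(5) / 5, mode="same")
print(int(outliers.sum()))  # the three injected outliers are flagged
```

The median/MAD rule is used here instead of the ordinary mean and standard deviation because gross outliers would otherwise inflate the threshold that is supposed to catch them.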
Desired prerequisites for attendees
Basic knowledge of statistics and linear algebra
The intended audience for this workshop includes industrial practitioners of control (including vendors working in the area of on-line data logging and archiving), graduate students with interests in statistical learning and data science, and academics.
Time: 4.5 hours (half day) on 7th August 2022.
- 8:00AM Registration and introduction of speakers and participants
- 8:10AM Introduction to process data analytics (SLS)
- 9:15AM Coffee break
- 9:30AM Alarm data analytics (SLS)
- 10:00AM Reconstructing conservation networks from data (SN)
- 11:15AM Causal discovery and network reconstruction from data (AKT)
- 12:30PM Questions + General discussion
Sirish Shah (UAlberta)
Sirish L. Shah is Emeritus Professor at the University of Alberta, where he held the NSERC-Matrikon-Suncor-iCORE Senior Industrial Research Chair in Computer Process Control from 2000 to 2012. The main areas of his current research are process and performance monitoring, system identification, and the design, analysis and rationalization of alarm systems. He has co-authored three books: “Performance Assessment of Control Loops: Theory and Applications”, “Diagnosis of Process Nonlinearities and Valve Stiction: Data Driven Approaches”, and a more recent brief monograph titled “Capturing Connectivity and Causality in Complex Industrial Processes”.
Shankar Narasimhan (IIT Madras)
Shankar Narasimhan is the M.S. Ananth Institute Chair Professor in the Department of Chemical Engineering at IIT Madras. He obtained his Bachelor’s degree from IIT Madras in 1982 and his PhD degree from Northwestern University, USA, in 1987. His major research interests are in the areas of Data Mining, Process Design and Optimization, and Fault Detection and Diagnosis (FDD). He is the co-author of several important papers and a book on Data Reconciliation and Gross Error Detection. He has held visiting positions at the Centre for Automatic Control in Nancy, France; Purdue University, Clarkson University and Texas Tech University in the USA; and the University of Alberta in Canada. He has also spent summer internships at Engineers India Ltd., R&D Centre in Gurgaon, Honeywell Technology Solutions Ltd., R&D Centre at Bangalore, and ABB Global Services Ltd., Bangalore, as part of high-level industry-academia interactions. He co-founded Gyan Data Pvt. Ltd. in 2011, which specializes in using data analytics for manufacturing excellence, and GITAA Pvt. Ltd. in 2018, which offers training in advanced data analytics, machine learning and artificial intelligence. He is a Fellow of the Indian National Academy of Engineering.
Arun K. Tangirala (IIT Madras)
Arun K. Tangirala holds a Bachelor’s degree in Chemical Engineering and a Doctoral degree in Process Control from the University of Alberta. He is a Professor at the Department of Chemical Engineering, IIT Madras. His research is concerned with multi-disciplinary problems of causality analysis, network reconstruction, control loop performance monitoring, multiscale identification, sparse optimization (compressive sensing)-based identification, systems biology and modern applications of data science. He is a recipient of several prestigious teaching and research awards and international fellowships. In addition, he has held visiting appointments at the University of Delaware, the Technical University of Munich and Tsinghua University. He was awarded the Young Faculty Recognition Award in 2010 and the 2014 Institute Research and Development Award by IIT Madras. He is the author of a comprehensive classroom text, “Principles of System Identification: Theory and Practice”. He is currently the Editor-in-Chief of the Journal of The Institution of Engineers (India): Series E (Chemical and Textile Engineering), an Associate Editor of the ASME Journal of Dynamic Systems, Measurement, and Control, and an Associate Editor of Control Engineering Practice. He is also an active member of ASME, IEEE, AIChE and CSChE, and is a faculty associate of the Robert Bosch Centre for Data Science and Artificial Intelligence at IIT Madras.
Workshop 2: Making reinforcement learning a practical technology for industrial control
Reinforcement learning (RL) is an emerging technology in process systems engineering (PSE) [1,2]. The objective in RL is to generate an optimal “policy” in a stochastic environment [3]. This general formulation makes RL appealing for both control and operational decision-making tasks, notably without a system model. Despite the enthusiasm surrounding RL, there are also reasons to be skeptical of its viability. For example, RL does not have strong stability or constraint satisfaction guarantees, and it is notoriously data-hungry. Recent work at the intersection of RL and PSE strives to mitigate these issues and ultimately make RL more reliable, scalable, and interpretable [4–7]. This workshop aims to engage academics and industrial practitioners in both the machine learning and controls communities in a lively discussion on the challenges and opportunities surrounding real-world RL.
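To make the notion of a learned policy concrete, here is a minimal tabular Q-learning sketch on a toy 5-state problem (a hypothetical illustration, not any of the methods presented in the workshop):

```python
import numpy as np

# Hypothetical illustration: tabular Q-learning on a toy 5-state
# chain. Action 1 moves right towards the rewarding state 4; action
# 0 moves left. Only sampled transitions are used -- no system model.
n_states, n_actions = 5, 2
alpha, gamma = 0.1, 0.95
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)  # reward 1 on reaching state 4

for episode in range(500):
    s = 0
    for _ in range(20):
        a = int(rng.integers(n_actions))  # purely random exploration
        s2, r = step(s, a)
        # Q-learning update: bootstrap on the greedy value of s2.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

policy = Q.argmax(axis=1)  # greedy policy: move right in states 0-3
print(policy)
```

The model-free character highlighted above is visible here: the agent never uses `step`'s internals, only the sampled `(s, a, r, s2)` transitions. The workshop's sceptical points also show up in miniature: nothing in the update enforces stability or constraints, and even this tiny problem needs hundreds of episodes.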
By the end of this workshop, the attendees will:
- Learn the foundations of reinforcement learning and its relation to control-theoretic concepts.
- Understand how reinforcement learning can address the needs of industrial practitioners.
- Obtain a solid understanding of current challenges and opportunities in reinforcement learning research for process systems engineering applications.
The following topics will be discussed in this workshop.
1. General introduction
   - Foundations of reinforcement learning
   - Relationship to more familiar control-theoretic concepts
   - Prior art in industry
2. Industry needs and deployment challenges
   - Discuss the needs in industry and the potential impact of reinforcement learning
   - Discuss challenges of deploying reinforcement learning algorithms in the process industries
3. State of the art in deep reinforcement learning for process systems engineering
   - Series of individual presentations touching on the following themes:
     - Stability and constraints in reinforcement learning
     - Sample-efficient and robust learning techniques
     - Reinforcement learning with partial knowledge of the system
     - Controller architectures and system integration
     - Other topics generally geared towards the challenges and opportunities in RL for process systems engineering
4. Conclusion & Panel Discussion
- Items (1) and (2) will consist of two presentations, each roughly 45 min–1 hr long.
- Item (3) will consist of five presentations, each lasting roughly 30–45 min.
- Due to the hybrid format and some speakers presenting remotely, we have not yet assigned time slots. This will be a daylong workshop with breaks for coffee and lunch.
The expected audience is researchers, graduate students, and industrial practitioners primarily with a controls background who are interested in practical aspects of deploying reinforcement learning techniques.
- Nathan Lawrence, University of British Columbia, Canada (firstname.lastname@example.org)
- Philip Loewen, University of British Columbia, Canada (email@example.com)
Philip Loewen, University of British Columbia, Canada (firstname.lastname@example.org)
Philip D. Loewen received the Ph.D. degree in mathematics from The University of British Columbia (UBC), Vancouver, BC, Canada. He was involved in post-doctoral research with the Centre de Recherches Mathématiques, Montreal, QC, Canada, and the Electrical Engineering Department, Imperial College London, London, U.K. He returned to UBC in 1987, where he currently serves as a Professor of Mathematics. His current research interests include optimal control, optimization, convex and nonsmooth analysis, and engineering applications.
Thomas Badgwell, Collaborative Systems Integration, USA (email@example.com)
Thomas A. (Tom) Badgwell is the Chief Technology Officer at Collaborative Systems Integration, an Austin-based startup providing systems integration services and software products for Open Process Automation (O-PAS) based systems. He earned a BS degree from Rice University and MS and PhD degrees from the University of Texas at Austin, all in Chemical Engineering, and he is registered as a Professional Engineer in Texas. Tom’s career has focused on modeling, optimization, and control of chemical processes, with past positions at Setpoint, Fisher/Rosemount, Rice University, Aspen Technology, and ExxonMobil. He is a Fellow of the American Institute of Chemical Engineers (AIChE) and a past Director of the Computing and Systems Technology (CAST) Division, from which he received the Computing Practice Award in 2013. He has served as an Associate Editor for the Journal of Process Control and as a Trustee of the Computer Aids in Chemical Engineering (CACHE) Corporation.
Jay Lee, Korea Advanced Institute of Science and Technology, Korea, South Korea (firstname.lastname@example.org)
Jay H. Lee obtained his B.S. degree in Chemical Engineering from the University of Washington, Seattle, in 1986, and his Ph.D. degree in Chemical Engineering from the California Institute of Technology, Pasadena, in 1991. From 1991 to 1998, he was with the Department of Chemical Engineering at Auburn University, AL, as an Assistant Professor and an Associate Professor. From 1998 to 2000, he was with the School of Chemical Engineering at Purdue University, West Lafayette, and then with the School of Chemical Engineering at the Georgia Institute of Technology, Atlanta, from 2000 to 2010. Since 2010, he has been with the Chemical and Biomolecular Engineering Department at the Korea Advanced Institute of Science and Technology (KAIST), where he was the department head from 2010 to 2015. He is currently a Professor, Associate Vice President of the International Office, and Director of the Saudi Aramco-KAIST CO2 Management Center at KAIST. He has published over 180 manuscripts in SCI journals, with more than 13000 Google Scholar citations. His research interests are in the areas of system identification, state estimation, robust control, model predictive control, and reinforcement learning, with applications to energy systems, biorefineries, and CO2 capture/conversion systems.
Biao Huang, University of Alberta, Canada (email@example.com)
Biao Huang received his Ph.D. degree in Process Control from the University of Alberta, Canada, in 1997. He holds an MSc degree (1986) and a BSc degree (1983) in Automatic Control from the Beijing University of Aeronautics and Astronautics. He is currently a Professor with the University of Alberta, an IEEE Fellow, and a Fellow of the Canadian Academy of Engineering. His research interests include Process Control, Process Data Analytics and Machine Learning. He is the Editor-in-Chief of the IFAC journal Control Engineering Practice, a Subject Editor for the Journal of the Franklin Institute, and an Associate Editor for the Journal of Process Control.
Panagiotis Petsagkourakis, Illumina, England (firstname.lastname@example.org)
Panos received his chemical engineering degree (silver medal award, summa cum laude) from the National Technical University of Athens (Greece) in 2015. He then joined the School of Chemical Engineering and Analytical Science at the University of Manchester in 2015 to pursue his PhD degree. In February 2019, he joined University College London as a Research Fellow on the EPSRC project on cognitive chemical manufacturing. He also joined Imperial College London as a visiting researcher. Panos joined the L&SE Young Members Forum in 2019 as a university representative.
Ehecatl Antonio del Rio Chanona, Imperial College London, England (email@example.com)
Antonio del Rio Chanona is head of the Optimisation and Machine Learning for Process Systems Engineering group at the Department of Chemical Engineering, Imperial College London. Antonio received his MEng from UNAM in Mexico, and his PhD from the University of Cambridge where he was awarded the Danckwerts-Pergamon Prize for the best doctoral thesis of his year. He received the EPSRC fellowship to adopt automation and intelligent technologies into bioprocess scaleup and industrialization and has received awards from the International Federation of Automatic Control (IFAC), and the Institution of Chemical Engineers (IChemE) in recognition for research in areas of process systems engineering, industrialisation of bioprocesses, and adoption of intelligent and autonomous learning algorithms to chemical engineering. Antonio’s main research interests include Reinforcement Learning, Data-Driven Optimization, Control and Hybrid Modelling.
Mario Zanon, IMT School for Advanced Studies Lucca, Italy (firstname.lastname@example.org)
Mario Zanon received his B.Sc. in Industrial Engineering from the University of Trento in 2008 and, in 2010, his M.Sc. in Mechatronics and in General Engineering from the University of Trento and the École Centrale Paris, respectively, in the context of a dual degree agreement. He obtained his Ph.D. in Electrical Engineering from KU Leuven in 2015 under the supervision of Prof. Moritz Diehl. From November 2015 until December 2017, he was a postdoctoral researcher at Chalmers University of Technology under the supervision of Prof. Paolo Falcone. From January 2018 until November 2021 he was an Assistant Professor at the IMT School for Advanced Studies Lucca, where he became an Associate Professor in December 2021.
Sebastien Gros, Norwegian University of Science and Technology, Norway (email@example.com)
Sebastien Gros received his Ph.D. degree from EPFL, Switzerland, in 2007. After a journey by bicycle from Switzerland to the Everest base camp in full autonomy, he joined an R&D group hosted at Strathclyde University focusing on wind turbine control. In 2011, he joined KU Leuven, where his main research focus was on optimal control and fast NMPC for complex mechanical systems. He joined the Department of Signals and Systems at Chalmers University of Technology, Göteborg, in 2013, where he became an Associate Professor in 2017. He is now a Full Professor and Head of the Department of Engineering Cybernetics at NTNU, Norway, and an affiliate Professor at Chalmers. His main research interests include numerical methods, real-time optimal control, reinforcement learning, stochastic optimal control, Markov decision processes, and the optimal control of energy-related applications. He is currently focusing on the optimization of smart houses with ambitious and unique experiments.
- Panagiotis Petsagkourakis & Ehecatl Antonio del Rio Chanona will give joint presentations.
- Mario Zanon & Sebastien Gros will give joint presentations.
[1] Rui Nian, Jinfeng Liu, and Biao Huang. A review on reinforcement learning: Introduction and applications in industrial process control. Computers & Chemical Engineering, page 106886, 2020.
[2] Joohyun Shin, Thomas A. Badgwell, Kuang-Hung Liu, and Jay H. Lee. Reinforcement learning – Overview of recent progress and implications for process control. Computers & Chemical Engineering, 127:282–294, 2019. doi: 10.1016/j.compchemeng.2019.05.029.
[3] Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. MIT Press, 2018.
[4] Panagiotis Petsagkourakis, Ilya Orson Sandoval, Eric Bradford, Dongda Zhang, and Ehecatl Antonio del Rio-Chanona. Reinforcement learning for batch bioprocess optimization. Computers & Chemical Engineering, 133:106649, 2020.
[5] Mario Zanon and Sébastien Gros. Safe reinforcement learning using robust MPC. IEEE Transactions on Automatic Control, 66(8):3638–3652, 2020.
[6] Haeun Yoo, Boeun Kim, Jong Woo Kim, and Jay H. Lee. Reinforcement learning based optimal control of batch processes using Monte-Carlo deep deterministic policy gradient with phase segmentation. Computers & Chemical Engineering, 144:107133, 2021. doi: 10.1016/j.compchemeng.2020.107133.
[7] Nathan P. Lawrence, Michael G. Forbes, Philip D. Loewen, Daniel G. McClement, Johan U. Backstrom, and R. Bhushan Gopaluni. Deep reinforcement learning with shallow controllers: An experimental application to PID tuning. Control Engineering Practice, 121:105046, 2022.