Nations all over the world depend on oil and gas to fuel cars, generate electricity and serve other domestic purposes. Oil exploration and exploitation, with the associated spillage, have been increasing at alarming rates [1, 2]. During oil exploration, production, storage and transport activities, crude oil and its products spill onto land and into waterways. Oil spillage data are large, noisy and complex, and carry some level of uncertainty. Statistical approaches, although they offer precise methods for quantifying the uncertainty inherent in a particular sample or an overall population, lack the ability to handle large, complex and noisy datasets and perform only limited search when extracting patterns from databases. Conventional database query methods likewise produce results too limited and unreliable for effective decision-making.
Information Technology (IT) and advanced modelling tools and techniques continue to help society limit and manage disasters. The effectiveness of an oil spill response system and the robustness of a recovery plan are highly contingent on an IT infrastructure that enhances information management, cooperation and coordination during disasters [3, 4]. Computer systems, Decision Support Systems, Knowledge Discovery (KD) and Data Mining (DM) are core tools that can assist in many aspects of oil spillage risk management. Where traditional analysis techniques fail to uncover hidden patterns in large and diverse datasets, knowledge discovery techniques succeed [5, 6]. KD and DM aim at extracting useful information and patterns from huge amounts of data for prediction and modelling.
Neural Network (NN) and DM tools offer ideal solutions to a variety of classification tasks such as speech, character and signal recognition, as well as risk assessment and treatment. Although gradient-based search techniques such as back-propagation are currently the most widely used optimization techniques for training NN, these techniques have been shown to be severely limited in their ability to find global solutions in a feasible computational time. Genetic Algorithm (GA) is a heuristic method for finding approximate solutions to complicated problems by applying the principles of evolutionary biology. A major strength of GA is that bad proposals or noisy data do not affect the end solution negatively, as they are simply discarded. Fuzzy Logic (FL) is a superset of Boolean Logic (BL) that handles the concept of partial truth [10, 11].
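The GA cycle described above (evolve a population, discard bad proposals, recombine and mutate the rest) can be sketched in a few lines. The objective function, population size and mutation settings below are illustrative choices for a toy problem, not values from the paper.

```python
import random

# Toy GA: find x that maximizes f(x) = -(x - 3)^2, so the optimum is x = 3.
def fitness(x):
    return -(x - 3.0) ** 2

def evolve(pop_size=30, generations=60, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half; bad proposals are simply discarded.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0               # crossover: blend two parents
            if rng.random() < mutation_rate:
                child += rng.gauss(0, 0.5)      # mutation: small perturbation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Because selection is elitist (the best individuals always survive), noisy or poor candidates cannot drag the final solution away from the optimum; they are filtered out each generation.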
While FL performs inference under cognitive uncertainty, NN use learning, adaptation, fault tolerance, parallelism and generalization to process data. Hence, to enable systems to deal with cognitive uncertainty in a human-like manner, one may incorporate the concepts of FL into NN. Human operators can enhance NN by encoding their knowledge as fuzzy membership functions, which are then fine-tuned by a learning process. GA is a powerful tool for optimizing the structure and weights of NN. It is therefore useful to fuse NN, FL and GA so that the demerits of one technique are offset by the merits of the others.
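To illustrate how GA can optimize NN weights without gradients, the sketch below evolves the weight vector of a tiny 2-2-1 network on the XOR task, a problem where simple gradient-free search is easy to demonstrate. The architecture, GA settings and task are assumptions made for this demo, not the system described in the paper.

```python
import math
import random

# XOR training set: inputs and target outputs.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 bias

def forward(w, x):
    # 2 tanh hidden units feeding one sigmoid output unit.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1.0 / (1.0 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve_weights(pop=60, gens=200, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-2, 2) for _ in range(N_WEIGHTS)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=loss)                 # fitness = low squared error
        parents = population[: pop // 4]          # elitist selection
        offspring = []
        while len(parents) + len(offspring) < pop:
            a, b = rng.sample(parents, 2)
            child = [ai if rng.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]     # uniform crossover
            child[rng.randrange(N_WEIGHTS)] += rng.gauss(0, 0.4)  # mutate
            offspring.append(child)
        population = parents + offspring
    return population[0]

best = evolve_weights()
```

Because the GA only evaluates the loss, never its gradient, it is not trapped by the flat or deceptive regions that limit back-propagation; the same scheme extends to evolving network structure alongside the weights.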
(Authors: Oluwole Charles Akinyokun, Udoinyang Godwin Inyang)