The work presents a new charting technique for complete process analysis from both the stability and the capability points of view. Stability evaluation is performed using the true control region on a two-dimensional graph of the measures of the central tendency and spread of the process. The region boundary represents the locus of points with the same probability density function value and is characterized by a one-dimensional statistic that depends on the standardized sample average and standard deviation. The chronological record of this statistic can be used for testing the statistical behavior of the process, a procedure characterized by reduced adjustment errors compared with conventional Shewhart charts. Capability assessment is based on time-sequence analysis of the quality loss point estimator, whose value depends on the sample variance and the squared distance from the x-chart centerline to the sample average. The proposed approach makes it possible to apply the charting technique for simultaneous analysis of both the statistical and the quality behavior of the process and thereby optimize process control.
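As a rough illustration of the capability part only, the sketch below computes a Taguchi-style quality loss point estimate of the assumed form k·(s² + (x̄ − CL)²), i.e. the sample variance plus the squared distance from the x-chart centerline to the sample average, and tracks it over a chronological sequence of samples. The constant k, the centerline, and the data are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np

def quality_loss_estimate(sample, centerline, k=1.0):
    """Per-sample quality loss point estimate (assumed Taguchi-style form):
    k * (s^2 + (x_bar - centerline)^2)."""
    x = np.asarray(sample, dtype=float)
    x_bar = x.mean()
    s2 = x.var(ddof=1)                      # sample variance
    return k * (s2 + (x_bar - centerline) ** 2)

# Illustrative use: track the estimator over a chronological sequence of samples.
rng = np.random.default_rng(0)
samples = [rng.normal(loc=10.0, scale=0.5, size=5) for _ in range(20)]
losses = [quality_loss_estimate(s, centerline=10.0) for s in samples]
print(np.round(losses, 3))
```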
Safety, Security, and Reliability of complex systems are three interacting and highly important risk-related factors. In many failure events, the Security function takes charge and manages the failure event and its resolution. But does the Security function consistently apply the optimal failure-resolution methods? We propose that several organizational functions, including Information Security (IS), should analyze, manage, and resolve each failure in a coordinated effort, based on failure classification and prioritization, and then apply the appropriate Corrective Actions (CA). Such coordination may result in applying a CA that is sub-optimal by Security standards, yet optimal from the organization's perspective. In this paper we present an innovative composite methodology for identifying, prioritizing, and selecting failures and incidents for appropriate treatment. The methodology is based on organizational priorities and knowledge, and considers the results of the analyses of End Effects (EE), solutions, and CAs.
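A minimal sketch of the prioritization step, assuming a simple weighted scoring of failures; the attribute names, scales, and weights below are hypothetical placeholders for the organizational priorities and end-effect analyses on which the actual methodology relies.

```python
from dataclasses import dataclass

@dataclass
class Failure:
    name: str
    severity: int         # end-effect severity (illustrative 1-10 scale)
    likelihood: int       # occurrence likelihood (1-10)
    security_impact: int  # impact on the IS function (1-10)

# Hypothetical organizational weights; the paper derives priorities from
# organizational knowledge and end-effect analysis, not from these numbers.
WEIGHTS = {"severity": 0.5, "likelihood": 0.3, "security_impact": 0.2}

def priority_score(f: Failure) -> float:
    return (WEIGHTS["severity"] * f.severity
            + WEIGHTS["likelihood"] * f.likelihood
            + WEIGHTS["security_impact"] * f.security_impact)

failures = [
    Failure("database outage", 8, 4, 6),
    Failure("phishing incident", 6, 7, 9),
]
for f in sorted(failures, key=priority_score, reverse=True):
    print(f"{f.name}: {priority_score(f):.2f}")
```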
Integrated Failure Mode and Effects Analysis (IFMEA) is an interdisciplinary methodology for product and process improvement. The methodology employs the fundamentals of artificial intelligence and knowledge mining and acquisition to develop a comprehensive decision-making environment. The benefits of IFMEA include the identification of controls and the elimination of potential failures.
Prognostic systems are expected to provide predictive information about the Remaining Useful Life (RUL) of equipment and components. During the last ten years, numerous RUL prediction models have been developed. These methods usually treat completed time-series only, i.e. full statistics obtained before the item fails. Under actual operating conditions, however, the number of failed items is sometimes too small, so the use of uncompleted (suspended) time-series becomes necessary and Semi-Supervised rather than Supervised methods are required. In this paper, we propose an approach based on the regression and classification models we introduced in the past [1, 2]. These models take monitoring data (time-series) as inputs and produce an RUL estimate as output. The significant difference of the present model is the use of suspended time-series: an optimal RUL is estimated for each suspended time-series so that it can be used for initial model training. This article describes the procedures that have been developed and applied successfully for the use of suspended time-series. Several models based on modifications of the SVR and SVC methods (Support Vector Regression and Support Vector Classification) are proposed for consideration. The number of uncompleted time-series used for training and cross-validation is proposed as an additional control parameter. The suggested methodology and algorithms were verified on the NASA Aircraft Engine database (http://ti.arc.nasa.gov/tech/dash/pcoe/prognostic-data-repository/). Numerical examples based on this database are also considered. Experimental results show that the proposed model produces significantly better estimates than a purely supervised-learning-based model.
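A minimal self-training sketch of the general idea, assuming scikit-learn's SVR and a toy feature set: a supervised model is fitted on completed series, the suspended series are then assigned their estimated RUL, and the model is retrained on the union. The feature definitions, data, and hyperparameters are illustrative and do not reproduce the models of [1, 2].

```python
import numpy as np
from sklearn.svm import SVR

def features(ts):
    """Simple summary features of a monitoring time-series window
    (illustrative; the referenced models define their own feature set)."""
    ts = np.asarray(ts, dtype=float)
    slope = np.polyfit(np.arange(len(ts)), ts, 1)[0]
    return np.array([ts.mean(), ts.std(), ts[-1], slope])

# Completed (run-to-failure) series: features X_c with known RUL labels y_c.
rng = np.random.default_rng(1)
completed = [rng.normal(size=50) + np.linspace(0, i, 50) for i in range(1, 6)]
X_c = np.array([features(ts) for ts in completed])
y_c = np.array([100.0 - 10 * i for i in range(1, 6)])   # toy RUL labels

# Suspended (censored) series: no observed RUL label.
suspended = [rng.normal(size=50) + np.linspace(0, 2.5, 50)]
X_s = np.array([features(ts) for ts in suspended])

# Step 1: supervised model on completed series only.
model = SVR(kernel="rbf", C=10.0).fit(X_c, y_c)

# Step 2 (self-training flavour of the semi-supervised idea): assign the
# suspended series their estimated RUL and retrain on the union.
y_s_hat = model.predict(X_s)
model_semi = SVR(kernel="rbf", C=10.0).fit(
    np.vstack([X_c, X_s]), np.concatenate([y_c, y_s_hat]))
print(model_semi.predict(X_s))
```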
This paper presents an approach to, and the solution of, the questions raised in the IEEE PHM 2012 Conference Challenge Competition. The given (known) data consisted of real run-to-failure data of only 6 bearings from three groups exposed to different operating conditions. These data had to be used to estimate the Remaining Useful Life (RUL) of a given set of 11 test bearings. The main feature of the presented data is a significant loss of trendability (i.e. "non-trendability") in the behavior of the defined significant parameters (horizontal and vertical vibration), which precludes the use of well-known supervised-learning RUL prediction models. New models were developed and used; in addition, the Cross-Entropy method was used for control-parameter optimization based on a cross-validation procedure. The presented solution was recognized as a "Winner from Industry" in the above-mentioned competition. The achieved results demonstrate the effectiveness of the approach for RUL estimation for system parameters with non-trendable behavior.
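A minimal sketch of Cross-Entropy optimization of a single scalar control parameter, with a toy objective standing in for the cross-validated RUL error used in the paper; all names, sample sizes, and settings are illustrative.

```python
import numpy as np

def cross_entropy_minimize(loss, mu0, sigma0, n_samples=50, n_elite=10,
                           n_iter=30, seed=0):
    """Generic Cross-Entropy minimization of a scalar control parameter.
    `loss` would be the cross-validation error of the RUL model; here it
    is an arbitrary black-box function."""
    rng = np.random.default_rng(seed)
    mu, sigma = mu0, sigma0
    for _ in range(n_iter):
        candidates = rng.normal(mu, sigma, size=n_samples)
        # Keep the elite fraction with the smallest loss and refit the sampler.
        elite = candidates[np.argsort([loss(c) for c in candidates])[:n_elite]]
        mu, sigma = elite.mean(), elite.std() + 1e-12
    return mu

# Toy objective standing in for a cross-validated RUL error surface.
best = cross_entropy_minimize(lambda c: (c - 3.7) ** 2, mu0=0.0, sigma0=5.0)
print(round(best, 3))
```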
The main objective of FMEA (Failure Mode and Effects Analysis) is the identification of the ways in which a product, process, or service may fail to meet critical customer requirements, as well as the ranking and prioritization of the relative risks associated with the specified failures. The effectiveness of prioritization can be significantly improved by using a simple graphical tool, as described by the authors. Evaluation of the adequacy of corrective actions proposed to improve a product/process/service, and the prioritization of these actions, can be supported by implementing the procedure proposed here, which is based on the evaluation of corrective-action feasibility. The procedure supports evaluation of both the feasibility of implementing a corrective action and the impact of the action taken on the failure mode.
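For orientation, the sketch below shows only the conventional Risk Priority Number (RPN) ranking that the proposed graphical tool and feasibility-based procedure build upon; the failure modes and ratings are invented examples.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10
    occurrence: int  # 1-10
    detection: int   # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        # Conventional FMEA Risk Priority Number
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("seal leakage", 7, 5, 4),
    FailureMode("connector corrosion", 5, 6, 7),
    FailureMode("firmware lock-up", 9, 2, 6),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```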
The reliability growth process, applied to a complex system under development, involves surfacing failure modes, analyzing the modes and their causes, and implementing corrective actions (fixes) to the detected failures. In this manner the system reliability grows and its configuration matures with respect to reliability. The conventional reliability-growth procedure involves evaluating the two principal parameters of the Non-Homogeneous Poisson Process (NHPP), which relate to the failure rate only. In addition to the reliability aspect, the availability factor, and consequently Availability growth (not only Reliability growth), is extremely important for many systems. Yet because the standard NHPP does not take the repair-rate parameters into account, practitioners have long awaited an expanded procedure for tracking Availability growth. This paper suggests a model and a numerical method to evaluate these parameters, thereby establishing an Inherent Availability Growth model, i.e. one that considers only the corrective maintenance time due to failures. The model can be further generalized to Operational and Achieved Availability by taking into account preventive maintenance, administrative, and logistics times as appropriate.
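A minimal sketch, assuming the conventional time-terminated Crow-AMSAA maximum-likelihood estimates of the NHPP parameters together with the usual inherent-availability formula A_i = MTBF / (MTBF + MTTR); the failure times and MTTR value are illustrative, and the paper's extended availability-growth model is not reproduced here.

```python
import numpy as np

def crow_amsaa_mle(failure_times, T):
    """MLE of the conventional NHPP (Crow-AMSAA) reliability-growth
    parameters for failure times observed on [0, T] (time-terminated test)."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))
    lam = n / T ** beta
    return lam, beta

def inherent_availability(mtbf, mttr):
    """Inherent availability: corrective maintenance time only."""
    return mtbf / (mtbf + mttr)

failure_times = [55, 160, 310, 480, 690, 940]   # illustrative data
T = 1000.0
lam, beta = crow_amsaa_mle(failure_times, T)
rho_T = lam * beta * T ** (beta - 1)            # instantaneous failure intensity at T
mtbf_T = 1.0 / rho_T
print(f"beta={beta:.3f}, instantaneous MTBF={mtbf_T:.1f}, "
      f"A_i={inherent_availability(mtbf_T, mttr=4.0):.4f}")
```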
This is a co-publication by the University of Minho, the University of Coimbra, and the National Observatory of Human Resources from Portugal, which aims to assess, analyse, and rank the performance of different countries in terms of Quality.
The main question in software reliability analysis is: "Is the software ready for release?" This analysis may be performed at the expected end of testing and at intermediate points during testing. If the answer to this question is negative, it is useful to predict the expected testing completion time needed to satisfy the required value of the residual "bug amount" after testing finishes, i.e. in the field. So the second question may be: "When will the software be ready for release?"
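As one possible illustration (the abstract does not name a specific model), the sketch below fits a Goel-Okumoto NHPP software-reliability growth curve to cumulative bug counts and projects both the current residual bug amount and the week at which a target residual would be reached; the data and target are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of bugs found by time t."""
    return a * (1.0 - np.exp(-b * t))

# Illustrative weekly cumulative bug counts from a test campaign.
weeks = np.arange(1, 11, dtype=float)
cum_bugs = np.array([12, 22, 30, 37, 42, 46, 49, 51, 53, 54], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_bugs, p0=(60.0, 0.2))

residual_now = a_hat - goel_okumoto(weeks[-1], a_hat, b_hat)
target_residual = 2.0                       # acceptable residual bugs in the field
t_release = np.log(a_hat / target_residual) / b_hat
print(f"estimated total bugs a={a_hat:.1f}, residual now={residual_now:.1f}, "
      f"release-ready around week {t_release:.1f}")
```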