PHM Data Challenge
2013 PHM Society Conference Data Challenge (now closed)
The winners of this year’s Annual Data Challenge have been announced. The finalists were chosen based on the top five scores obtained according to the scoring procedure outlined here. The finalists were asked to submit a detailed paper for technical evaluation of their approach. The top two teams were selected as winners from the five finalists and have been invited to submit their papers to the International Journal of Prognostics and Health Management (IJPHM). The third, fourth, and fifth place teams were also invited to submit papers to IJPHM and/or to present their work at the conference. Presentations from the top teams are listed below:
- A Bayesian Approach for The Maintenance Action Recommendation 2013 PHM Data Challenge –
by Team: Athena
- Application of Event Based Decision Tree and Ensemble of Data Driven Methods for Maintenance Action Recommendation –
by Team: mud
- Data Analysis Methods, Challenges, and Lessons Learned, for Remote Monitoring Diagnosis and Maintenance Action Recommendation –
by Team: Predict_DS_YC_EL
Data Challenge Announcement
The PHM Data Challenge is a competition open to all potential conference attendees. This year the challenge is focused on maintenance action recommendation, a common problem in industrial remote monitoring and diagnostics. Participants will be scored on their ability to accurately recommend confirmed problem types and not make any recommendations for historical nuisance cases.
This is a fully open competition in which collaboration is encouraged. Teams may be composed of any combination of students, researchers, and industry professionals. The results will be evaluated by the Data Challenge committee and all teams will be ranked. The top-scoring teams will be invited to present at a special session of the conference, and the first and second place finishers will be recognized at the Conference Banquet event.
Questions may be asked and additional information can be found on the competition forum.
Collaboration is encouraged, and teams may be composed of one or more students and/or professionals. The teams judged to have the first and second best scores will be awarded prizes of $300 and $100 respectively, contingent upon:
- Having at least one member of the team attend the PHM 2013 Conference
- Presenting the analysis results and technique employed at a special session within the Conference program
- Submitting a peer-reviewed Conference paper. (Submission of the challenge special session papers is outside the regular paper submission process and follows its own modified schedule.)
The top entries will also be encouraged to submit a journal-quality paper to the International Journal of Prognostics and Health Management (IJPHM).
The organizers of the competition reserve the right to both modify these rules and disqualify any team for any practices it deems inconsistent with fair and open practices.
Teams may register by contacting the Competition organizers (firstname.lastname@example.org, email@example.com) with their name(s), a team alias under which the scores would be posted, affiliation(s) with address(es), and contact phone number (for verification).
PLEASE NOTE: In the spirit of fair competition, we allow only one account per team. Please do not register multiple times under different user names, under fictitious names, or using anonymous accounts. Competition organizers reserve the right to delete multiple entries from the same person (or team) and/or to disqualify those who are trying to “game” the system or using fictitious identities.
There are 4 data sets. Due to proprietary concerns we cannot provide a detailed description of the data and the domain. If you have any questions, please get in touch with the organizers (firstname.lastname@example.org, email@example.com).
Train – Case to Problem.csv (2.25 KB)
This file contains the different problems associated with each case. A case can either be created by an automated system or manually by an engineer. The problem is a number that specifies a particular maintenance action that should be implemented to correct the symptom/problem.
Train – Nuisance Cases.csv (80.43 KB)
This file contains a set of cases that were not instructive enough to be acted on. These cases should be examined to determine what features are not useful for classifying problems. For a bit of context, these cases were mostly created by automated systems and were presented to an engineer who determined that the symptom was not sufficient to notify the customer of the identified problem.
Train – Case to Events and Parameters.csv (960.13 MB)
This file contains all of the event codes and parameters associated with the cases in the previous files, and should be used to train/develop the recommender. The data was generated from an industrial piece of equipment. Whenever a specific condition is met on board, the control system generates a specific event code, which captures the type of condition that triggered it, and takes a snapshot of all of the parameters measured on board.
Test – Case to Events and Parameters.csv (653.75 MB)
This file contains all of the event codes and parameters for all of the test cases. It should be used to generate the submission file, described below, that will be used to evaluate your recommender.
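As an illustration only, the sketch below shows one way the files might be loaded and joined in Python. The use of pandas and the column names ("Case", "Problem") are assumptions; the actual headers are not documented here and should be checked against the files.

```python
# Minimal loading sketch (assumed headers "Case" and "Problem";
# adjust to the actual column names in the CSV files).
import pandas as pd

# Confirmed case-to-problem labels
case_to_problem = pd.read_csv("Train – Case to Problem.csv")

# Cases judged to be nuisances (no maintenance action warranted)
nuisance_cases = pd.read_csv("Train – Nuisance Cases.csv")

# Event codes and parameter snapshots; the file is large, so stream it in chunks
chunks = pd.read_csv("Train – Case to Events and Parameters.csv", chunksize=100_000)
events = pd.concat(chunks, ignore_index=True)

# Attach the confirmed problem label (if any) to each case's events;
# unlabeled cases that appear in the nuisance list carry no recommended action
train = events.merge(case_to_problem, on="Case", how="left")
```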
Submitting Results & Performance Evaluation
A submission will be composed of a CSV file whose name is the team alias. If the alias is “example”, then the filename will be “example.csv”. The CSV file should have two columns (a short sketch for writing such a file follows the list):
- Case – The test case ID read from the “Test – Case to Events and Parameters.csv” file.
- Problem – The inferred problem ID. If the recommender does not generate an output, the value MUST be “none”.
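As a minimal sketch (not part of the official rules), the snippet below writes a file in this format. Whether a header row is expected is not specified, so one is included here as an assumption, and the case and problem IDs in the usage comment are hypothetical.

```python
import csv

def write_submission(predictions, team_alias="example"):
    """Write the two-column submission file named after the team alias.

    `predictions` maps each test case ID to an inferred problem ID,
    or to None when the recommender produces no output for that case.
    """
    with open(f"{team_alias}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Case", "Problem"])  # header row: an assumption
        for case_id, problem_id in predictions.items():
            # The rules require the literal value "none" when there is no output.
            writer.writerow([case_id, "none" if problem_id is None else problem_id])

# Hypothetical usage:
# write_submission({101: 7, 102: None, 103: 3}, team_alias="example")
```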
Since the test set contains both confirmed and nuisance cases, each submission will be measured in terms of the number of useful outputs. In equation form:
Score = # Outputs - # Incorrect Outputs - # Nuisance Outputs
Please note that the numbers of confirmed-problem cases and nuisance cases in the test set are balanced. This means that if you do not attempt to squelch nuisance cases (i.e., output “none” for them), the best score you can attain is zero.
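The following Python sketch is one interpretation of this scoring rule, not the organizers’ evaluation code. It assumes the ground truth is available as a mapping from case ID to either a confirmed problem ID or a "nuisance" marker.

```python
def score(submission, truth):
    """Score a submission: outputs minus incorrect and nuisance outputs.

    `submission` maps case ID -> predicted problem ID or "none";
    `truth` maps case ID -> confirmed problem ID or "nuisance" (assumed format).
    """
    num_outputs = num_incorrect = num_nuisance = 0
    for case_id, predicted in submission.items():
        if predicted == "none":
            continue  # squelched case: contributes nothing either way
        num_outputs += 1
        actual = truth[case_id]
        if actual == "nuisance":
            num_nuisance += 1
        elif predicted != actual:
            num_incorrect += 1
    return num_outputs - num_incorrect - num_nuisance
```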
Leaderboard columns: Team ID | # Outputs | # Incorrect Outputs | # Nuisance Outputs | Score
Last updated: 15 August 2013
- Competition Closed: 14 August 2013
- Preliminary Winners Announced: 15 August 2013
- Winning Papers Due: 14 September 2013
- Winners Announced: 21 September 2013