Background: In orthopaedics, fracture classification is the process by which related groups of fractures are organized on the basis of their similarities and differences. A good fracture classification should provide a reliable and reproducible means of communication: different observers (reliability), or the same observer on repeated viewings (reproducibility), presented with the same material (for example, a radiograph) must agree on the classification a high percentage of the time.1 Although the reliability and reproducibility of the AO/OTA classification system are said to be high, studies of distal radius and distal tibia fractures have demonstrated that its observer reliability falls off significantly. Reliable classification of fractures is important for treatment allocation and for comparison across studies. The AO/OTA comprehensive classification was developed with clear definitions of its types, groups, and subgroups, with the goal of classifying fractures in an internationally uniform and consistent fashion to allow standardization of research and communication among colleagues. This study was therefore designed to address one problem of the AO/OTA classification system: it has not been subjected to substantial reliability and reproducibility testing, not only globally but also in our sub-region.

Objective: To evaluate the level of intra-observer reproducibility and inter-observer reliability of the AO/OTA 2018 classification system for diaphyseal tibia fractures in adults among surgeons with different levels of experience at the National Orthopaedic Hospital Enugu.

Materials and methods: A total of 80 radiographs of consecutive patients with fresh tibia fractures were selected and classified by three observers with different levels of experience. All three observers independently reviewed and classified the images according to the AO/OTA 2018 system. To determine intra-observer agreement, the observers reviewed the same set of radiographs after an interval of 4 weeks. The inter- and intra-observer agreements were determined through Cohen's kappa coefficient analysis.

Results: For the intra-observer reproducibility of each observer, the study showed perfect agreement (k = 1.000) for fracture localization, but moderate to substantial agreement for fracture morphology (k = 0.41-0.80). For inter-observer reliability, the study likewise showed perfect agreement for fracture localization among all raters (k = 1.000) and mostly substantial agreement for fracture morphology in both the first and second classification rounds (k = 0.61-0.80). No statistically significant association was observed between the raters' level of experience and intra-observer reproducibility.
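For reference, Cohen's kappa expresses inter- or intra-rater agreement corrected for chance; the abstract reports kappa values without stating the formula, so a minimal sketch of the standard definition is given below, with a hypothetical worked example (the numbers are illustrative, not figures from this study).

% Cohen's kappa: chance-corrected agreement between two raters.
% p_o = observed proportion of agreement; p_e = agreement expected by chance.
\[
  \kappa = \frac{p_o - p_e}{1 - p_e}
\]
% Hypothetical example: if two observers agree on 90% of radiographs
% (p_o = 0.90) and chance agreement is 50% (p_e = 0.50), then
% kappa = (0.90 - 0.50) / (1 - 0.50) = 0.80 (substantial agreement).

The agreement bands quoted in the Results appear to correspond to the widely used Landis and Koch scale, under which 0.41-0.60 denotes moderate, 0.61-0.80 substantial, and 0.81-1.00 almost perfect agreement.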