ORIGINAL ARTICLE

Have CONSORT Guidelines Improved the Quality of Reporting of Randomised Controlled Trials Published in Public Health Dentistry Journals?

Prakash Savithra^a / Lakshminarayan Shetty Nagesh^b

Purpose: To assess a) whether the quality of reporting of randomised controlled trials (RCTs) has improved since the formulation of the Consolidated Standards of Reporting Trials (CONSORT) statement and b) whether there is any difference in the reporting of RCTs between selected public health dentistry journals.

Materials and Methods: A hand search of public health dentistry journals was performed and four journals were identified for the study: Community Dentistry and Oral Epidemiology (CDOE), Community Dental Health (CDH), Journal of Public Health Dentistry (JPHD) and Oral Health and Preventive Dentistry (OHPD). A total of 114 RCTs published between 1990 and 2009 were selected. The CONSORT guidelines were applied to each selected article in order to assess any improvement since their publication. The chi-square test was employed to determine whether there was any statistically significant difference in the quality of reporting of RCTs before and after publication of the CONSORT guidelines. A comparison was also made to determine whether any statistically significant difference in the quality of reporting of RCTs exists between the selected journals.

Results: The title, abstract, discussion and conclusion sections of the selected articles showed adherence to the CONSORT guidelines, whereas compliance was poor with respect to the methodology section.

Conclusion: The quality of reporting of RCTs is generally poor in public health dentistry journals. Overall, the quality of reporting has not substantially improved since the publication of the CONSORT guidelines.

Key words: CONSORT, public health dentistry, reporting of RCTs

Oral Health Prev Dent 2013;11:95-103. doi: 10.3290/j.ohpd.a29359

Every clinician's practice should be based on sound scientific evidence. Such high-level scientific evidence comes from systematic reviews, which are considered far more valid when they include properly conducted and reported individual RCTs. The randomised controlled trial is a robust technique of wide applicability. The key principle of an RCT is that 'eligible, consenting individuals are randomly allocated between the treatments of interest' (Newcombe, 2000), which yields the most reliable conclusions.

^a Assistant Professor, Department of Public Health Dentistry, Bapuji Dental College and Hospital, Davangere, India.
^b Professor and Head, Department of Public Health Dentistry, Bapuji Dental College and Hospital, Davangere, India.

Correspondence: Dr. P. Savithra, Department of Public Health Dentistry, Bapuji Dental College and Hospital, Davangere, India. Tel: +91984-420-6996. Email: [email protected]

Presented at the XV National Conference of the Indian Association of Public Health Dentistry and IV PG Convention, 5 December 2010, Mysore, India.


Submitted for publication: 30.03.12; accepted for publication: 16.05.12

Nevertheless, there are many other issues relating to the conduct and interpretation of a study that greatly affect its validity. Well-designed and properly executed RCTs provide the best evidence on the efficacy of health care interventions, but trials with inadequate methodological approaches are associated with exaggerated treatment effects (Chalmers et al, 1983; Khan et al, 1996). Biased results from poorly designed and reported trials can mislead decision making in health care at all levels, from treatment decisions for the individual patient to the formulation of national public health policies. Many reviews have documented deficiencies in reports of clinical trials. Proper randomisation eliminates selection bias and is the crucial component of high-quality RCTs. The report of an RCT should convey to the reader, in a transparent manner, why the study was undertaken and how it was conducted and analysed.


Lack of adequately reported randomisation has been associated with bias in estimating the effectiveness of interventions. To assess the strengths and limitations of an RCT, readers need and deserve to know the quality of its methodology, a goal that can only be achieved through complete transparency from authors. Adhering to the guidelines affects the execution of a study as well as how it is reported. The CONSORT (Consolidated Standards of Reporting Trials) guidelines are designed to ensure that in all respects the study has been carried out satisfactorily.

There is a large body of literature on the assessment of the quality of reporting of RCTs in various specialty journals, such as periodontics (Antczak et al, 1986; Montenegro et al, 2002), prosthodontics (Prihoda et al, 1992; Dumbrigue et al, 2001; Jokstad et al, 2002), implantology (Esposito et al, 2001), orthodontics (Harrison, 2003) and dentistry in general (Sjögren and Halling, 2002). However, to our knowledge there is a paucity of studies assessing whether RCTs published in public health dentistry journals adhere to the CONSORT statement. Hence, the present study attempts to assess a) whether the quality of reporting of randomised controlled trials (RCTs) has improved since the formulation of the CONSORT statement and b) whether there is any difference in the reporting of RCTs between the selected public health dentistry journals.
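As a purely illustrative aside (not drawn from any of the reviewed trials), the sketch below contrasts simple randomisation with block randomisation, the kind of restricted sequence generation referred to in checklist items 11 and 12 of the modified checklist; the group labels and block size are hypothetical.

```python
# Illustrative sketch only: hypothetical two-arm allocation schemes,
# not a scheme taken from any trial reviewed in this study.
import random

def simple_randomisation(n_participants):
    """Each participant is independently assigned to arm 'A' or 'B'."""
    return [random.choice("AB") for _ in range(n_participants)]

def block_randomisation(n_participants, block_size=4):
    """Within each block the two arms occur equally often, keeping group sizes balanced."""
    sequence = []
    while len(sequence) < n_participants:
        block = list("A" * (block_size // 2) + "B" * (block_size // 2))
        random.shuffle(block)          # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

print(simple_randomisation(8))   # e.g. ['A', 'A', 'B', 'A', ...] (group sizes may be unequal)
print(block_randomisation(8))    # balanced: four 'A' and four 'B' in random order
```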

MATERIALS AND METHODS

Selection of journals

The journals Community Dentistry and Oral Epidemiology (CDOE), Community Dental Health (CDH), Journal of Public Health Dentistry (JPHD) and Oral Health and Preventive Dentistry (OHPD; the most recent of the four to begin publication) were selected for the study. We hand searched each issue of these four journals to identify all papers reporting RCTs published between January 1990 and December 2009.

Selection of RCTs

To be included, a report had to describe the assignment of participants to interventions as randomised, using one of the following terms: 'random', 'randomly', 'randomised' or 'randomisation'. Where difficulties arose, the decision on inclusion was reached by consensus with a second reviewer. In total, we screened 2913 articles, of which 120 were identified as RCTs after screening of titles and abstracts. Six articles were then excluded because they were economic analyses of published RCTs, leaving 114 articles for consideration in the study (Fig 1). There were 49 RCTs in CDOE, 17 in CDH, 14 in JPHD and 34 in OHPD. Photocopies of all included RCTs were obtained for scrutiny, and data were extracted from all eligible studies to assess their quality against the CONSORT guidelines. Screening and quality evaluation were not performed blind to the authors or affiliations of a study, since blinding has not been shown to result in meaningful differences (Dumbrigue et al, 2001).

Fig 1  Flow chart of screening of selected articles: 2913 articles published in the selected journals between 1990 and 2009; 120 remained after full-text screening; 6 excluded (economic analyses of published RCTs); 114 RCTs selected for the study.

The quality of RCT reporting was assessed using the CONSORT 2010 checklist, with its 25 items converted into 37 questions under the same headings (Table 1). Each included trial was scored 'Yes', 'No' or 'Not applicable' for every question, and we determined whether reporting adhered to all items in the CONSORT checklist, from introduction to funding. After all RCTs had been checked for compliance on a worksheet, the data were entered into an Excel sheet for further analysis. Statistical analysis was performed using SPSS version 11 (SPSS, Chicago, IL, USA). The chi-square test was used to compare compliance with the checklist items before and after the introduction of the CONSORT guidelines, and also to compare compliance with the items between journals in order to determine whether any statistically significant difference exists in reporting.
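The comparisons described above were run in SPSS; purely as a hedged illustration, the sketch below shows how the same kind of chi-square test could be applied to a single checklist item using invented counts of trials reporting that item before and after 1997. The counts are hypothetical and are not the study's data.

```python
# Illustrative sketch with invented counts; not the authors' SPSS analysis.
from scipy.stats import chi2_contingency

# Rows: period (before 1997, 1998 and after); columns: item reported? (yes, no)
table = [[12, 30],   # hypothetical: 12 of 42 pre-CONSORT trials reported the item
         [38, 34]]   # hypothetical: 38 of 72 post-CONSORT trials reported the item

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```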


Table 1  Modified CONSORT checklist 2010 (topic, item no. and description)

TITLE AND ABSTRACT
1. How participants were allocated to interventions (e.g. 'random allocation', 'randomised' or 'randomly assigned')

INTRODUCTION
Background
2. Scientific background and explanation of rationale

METHODS
Participants
3. Eligibility criteria for participants
4. Settings and locations where the data were collected
Interventions
5. Precise details of the interventions intended for each group and how and when they were actually administered
Objectives
6. Specific objectives and hypotheses
Outcomes
7. Clearly defined primary and secondary outcome measures
8. When applicable, any methods used to enhance the quality of measurements (e.g. multiple observations, training of assessors)
Sample size
9. How sample size was determined
10. Explanation of any interim analyses and stopping rules
Randomisation – sequence generation
11. Method used to generate the random allocation sequence
12. Details of any restrictions (e.g. blocking, stratification)
Randomisation – allocation concealment
13. Method used to implement the random allocation sequence (e.g. numbered containers or central telephone), clarifying whether the sequence was concealed until interventions were assigned
Randomisation – implementation
14. Who generated the allocation sequence
15. Who enrolled participants
16. Who assigned participants to their groups
Blinding (masking)
17. Whether or not participants were blinded
18. Whether those administering the interventions were blinded
19. Whether those assessing the outcomes were blinded to group assignment
20. How the success of blinding was evaluated
Statistical methods
21. Statistical methods used to compare groups for primary outcome(s)
22. Methods for additional analyses, such as subgroup analyses and adjusted analyses

RESULTS
Participant flow
23. Flow of participants through each stage (a diagram is strongly recommended); specifically, for each group, the numbers of participants randomly assigned, receiving intended treatment, completing the study protocol and analysed for the primary outcome
24. Protocol deviations from the study as planned, together with reasons
Recruitment
25. Dates defining the periods of recruitment and follow-up
Baseline data
26. Baseline demographic and clinical characteristics of each group
Numbers analysed
27. Number of participants (denominator) in each group included in each analysis
28. Whether the analysis was by 'intention to treat', with results stated in absolute numbers when feasible (e.g. 10/20, not 50%)
Outcomes and estimation
29. For each primary and secondary outcome, a summary of results for each group and the estimated effect size and its precision (e.g. 95% confidence interval)
Ancillary analyses
30. Multiplicity addressed by reporting any other analyses performed, including subgroup analyses and adjusted analyses, indicating those pre-specified and those exploratory
Adverse events
31. All important adverse events or side effects in each intervention group

DISCUSSION
Interpretation
32. Interpretation of the results, taking into account study hypotheses, sources of potential bias or imprecision and the dangers associated with multiplicity of analyses and outcomes
Generalisability
33. Generalisability (external validity) of the trial findings
Overall evidence
34. General interpretation of the results in the context of current evidence
Registration
35. Registration number and name of trial registry
Protocol
36. Where the full trial protocol can be accessed, if available
Funding
37. Sources of funding and other support; role of funders



RESULTS

The present search strategy identified 114 RCTs. Compliance of the reported trials with the 37 items of the CONSORT guidelines varied widely: it was good for some items and poor for others. Good compliance with CONSORT was found in the reporting of the title, abstract and introduction, the methods used to enhance the quality of measurement, sample size determination, the dates defining the periods of recruitment, the number of participants in each group, and the discussion, generalisability and overall evidence sections, all of which showed statistically significant differences between journals (P < 0.05) (Figs 2 and 5). Other items, i.e. study settings, details of the intervention, definition of outcome measures, blinding of the investigator, statistical methods used and interpretation of results, were adequately reported but did not show any statistically significant difference between the journals (Figs 3 and 4). Reporting of randomisation methods was poor. Little or no reporting was observed for the explanation of interim analyses and stopping rules, the methods used to implement the random allocation sequence, who generated the allocation sequence and who enrolled participants, evaluation of the success of blinding, description of protocol deviations, intention-to-treat analysis and registration of the trial (Figs 3 and 4).

Compliance before and after formulation of the CONSORT guidelines

The quality of reporting was better after the CONSORT guidelines with respect to the title, abstract and discussion sections.

Fig 2  Adherence to items 1 to 10 of the CONSORT guidelines for all 4 journals examined (x-axis: item number in the CONSORT checklist; y-axis: percentage; applicable to all graphs). Underlined items showed statistical significance (P < 0.05).

Fig 3  Adherence to items 11 to 20 of the CONSORT guidelines.


Results were significant for item 2, which asked whether the authors reported the scientific background and rationale for conducting the study (P = 0.009), item 9, on how the authors arrived at the particular sample size (P = 0.003), and item 27, on reporting the number of participants in each group (P = 0.009); these items showed a statistically significant improvement in the journal Community Dental Health (Fig 6).

Fig 4  Adherence to items 21 to 30 of the CONSORT guidelines. Underlined items showed statistical significance (P < 0.05).

Fig 5  Adherence to items 31 to 37 of the CONSORT guidelines. Underlined items showed statistical significance (P < 0.05).

Fig 6  Compliance of articles with the CONSORT guidelines before and after 1997 for Community Dental Health. Underlined items showed statistical significance (P < 0.05).


Reporting of items 1 (use of a randomisation term, P = 0.001), 5 (details of the intervention, P = 0.03), 9 (sample size determination, P = 0.001), 11 (random allocation method, P = 0.024) and 26 (baseline demographic details, P = 0.014) showed statistically significant differences in trials published in Community Dentistry and Oral Epidemiology after the CONSORT guidelines were published (Fig 7). In the Journal of Public Health Dentistry, reporting of item 1 (use of a randomisation term, P = 0.02) and item 11 (method used to generate the random allocation sequence, P = 0.02) showed a statistically significant difference after the CONSORT checklist was formulated (Fig 8). As shown in Fig 9, Oral Health and Preventive Dentistry showed 100% adherence to the CONSORT checklist in the reporting of items 2 (scientific background), 5 (details of the intervention), 7 (definition of outcomes), 33 (generalisability) and 34 (interpretation of results in the context of current evidence).

Fig 7  Compliance of articles with the CONSORT guidelines before and after 1997 for Community Dentistry and Oral Epidemiology. Underlined items showed statistical significance (P < 0.05).

Fig 8  Compliance of articles with the CONSORT guidelines before and after 1997 for the Journal of Public Health Dentistry. Underlined items showed statistical significance (P < 0.05).

Fig 9  Compliance of articles with the CONSORT guidelines for Oral Health and Preventive Dentistry. Underlined items showed 100% adherence to the CONSORT checklist.


Some items showed a reverse trend, and some showed differences in reporting that did not reach statistical significance. A few items that were not reported before CONSORT was published began to be reported after the checklist was formulated, while a few others that had been reported earlier were completely omitted after the CONSORT guidelines were published. These findings are presented in Table 2. Overall, the present study revealed that adherence to the items of the CONSORT checklist was 27.02% in the reporting of RCTs published in the public health dentistry journals examined.
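For readers who wish to see how an overall adherence figure of this kind can be derived from the item-level 'Yes'/'No'/'Not applicable' worksheet described in the Methods, the sketch below pools invented scores across trials and items; it is illustrative only and does not reproduce the study's worksheet or data.

```python
# Illustrative sketch with invented scores; not the study's actual worksheet.
# Overall adherence = 'Yes' answers / all applicable answers, pooled over trials and items.
scores = {
    "trial_1": ["Yes", "No", "NA", "No", "Yes"],   # hypothetical item scores for one trial
    "trial_2": ["No", "No", "Yes", "NA", "No"],
}

applicable = [s for trial in scores.values() for s in trial if s != "NA"]
adherence = 100 * applicable.count("Yes") / len(applicable)
print(f"Overall adherence: {adherence:.2f}%")   # 37.50% for these invented scores
```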

DISCUSSION

The results of this study revealed that reporting of the scientific background, standardisation, sample size determination, the dates defining recruitment, the number of participants in each group, generalisability and interpretation of the results in the context of current evidence was adequate, which is in agreement with other studies (Anttila et al, 2006; Al-Namankany et al, 2009). This also indicates that reporting of items related to the title, abstract, introduction and discussion is adequate. Furthermore, the present results suggest that reporting of randomisation, allocation concealment, blinding, withdrawals and dropouts was poor, which is in line with the results of studies published in other disciplines (Montenegro et al, 2002; Sjögren and Halling, 2002; Anttila et al, 2006; Al-Namankany et al, 2009; Pandis et al, 2010). The comparison was made with studies of specialty journals other than public health dentistry because, to the best of our knowledge, no similar studies have been conducted in public health dentistry. In contrast to our findings, a study by Mills et al (2004) on RCTs published in clinical pharmacology found that reporting of allocation concealment, withdrawals, dropouts and description of adverse events was adequate.

Rationale for the research question

The results of RCTs provide the highest quality of evidence, and it is therefore of paramount importance that trials be rigorously designed, conducted and reported in accordance with the CONSORT guidelines. Despite several decades of educational efforts, RCTs are still not being reported adequately (Hotopf et al, 1997; Adams, 1998; Dickinson et al, 2000). Overwhelming evidence now indicates that the quality of reporting of randomised controlled trials is less than optimal. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic errors seriously damage the credibility of RCTs, which claim the elimination of systematic error as their primary hallmark. Systematic error in RCTs reflects poor science, and poor science undermines ethical standards. This was the rationale behind the present research question.

Why CONSORT guidelines?

Many guidelines are available to promote better reporting; the CONSORT statement is one of them. It comprises a checklist and a flow diagram for reporting an RCT; for convenience, the checklist and diagram together are simply called CONSORT, and they are primarily intended for use in writing, reviewing or evaluating reports. We chose the CONSORT statement because it is the most popular and is widely recommended by journals for the reporting of RCTs.

Table 2  Trends by journal in reporting RCTs before and after the formulation of the CONSORT guidelines

Reverse trend
- Community Dental Health: items 4, 8, 17, 33, 34, 37
- Community Dentistry and Oral Epidemiology: items 8, 17, 18, 25, 26, 27, 28
- Journal of Public Health Dentistry: items 18, 29

Marked difference without statistical significance
- Community Dental Health: items 1, 2, 3, 21, 29, 31
- Community Dentistry and Oral Epidemiology: items 7, 37
- Journal of Public Health Dentistry: items 26, 33, 37

Not reported earlier but reported after formulation of the CONSORT checklist
- Community Dental Health: items 6, 9, 11, 16, 19, 23, 28
- Community Dentistry and Oral Epidemiology: items 15, 16, 19, 31
- Journal of Public Health Dentistry: items 3, 6, 9, 12, 13, 16, 17, 23

Previously reported but completely omitted after the CONSORT checklist
- Community Dental Health: none
- Community Dentistry and Oral Epidemiology: item 22
- Journal of Public Health Dentistry: item 11


The use of CONSORT seems to reduce (if not eliminate) inadequate reporting of RCTs (Moher et al, 1992; Egger et al, 2001; Moher et al, 2001). Potentially, the use of CONSORT should have a positive influence on how RCTs are conducted. CONSORT encourages transparency when reporting the methods and results so that reports of RCTs can be interpreted readily and accurately. Diligent adherence by authors to the checklist items facilitates clarity, completeness and transparency of reporting. Since the publication of CONSORT, several evaluations of its effectiveness have been published. Evidence suggests that journal endorsement of CONSORT has improved the quality of reporting of RCTs (Moher et al, 1992; Egger et al, 2001).

Why were these four journals selected?

The impact factor is a measure of the frequency with which papers appearing in a journal are subsequently cited by other papers. It is a handy, if inexact, estimate of the extent to which scientists and scholars pay attention to a journal's content (Giannobile et al, 2010). The journal impact factor is often viewed as a measure of scientific quality (Garfield, 1972, 1976), and it has been reported that trials published in lower-impact journals are less likely to follow CONSORT guidelines (Montané et al, 2010). We therefore chose major journals in the field of public health dentistry with good impact factors.
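For reference, the conventional two-year journal impact factor underlying such comparisons can be written as follows; this is the general definition, not a value reported in the present study.

```latex
% Conventional two-year impact factor for year Y (general definition; illustrative only)
\mathrm{IF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}
% where C_Y(y) is the number of citations received in year Y by items published in year y,
% and N_y is the number of citable items published in year y.
```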

Why was hand searching of articles performed?

Hand searching was preferred to electronic searching of databases for RCTs because it has been shown to identify more trials than electronic searching (Gludd and Nikolova, 1998), and hand searching was feasible for us. According to the CONSORT guidelines, some of the important features that experimental design and reporting must account for are sample size calculation and power analysis, randomisation, blinding, reporting of effect size, confidence intervals, statistical significance, subgroup analysis and confounding/stratification (Pandis et al, 2010). The quality scores of RCTs in major dental journals are considered suboptimal in key CONSORT areas. Low quality scores have also been reported in other areas of biomedical research.

CONCLUSIONS

The quality of reporting of clinical trials is generally poor in the public health dentistry journals considered here, and it has not substantially improved in key items since the publication of CONSORT. There is ample room for improvement in the quality of RCT reporting. To ensure accuracy in the design and conduct of RCTs and transparency in their reporting, we recommend that investigators as well as journal reviewers and editors not only endorse but also strictly verify the application of the CONSORT guidelines.

ACKNOWLEDGEMENT

Our sincere thanks to Dr. C.S. Bhagyajyothi for her help during the study.

REFERENCES

1. Adams TB. Content and quality of 2000 controlled trials in schizophrenia over 50 years. BMJ 1998;317:1181–1184.
2. Al-Namankany AA, Ashley P, Moles DR, Parekh S. Assessment of the quality of reporting of randomized clinical trials in pediatric dentistry journals. Int J Pediatr Dent 2009;19:318–324.
3. Antczak AA, Tang J, Chalmers TC. Quality assessment of randomized control trials in dental research. II. Results: periodontal research. J Periodontal Res 1986;21:315–321.
4. Anttila H, Malmivaara A, Kunz R, Autti-Ramo I, Makela M. Quality of reporting of randomized, controlled trials in cerebral palsy. Pediatrics 2006;117:2222–2230.
5. Chalmers TC, Celano P, Sacks HS, Smith H. Bias in treatment assignment in controlled clinical trials. N Engl J Med 1983;309:1358–1361.
6. Dickinson K, Bunn F, Wentz R, Edwards P, Roberts I. Size and quality of randomized controlled trials in head injury: review of published studies. BMJ 2000;320:1308–1311.
7. Dumbrigue HB, Jones JS, Esquivel JF. Control of bias in randomized controlled trials published in prosthodontic journals. J Prosthet Dent 2001;86:592–596.
8. Egger M, Juni P, Bartlett C. Value of flow diagrams in reports of randomized controlled trials. JAMA 2001;285:1996–1999.
9. Esposito M, Coulthard P, Worthington HV, Jokstad A. Quality assessment of randomized controlled trials of oral implants. Int J Oral Maxillofac Implants 2001;16:783–792.
10. Garfield E. Citation analysis as a tool in journal evaluation. Science 1972;178:471–479.
11. Garfield E. Significant journals of science. Nature 1976;264:609–615.
12. Giannobile WV, Burt BA, Genco RJ. Clinical research in oral health: publication of research findings. Hoboken, NJ: Wiley-Blackwell, 2010:321–340.

13. Gludd C, Nikolova D. Quality assessment of reports on clinical trials in the Journal of Hepatology. J Hepatol 1998;29:321–327.
14. Harrison JE. Clinical trials in orthodontics II: assessment of the quality of reporting of clinical trials published in three orthodontic journals between 1989 and 1998. J Orthod 2003;30:309–315.
15. Hotopf M, Lewis G, Normand C. Putting trials in depression: a systematic review of methodology. J Epidemiol Community Health 1997;51:354–358.
16. Jokstad A, Esposito M, Coulthard P, Worthington HV. The reporting of randomized controlled trials in prosthodontics. Int J Prosthodont 2002;15:230–242.
17. Khan KS, Daya S, Jadad AR. The importance of quality of primary studies in producing unbiased systematic reviews. Arch Intern Med 1996;156:661–666.
18. Mills E, Loke YK, Wu P, Montori VM, Perri D, Moher D, Guyatt G. Determining the reporting quality of RCTs in clinical pharmacology. Br J Clin Pharmacol 2004;58:61–65.
19. Moher D, Jones A, Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA 2001;285:1992–1995.
20. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel group randomized trials. BMC Med Res Methodol 2001;1:2.
21. Montané E, Vallano A, Vidal X, Aguilera C, Laporte J-R. Reporting randomised clinical trials of analgesics after traumatic or orthopaedic surgery is inadequate: a systematic review. BMC Clin Pharmacol 2010;10:2.
22. Montenegro R, Needleman I, Moles D, Tonetti M. Quality of RCTs in periodontology. A systematic review. J Dent Res 2002;81:866–870.
23. Newcombe RG. Reporting of clinical trials in the JO – the CONSORT guidelines. J Orthod 2000;27:69–70.
24. Pandis N, Polychronopoulou A, Eliades T. An assessment of quality characteristics of randomized control trials published in dental journals. J Dent 2010;38:713–721.
25. Prihoda TJ, Schelb E, Jones JD. The reporting of statistical inferences in selected prosthodontic journals. J Prosthodont 1992;1:51–56.
26. Sjögren P, Halling A. Quality of reporting randomized clinical trials in dental and medical research. Br Dent J 2002;192:100–103.