
Introduction to Diagnostic Reasoning

by Michael Turlik, DPM1

The Foot and Ankle Online Journal 2 (10): 5

This is the first of four articles written for podiatric physicians to help them understand and apply the results of diagnostic studies in their practice. This article deals with how clinicians arrive at a diagnosis and how to interpret the results of a diagnostic study. An article from the foot and ankle literature is used to illustrate the concepts discussed.

Key words: Evidence-based medicine, industry sponsored trials.

Accepted: August, 2009
Published: October, 2009

ISSN 1941-6806
doi: 10.3827/faoj.2009.0210.0005


The method of arriving at a diagnosis can be a simple or a very complex process, depending upon the clinician’s knowledge and experience, the clinical presentation of the diagnostic problem, the prevalence of the disease, and the diagnostic studies employed. Podiatric physicians always encounter some degree of uncertainty in practice, whether about the true effect of a therapeutic intervention or the diagnosis of a patient’s condition. After collecting the information needed to make a diagnosis, there is usually an information threshold beyond which additional information becomes irrelevant and treatment begins. (Fig. 1) There are two basic ways by which clinicians arrive at a diagnosis: pattern recognition / categorization, and probabilistic diagnostic reasoning, also called the hypothetico-deductive approach. [1,2]

Figure 1 Threshold model of decision making.*  

*Reproduced with permission from Center for Evidence-Based Medicine, http://www.cebm.net/index.aspx?o=1043

Pattern recognition approach or Categorization

This approach is used by experts making a common diagnosis in their field of expertise. The use of this method varies widely among clinicians and is based upon the thoroughness of their knowledge base and their experience. When using this approach, podiatric physicians are able to quickly evaluate the clinical scenario, match it to some familiar combination of signs and symptoms, and rapidly make the diagnosis. This type of diagnostic reasoning does not involve the generation and testing of multiple hypotheses, and it is unlikely that experts use the same reasoning process as novice clinicians. For example, consider Mr. Jones, a 52-year-old obese white male who presents to the office complaining of a two-week history of heel pain which began insidiously and is localized to the plantar medial aspect of his heel.

He relates that the pain is worse after periods of inactivity, arising from a sitting position, or when first bearing weight on the heel after sleeping. Physical examination reveals no redness, no edema, no deformity, but tenderness to palpation of the plantar medial heel. Even an inexperienced podiatrist would be able to make the diagnosis of mechanically induced heel pain given this scenario. The diagnosis does not require diagnostic studies for most podiatric physicians and is considered clinical in nature. [3] The pretest probability of mechanically induced heel pain in this scenario is very high, likely greater than 90%. In addition, the podiatrist will recognize that the chance of a bone tumor of the calcaneus producing this clinical picture is close to 0%. The percentage of patients with the disease in a specified population at a given point in time is defined as the prevalence, or pretest probability. Pretest probabilities which are extremely low or extremely high usually will not benefit from further diagnostic testing. (Fig. 1)

Probabilistic Diagnostic Reasoning or Hypothetico-Deductive

When clinicians face an atypical presentation of a common condition, or something more challenging for their specialty, they switch from pattern recognition to probabilistic diagnostic reasoning.

As a result of the clinical encounter the clinician will generate a short list of diagnostic hypotheses with an estimate of the probability of each possibility. This list will guide subsequent efforts in data collection. The pretest probability for this type of diagnostic inquiry usually lies in an intermediate range rather than at the extremes. (Fig. 1) Therefore, diagnostic studies may be very helpful in distinguishing between the different hypotheses, and may result in restructuring and reprioritizing the diagnostic possibilities as further information is obtained. For example, a 50-year-old neuropathic diabetic male presents with a one-week history of progressive redness, swelling, and pain about a recurrent plantar ulcer successfully treated with oral antibiotics and local wound care in the past. Physical examination reveals a mildly obese afebrile male with palpable pulses and lack of protective sensation bilaterally. The ulcer under the first metatarsophalangeal joint (MPJ) measures 1.5 cm in diameter and exhibits a red base. There is minimal drainage on the dressing, without odor. The important question which needs to be answered in this scenario is: does this patient have osteomyelitis? The pretest probability (prevalence) in this case varies from 20-66% based upon the study location referenced. [4] Higher pretest probabilities are seen in tertiary care hospital settings, lower ones in outpatient primary care settings. The range of pretest probabilities in this case differs from the earlier example of mechanically induced heel pain because it is in the intermediate category, indicating that some further diagnostic test(s) is (are) necessary. (Fig. 1)

The test and treatment thresholds (Fig. 1) are not static but dynamic. They vary with the invasiveness and cost of the test, the consequences of misdiagnosis of the disease process, and the efficacy and expense of the treatment. (Table 1) For example, in the case of the diabetic patient with a pedal ulcer referenced above, the test threshold would be lower for using a metal probe to evaluate the ulcer for osteomyelitis than for performing a bone biopsy. Since mechanically induced heel pain is a benign, self-limited condition which responds to non-surgical care, its treatment threshold would be lower than the treatment threshold for osteomyelitis in a diabetic patient. A comprehensive explanation of how to calculate test and treatment thresholds using decision tree analysis is provided for the interested reader. [5]

Table 1 Variations in test / treatment threshold. [2]
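The dependence of the treatment threshold on harms and benefits can be sketched numerically. The sketch below uses the classic Pauker and Kassirer form of the treatment threshold, H / (H + B); the function name and the numeric harm/benefit values are invented for illustration and are not taken from the article.

```python
# Treatment-threshold sketch in the spirit of Pauker and Kassirer's
# threshold model: treat when the probability of disease exceeds
# H / (H + B), where H is the net harm of treating a patient without
# the disease and B is the net benefit of treating a patient with it.
# The numeric values below are invented for illustration.

def treatment_threshold(harm, benefit):
    """Probability of disease above which treatment is warranted."""
    return harm / (harm + benefit)

# Benign condition, safe and effective treatment -> low threshold:
print(treatment_threshold(harm=1, benefit=9))   # -> 0.1
# Risky or costly treatment -> much higher threshold:
print(treatment_threshold(harm=5, benefit=5))   # -> 0.5
```

This matches the text above: the safer and more effective the treatment, the lower the probability of disease needed to justify treating.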

The information gained from the results of a diagnostic study changes the pretest probability; the revised estimate of prevalence is termed the posttest probability. The magnitude of the change is a function of the strength of the diagnostic intervention. The posttest probability may be either higher or lower than the pretest probability, depending upon the result of the diagnostic study used. The strength of a diagnostic intervention may be presented in many ways; the most clinically useful is the likelihood ratio.

Assessing the Performance of a Diagnostic Test

The question that podiatric physicians must answer after ordering a diagnostic test is: based upon the result of this test, how probable is it that my patient has a diagnosis of ___? To answer this question it is necessary to construct a 2 x 2 table (Table 2) from a study of the intervention to determine the strength of the test. Measures of probability derived from a 2 x 2 table include the following:

Sensitivity: the proportion of the patients with the disease who test positive.
TP / (TP + FN)

Specificity: the proportion of the patients without the disease who test negative.
TN / (TN+FP)

Positive Predictive Value: proportion of patients with a positive test who have the disease
TP / (TP+FP)

Negative Predictive Value: proportion of patients with negative test who do not have the disease
TN / (TN+FN)

Positive Likelihood Ratio: how much the odds of the disease increase when a test is positive.
sensitivity / (1 - specificity)

Negative Likelihood Ratio: how much the odds of the disease decrease when a test is negative.
(1 - sensitivity) / specificity

Table 2   2 x 2 diagnostic table.
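All six measures above can be computed directly from the counts in a 2 x 2 table. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not the values from any study cited here.

```python
# Diagnostic-test measures computed from a 2 x 2 table.
# Counts are hypothetical, chosen only to illustrate the arithmetic.
tp, fp = 45, 15    # true positives, false positives
fn, tn = 5, 135    # false negatives, true negatives

sensitivity = tp / (tp + fn)               # 45/50   = 0.90
specificity = tn / (tn + fp)               # 135/150 = 0.90
ppv = tp / (tp + fp)                       # 45/60   = 0.75
npv = tn / (tn + fn)                       # 135/140 ~ 0.96
lr_pos = sensitivity / (1 - specificity)   # ~ 9.0
lr_neg = (1 - sensitivity) / specificity   # ~ 0.11

print(f"Sens {sensitivity:.2f}  Spec {specificity:.2f}  "
      f"PPV {ppv:.2f}  NPV {npv:.2f}  LR+ {lr_pos:.1f}  LR- {lr_neg:.2f}")
```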

The higher the sensitivity of a test, the better its ability to detect disease, owing to a low false negative rate. Diagnostic tests with a high sensitivity (95-99%) are used when there is an important price for missing a serious but treatable disease. Highly sensitive tests are usually used early in the workup of the disease and, if positive, are followed by a test with high specificity. If a test with a high sensitivity is negative, the podiatric physician can be comfortable in ruling out the disease process. The mnemonic SnNout (a Sensitive test, when Negative, rules out disease) refers to such a test.

Diagnostic tests which have a high specificity are used to identify those patients who do not have the condition of interest. A highly specific test rarely misclassifies people as having the disease when they do not. These tests are most useful to confirm a diagnosis which has been suggested by a highly sensitive test. Highly specific tests are particularly useful when false positive results can harm the patient physically, psychologically, or fiscally. If such a test is positive, it is very helpful to the podiatric physician in confirming the disease process. The mnemonic SpPin (a Specific test, when Positive, rules in disease) refers to such a test.

When a test is measured over a continuum of values, it is not possible for it to be both highly sensitive and highly specific: changing the artificial cutoff point changes both measures, and sensitivity can be increased only at the expense of specificity. Moreover, sensitivity and specificity by themselves do not answer the clinical question, which is the probability of having or not having the disease under evaluation. [6]
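This trade-off can be seen by moving the cutoff on a continuous test. The marker values below are invented for illustration and do not correspond to any real assay.

```python
# Moving the cutoff on a continuous marker trades sensitivity for
# specificity. Marker values are invented for illustration.
diseased     = [4.1, 4.8, 5.2, 5.9, 6.3, 7.0, 7.7, 8.4]
non_diseased = [1.2, 1.9, 2.6, 3.1, 3.8, 4.4, 5.0, 5.6]

def sens_spec(cutoff):
    # A result at or above the cutoff is called "positive".
    sens = sum(x >= cutoff for x in diseased) / len(diseased)
    spec = sum(x < cutoff for x in non_diseased) / len(non_diseased)
    return sens, spec

for cutoff in (3.0, 4.5, 6.0):
    s, sp = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {s:.2f}, specificity {sp:.2f}")
```

As the cutoff rises, sensitivity falls while specificity climbs; no single cutoff maximizes both.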

Predictive values are another measure of test efficiency which can be derived from a 2 x 2 table. [7] They can be used to gain information regarding the probability of disease in patients. As a test’s sensitivity increases, so does its negative predictive value; as its specificity increases, so does its positive predictive value. Unlike sensitivity and specificity, predictive values are influenced by disease prevalence, and they vary with prevalence in a nonlinear manner. [8] Therefore, predictive values derived in an outpatient primary care setting will be misleading when applied to a tertiary care setting, since the prevalence is usually different. This is a major limitation for the podiatric physician using predictive values in clinical practice. In order to be clinically useful, they should be employed in a practice setting as similar as possible to the one in which they were derived.

A third method of determining test efficiency from a 2 x 2 table is to generate likelihood ratios. [6] Likelihood ratios are not apt to be influenced by disease prevalence, provided the disease spectrum remains the same across different prevalences. [9] Likelihood ratios are expressed as odds rather than proportions; sensitivity, specificity, and predictive values are expressed as proportions. Likelihood ratios are the preferred method of expressing test efficiency in evidence-based medicine publications. They combine the sensitivity and specificity of a diagnostic study, allowing the clinician to determine how much a positive or negative result will change the pretest probability: pretest odds x likelihood ratio = posttest odds. Because likelihood ratios operate on odds, they cannot be applied to a pretest probability directly.

Since likelihood ratios are expressed as odds rather than proportions, probabilities must be converted to odds before the ratio is applied, and the result converted back. This can be done using mathematical conversions, internet calculators, or a nomogram.
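A minimal sketch of that conversion follows. The function name and the numbers (a 30% pretest probability and a positive likelihood ratio of 5) are invented examples, not figures from the article.

```python
# Applying a likelihood ratio via the odds form:
# probability -> odds, multiply by the LR, odds -> probability.
def posttest_probability(pretest_prob, lr):
    pretest_odds = pretest_prob / (1 - pretest_prob)   # probability -> odds
    posttest_odds = pretest_odds * lr                  # apply the likelihood ratio
    return posttest_odds / (1 + posttest_odds)         # odds -> probability

# Invented example: 30% pretest probability, LR+ of 5.
print(f"{posttest_probability(0.30, 5):.1%}")   # -> 68.2%
```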

How best to estimate the prevalence of disease? Clinical observations and experience are often inaccurate. A better estimate comes from reviewing the medical literature on the subject and/or evaluating large computerized databases. Pretest probability is not a constant but varies with the clinical environment: prevalence increases as patients are filtered from a primary care source to a tertiary care facility.

In order to correctly utilize a diagnostic study, a podiatric physician needs to estimate the prevalence of the disease in his or her patient population, the likelihood ratio of the test employed, and the rigor of the study used to determine the test’s accuracy. In a recent systematic review of electrodiagnostic techniques currently used to evaluate tarsal tunnel syndrome (TTS), [10] the authors concluded that, due to the poor quality of the studies, the reported sensitivities and specificities could not be combined into a summary statistic. In addition, the prevalence of TTS could not be determined. The authors’ conclusions limit the usefulness of electrodiagnostic studies in the evaluation of TTS.

Diabetes and Pedal Osteomyelitis

A recent article [11] appraises the published literature concerning the various diagnostic options for evaluating infected diabetic foot ulcers for the presence of osteomyelitis. The gold standard in each study was bone biopsy. A summary of the authors’ findings, limited to the higher quality studies, is presented in Table 3. The highest likelihood ratios are found for ulcer area > 2 cm2 and erythrocyte sedimentation rate (ESR) > 70 mm/hr. Unfortunately, these tests also have very wide 95% confidence intervals, indicating that the results of these studies are not very precise. Tests with narrower 95% confidence intervals are magnetic resonance imaging (MRI), probe to bone, and abnormal radiograph. Given its low cost and minimal adverse effects, the probe to bone test should be the first test undertaken by the podiatric physician when evaluating an infected diabetic pedal ulcer for the presence of osteomyelitis.

Table 3 Likelihood ratios for studies used to evaluate diabetic osteomyelitis. [11] (*Confidence Intervals)

The likelihood ratio for the probe to bone test cited in Butalia’s review is a composite of three different studies, one of which is Lavery’s. Lavery and colleagues evaluated the accuracy of the probe to bone test for osteomyelitis in patients with diabetic foot ulcers. [4] They expressed their results in terms of sensitivity, specificity, and positive and negative predictive values; they did not report likelihood ratios.

In the results section the authors report information from which a 2 x 2 table can be constructed (Table 4). Using an online diagnostic calculator, [12] likelihood ratios can be calculated: 9.4 for a positive test and 0.14 for a negative test. A likelihood ratio greater than one increases the chance of the disease being present; a likelihood ratio less than one decreases it. Likelihood ratios > 10 or < 0.1 generate large, conclusive changes; those between 5-10 or 0.1-0.2 are associated with moderate changes in probability. The likelihood ratios calculated from Lavery’s study [4] are therefore associated with moderate to large changes in diagnostic probability. The pretest probability (prevalence) in Lavery’s study [4] is 12%.

Table 4 Results of probe to bone test.
*modified from Diabetes Care 30: 270, 2007

Using an online calculator [13] or a likelihood ratio nomogram (Fig. 2), the posttest probability can be calculated as 56.4% for a positive test and 1.87% for a negative test. A negative test falls below the test threshold, effectively ruling out the condition. (Fig. 1) A positive test in this scenario still remains in the intermediate range for this prevalence, indicating that further testing is necessary. If the prevalence were higher, for example the 60% seen in some studies in tertiary care centers, [14] the posttest probability would be 17.4% for a negative result and 93.4% for a positive result.
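The same arithmetic can be carried out directly. The helper function below is ours, not from the article, and the small differences from the quoted figures reflect rounding of the likelihood ratios.

```python
# Reproducing the posttest probabilities discussed in the text from the
# odds form of the calculation. The helper name is ours, not from the
# article; minor differences from the quoted figures reflect rounding.
def posttest(prob, lr):
    odds = prob / (1 - prob) * lr
    return odds / (1 + odds)

# Lavery's prevalence of 12%, LR+ 9.4, LR- 0.14:
print(f"{posttest(0.12, 9.4):.1%}")   # ~ 56%
print(f"{posttest(0.12, 0.14):.2%}")  # ~ 1.87%
# At a 60% prevalence, as reported in some tertiary care series:
print(f"{posttest(0.60, 9.4):.1%}")   # ~ 93.4%
print(f"{posttest(0.60, 0.14):.1%}")  # ~ 17.4%
```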

Figure 2 Likelihood Ratio Nomogram.*
*reproduced with permission from Center for Evidence-Based Medicine
http://www.cebm.net/index.aspx?o=1043

These results indicate that further testing after a positive test is likely unnecessary, while a negative test may fall within the intermediate range and require further testing, the opposite of the results obtained using the prevalence from Lavery’s study.

The above example demonstrates the use of likelihood ratios for diagnostic studies evaluating a dichotomous outcome. Likelihood ratios can also be used with continuous test results as interval likelihood ratios. [15]

How believable is the likelihood ratio derived from a study?

The quality of the evidence derived from a diagnostic study is a function of the study’s ability to minimize bias. [16] The best study design for diagnostic tests (Level 1) is an independent, masked comparison with a reference standard among an appropriate population of consecutive patients. Just as with randomized controlled trials, diagnostic studies are separated into different levels of evidence (Table 5), with the less rigorous (more biased) studies overestimating test effectiveness. [17] The largest overestimation occurs in studies which include non-representative patients or which apply different reference standards to positive and negative test results. The smallest overestimation occurs when blinding is not adhered to during the study. The following article in this series will discuss how to critically appraise a diagnostic study for validity.

Table 5 Levels of evidence for diagnostic studies. [7]

References

1. Elstein A, Schwartz A: Clinical problem solving and diagnostic decision making: a selective review of the cognitive research literature. In: Knottnerus JA (Ed). The Evidence Base of Clinical Diagnosis. London, England: BMJ Books, 179 – 195, 2002.
2. Richardson WS, Wilson M: The process of diagnosis. In: Guyatt G, Bhandari M, Tornetta P, Schemitsch EH, Sprint Study Group: Users guides to the medical literature. New York, New York: McGraw-Hill, 399 – 406, 2008.
3. Cole C, Seto C, Gazewood J: Plantar fasciitis: Evidence-based review of diagnosis and therapy. Am Fam Physician 72: 2237 – 2242, 2005.
4. Lavery L, Armstrong DG, Peters EJG, Lipsky BA: Probe-to-Bone Test for Diagnosing Diabetic Foot Osteomyelitis. Diabetes Care 30: 270 – 274, 2007.
5. Pauker S, Kassirer J: The threshold approach to clinical decision making. NEJM 302: 1190 – 1116, 1980.
6. Deeks J, Altman D: Diagnostic tests 4: likelihood ratios. BMJ 329: 168 – 169, 2004.
7. Altman D, Bland JM: Statistics notes: Diagnostic tests 2: predictive values. BMJ 309: 102, 1994.
8. Predictive values. http://www.poems.msu.edu/InfoMastery/Diagnosis/PredictiveValues.htm Accessed 09/09/2009.
9. Montori V, Wyer P, Newman T, Keitz S, Guyatt G: Tips for learners of evidence-based medicine: 5. The effect of spectrum of disease on the performance of diagnostic tests. CMAJ 173: 385 – 390, 2005.
10. Patel A, Gaines K, Malmut R, Park T, Del Toro D, Holland N: Usefulness of electrodiagnostic techniques in the evaluation of suspected tarsal tunnel syndrome: An evidence-based review. Muscle and Nerve 32: 236 – 240, 2005.
11. Butalia S, Palda V, Sargeant R, Detsky A, Mourad O: Does this patient with diabetes have osteomyelitis of the lower extremity? JAMA 299: 806 – 813, 2008.
12. Likelihood Ratio Calculator http://araw.mede.uic.edu/cgi-alansz/testcalc.pl Accessed 3/8/2009.
13. Post-test probability of disease calculator. http://homepage.mac.com/aaolmos/Posttest/posttest.html Accessed 3/9/2009.
14. Grayson ML, Gibbons GW, Balogh K, Levin E, Karchmer AW: Probing to bone in infected pedal ulcers. A clinical sign of underlying osteomyelitis in diabetic patients. JAMA 273: 721 – 723, 1995.
15. Mayer D: Essential Evidence-based Medicine. Cambridge, England: Cambridge University press, 233 – 236, 2004.
16. Moore A, McQuay H: Systematic reviews of diagnostic tests. In: Bandolier’s Little Book of Making Sense of the Medical Evidence. London, England: Oxford University press, 236 – 242, 2006.
17. Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen JHP, Bossuyt PMM: Empirical evidence of design-related bias in studies of diagnostic tests. JAMA 282: 1061 – 1066, 1999.


Address correspondence to: Michael Turlik, DPM
Email: mat@evidencebasedpodiatricmedicine.com

Private practice, Macedonia, Ohio.

© The Foot and Ankle Online Journal, 2009

Evaluation of Clinical Practice Guidelines

by Michael Turlik, DPM1

The Foot and Ankle Online Journal 2 (9): 5

Clinical practice guidelines are defined and their use is explained. Two published guidelines dealing with heel pain are evaluated for validity using a common readily available validated instrument which can be accessed on the internet.

Key words: Evidence-based medicine, industry sponsored trials.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License.  It permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. ©The Foot and Ankle Online Journal (www.faoj.org)

Accepted: August, 2009
Published: September, 2009

ISSN 1941-6806
doi: 10.3827/faoj.2009.0209.0005


Clinical practice guidelines (CPGs) are systematically developed documents published to provide specific recommendations to standardize the diagnosis and treatment of common clinical disorders for clinicians, patients, and healthcare administrators. They are derived from the best available evidence and current best practice. Authors of CPGs gather, appraise, and combine evidence much as systematic reviews do; unlike systematic reviews, however, CPGs provide actionable recommendations at the clinical level. The hope is that published guidelines decrease ineffective care with a corresponding increase in effective care. Desired outcomes of CPGs are increased consistency, higher quality care, and more predictable healthcare processes. CPGs are developed by various methods, by diverse stakeholders, for different purposes. The process of developing a CPG should be transparent and minimize bias. The recommendations made for patient care should be clear and understandable, and each recommendation should reference the source of the information used to generate it.

The information cited by the guideline developers for each recommendation should be appraised for its quality and graded for its strength. Primary studies used to make the recommendations in a CPG may be valid, yet a recommendation will be graded lower in strength if the studies demonstrate a small, imprecise effect or the intervention carries substantial risks and costs. Using the information provided by the guideline’s authors regarding study quality and grade, the podiatric physician can judge how much confidence to place in the recommendations. CPGs are not flawless, nor are they a substitute for clinical judgment. They should not be considered mandatory statements, since not all recommendations may be linked to high quality evidence. In addition, not all podiatric physicians function in the same environment, or with the same patients, as those used to develop the CPG. A CPG may be valid but not relevant to a specific clinician or patient.

Podiatric physicians who use CPGs should understand how to critically analyze a guideline for validity, interpret the results, and determine its generalizability to their unique situation. Systematic errors in the development of a CPG can distort the recommendations away from the truth.

Much can go wrong: incorrect search strategies can miss important papers; inadequate appraisal and synthesis of the papers found can produce incorrect recommendations; and confusing wording, format, or structure can lead to misunderstanding. Finally, it is important for the podiatric physician to determine who paid for the study, whether there was any conflict of interest involving the authors, and what was done about it. The purpose of this paper is to provide instruction for podiatric physicians in evaluating CPGs. Two different CPGs dealing with a common podiatric complaint, heel pain, [1,2] will be compared and contrasted for validity and relevance.

Evaluating clinical practice guidelines

There are several published instruments in use to evaluate clinical practice guidelines. The Conference on Guideline Standardization (COGS) [3] developed and published an 18-item instrument to evaluate the validity of clinical practice guidelines. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group [4] was begun in 2000 to develop a common, sensible, and transparent approach to grading the quality of evidence and the strength of recommendations used in clinical practice guidelines. The AGREE instrument [5] was developed in 2001 by an international group of researchers and policymakers; it will be used in this article to critically analyze the clinical practice guidelines referenced earlier.

The AGREE website [5] provides the instrument used to evaluate a CPG together with a training manual. The AGREE instrument consists of 23 separate items grouped into six quality domains (Table 1) which measure both the internal and external validity of a CPG. It is a validated, generic instrument which can be used to evaluate new, existing, and revised CPGs. The AGREE instrument evaluates the process, not the content, of a CPG.

Table 1 AGREE quality domains.

AGREE Evaluation

Scope and purpose

This domain consists of three separate questions (Q) which evaluate the overall aim of the guideline, the specific clinical questions and target patient population. Answers to these questions allow the podiatric physician to determine if the CPG is relevant to his or her practice setting (generalizability).

Q1 The overall objective (s) of the guideline is (are) specifically described.

The American College of Foot and Ankle Surgeons (ACFAS) heel pain guideline [1] does not explicitly state an overall objective. The American Physical Therapy Association (APTA) guideline [2] is one of a series of guidelines produced by the APTA; general purposes for the series are explicitly stated.

Q2 The clinical question (s) covered by the guideline is (are) specifically described.

The development of foreground questions utilizing the PICO (Patient/Population/Problem, Intervention/Exposure, Comparison, Outcome) technique has been described elsewhere. [6] The ACFAS heel pain guideline [1] does not explicitly state the clinical question(s) to be covered. The APTA guideline [2] specifically describes two different tasks which it hopes to accomplish.

Q3 The patients to whom the guideline is meant to apply are specifically described.

Neither the ACFAS heel pain guideline [1] nor the APTA guideline [2] explicitly states the patients to whom the guideline is meant to apply.

Stakeholder Involvement

This domain is composed of a series of questions which focus on the extent to which the guideline represents the views of its intended users. The answers to these questions will help the podiatric physician in determining the CPG’s relevance to his or her clinical practice.

Q4 The guideline development group includes individuals from all relevant groups.

The guideline development group should be diverse to include various stakeholders; end users, policy makers and consumers. [7] It is of interest to the podiatric physician if the guideline development group includes a podiatrist.

The ACFAS heel pain guideline [1] was developed by podiatrists with membership in the ACFAS; no other groups were involved. The APTA guideline [2] was developed principally by physical therapists, some with advanced degrees, as well as an orthopedic surgeon specializing in foot and ankle care.

Q5 The patient’s views and preferences have been sought.

It is important in evidence-based medicine to include the values and concerns of patients. [8]

There is no evidence that the ACFAS heel pain guideline [1] or the APTA guideline [2] included patient’s views and preferences in developing the CPG.

Q6 The target users of the guideline are clearly defined.

From a podiatric physician’s viewpoint it is important to consider whether the target user of the guideline specifically lists podiatry.

The ACFAS heel pain guideline [1] does not define target users of the guideline. The authors of the APTA guideline [2] define the target users as orthopedic physical therapy clinicians, students, residents, academic instructors, clinical instructors, fellows and interns. Podiatric physicians are not mentioned in the APTA guideline as authors, reviewers or intended recipients.

Q7 The guideline has been piloted among targeted users.

The ACFAS heel pain guideline [1] has not been piloted among targeted users. The APTA guideline [2] authors provide a detailed and comprehensive explanation of the review process. The guideline was reviewed by multiple varied healthcare practitioners for feedback prior to being finalized.

Rigor of development

This domain relates to the process to gather and synthesize the evidence, the methods to formulate the recommendations and to update them. The answers to these questions will help the podiatric physician in determining the internal validity of the CPG.

Q8 Systematic methods were used to search for the evidence.

An earlier publication has covered this topic in some detail. [9] The ACFAS heel pain guideline [1] provided no information regarding the search strategy in the development of the CPG. The authors of the APTA guideline [2] discussed why a systematic search could not be utilized in the development of the CPG.

Q9 The criteria for selecting the evidence are clearly described.

The ACFAS heel pain guideline [1] does not provide any criteria for selecting the evidence used in the development of the CPG. The APTA guideline [2] in the methods section described the criteria which were used to select the evidence used in the CPG.

Stakeholder Involvement

This domain is composed of a series of questions which focus on the extent to which the guideline represents the views of its intended users. The answers to these questions will help the podiatric physician in determining the CPG’s relevance to his or her clinical practice.

Q4 The guideline development group includes individuals from all relevant groups.

The guideline development group should be diverse to include various stakeholders; end users, policy makers and consumers. [7] It is of interest to the podiatric physician if the guideline development group includes a podiatrist.

The ACFAS heel pain guideline [1] was developed by podiatrists with membership in the ACFAS. No other groups were involved. The APTA guideline [2] was developed principally by physical therapists some with advanced degrees as well as, an orthopedic surgeon specializing in foot and ankle care.

Q5 The patient’s views and preferences have been sought.

It is important in evidence-based medicine to include the values and concerns of patients. [8]

There is no evidence that the ACFAS heel pain guideline [1] or the APTA guideline [2] included patient’s views and preferences in developing the CPG.

Q6 The target users of the guideline are clearly defined.

From a podiatric physician’s viewpoint it is important to consider whether the target user of the guideline specifically lists podiatry.

The ACFAS heel pain guideline [1] does not define target users of the guideline.

The authors of the APTA guideline [2] define the target users as orthopedic physical therapy clinicians, students, residents, academic instructors, clinical instructors, fellows and interns. Podiatric physicians are not mentioned in the APTA guideline as authors, reviewers or intended recipients.

Q7 The guideline has been piloted among targeted users.

The ACFAS heel pain guideline [1] was not piloted among targeted users. The APTA guideline [2] authors provide a detailed and comprehensive explanation of the review process: the guideline was reviewed by a varied group of healthcare practitioners for feedback before being finalized.

Rigor of Development

This domain relates to the process used to gather and synthesize the evidence, and the methods used to formulate and update the recommendations. The answers to these questions will help the podiatric physician in determining the internal validity of the CPG.

Q8 Systematic methods were used to search for the evidence.

An earlier publication has covered this topic in some detail. [9] The ACFAS heel pain guideline [1] provided no information regarding the search strategy in the development of the CPG. The authors of the APTA guideline [2] discussed why a systematic search could not be utilized in the development of the CPG.

Q9 The criteria for selecting the evidence are clearly described.

The ACFAS heel pain guideline [1] does not provide any criteria for selecting the evidence used in the development of the CPG. The APTA guideline [2] in the methods section described the criteria which were used to select the evidence used in the CPG.

Q15 The recommendations are specific and unambiguous.

The ACFAS heel pain guideline [1] makes recommendations, but they are neither specific nor unambiguous. In contrast, the APTA guideline [2] makes specific and unambiguous recommendations.

Q16 The different options for management of the condition are clearly presented.

Both the ACFAS heel pain guideline [1] and the APTA guideline [2] clearly present the different options for management of the condition.

Q17 The key recommendations are easily identifiable.

The ACFAS heel pain guideline [1] does not make the key recommendations easily identifiable; however, the APTA guideline [2] does.

Q18 The guideline is supported with tools for application.

A full guideline document is usually large and cumbersome; unless a condensed, easily accessed version is produced for clinicians and patients, it is unlikely to be used effectively. [13]

The ACFAS heel pain guideline [1] does not provide any tools for application. The APTA guideline [2] does provide a single page listing the recommendations with the grade and strength of the evidence at the beginning of the publication. This allows the recommendations to be easily used by interested parties.

Applicability

The questions in this domain pertain to the likely organizational, behavioral and cost implications of applying the CPG.

Q19 The potential organizational barriers in applying the recommendations have been discussed.

Organizational barriers may limit the usefulness and application of the CPG. A recent article provides an in-depth discussion regarding this topic. [14]

Neither the ACFAS heel pain guideline [1] nor the APTA guideline [2] discussed the potential organizational barriers in applying the CPG.

Q20 The possible cost implications of applying the recommendations have been considered.

Given the rapidly changing dynamics of health care policy in the United States, it would be shortsighted not to consider the cost implications of recommendations when data are available. [15]

Neither the ACFAS heel pain guideline [1] nor the APTA guideline [2] discussed the cost implications of applying the recommendations.

Q21 The guideline presents key review criteria for monitoring and/or audit purposes.

The ACFAS heel pain guideline [1] does not provide any information concerning criteria for monitoring and/or audit purposes. The APTA guideline [2] recommends the use of validated self-reported instruments to monitor response to treatment and gives several examples.

Editorial Independence

Q22 The guideline is editorially independent from the funding body.

The ACFAS heel pain guideline [1] does not state who funded its development, although the document makes explicit that it was authored by a committee of the ACFAS. It is not clear whether development of the guideline was editorially independent of the ACFAS. The APTA guideline [2] was authored by members of the orthopedic section of the APTA. It is likewise unclear who funded the guideline and whether the authors were editorially independent of the APTA.

Q23 Conflicts of interest of guideline members have been recorded.

It has been clearly shown that industry-sponsored studies are likely to report pro-industry results. [16] It is important for guideline developers to tell users how any conflicts of interest that were found were managed. [17] The most common source of bias in CPGs is thought to be financial. [18] In one survey of physician authors of CPGs, 87% had some form of interaction with the pharmaceutical industry. [19]

Neither the ACFAS heel pain guideline [1] nor the APTA guideline [2] discussed a conflict of interest process.

Response scale

Each of the 23 items of the AGREE instrument is evaluated individually on a four-point scale. [5] The scale measures the extent to which the item has been fulfilled: the higher the score, the more fully the AGREE criteria have been met by the guideline’s authors. Comparing the two guidelines (Table 2), the guideline produced by the ACFAS scored lower on the AGREE instrument than the APTA guideline.

Table 2 Results of comparison ACFAS / APTA guidelines using the AGREE instrument.

In a review of CPGs published by specialty societies, the authors found that 88% did not report information regarding the search strategy and 82% did not report recommendations explicitly linked to the quality and grade of the evidence used. [20] This is consistent with the results for the CPG produced by the ACFAS. Neither guideline scored well in the domains of applicability and editorial independence, which is consistent with other reviews of CPGs [21,22] that found these two domains rated lowest using the AGREE instrument.

Conclusion

Older clinical practice guidelines are characterized by narrative reviews and expert opinion without explicit evaluation of the best available evidence. [23] Based upon the results of the AGREE instrument, the ACFAS clinical practice guideline follows this older, expert-based format. The APTA clinical practice guideline follows a more contemporary approach to guideline development, characterized by its adherence to evidence-based principles: it contains clear, explicit, actionable recommendations linked to evidence that has been evaluated for grade and strength. The APTA guideline does not, however, provide comprehensive recommendations for medical treatment and makes no recommendations for surgical treatment of heel pain, limiting its relevance to practicing podiatric physicians.

References

1. Thomas J, Christensen J, Kravitz S, Mendicino R, Schuberth J, Vanore J, Scott Weil L, Zlotoff H, Couture S: Clinical practice guideline heel pain panel: The diagnosis and treatment of heel pain. J Foot Ankle Surg 40: 329 – 340, 2001.
2. McPoil T, Martin R, Cornwall M, Wukich D, Irrgang J, Godges J: Heel pain – Plantar fasciitis: Clinical practice guidelines linked to the international classification of function, disability, and health from the orthopaedic section of the American Physical Therapy Association. J Orthop Sports Phys Ther 38: 629 – 648, 2008.
3. COGS. http://gem.med.yale.edu/cogs/ Accessed 7/15/2009.
4. GRADE. http://www.gradeworkinggroup.org/ Accessed 7/15/2009.
5. AGREE. http://www.agreecollaboration.org/ Accessed 7/15/2009.
6. Turlik M: Introduction to evidence-based medicine. Foot and Ankle Online Journal 2: 2009.
7. Fretheim A, Schünemann H, Oxman A: Improving the use of research evidence in guideline development: Group composition and consultation process. Health Research Policy and Systems 4:15, 2006.
8. Schünemann H, Fretheim A, Oxman A: Improving the use of research evidence in guideline development: Integrating values and consumer involvement. Health research policy and systems 4: 22, 2006.
9. Turlik M: Evaluation of a review article. Foot and Ankle Online Journal 2: 2009.
10. Making group decisions and reaching consensus. http://www.nice.org.uk/niceMedia/pdf/GDM_Chapter9.pdf Accessed 7/20/2009.
11. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ, GRADE Working Group: GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 336: 924 – 926, 2008.
12. Shekelle PG, Ortiz E, Rhodes S, Morton SC, Eccles MP, Grimshaw JM, Woolf SH: Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: how quickly do guidelines become outdated? JAMA 286: 1461 – 1467, 2001.
13. Trevena L, Davey H, Barratt A, Butow P, Caldwell P: A systematic review on communicating with patients about evidence. J Evaluation Clinical Practice. 12: 13 – 23, 2006.
14. Shiffman R, Dixon J, Brandt C, Essaihi A, Hsiao A, Michel G, O’Connell R: The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation. BMC Medical Informatics and Decision Making 5: 23, 2005.
15. Edejer T: Improving the use of research evidence in guideline development: Incorporating considerations of cost-effectiveness, affordability and resource implications. Health Research Policy and Systems 4: 23, 2006.
16. Turlik M: Special considerations when reviewing industry sponsored studies. Foot and Ankle Online Journal 2: 2009.
17. Boyd E, Bero L: Improving the use of research evidence in guideline development: Managing conflicts of interests. Health Research Policy and Systems 4: 16, 2006.
18. Detsky AS: Sources of bias for authors of clinical practice guidelines. CMAJ 175 (9): 1033, 2006.
19. Choudry N, Stelfox H, Detsky A: Relationships between authors of clinical practice guidelines and the pharmaceutical industry. JAMA 287: 612 – 617, 2002.
20. Grilli R, Magrini N, Penna A, Mura G, Liberati A: Practice guidelines developed by specialty societies: the need for a critical appraisal. Lancet 355: 103 – 106, 2000.
21. Cates J, Young D, Bowerman D, Porter R: An independent AGREE evaluation of the occupational medicine practice guidelines. The Spine J 6: 72 – 77, 2006.
22. Hurdowar A, Graham I, Bayley M, Harrison M, Wood-Dauphinee S, Bhogal S: Quality of stroke rehabilitation clinical practice guidelines. J Evaluation Clinical Practice 13: 657 – 664, 2007.
23. Poolman R, Verheyen C, Kerkhoffs G, Bhandari M, Schünemann H: From evidence to action: Understanding clinical practice guidelines. Acta Orthopaedica 80: 113 – 118, 2009.


Address correspondence to: Michael Turlik, DPM
Email: mat@evidencebasedpodiatricmedicine.com

1 Private practice, Macedonia, Ohio.

© The Foot and Ankle Online Journal, 2009