Primary Care Sports Medicine

Douglas B. McKeag, M.D., M.S.
Professor of Family Practice
Coordinator of Sports Medicine
College of Human Medicine
MSU Team Physician
Michigan State University

David O. Hough, M.D.
Professor of Family Practice
Director of Sports Medicine
College of Human Medicine
MSU Team Physician
Michigan State University

Eric D. Zemper, Ph.D., President
Exercise Research Associates of Oregon
Eugene, Oregon

Brown &

Unit 2: Epidemiology of Athletic Injuries

This unit will introduce some fundamental concepts of epidemiology, the basic science of preventive medicine, and its application to sports medicine, specifically the epidemiology of athletic injuries. The word "epidemiology" is built from three Greek roots: epi (meaning "upon"), demos ("people"), and logos ("study"). Epidemiology is therefore the study of what is upon, or befalls, a people or population. A more formal definition is provided by Duncan (12):

"Epidemiology is the study of the distribution and determinants of the varying rates of diseases, injuries, or other health states in human populations."

The basic method of studying and determining these distributions and determinants is comparing groups within a population (the sick and the well; the injured and the non-injured). Doing an epidemiological study is much like detective work, using logic to discover cause-and-effect relationships for illnesses or other medical conditions in a population. In many ways it is similar to diagnosing an illness, but it is done with a large population rather than with an individual patient.

The initial development of the theory and methods of epidemiology focused on applications to communicable diseases. However, in recent years epidemiologic theory and methodologies have been applied to a broader range of subject areas, including athletic injuries.

Duncan (12) lists seven major uses for epidemiological data:


· Identifying the causes of disease.

· Completing the clinical picture of a disease.

· Allowing identification of syndromes.

· Determining the effectiveness of therapeutic and preventive measures.

· Providing the means to monitor the health of a community or region; i.e., input for rational health planning.

· Quantifying risks (health hazard appraisals).

· Providing an overview of long-term disease trends.


For our purposes in athletic medicine, epidemiological data can be used to:


· Identify causes of injuries.

· Provide a more accurate picture of clinical reality. Clusters of injuries (and the resulting media attention they often generate) give a distorted view of reality; on the other hand, data may reveal a previously unsuspected injury problem.

· Determine the effectiveness of preventive measures (on a local or national scale), whether they are rule changes, new or modified equipment, or modifications of training techniques.

· Monitor the health of athletes, which will assist in rational medical planning.

· Quantify the risks of various types, frequencies, and intensities of exercise activities.

· Provide an overview of long-term injury trends in specific sports.




The basic tool of epidemiology is the calculation of rates of occurrence of medical cases of interest in a given population. The two most commonly used rates are incidence and prevalence. The prevalence rate includes all cases of the medical condition of interest that exist at the beginning of the study period and all new cases that develop during the study period. Incidence rates include only the newly developed cases. In sports medicine, the incidence rate is predominantly used to study athletic injuries, since it is assumed that all athletes are uninjured at the beginning of the season and it is the incidence of new injuries during the season that is of interest. Therefore, we will deal only with incidence rates here.

The incidence rate is a measure of the rate at which new events (illnesses, injuries, etc.) occur during a specified time in a defined population:

Incidence Rate = (# new events during specified time period x k) ÷ # in the population at risk

The numerator is simply a count of the number of new cases that occur during the study period. The denominator is the total number of people in the population under study who are "at risk" or exposed to the possibility of infection, injury, etc. To provide reasonable numbers that are neither extremely large nor extremely small, and to make comparisons easier, this ratio is transformed to a common metric by multiplying by a convenient multiple of 10 (represented by the constant k in the above equation). If k=1,000 the result would be a rate per 1,000 in the population; if k= 100,000 the result would be a rate per 100,000. For example, suppose 24 cases of measles were reported on a college campus of 34,000 students. A moment's thought will show that stating a rate of 24/34,000 is not the most informative way of presenting this information. The probability of an individual having the disease is not readily apparent, and it is not easy to compare the rate with the five cases that occurred in the population of 630 student-athletes on that campus. The base ratio of 24/34,000 is 0.000706, which is the probability that any one individual has measles. But obviously this is not an easy number to work with. Using k=100,000 we transform this rate to 70.6 cases per 100,000, which is a little more manageable. If we make the same calculation for student-athletes, we get a case rate of 793.7 cases per 100,000. Now it is easier to see that student-athletes had a much higher rate of measles, so immediate preventive measures might be in order for this special population.
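The transformation above is simple arithmetic. As an illustration, the measles example can be reproduced in a few lines of Python (the `incidence_rate` helper is ours, written for this sketch; it is not from the original text):

```python
def incidence_rate(new_cases, population_at_risk, k=100_000):
    """Incidence rate = (new cases / population at risk) * k."""
    return new_cases / population_at_risk * k

# Campus-wide: 24 reported cases among 34,000 students
print(round(incidence_rate(24, 34_000), 1))   # 70.6 per 100,000

# Student-athletes: 5 cases among 630 athletes
print(round(incidence_rate(5, 630), 1))       # 793.7 per 100,000
```

The common constant k makes the two subpopulations directly comparable, which is exactly why the student-athlete rate stands out.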

Determining the numerator of the case rate equation is usually relatively easy. The most critical part of the calculation is determining the denominator, or the "population at risk." This should include everyone in the population who could be affected by the disease or condition of interest, and should exclude those who could not be affected or are not really a member of the population of interest. For instance, in calculating a case rate for pregnancy, males, females past menopause, and females who have not reached menarche should not be used in the denominator. In calculating a case rate for football injuries during games, only those who actually played and were exposed to the possibility of injury, not the whole team, should be included in the denominator.

In sports medicine, case rates generally are used to present epidemiological information about athletic injuries. These rates are presented most often as injuries per 100 athletes, which is analogous to the rate per 100,000 population used for reporting disease rates. However, there is a difference between the continuous exposure of a population to a disease and the discrete exposure of an athlete to injury, which occurs only during practices or games. The number of practices and games varies considerably from one sport to another, and often varies from one team to another, or even from one year to another in a given sport. In addition, not every player participates in every practice and every game, and the number of participants on a team may change considerably as the season progresses. Thus, the common practice of reporting athletic injuries as a rate per 100 participants can lead to questionable conclusions, particularly when results from different sports, or even from different studies of the same sport, are compared. A more precise method is to report case rates per 1,000 athlete-exposures. An athlete-exposure is defined as one athlete participating in one practice or game where there is the possibility of sustaining an athletic injury. If a football team of 100 players has five practices during the week, there are 500 athlete-exposures to the possibility of being injured in practice during that week. If 40 players get into the game on Saturday, the team has 40 athlete-exposures in the game, and the weekly total is 540 athlete-exposures to the possibility of being injured.
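Counting athlete-exposures is straightforward bookkeeping; a minimal sketch of the football example above (the helper name is ours, for illustration only):

```python
def athlete_exposures(participants_per_session):
    """One athlete-exposure = one athlete in one practice or game."""
    return sum(participants_per_session)

# 100 players at each of five practices, plus 40 players in Saturday's game
sessions = [100, 100, 100, 100, 100, 40]
print(athlete_exposures(sessions))  # 540 athlete-exposures for the week
```

Note that the count depends on who actually participated in each session, not on the size of the roster.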

Using athlete-exposures as the denominator allows more accurate and precise comparisons of injury rates between sports and across years. The case rate per 1,000 athlete-exposures was used by the National Athletic Injury/Illness Reporting System (NAIRS) (1) and has been adopted by, and is currently used in, the NCAA Injury Surveillance System (31) and the Athletic Injury Monitoring System (32). An even more precise approach would base the exposure rate on the amount of time actually spent in practices or games. This might be possible in small local studies but, in most cases, the amount of record keeping required for a national-scale surveillance system would be prohibitive and impractical for those doing the on-site data recording. The case rate per 1,000 athlete-exposures is a reasonable compromise that gives a more accurate picture of the epidemiology of athletic injuries than a simple rate per 100 athletes.



It has become evident over the past twenty years that there is a need for data on injury rates for various sports and athletic activities. The research literature on the epidemiology of athletic injuries has been growing slowly but steadily as individuals and groups have gathered data on the risks of participating in sports. Most are short-term observations of single sports involving relatively small numbers of individuals or teams in a school or college setting. In a presentation to the American Orthopaedic Society for Sports Medicine in July 1975, Professor Kenneth S. Clarke noted the lack of meaningful and dependable data on athletic injuries (5). Ten years later, in 1985, at a workshop on Epidemiologic and Public Health Aspects of Physical Activity and Exercise, Jeffrey P. Koplan, M.D., made the same observation about the lack of data on athletic injuries and on regular physical activity in general (14). Participation in organized sports and fitness activities has increased, and the medical community encourages such participation as a public health intervention; yet few people realize that little or no dependable risk data is available. Effort is focused on defining the benefits of participation, but little is done to assess risk. This information is needed to make informed decisions about the value of taking part in a particular activity, and to provide information on how injury rates can be reduced.

Among the groups and organizations that need data on athletic injury rates are national sports governing bodies such as the National Federation of State High School Associations (NFSHSA), the National Collegiate Athletic Association (NCAA), the National Association for Intercollegiate Athletics (NAIA) and the National Junior College Athletic Association (NJCAA). They are responsible for segments of the school-college athletic community where a great deal of sports activity takes place. Through their rules committees, they are directly or indirectly responsible for the safety and well-being of athletes under their jurisdiction. Accurate and reliable injury rate data would indicate the types of injuries, or the situations in which injuries occur, that could be positively affected by appropriate rule changes, equipment changes, or suggested changes in coaching technique. These groups now make safety-related changes based primarily on impressions or anecdotal data from coaches, trainers, and others. Obviously, it would be more desirable to have safety-related rule changes based on documented data. It also would be useful to have data showing why a suggested rule change was not adopted; for instance, whether the numbers of a particular type of injury are not as high as was believed, or the injury does not occur during the situations affected by a suggested rule change. (Often, public perception of the extent or seriousness of a problem is greatly inflated by selective attention by the media to a few cases. Responding to resultant public pressures is difficult without data on the actual extent of the problem.) The actions of rule-making committees of major national organizations like the NCAA and the NFSHSA also have an impact on sports at other levels.

Accurate injury data could be used by manufacturers of sporting goods to target areas where new protective equipment could reduce injury rates, where design changes might be needed in existing equipment, or where a particular brand or model is not meeting expectations. In a similar vein, standard-setting groups such as the National Operating Committee on Standards for Athletic Equipment or the American Society for Testing and Materials could use injury rate data to pinpoint areas where implementation of new standards could help in reducing injury rates, or where modification of an existing standard is needed to further cut down equipment-related injuries. All of these organizations would find that data on rates and situations related to specific types of injuries is valuable in preparing legal arguments in athletic injury-related court cases. These lawsuits have become more common, but little reliable national data is available to either party or the court.

There also is a need for national athletic injury data within the sports medicine community for theoretical purposes, such as providing a data base for epidemiological research. It also is needed for practical purposes like providing physicians, athletic trainers, and administrators with information on the types of injuries that are most likely to occur in given settings and situations. Educators need this data to train coaches, physical educators, physicians, and other medical personnel. Primary care physicians need data on the risks of various types, frequencies, and intensities of exercise when advising patients about the adoption of a more active lifestyle (see Chapter 20).

While all these groups and organizations are interested in reducing the number and severity of sports-related injuries, only rarely has it been possible for them to directly monitor the impact of their efforts to reduce injury rates. The only practical way to do this is through a continuing national data collection system that provides data over a period of years. Data for only one or two years is not sufficient because it provides no basis for making reasonable comparisons (before and after implementation of a change). Sometimes it may take more than one year for a rule or equipment change to produce a noticeable impact on injury rates because there may be local or regional differences in using a piece of equipment or in adopting, interpreting or enforcing a rule change.

Another reason for a national injury data collection system to operate continuously over time is that there are yearly fluctuations in injury rates. Using only one or two years of data can lead to invalid conclusions and faulty decisions. An illustration of this type of problem is found in fatality data among high school pole vaulters collected by Carl Blyth, Ph.D., and Fred Mueller, Ph.D., at the University of North Carolina (17). During the first year they began collecting fatality and catastrophic injury data (a catastrophic injury being defined as a cervical spine injury resulting in permanent paralysis) for youth, high school, college, and professional sports, they recorded four fatalities among high school pole vaulters. Few high school athletes compete in this track and field event, so this data caused considerable concern about safety at the high school level. However, there was no way to know whether these four deaths represented an average year or were an unusually high or low number because there was no previous data for comparison. (Note also that if Blyth and Mueller had not started collecting national fatality data, nobody would have been aware of the problem.) If these four deaths represented a statistical aberration resulting in a much higher number of fatalities than normal, then the need for immediate major action was not so great. The next year no deaths among high school pole vaulters were recorded, much to everyone's relief. But even this second year of data collection did not settle the question of the normal fatality rate. Which year's data, four deaths or no deaths, was more representative? That question cannot be answered until data is collected for several more years.
Meanwhile, the realization that there is a potentially major problem with pole vaulting resulted in a closer look at the design and performance characteristics of landing pits and renewed emphasis on proper coaching techniques, particularly how to "bail out" of a bad vault. The mere fact that national fatality and catastrophic injury data collection was begun has had a positive impact on the safety of that one sport.

While fatalities are rare, and yearly fluctuations in the numbers are therefore relatively more noticeable, the same principle applies to common non-fatal injuries. With a representative national sample, injury rates for some types of injuries will tend to be fairly stable from year to year, but there will be enough statistical fluctuation in the rates for many types of injuries to require data collection over several years to establish stable patterns. Decisions on measures to reduce injury rates should be based only on stable long-term data. Besides the expected yearly fluctuations in specific injury rates, there also are potential differences in injury rates at different levels of a given sport (youth sports, high school, college, professional, elite, masters and recreational levels). Therefore, it is desirable to collect data at each of these levels. Unfortunately, little or no data is available at this time for any level other than college and high school.



A major weakness in much of the published literature on athletic injury rates is that the denominator data for the incidence rate equation is poorly defined or has not been determined. This reduces these articles to simple case series reports that have little or no epidemiological value (29). Unless the calculation of rates is based on the population at risk, it is impossible to generalize the results beyond the specific population used in the study. This highlights a major problem in current research literature on athletic injury rates: most authors have little or no training in epidemiology, so these articles often are not of any great use on a broader scale in that the information cannot be generalized to other places and situations. For example, Powell et al. (20) did a thorough review of the literature on running injuries through 1985 and found only two published articles and one meeting presentation that met minimal criteria for factors such as definition of injury, selection of subjects, and use of proper denominator data ("population at risk") in calculating injury rates.

Twenty years ago the research literature on the epidemiology of athletic injuries was very sparse; the only continuing study was the yearly football fatality study begun in 1931 and currently conducted by Blyth and Mueller at the University of North Carolina and Dick Schindler of the NFSHSA (18). Since the mid-1960s there has been a slow growth in sports injury rate research as the need for this type of data has become more apparent. Even so, most studies cover only one year (or season), occasionally two (21, 22), and most cover only one sport (16). Nearly all studies have limitations imposed by sample size, covering one school or one city or one geographic area (21, 22, 24). Some studies (6, 11, 17, 25) are limited to injuries of one anatomical site, such as the knee, or one type of injury, such as fatalities or ankle sprains. Getting a clear national perspective by combining results from different studies is greatly hindered by differences in methodologies, such as different definitions of a reportable injury or means of collecting and reporting data. Combining study results would be ill-advised anyway because of the lack of representativeness of the combined data sources.

Still another problem with many studies is the source used to obtain injury data. Some rely on insurance claim forms (9, 10, 13), which has the disadvantage of not representing the true injury rate since not all athletic injuries result in insurance claims. Also, these records seldom contain much detail on the circumstances and mechanisms of injury. Some studies rely on a coach's assessment or recognition of an injury even though we know that, unless coaches have received specific training, they do a poor job of recognizing most treatable injuries (23). Studies that depend on recall of injuries at the end of a season have the obvious problems of inaccuracy and incompleteness of recall.

One ongoing attempt to collect national injury data is the National Electronic Injury Surveillance System conducted by the Consumer Product Safety Commission. This system collects data on product-related injuries from approximately 60 hospital emergency rooms around the country. Athletic injury records are one part of this project (26). However, athletic injury rates based on this data are questionable because not all athletic injuries are treated in an emergency room. Also, those that are treated in the ER would not be recorded if they were not product-related. Injuries from activities like running or swimming probably would go unrecorded because they do not involve a product. There is also a question of defining the population at risk, because we would not know exactly how large a population each emergency room covers. At best, this data tells us the relative proportions of the more serious types of injuries in certain activities.



There are three exceptions to this general picture of a lack of adequate national athletic injury data: the National Athletic Injury/Illness Reporting System (NAIRS) designed by Kenneth S. Clarke, Ph.D., while at Pennsylvania State University; the NCAA Injury Surveillance System (ISS), designed and implemented by Eric D. Zemper, Ph.D., while a member of the NCAA staff; and most recently the Athletic Injury Monitoring System (AIMS), also designed and implemented by Zemper while at the University of Oregon.

NAIRS was intended to be a continuing data collection effort that would provide a rich source of data for epidemiological research on athletic injuries. It incorporated many important features such as longitudinal data collection from a much larger sample than previously had been attempted, standardized definitions and procedures, and the use of case rates per 1,000 athlete-exposures. However, there were concerns about the number and complexity of the data collection forms and the lack of a truly representative national sample. NAIRS stopped collecting high school and college data in 1983 because of chronic funding problems, but it produced by far the best and most comprehensive sports injury data available up to that time and resulted in a number of valuable pieces of research literature (e.g., 1, 2, 3, 7, 8).

In 1982, the NCAA began its own sports injury data collection system (ISS), similar in many ways to NAIRS but using only two basic data collection forms and with a representative national sample of NCAA member schools (19, 31). However, ISS covers only selected NCAA sponsored sports at member schools and there has been no broad dissemination of results. AIMS was begun in 1986 with the intent of covering a wider variety of sports at all levels of participation (32-43).

There are a few specialized data collection systems that focus on fatalities or on specific types of injuries like paralytic or major head and neck injuries (17), but the NCAA's Injury Surveillance System and the Athletic Injury Monitoring System are the only two currently operating national-scale data collection systems dealing with general sports injuries. With the exception of the brief presence of NAIRS and the recent start-ups of ISS and AIMS, both of which are just beginning to publish results in the sports medicine literature, the overall lack of data has changed little since Clarke made his observation more than fifteen years ago (5).



Since NAIRS, ISS and AIMS are basically similar in format and use the same definition of a reportable injury (one occurring in a practice or contest that prevents an athlete from participating for one day or more), with data provided by on-site athletic trainers, and injury rates reported as cases per 1,000 athlete-exposures, it is possible to summarize and compare data from these three collection systems. The one sport for which data is available from all three systems is college football. Table 1 summarizes the overall football injury rates over a total of 13 seasons. The cumulative injury rates for ISS and AIMS are essentially the same, whereas the rate for the earlier NAIRS data is higher. There are several possible explanations for this difference. As noted earlier, the NAIRS sample was not as representative as the ISS and AIMS samples. Except for two seasons (1988 and 1989), it appears there has been a general downward trend in college football injury rates over the years. This may be due to the major rule changes in the mid to late 1970s that were aimed at reducing the risk of major head and neck injuries (a direct result of data from the annual Blyth and Mueller football fatality studies showing an increase in major injuries during the 1960s). Along with the rule changes came shifts in coaching philosophy and technique, which have had a positive impact on injury risk, as have continuing improvements in protective equipment. Any or all of these factors may have contributed to this difference between the NAIRS data from the 1970s and the ISS/AIMS data from the 1980s.

Table 1. Injury Rates in College Football From Three National Data Collection Systems

(Overall injury rate per 1,000 athlete-exposures, by system and season; the rate values did not survive in this copy)

Sources: Buckley (1982); NCAA (1990); Zemper (1989a,d); Zemper (unpublished data).


Based on these data showing 6.6 injuries per 1,000 athlete-exposures for college football, the average college team of 100 players can expect about two time-loss injuries every three times they take the field for a practice or game. (As we will show later, there can be major differences in injury rates between practices and games, particularly for football.) As would be expected, the body parts injured most often in football are the knees, ankles, and shoulders, in that order. The most common types of injuries are ligament sprains, muscle strains, and contusions.
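The "two injuries every three sessions" figure follows directly from the overall rate; as a quick illustrative check (variable names are ours):

```python
rate_per_1000_ae = 6.6   # time-loss injuries per 1,000 athlete-exposures
squad_size = 100         # players exposed in a typical team session

injuries_per_session = rate_per_1000_ae / 1000 * squad_size
print(round(injuries_per_session, 2))  # 0.66 injuries per team session,
                                       # i.e. about 2 every 3 sessions
```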

NAIRS and ISS have data from other college sports, and AIMS has data from other levels of participation besides college. Table 2 summarizes the male and female injury rates for sports covered by these systems. The data in this table cover from one to eight seasons, at least two or three seasons in most cases. Since these are reported in the common metric of rate per 1,000 athlete-exposures, direct comparisons are possible between sports, males and females, and different levels of a sport. The exceptions in this table are injury rates for taekwondo (a Korean full-contact martial art form), which are competition data only, unlike the other sports which show injury rates for practice and competition combined.

Table 2. Injury Rates for Various Sports From Three National Data Collection Systems

(Injury rate per 1,000 athlete-exposures; the rate values did not survive in this copy. Sports covered:)

Collegiate: Basketball (Men's, Women's); Cross Country (Men's, Women's); Field Hockey; Football; Gymnastics (Men's, Women's); Ice Hockey; Lacrosse (Men's, Women's); Soccer (Men's, Women's); Swimming-Diving (Men's, Women's); Tennis (Men's, Women's); Track & Field (Men's, Women's); Ultimate Frisbee (Men's, Women's); Volleyball (Men's, Women's); Wrestling

Elite: Gymnastics - Women's; Taekwondo - Men's (competition only); Taekwondo - Women's (competition only)

Youth (6-17 years old): Soccer - Boys'; Soccer - Girls'; Taekwondo - Boys' (competition only); Taekwondo - Girls' (competition only)

Recreational (45-70 years old): Running - Men's; Running - Women's; Walking - Men's; Weightlifting - Men's

Sources: Buckley (1982); Caine et al. (1989); NAIRS (unpublished data); NCAA (1990); Watkins (1990); Zemper (1991); Zemper (unpublished data).

From Table 2, we can see that participants in men's wrestling, soccer, football and lacrosse, and women's gymnastics and soccer have the highest overall injury rates. The injury rates for corresponding men's and women's sports generally are similar, the exceptions being the higher rates in cross country and gymnastics for women. Younger female gymnasts in a full-time elite training program are less likely to be injured than older collegiate gymnasts. The injury rates for youth soccer players were considerably lower than at the collegiate level. Injury rates for middle-aged and older recreational athletes were noticeably higher (although the older athlete presumably does not have as much pressure to participate, and may be more willing to take a few days off when injured). Data for the non-collegiate levels must be considered preliminary because these databases are relatively small in comparison with the amount of collegiate data available, but they do indicate the possibility of some interesting trends.

Data across all the sports show the most frequently injured body part is the ankle, followed by the knee and then the shoulder. All are major joints that undergo considerable stress in most sports. Sprains, strains, and contusions are the most frequent types of injuries. Overall, ankle sprains are the most frequently occurring injuries in most sports.

An interesting point that becomes apparent when data are reported as rates per 1,000 athlete-exposures, but is not evident when rates are reported per 100 participants, is the difference in injury risk between practices and competitions. Table 3 breaks down the injury rates for 17 collegiate sports into practice and competition rates, along with their relative rankings within each column. The competition injury rate for Senior (18-30 years old) taekwondo athletes is included for comparison. Also included in the right-hand column of the table is an indication of the relative risk of injury in practice and in competition; in each case injury risk is higher in competition.

Table 3. Injury Rates in Practices vs Competition in Seventeen College Sports

                          Injury rate/1,000 athlete-exposures (column rank)
Sport                     Practice        Competition     Relative Risk*
(sport label lost)        2.0 (16)        5.7 (14)        2.9
Basketball (M)            4.1 ( 8)        8.9 ( 9)        2.2
Basketball (W)            4.2 ( 7)        8.1 (11)        1.9
Field Hockey              3.8 (11)        8.4 (10)        2.2
Football                  4.1 ( 8)       35.6 ( 1)        8.7
Gymnastics (M)            4.4 ( 6)       16.5 ( 6)        3.8
Gymnastics (W)            7.2 ( 1)       21.5 ( 3)        3.0
Ice Hockey                2.5 (15)       16.2 ( 8)        6.5
Lacrosse (M)              4.1 ( 8)       16.4 ( 7)        4.0
Lacrosse (W)              3.4 (13)        6.3 (13)        1.9
Soccer (M)                4.5 ( 5)       19.2 ( 4)        4.3
Soccer (W)                5.1 ( 3)       17.0 ( 5)        3.3
(sport label lost)        3.4 (13)        5.1 (17)        1.5
Ultimate Frisbee (M)      3.5 (12)        7.0 (12)        2.0
Ultimate Frisbee (W)      2.0 (16)        5.6 (15)        2.8
Volleyball (W)            4.6 ( 4)        5.3 (16)        1.2
Wrestling                 4.6 ( 2)       30.8 ( 2)        6.7
Taekwondo (M)             (competition only; rate not recoverable)
Taekwondo (W)             (competition only; rate not recoverable)
* Relative Risk = higher rate divided by lower rate

Example: Men's lacrosse - 16.4 injuries/1,000 athlete-exposures in games divided by 4.1 injuries/1,000 athlete-exposures in practices equals a relative risk of 4.0; i.e., a men's lacrosse player participating in a game is four times as likely to be injured as he would be if he were participating in a practice session.

Sources: NCAA (1990); Watkins (1990); Zemper (unpublished data).

It often is reported that most injuries occur in practices, giving the impression that practices are at least as risky as competitions. Most injuries in a given sport usually do occur during practices, but the actual risk of an individual athlete being injured is much higher in competition. As an example, in college football nearly 60% of the recorded injuries occur in practice (32). However, while the total number of injuries in college football over a season may be higher in practices, the rate of injuries is considerably higher in games, in this case 8.7 times higher (Table 3). In other words, a college football player is nearly nine times as likely to be injured in a game as he is in a practice session. Bear in mind that there are at least five to six times as many practices as games in a football season, and not every player who participates in practice will participate in a game. The most obvious explanation for the difference in risk between practices and games is the consistently higher intensity of play during games.
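The relative-risk calculation in the Table 3 footnote is just the ratio of the two rates. A sketch using the men's lacrosse figures from the footnote example and the football comparison quoted above (the helper name is ours):

```python
def relative_risk(rate_a, rate_b):
    """Higher rate divided by lower rate, per the Table 3 footnote."""
    return max(rate_a, rate_b) / min(rate_a, rate_b)

print(round(relative_risk(16.4, 4.1), 1))  # men's lacrosse: 4.0
print(round(relative_risk(35.6, 4.1), 1))  # college football: 8.7
```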

Football represents the upper extreme in the difference between practice and competition injury rates. At the other end of the spectrum is women's volleyball, where the risk of injury in games is only slightly higher than in practices (Table 3). This is reasonable considering that, at the collegiate level, volleyball practices often are as intense as the games. The data presented in Table 3 show that most sports at the collegiate level have a competition injury rate about two to four times their practice rate.



National data collection systems such as ISS and AIMS gather information not just on the number and type of injuries, but also on the circumstances. These data therefore can be used to develop specific recommendations for rule changes, equipment modifications, or changes in training techniques, all aimed at reducing the number and severity of injuries. For example, AIMS has been collecting injury data at national taekwondo competitions for the U.S. Olympic Committee and the U.S. Taekwondo Union, the national governing body for this sport (39-43). One immediate result has been to draw attention and concern to the high rate of cerebral concussions recorded during taekwondo competitions (41-43). A comparison of two years of taekwondo data with AIMS data for college football showed that the rate for taekwondo competition (5.45 cerebral concussions per 1,000 athlete-exposures) is 3.2 times as high as the rate seen in college football games (1.69 cerebral concussions per 1,000 athlete-exposures). Based on time of exposure, taekwondo (1.2 per 1,000 minutes of exposure) has a cerebral concussion rate 9.2 times that of college football games (0.13 per 1,000 minutes of exposure). These rates are essentially the same for Junior (6-17 years old) and Senior (18 and older) taekwondo competitors. Football has one of the highest cerebral concussion rates of any American sport, so these results should immediately raise a red flag indicating a need for measures to reduce the risk of head injuries in taekwondo.

Now that these data have uncovered a previously unsuspected problem with head injuries in taekwondo, the AIMS staff is working with the national governing body to develop recommendations. The primary suggestions include:

· working with the manufacturers of the helmet used in taekwondo to develop a more protective product;

· changing the rules to require mouthguards, rather than just recommending their use as is currently the case (the data showed that the more severe the cerebral concussion, the less likely the competitor was wearing a mouthguard);

· establishing and enforcing standards for competition mats (several third-degree concussions were observed when competitors fell back and hit their heads on obviously inadequately padded floor mats); and

· adopting rules similar to those of amateur boxing, which require a minimum time period before an athlete is allowed to return to participation after a loss of consciousness from a blow during a bout; that period varies according to age, severity of concussion, and previous history of concussion.

Epidemiological studies of sports injuries also may be used to evaluate new protective equipment or monitor the performance of existing equipment, provided the study is properly designed to collect the necessary data. An example of this use is an ongoing study of preventive knee braces in college football being conducted by AIMS (34, 36) as part of its general data collection on football injuries. Braces designed to prevent medial collateral ligament (MCL) injuries from lateral blows to the knee came into widespread use in the 1980s before any studies had been performed to see whether they actually worked; all that existed were anecdotes and a few one- or two-season, one-team studies. Many variables could affect the results of such a study: brand or type of brace, position played, proper placement of the brace, whether it was actually being worn at the time of injury, previous history of knee injury, intensity of practices, condition of the playing surface, and weather, to name a few.

From an epidemiological perspective, the only way to "control" these numerous variables is to do a large-scale, long-term study with as many teams as possible so that the impact of the uncontrollable and essentially unrecordable variables (proper brace placement, practice intensity, condition of playing surface, weather) will "wash out" in the data collection process. At the same time, the more easily recordable variables (position played, whether the brace was worn at the time of injury, brand or type of brace, previous history) will be recorded in sufficient numbers to provide more reliable results than could ever be possible with a study of a single team or a small number of teams. The results of earlier, small-scale studies were mixed, with some showing that braces reduced the number of MCL injuries and others showing they did not, but the more recent large-scale studies, such as those of Teitz (28) and Zemper (34, 36) show that wearing preventive knee braces appears to have no effect on reducing the number or severity of MCL injuries, or on the time lost due to injury.

A well-controlled smaller-scale study done at the U.S. Military Academy (27) does show some positive effect of preventive knee braces in reducing MCL injuries, but only among defensive players, indicating that position played may be an important factor. There was no effect on the severity of knee injuries. However, the subjects were cadets playing intramural football rather than the larger and heavier intercollegiate players, so the study may indicate that a size/weight threshold, and therefore a force threshold, is involved. Obviously, much more data must be collected from large-scale epidemiologic studies, as well as biomechanical studies, before complex issues such as this can be resolved.



Although the importance of longitudinal, national-scale epidemiologic data collection to adequately address major sports injury issues has been emphasized here, the small-scale local data collection effort also has a place in sports medicine. A primary care physician who is responsible for medical care of a high school or other local sports program, or who is part of a local sports medicine network (see Chapter 19), is in a good position to track local injury patterns. At a minimum this will require some form of centralized records of all sports injuries treated. Forms like those suggested in Chapter 19 for the records of a local sports medicine network would serve this purpose very well.

An alternative to normal patient files, which would make data compilation much easier, is a brief check-off form describing the athlete, the injury, and the circumstances, similar to the forms used by the larger data collection systems. These forms could be filled out by the physician, nurse, or athletic trainer at the high school for every sports injury treated, and kept in a single file. As mentioned earlier, such data constitute only a case series and cannot be used to make comparisons across sports or with data from other sources. However, they might alert the physician if an unexpected number of injuries of a certain type, or injuries occurring under specific circumstances, begins to appear.

If comparisons are desired, some form of exposure or "denominator" data is required to calculate injury rates, as discussed earlier. The simplest denominator data to obtain are the number of athletes on the team, so injury rates per 100 athletes can be calculated. The fact that many teams have more athletes at the beginning of the season than at the end presents a problem. The most reasonable solution is to use an average number of athletes if the rate of attrition is fairly stable over the season, or use the number of athletes on the team during the majority of the season if the drop-outs tend to occur at the beginning of the season and then the numbers stabilize as the season progresses.
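That bookkeeping can be sketched in a few lines of Python (the roster counts and injury total are invented for illustration):

```python
# Injury rate per 100 athletes, using a season-average roster size as the
# denominator, per the text's suggestion for a fairly stable attrition rate
# (all numbers are invented for illustration)
weekly_rosters = [48, 46, 45, 45, 44]   # athletes on the team each week
season_injuries = 12                    # reportable injuries for the season

average_roster = sum(weekly_rosters) / len(weekly_rosters)   # 45.6
rate_per_100 = season_injuries / average_roster * 100
print(f"{rate_per_100:.1f} injuries per 100 athletes")       # 26.3
```

If drop-outs instead cluster at the start of the season, the stable post-drop-out roster count would replace the average as the denominator.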

For reasons presented previously, rates per 100 athletes are not the most accurate way to calculate sports injury rates. With some extra effort and on-site assistance from a student athletic trainer or coach, it is feasible to get data at the local level on the number of athletes participating in practices and competitions, or possibly even the amount of time of participation, so that rates per 1,000 athlete-exposures or rates per 100 hours or 1,000 minutes can be calculated.

In some team sports, the time of exposure in games is relatively easy to estimate, because the games last a specified length of time and involve a specified number of players at any one time. A high school football game will involve four quarters of twelve minutes each, and eleven players from a team are on the field at any given time. Therefore, the amount of exposure time for a single team in a single game will be 528 player-minutes per game (4 quarters/game x 12 minutes/quarter x 11 players). It is more difficult to get data on time of exposure in practices, but it basically means keeping track of the number of players participating in each practice and the length of the practices. When collecting athlete-exposure data, the time element is ignored, and data are recorded only on the number of players at each practice and the number who actually get into the games and are exposed, however briefly, to the possibility of injury (not the number who dress for the game).
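The exposure arithmetic above can be sketched as follows (the figures for the game calculation come from the text; the weekly practice and game counts are invented for illustration):

```python
# Time-based exposure: one team's player-minutes in one high school
# football game (4 quarters x 12 minutes x 11 players on the field)
quarters = 4
minutes_per_quarter = 12
players_on_field = 11
game_player_minutes = quarters * minutes_per_quarter * players_on_field
print(game_player_minutes)  # 528 player-minutes per game

# Athlete-exposure counting ignores time: one athlete in one practice or
# game counts as one athlete-exposure, and only players who actually got
# into the game are counted (invented weekly numbers)
practice_participants = [45, 44, 44, 43]   # four practices this week
game_participants = 38                     # dressed 50, but only 38 played
athlete_exposures = sum(practice_participants) + game_participants
print(athlete_exposures)  # 214 athlete-exposures this week
```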

Once appropriate denominator data on the population at risk are available, the rate equation presented earlier can be used to calculate injury rates that can be used in comparisons across local sports teams or with data from other sources that are calculated in a similar manner. When comparing local data with injury data from other sources (or for that matter when comparing injury data from any sources), always note any differences in methodologies used (data sources and collection procedures, definition of an injury, type of rate calculated, etc.). If there are any major differences, conclusions drawn from the comparisons may not be valid. Of particular importance are the type of rate calculated and the definition of an injury. Obviously, trying to compare injury rates per 100 athletes with rates per 1,000 athlete-exposures would be meaningless. Less obvious is the need to ensure that the same definition of a reportable injury is being used. If one set of data includes everything seen by the medical staff and another includes only injuries that cause three or more days of time lost from participation, comparisons would be meaningless. The most commonly used definition of a reportable injury is based on time-loss:

A reportable injury is any injury a) occurring in a scheduled practice or competition, b) requiring medical attention, and c) resulting in the athlete being restricted from further normal participation for the remainder of that practice or competition or for the following day or more.

This is the basic definition of a reportable injury used by NAIRS, ISS, and AIMS, and we recommend its use in local data collection systems. By basing the definition on a time-loss of one day or more before return to unrestricted participation, all the minor scrapes, bumps, and bruises that cause no time-loss are excluded, so they do not overburden the data collection system. (The only exception in AIMS is that any mild concussion is reported, even if it causes no time-loss.)
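A check-off file can be screened against this definition mechanically. The sketch below reduces the definition to its time-loss clause and uses record fields of our own invention; the AIMS exception for mild concussions is ignored here:

```python
# Filter an injury log down to reportable injuries using the time-loss
# criterion: restricted from normal participation for one day or more
# (records and field names are invented for illustration)
injury_log = [
    {"body_part": "knee",  "days_restricted": 3},
    {"body_part": "hand",  "days_restricted": 0},   # bruise, no time lost
    {"body_part": "ankle", "days_restricted": 1},
]

reportable = [rec for rec in injury_log if rec["days_restricted"] >= 1]
print(len(reportable))  # 2
```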

Rates for specific types of injuries or body parts also can be calculated for local injury data. For example, the total number of knee injuries could be the numerator rather than the total number of all injuries. If game and practice exposure data are available, separate game and practice injury rates can be calculated. Make sure the appropriate denominator is matched with the numerator. If a game injury rate is being calculated, be sure to divide the number of game injuries by the number of game exposures. As with large-scale sports injury data collection systems, the more local data collected over time, the more useful and valuable the information becomes.
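Matching numerator with denominator is easiest to get right when the two rates are computed side by side (all numbers invented for illustration):

```python
# Separate practice and game injury rates per 1,000 athlete-exposures;
# each numerator is divided only by its own denominator
# (invented season totals, for illustration)
practice_injuries, practice_exposures = 7, 1900
game_injuries, game_exposures = 5, 500

practice_rate = practice_injuries / practice_exposures * 1000   # 3.7
game_rate = game_injuries / game_exposures * 1000               # 10.0
print(f"practices: {practice_rate:.1f}  games: {game_rate:.1f} per 1,000 A-E")
```

Note that dividing total injuries by game exposures alone, or game injuries by total exposures, would inflate or deflate the rate and make any comparison misleading.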



Applying the principles of epidemiology to sports injuries is a relatively recent development, and dependable epidemiologic data for sports injuries are still quite scarce, particularly for levels other than high school and college sports (5, 14, 15). In addition, the quality and usefulness of much of the available literature leave much to be desired, for reasons outlined earlier and as demonstrated by Powell et al. (20). National-scale data collection systems such as NAIRS, ISS, and AIMS are beginning to make important contributions and must be a primary focus of activity in the future. However, there is ample opportunity for contributions from others, such as a primary care physician working with a local sports program. Understanding the basic principles of epidemiology presented in this chapter will not only allow the primary care physician to be more discriminating in reading the literature, but also will be useful in setting up a system for keeping track of local injury patterns. These efforts may play a role in reducing the number and severity of sports injuries in the local community.



1. Alles, W.F., Powell, J.W., Buckley, W., and Hunt, E.E. The National Athletic Injury/ Illness Reporting System 3-Year Findings of High School and College Football Injuries. J Orthop Sp Phys Therapy 1(2):103-108, 1979.

2. Alles, W.F., Powell, J.W., Buckley, W., and Hunt, E.E. Three-Year Summary of NAIRS Football Data. Athletic Training Summer 1980:98-100, 1980.

3. Buckley, W.E. Five-Year Overview of Sport Injuries: The NAIRS Model. JOPERD June 1982:36-40, 1982.

4. Caine, D., Cochrane, B., Caine, C., and Zemper, E. An Epidemiologic Investigation of Injuries Affecting Young Competitive Female Gymnasts. Am J Sp Med 17(6):811-820, 1989.

5. Clarke, K.S. Premises and Pitfalls of Athletic Injury Surveillance. J Sp Med 3(6):292-295, 1976.

6. Clarke, K.S. A Survey of Sports-Related Spinal Cord Injuries in Schools and Colleges, 1973-1975. J Safety Res 9: 140, 1977.

7. Clarke, K.S., and Buckley, W.E. Women's Injuries in Collegiate Sports. Am J Sp Med 8(3):187-191, 1980.

8. Clarke, K.S., and Powell, J.W. Football Helmets and Neurotrauma -- An Epidemiological Overview of Three Seasons. Med Sci Sports 11(2):138-145, 1979.

9. Cleavinger, J.D. The Incidence of Injuries to Football Players in Kansas Junior High Schools. Thesis, Univ. of Kansas, 1974.

10. Conant, H.D. The Nature and Frequency of Injuries Occurring to High School Athletes Insured Through the Oregon School Activities Association Mutual Benefit Plan From 1965 to 1968. Dissertation, Univ. of Oregon, 1969.

11. Downs, J.R. Incidence of Facial Trauma in Intercollegiate and Junior Hockey. Phys Sportsmed 7(2):88-92, 1979.

12. Duncan, D.F. Epidemiology: Basis for Disease Prevention and Health Promotion. New York: Macmillan Publ. Co., 1988.

13. Hansen, V.A. The Nature and Incidence of Injuries to Students in Physical Education Classes in Oregon Secondary Schools During the Period From 1964-65 to 1968-69. Dissertation, Univ. of Oregon, 1971.

14. Koplan, J.P., Siscovick, D.S., and Goldbaum, G.M. The Risks of Exercise: A Public Health View of Injuries and Hazards. Public Health Reports 100:189-195, 1985.

15. Kraus, J.F., and Conroy, C. Mortality and Morbidity From Injuries in Sports and Recreation. Ann Rev Publ Health 5:163-192, 1984.

16. Martin, G. L., Costello, D.F., and Fuenning, S.I. The 1970 Intercollegiate Tackle Football Injury Surveillance Report. Report prepared for the Joint Commission on Competitive Safeguards and Medical Aspects of Sports, 1971.

17. Mueller, F.O., and Blyth, C.S. National Center for Catastrophic Sports Injury Research -- Second Annual Report 1982-1984. Univ. of North Carolina, 1985.

18. Mueller, F.O., and Schindler, R.D. Annual Survey of Football Injury Research 1931-1990. Univ. of North Carolina, 1991.

19. National Collegiate Athletic Association. NCAA Injury Surveillance System Reports. Overland Park, KS. 1990.

20. Powell, K.E., Kohl, H.W., Caspersen, C.J., and Blair, S.N. An Epidemiological Perspective on the Causes of Running Injuries. Phys Sportsmed 14(6):100-114, 1986.

21. Requa, R.K., and Garrick, J.G. Injuries in Interscholastic Track and Field. Phys Sportsmed 9(3):42-49, 1981.

22. Requa, R.K., and Garrick, J.G. Injuries in Interscholastic Wrestling. Phys Sportsmed 9(4):44-51, 1981.

23. Rice, S.G., Schlotfeldt, J.D., and Foley, W.E. The Athletic Health Care and Training Program. West J Med 142:352, 1985.

24. Robey, J.M., Blyth, C.S., and Mueller, F.O. Athletic Injuries: Application of Epidemiologic Methods. JAMA 217(2): 184-189, 1971.

25. Rovere, G.D., and Nichols, A.W. Frequency, Associated Factors, and Treatment of Breaststroker's Knee in Competitive Swimmers. Am J Sp Med 13(2):99-104, 1985.

26. Rutherford, G.W., Miles, R.B., Brown, V.R., and McDonald, B. Overview of Sports-Related Injuries to Persons 5-14 Years of Age. U.S. Consumer Products Safety Commission, 1981.

27. Sitler, M., Ryan, J., Hopkinson, W., Wheeler, J., Santomier, J., Kolb, R., and Polley, D. The Efficacy of a Prophylactic Knee Brace to Reduce Knee Injuries in Football. Am J Sp Med 18(3):310-315, 1990.

28. Teitz, C.C., Hermanson, B.K., Kronmal, R.A., and Diehr, P.H. Evaluation of the Use of Braces to Prevent Injury to the Knee in Collegiate Football Players. J Bone Joint Surg 69A(1):3-9, 1987.

29. Walter, S.D., Sutton, J.R., McIntosh, J.M., and Connolly, C. The Aetiology of Sports Injuries: A Review of Methodologies. Sp Med 2:47-58, 1985.

30. Watkins, R.J. An Epidemiological Study: Injury Rates Among Collegiate Ultimate Frisbee Players in the Western United States. Thesis, Univ. of Oregon, 1990.

31. Zemper, E.D. NCAA Injury Surveillance System: Initial Results. Paper presented at the 1984 Olympic Scientific Congress, July 1984, Univ. of Oregon, Eugene, OR, 1984.

32. Zemper, E.D. Injury Rates in a National Sample of College Football Teams: A Two-Year Prospective Study. Phys Sportsmed 17(11):100-113, 1989a.

33. Zemper, E.D. Cerebral Concussion Rates in Various Brands of Football Helmets. Athletic Training 24(2): 133-137, 1989b.

34. Zemper, E.D. A Prospective Study of Prophylactic Knee Braces in a National Sample of American College Football Players. Proceedings of the First International Olympic Committee World Congress on Sport Sciences, U.S. Olympic Committee, Colorado Springs, pp. 202-203, 1989c.

35. Zemper, E.D. A Prospective Study of Injury Rates in a National Sample of American College Football Teams. Proceedings of the First International Olympic Committee World Congress on Sport Sciences, U.S. Olympic Committee, Colorado Springs, pp.194-195, 1989d.

36. Zemper, E.D. A Two-Year Study of Prophylactic Knee Braces in a National Sample of College Football Players. Sports Training, Medicine and Rehabilitation 1:287-296, 1990.

37. Zemper, E.D. Four-Year Study of Weightroom Injuries in a National Sample of College Football Teams. National Strength and Conditioning Association Journal 12(3):32-34, 1990.

38. Zemper, E.D. Exercise and Injury Patterns in a Sample of Active Middle-Aged Adults. International Congress and Exposition on Sport Medicine and Human Performance, Vancouver, B.C., April 1991. Programme/Abstracts: p. 98, 1991.

39. Zemper, E.D., and Pieter, W. Injury Rates at the 1988 U.S. Olympic Team Trials for Taekwondo. Br J Sp Med 23(3):161-164, 1989.

40. Zemper, E.D., and Pieter, W. Injury Rates in Junior and Senior National Taekwondo Competition. Proceedings of the First International Olympic Committee World Congress on Sport Sciences, U.S. Olympic Committee, Colorado Springs. pp. 219-220, 1989.

41. Zemper, E.D., and Pieter, W. The Oregon Taekwondo Research Project -- Part II: Preliminary Injury Research Results. Taekwondo USA Winter 1990.

42. Zemper, E.D., and Pieter, W. Cerebral Concussion Rates in Taekwondo Competition. Med Sci Sp Exer 22(2-Supplement): S130, 1990.

43. Zemper, E.D., and Pieter, W. Two-Year Prospective Study of Injury Rates in National Taekwondo Competition. International Congress and Exposition on Sport Medicine and Human Performance, Vancouver, B.C., April 1991. Programme/Abstracts: p. 99, 1991.