Epidemiology
Epidemiology is the study (or the science of the study) of the patterns, causes, and effects of health and disease conditions in defined populations. It is the cornerstone of public health, and informs policy decisions and evidence-based medicine by identifying risk factors for disease and targets for preventive medicine. Epidemiologists help with study design, collection and statistical analysis of data, and interpretation and dissemination of results (including peer review and occasional systematic review). Epidemiology has helped develop methodology used in clinical research, public health studies and, to a lesser extent, basic research in the biological sciences.[1]
Major areas of epidemiological study include disease etiology, outbreak investigation, disease surveillance and screening, biomonitoring, and comparisons of treatment effects such as in clinical trials. Epidemiologists rely on other scientific disciplines like biology to better understand disease processes, statistics to make efficient use of the data and draw appropriate conclusions, social sciences to better understand proximate and distal causes, and engineering for exposure assessment.
Etymology
Epidemiology, literally meaning "the study of what is upon the people", is derived from Greek epi 'upon, among', demos 'people, district', and logos 'study, word, discourse', suggesting that it applies only to human populations. However, the term is widely used in studies of zoological populations (veterinary epidemiology), although the term "epizoology" is available, and it has also been applied to studies of plant populations (botanical or plant disease epidemiology).[2]
The distinction between "epidemic" and "endemic" was first drawn by Hippocrates,[3] to distinguish between diseases that are "visited upon" a population (epidemic) from those that "reside within" a population (endemic).[4] The term "epidemiology" appears to have first been used to describe the study of epidemics in 1802 by the Spanish physician Villalba in Epidemiología Española.[4] Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic.
The term epidemiology is now widely applied to cover the description and causation of not only epidemic disease, but of disease in general, and even many non-disease health-related conditions, such as high blood pressure and obesity.
History
The Greek physician Hippocrates is known as the father of medicine, and was the first epidemiologist.[5][6] Hippocrates sought a logic to sickness; he is the first person known to have examined the relationships between the occurrence of disease and environmental influences.[7] Hippocrates believed sickness of the human body to be caused by an imbalance of the four humors: blood, phlegm, yellow bile, and black bile. The cure for the sickness was to remove or add the humor in question to balance the body. This belief led to the application of bloodletting and dieting in medicine.[8] He coined the terms endemic (for diseases usually found in some places but not in others) and epidemic (for diseases that are seen at some times but not others).[9]
Epidemiology is defined as the study of the distribution and determinants of health-related states in populations, and the use of this study to address health-related problems. One of the earliest theories on the origin of disease was that it was primarily the fault of human luxury. This was expressed by philosophers such as Plato[10] and Rousseau,[11] and social critics like Jonathan Swift.[12]
In the middle of the 16th century, a doctor from Verona named Girolamo Fracastoro was the first to propose that the very small, unseeable particles that cause disease were alive. They were considered to be able to spread by air, multiply by themselves, and be destroyed by fire. In this way he refuted Galen's miasma theory (poison gas in sick people). In 1546 he wrote the book De contagione et contagiosis morbis, in which he was the first to promote personal and environmental hygiene to prevent disease. The development of a sufficiently powerful microscope by Anton van Leeuwenhoek in 1675 provided visual evidence of living particles consistent with a germ theory of disease.
Another pioneer, the physician Thomas Sydenham (1624–1689), was the first to distinguish between the different fevers plaguing Londoners in the later 1600s. His theories on the cures of fevers met with much resistance from traditional practitioners at the time. He was never able to find the initial cause of the smallpox fever he researched and treated.[8]
John Graunt, a professional haberdasher and serious amateur scientist, published Natural and Political Observations ... upon the Bills of Mortality in 1662. In it, he used analysis of the mortality rolls in London before the Great Plague to present one of the first life tables and report time trends for many diseases, new and old. He provided statistical evidence for many theories on disease, and also refuted many widespread ideas on them.
Modern era
Dr. John Snow is famous for his investigations into the causes of the 19th-century cholera epidemics, and is also known as the father of (modern) epidemiology.[13][14] He began by noticing the significantly higher death rates in two areas supplied by the Southwark and Vauxhall Company. His identification of the Broad Street pump as the cause of the Soho epidemic is considered the classic example of epidemiology. He used chlorine in an attempt to clean the water and had the pump handle removed, thus ending the outbreak. This has been perceived as a major event in the history of public health and regarded as the founding event of the science of epidemiology, having helped shape public health policies around the world.[15][16] However, Snow's research and preventive measures to avoid further outbreaks were not fully accepted or put into practice until after his death.
Other pioneers include the Danish physician Peter Anton Schleisner, who in 1849 related his work on the prevention of the epidemic of neonatal tetanus on the Vestmanna Islands in Iceland.[17][18] Another important pioneer was the Hungarian physician Ignaz Semmelweis, who in 1847 brought down maternal mortality from puerperal fever at a Vienna hospital by instituting a disinfection procedure. His findings were published in 1850, but his work was ill received by his colleagues, who discontinued the procedure. Disinfection did not become widely practiced until British surgeon Joseph Lister 'discovered' antiseptics in 1865 in light of the work of Louis Pasteur.
In the early 20th century, mathematical methods were introduced into epidemiology by Ronald Ross, Anderson Gray McKendrick and others.[19][20][21]
Another breakthrough was the 1954 publication of the results of the British Doctors Study, led by Richard Doll and Austin Bradford Hill, which lent very strong statistical support to the suspicion that tobacco smoking was linked to lung cancer.
The profession
To date, few universities offer epidemiology as a course of study at the undergraduate level. Many epidemiologists are physicians, or hold graduate degrees such as a Master of Public Health (MPH) or a Master of Science (MSc) in Epidemiology. Doctorates include the Doctor of Public Health (DrPH), Doctor of Pharmacy (PharmD), Doctor of Philosophy (PhD), Doctor of Science (ScD), Doctor of Podiatric Medicine (DPM), Doctor of Veterinary Medicine (DVM), Doctor of Nursing Practice (DNP), Doctor of Physical Therapy (DPT), or, for clinically trained physicians, Doctor of Medicine (MD) and Doctor of Osteopathic Medicine (DO). In the United Kingdom, the title of 'doctor' is by long custom used to refer to general medical practitioners, whose professional degrees are usually those of Bachelor of Medicine and Surgery (MBBS or MBChB).
As public health/health protection practitioners, epidemiologists work in a number of different settings. Some epidemiologists work 'in the field'; i.e., in the community, commonly in a public health/health protection service and are often at the forefront of investigating and combating disease outbreaks. Others work for non-profit organizations, universities, hospitals and larger government entities such as the Centers for Disease Control and Prevention (CDC), the Health Protection Agency, the World Health Organization (WHO), or the Public Health Agency of Canada. Epidemiologists can also work in for-profit organizations such as pharmaceutical and medical device companies in groups such as market research or clinical development.
The practice
Epidemiologists employ a range of study designs, from observational to experimental, generally categorized as descriptive, analytic (aiming to further examine known associations or hypothesized relationships), and experimental (a term often equated with clinical or community trials of treatments and other interventions). In observational studies, nature is allowed to “take its course”, as epidemiologists observe from the sidelines. Conversely, in experimental studies, the epidemiologist is the one in control of all of the factors entering a certain case study.[22] Epidemiological studies are aimed, where possible, at revealing unbiased relationships between exposures such as alcohol or smoking, biological agents, stress, or chemicals and mortality or morbidity. The identification of causal relationships between these exposures and outcomes is an important aspect of epidemiology. Modern epidemiologists use informatics as a tool.
Observational studies have two components, descriptive and analytical. Descriptive observations pertain to the “who, what, where and when of health-related state occurrence”, while analytical observations deal more with the ‘how’ of a health-related event.[22]
Experimental epidemiology contains three case types: randomized controlled trials (often used for new medicine or drug testing), field trials (conducted on those at high risk of contracting a disease), and community trials (research on diseases of social origin).[22]
Unfortunately, many epidemiological studies cause false or misinterpreted information to circulate among the public. According to a course taught by Professor Madhukar Pai, MD, PhD, at McGill University, “…optimism bias is pervasive, most studies biased or inconclusive or false, most discovered true associations are inflated, fear and panic inducing rather than helpful; media-induced panic, cannot detect small effects; big effects are not to be found anymore”.[23]
The term 'epidemiologic triad' is used to describe the intersection of Host, Agent, and Environment in analyzing an outbreak.
As causal inference
Although epidemiology is sometimes viewed as a collection of statistical tools used to elucidate the associations of exposures to health outcomes, a deeper understanding of this science is that of discovering causal relationships.
It is nearly impossible to say with perfect accuracy how even the most simple physical systems behave beyond the immediate future, much less the complex field of epidemiology, which draws on biology, sociology, mathematics, statistics, anthropology, psychology, and policy; "Correlation does not imply causation" is a common theme for much of the epidemiological literature. For epidemiologists, the key is in the term inference. Epidemiologists use gathered data and a broad range of biomedical and psychosocial theories in an iterative way to generate or expand theory, to test hypotheses, and to make educated, informed assertions about which relationships are causal, and about exactly how they are causal.
Epidemiologists Rothman and Greenland emphasize that the "one cause - one effect" understanding is a simplistic mis-belief. Most outcomes, whether disease or death, are caused by a chain or web consisting of many component causes. Causes can be distinguished as necessary, sufficient or probabilistic conditions. If a necessary condition can be identified and controlled (e.g., antibodies to a disease agent), the harmful outcome can be avoided.
Bradford Hill criteria
In 1965 Austin Bradford Hill proposed a series of considerations to help assess evidence of causation,[24] which have come to be commonly known as the "Bradford Hill criteria". In contrast to the explicit intentions of their author, Hill's considerations are now sometimes taught as a checklist to be implemented for assessing causality.[25] Hill himself said "None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required sine qua non."[24]
- Strength: A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal.[24]
- Consistency: Consistent findings observed by different persons in different places with different samples strengthens the likelihood of an effect.[24]
- Specificity: Causation is likely if there is a very specific population at a specific site and disease with no other likely explanation. The more specific an association between a factor and an effect is, the bigger the probability of a causal relationship.[24]
- Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay).[24]
- Biological gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.[24]
- Plausibility: A plausible mechanism between cause and effect is helpful (but Hill noted that knowledge of the mechanism is limited by current knowledge).[24]
- Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, Hill noted that "... lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations".[24]
- Experiment: "Occasionally it is possible to appeal to experimental evidence".[24]
- Analogy: The effect of similar factors may be considered.[24]
Legal interpretation
Epidemiological studies can only establish that an agent could have caused, but not that it did cause, an effect in any particular case:
"Epidemiology is concerned with the incidence of disease in populations and does not address the question of the cause of an individual's disease. This question, sometimes referred to as specific causation, is beyond the domain of the science of epidemiology. Epidemiology has its limits at the point where an inference is made that the relationship between an agent and a disease is causal (general causation) and where the magnitude of excess risk attributed to the agent has been determined; that is, epidemiology addresses whether an agent can cause a disease, not whether an agent did cause a specific plaintiff's disease."[26]
In United States law, epidemiology alone cannot prove that a causal association does not exist in general. Conversely, it can be (and is in some circumstances) taken by US courts, in an individual case, to justify an inference that a causal association does exist, based upon a balance of probability.
The subdiscipline of forensic epidemiology is directed at the investigation of specific causation of disease or injury in individuals or groups of individuals in instances in which causation is disputed or is unclear, for presentation in legal settings.
Advocacy
As a public health discipline, epidemiological evidence is often used to advocate both personal measures, like diet change, and corporate measures, like the removal of junk food advertising, with study findings disseminated to the general public to help people make informed decisions about their health. Epidemiological tools have proved effective in identifying risk factors for diseases like cholera, lung cancer, and cardiovascular disease.[24]
Population-based health management
Epidemiological practice and the results of epidemiological analysis make a significant contribution to emerging population-based health management frameworks.
Population-based health management encompasses the ability to:
- Assess the health states and health needs of a target population;
- Implement and evaluate interventions that are designed to improve the health of that population; and
- Efficiently and effectively provide care for members of that population in a way that is consistent with the community's cultural, policy and health resource values.
Modern population-based health management is complex, requiring a multidisciplinary set of skills (medical, political, technological, mathematical, etc.) of which epidemiological practice and analysis is a core component, unified with management science to provide efficient and effective health care and health guidance to a population. This task requires the forward-looking ability of modern risk management approaches that transform health risk factors, incidence, prevalence and mortality statistics (derived from epidemiological analysis) into management metrics that guide not only how a health system responds to current population health issues, but also how a health system can be managed to better respond to future potential population health issues.[27]
Examples of organizations that use population-based health management and that leverage the work and results of epidemiological practice include the Canadian Strategy for Cancer Control, the Health Canada Tobacco Control Programs, the Rick Hansen Foundation, and the Canadian Tobacco Control Research Initiative.[28][29][30]
Each of these organizations uses a population-based health management framework called Life at Risk that combines epidemiological quantitative analysis with demographics, health agency operational research and economics to perform:
- Population Life Impacts Simulations: Measurement of the future potential impact of disease upon the population with respect to new disease cases, prevalence, premature death as well as potential years of life lost from disability and death;
- Labour Force Life Impacts Simulations: Measurement of the future potential impact of disease upon the labour force with respect to new disease cases, prevalence, premature death and potential years of life lost from disability and death;
- Economic Impacts of Disease Simulations: Measurement of the future potential impact of disease upon private sector disposable income impacts (wages, corporate profits, private health care costs) and public sector disposable income impacts (personal income tax, corporate income tax, consumption taxes, publicly funded health care costs).
Types of studies
Case series
Case series may refer to the qualitative study of the experience of a single patient, or a small group of patients with a similar diagnosis, or to a statistical technique comparing periods during which patients are exposed to some factor with the potential to produce illness with periods when they are unexposed.
The former type of study is purely descriptive and cannot be used to make inferences about the general population of patients with that disease. These types of studies, in which an astute clinician identifies an unusual feature of a disease or a patient's history, may lead to formulation of a new hypothesis. Using the data from the series, analytic studies could be done to investigate possible causal factors. These can include case control studies or prospective studies. A case control study would involve matching comparable controls without the disease to the cases in the series. A prospective study would involve following the case series over time to evaluate the disease's natural history.[31]
The latter type, more formally described as self-controlled case-series studies, divide individual patient follow-up time into exposed and unexposed periods and use fixed-effects Poisson regression processes to compare the incidence rate of a given outcome between exposed and unexposed periods. This technique has been extensively used in the study of adverse reactions to vaccination, and has been shown in some circumstances to provide statistical power comparable to that available in cohort studies.
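As a rough illustration of the rate comparison described above, the sketch below fits a Poisson regression with a log person-time offset to invented period-level data. All counts and follow-up times are hypothetical, and an ordinary (unconditional) Poisson GLM is used as a simplified stand-in for the conditional, fixed-effects model a formal self-controlled case series would employ.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical period-level records (all values invented): each row is one
# exposed or unexposed period of follow-up for a case.
events      = np.array([2, 0, 1, 3, 0, 1])                # outcome counts per period
exposed     = np.array([1, 0, 1, 1, 0, 0], dtype=float)   # 1 = exposed period, 0 = unexposed
person_time = np.array([0.5, 2.0, 0.4, 0.6, 1.5, 1.8])    # years of follow-up in each period

# Ordinary Poisson GLM with a log person-time offset; the exposure coefficient,
# exponentiated, is the incidence rate ratio for exposed vs unexposed periods.
X = sm.add_constant(exposed)
fit = sm.GLM(events, X, family=sm.families.Poisson(), offset=np.log(person_time)).fit()
print(np.exp(fit.params[1]))
```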
Case control studies
Case control studies select subjects based on their disease status. A group of individuals that are disease positive (the "case" group) is compared with a group of disease negative individuals (the "control" group). The control group should ideally come from the same population that gave rise to the cases. The case control study looks back through time at potential exposures that both groups (cases and controls) may have encountered. A 2x2 table is constructed, displaying exposed cases (A), exposed controls (B), unexposed cases (C) and unexposed controls (D). The statistic generated to measure association is the odds ratio (OR), which is the ratio of the odds of exposure in the cases (A/C) to the odds of exposure in the controls (B/D), i.e. OR = (AD/BC).
| | Cases | Controls |
|---|---|---|
| Exposed | A | B |
| Unexposed | C | D |
If the OR is clearly greater than 1, then the conclusion is "those with the disease are more likely to have been exposed," whereas if it is close to 1 then the exposure and disease are not likely associated. If the OR is far less than one, then this suggests that the exposure is a protective factor in the causation of the disease. Case-control studies are usually faster and more cost-effective than cohort studies, but are sensitive to bias (such as recall bias and selection bias). The main challenge is to identify the appropriate control group; the distribution of exposure among the control group should be representative of the distribution in the population that gave rise to the cases. This can be achieved by drawing a random sample from the original population at risk. As a consequence, the control group can contain people with the disease under study when the disease has a high attack rate in the population.
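A minimal sketch of the odds-ratio calculation just described, using invented counts for the A, B, C and D cells:

```python
# Hypothetical 2x2 case-control counts (cells labelled as in the table above).
A, B = 20, 10    # exposed cases, exposed controls
C, D = 30, 40    # unexposed cases, unexposed controls

odds_exposure_cases    = A / C            # odds of exposure among cases
odds_exposure_controls = B / D            # odds of exposure among controls
OR = odds_exposure_cases / odds_exposure_controls
print(OR, (A * D) / (B * C))              # both print ~2.67: the two forms are equivalent
```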
A major drawback of case-control studies is that, for the result to be considered statistically significant, the minimum number of cases required at the 95% confidence level is related to the odds ratio by the equation:
$$\text{total cases} = (A + C) = 1.96^{2}\,(1 + N)\left(\frac{1}{\ln(\mathrm{OR})}\right)^{2}\left(\frac{\mathrm{OR} + 2\sqrt{\mathrm{OR}} + 1}{\sqrt{\mathrm{OR}}}\right) \approx 15.5\,(1 + N)\left(\frac{1}{\ln(\mathrm{OR})}\right)^{2}$$
where N is the ratio of cases to controls. As the odds ratio approaches 1, ln(OR) approaches 0 and the number of cases required grows without bound, rendering case-control studies all but useless for odds ratios close to 1. For instance, for an odds ratio of 1.5 and cases = controls, the table shown above would look like this:
| | Cases | Controls |
|---|---|---|
| Exposed | 103 | 84 |
| Unexposed | 84 | 103 |
For an odds ratio of 1.1:
| | Cases | Controls |
|---|---|---|
| Exposed | 1732 | 1652 |
| Unexposed | 1652 | 1732 |
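The two example tables above can be reproduced, approximately, from the formula itself. The sketch below (function name and defaults are illustrative) evaluates both the exact expression and its 15.5 approximation:

```python
import math

def min_total_cases(odds_ratio, n=1.0, z=1.96):
    """Approximate minimum number of cases (A + C) needed at the 95% level,
    where n is the ratio of cases to controls (n = 1 for equal-sized groups)."""
    s = math.sqrt(odds_ratio)
    exact = z**2 * (1 + n) * (1 / math.log(odds_ratio))**2 * ((odds_ratio + 2*s + 1) / s)
    approx = 15.5 * (1 + n) * (1 / math.log(odds_ratio))**2
    return exact, approx

print(min_total_cases(1.5))   # ~(189, 188): close to the 103 + 84 = 187 cases tabulated above
print(min_total_cases(1.1))   # ~(3385, 3412): close to the 1732 + 1652 = 3384 cases tabulated above
```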
Cohort studies
Cohort studies select subjects based on their exposure status. The study subjects should be at risk of the outcome under investigation at the beginning of the cohort study; this usually means that they should be disease free when the cohort study starts. The cohort is followed through time to assess their later outcome status. An example of a cohort study would be the investigation of a cohort of smokers and non-smokers over time to estimate the incidence of lung cancer. The same 2x2 table is constructed as with the case control study. However, the point estimate generated is the Relative Risk (RR), which is the probability of disease for a person in the exposed group, Pe = A / (A+B) over the probability of disease for a person in the unexposed group, Pu = C / (C+D), i.e. RR = Pe / Pu.
| | Case | Non-case | Total |
|---|---|---|---|
| Exposed | A | B | (A+B) |
| Unexposed | C | D | (C+D) |
As with the OR, a RR greater than 1 shows association, where the conclusion can be read "those with the exposure were more likely to develop disease."
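A minimal sketch of the relative-risk calculation for the cohort table above, again with invented counts:

```python
# Hypothetical cohort counts (cells labelled as in the table above).
A, B = 30, 970    # exposed: cases, non-cases
C, D = 10, 990    # unexposed: cases, non-cases

Pe = A / (A + B)   # risk of disease in the exposed group
Pu = C / (C + D)   # risk of disease in the unexposed group
RR = Pe / Pu
print(RR)          # 3.0: the exposed group had three times the risk of disease
```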
Prospective studies have many benefits over case control studies. The RR is a more powerful effect measure than the OR, as the OR is just an estimation of the RR, since true incidence cannot be calculated in a case control study where subjects are selected based on disease status. Temporality can be established in a prospective study, and confounders are more easily controlled for. However, they are more costly, and there is a greater chance of losing subjects to follow-up based on the long time period over which the cohort is followed.
Cohort studies are also limited by the same equation for the number of cases as case-control studies, but, if the base incidence rate in the study population is very low, the number of cases required is reduced by ½.
Outbreak investigation
- For information on investigation of infectious disease outbreaks, please see outbreak investigation.
Validity: precision and bias
Different fields in epidemiology have different levels of validity. One way to assess the validity of findings is the ratio of false-positives (claimed effects that are not correct) to false-negatives (studies which fail to support a true effect). To take the field of genetic epidemiology, candidate-gene studies produced over 100 false-positive findings for each false-negative. By contrast, genome-wide association studies appear close to the reverse, with only one false positive for every 100 or more false-negatives.[32] This ratio has improved over time in genetic epidemiology as the field has adopted stringent criteria. By contrast, other epidemiological fields have not required such rigorous reporting and are much less reliable as a result.[32]
Random error
Random error is the result of fluctuations around a true value because of sampling variability. Random error is just that: random. It can occur during data collection, coding, transfer, or analysis. Examples of random error include: poorly worded questions, a misunderstanding in interpreting an individual answer from a particular respondent, or a typographical error during coding. Random error affects measurement in a transient, inconsistent manner and it is impossible to correct for random error.
There is random error in all sampling procedures. This is called sampling error.
Precision in epidemiological variables is a measure of random error. Precision is also inversely related to random error, so that to reduce random error is to increase precision. Confidence intervals are computed to demonstrate the precision of relative risk estimates. The narrower the confidence interval, the more precise the relative risk estimate.
There are two basic ways to reduce random error in an epidemiological study. The first is to increase the sample size of the study. In other words, add more subjects to your study. The second is to reduce the variability in measurement in the study. This might be accomplished by using a more precise measuring device or by increasing the number of measurements.
Note that if the sample size or number of measurements is increased, or a more precise measuring tool is purchased, the costs of the study are usually increased. There is usually an uneasy balance between the need for adequate precision and the practical issue of study cost.
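As a rough numerical illustration of the precision points above, the sketch below computes a relative risk with a conventional log-scale 95% confidence interval (a standard approximation, not a method described in this article) and shows how the interval narrows when every cell of a hypothetical cohort table is scaled up tenfold:

```python
import math

def rr_with_ci(a, b, c, d, z=1.96):
    """Relative risk and an approximate 95% CI from a 2x2 cohort table
    (a, b = exposed cases/non-cases; c, d = unexposed cases/non-cases)."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))   # standard error of ln(RR)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

print(rr_with_ci(15, 485, 10, 490))        # RR = 1.5 with a wide interval (~0.68 to ~3.3)
print(rr_with_ci(150, 4850, 100, 4900))    # same RR, ten times the subjects, narrower (~1.17 to ~1.93)
```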
Systematic error
A systematic error or bias occurs when there is a difference between the true value (in the population) and the observed value (in the study) from any cause other than sampling variability. An example of systematic error is if, unknown to you, the pulse oximeter you are using is set incorrectly and adds two points to the true value each time a measurement is taken. The measuring device could be precise but not accurate. Because the error happens in every instance, it is systematic. Conclusions you draw based on that data will still be incorrect. But the error can be reproduced in the future (e.g., by using the same mis-set instrument).
A mistake in coding that affects all responses for that particular question is another example of a systematic error.
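A tiny simulated illustration of the pulse-oximeter example above: the instrument is precise (readings cluster tightly), but a constant two-point offset means every reading, and therefore the average of any number of readings, is systematically wrong. All numbers are invented.

```python
import random

random.seed(0)
true_value = 95                                   # the patient's true value
bias = 2                                          # constant instrument offset (systematic error)
readings = [true_value + bias + random.gauss(0, 0.3) for _ in range(1000)]

mean_reading = sum(readings) / len(readings)
print(round(mean_reading, 2))                     # ~97: tightly clustered, but around the wrong value
```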
The validity of a study is dependent on the degree of systematic error. Validity is usually separated into two components:
- Internal validity is dependent on the amount of error in measurements, including exposure, disease, and the associations between these variables. Good internal validity implies a lack of error in measurement and suggests that inferences may be drawn at least as they pertain to the subjects under study.
- External validity pertains to the process of generalizing the findings of the study to the population from which the sample was drawn (or even beyond that population to a more universal statement). This requires an understanding of which conditions are relevant (or irrelevant) to the generalization. Internal validity is clearly a prerequisite for external validity.
Three types of bias
Selection bias
Selection bias is one of three types of bias that can threaten the validity of a study. Selection bias occurs when study subjects are selected or become part of the study as a result of a third, unmeasured variable which is associated with both the exposure and outcome of interest.[33] For instance, it has repeatedly been noted that cigarette smokers and non-smokers tend to differ in their study participation rates (Sackett D cites the example of Seltzer et al., in which 85% of non-smokers and 67% of smokers returned mailed questionnaires).[34] It is important to note that such a difference in response will not lead to bias if it is not also associated with a systematic difference in outcome between the two response groups.
Information bias
Information bias is bias arising from systematic error in the assessment of a variable.[35] An example of this is recall bias. A typical example is again provided by Sackett in his discussion of a study examining the effect of specific exposures on fetal health: "in questioning mothers whose recent pregnancies had ended in fetal death or malformation (cases) and a matched group of mothers whose pregnancies ended normally (controls) it was found that 28% of the former, but only 20% of the latter, reported exposure to drugs which could not be substantiated either in earlier prospective interviews or in other health records".[34] In this example, recall bias probably occurred as a result of women who had had miscarriages having an apparent tendency to better recall and therefore report previous exposures.
Confounding
Confounding has traditionally been defined as bias arising from the co-occurrence or mixing of effects of extraneous factors, referred to as confounders, with the main effect(s) of interest.[35][36] A more recent definition of confounding invokes the notion of counterfactual effects.[36] According to this view, when one observes an outcome of interest, say Y = 1 (as opposed to Y = 0), in a given population A which is entirely exposed (i.e. exposure X = 1 for every unit of the population), the risk of this event will be R_A1. The counterfactual or unobserved risk R_A0 corresponds to the risk which would have been observed if these same individuals had been unexposed (i.e. X = 0 for every unit of the population). The true effect of exposure therefore is: R_A1 − R_A0 (if one is interested in risk differences) or R_A1/R_A0 (if one is interested in relative risk). Since the counterfactual risk R_A0 is unobservable we approximate it using a second population B, and we actually measure the following relations: R_A1 − R_B0 or R_A1/R_B0. In this situation, confounding occurs when R_A0 ≠ R_B0.[36] (NB: Example assumes binary outcome and exposure variables.)
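The counterfactual definition above can be made concrete with a small simulation. In the sketch below, every population, risk and the age confounder is invented: population A is entirely exposed, the substitute population B is unexposed but older, and because R_A0 ≠ R_B0 the measurable contrast R_A1 − R_B0 misstates the true effect R_A1 − R_A0.

```python
import random

random.seed(1)

def observed_risk(n, exposed, mean_age):
    """Simulate the risk of a binary outcome in a population in which baseline risk
    rises with age (the confounder) and exposure adds a fixed 0.03 to the risk."""
    events = 0
    for _ in range(n):
        age = random.gauss(mean_age, 10)
        risk = 0.05 + 0.002 * (age - 50) + (0.03 if exposed else 0.0)
        events += random.random() < risk
    return events / n

R_A1 = observed_risk(200_000, exposed=True,  mean_age=50)  # population A, exposed (observed)
R_A0 = observed_risk(200_000, exposed=False, mean_age=50)  # population A, unexposed (counterfactual)
R_B0 = observed_risk(200_000, exposed=False, mean_age=60)  # substitute population B, older on average

print(round(R_A1 - R_A0, 3))   # ~0.03, the true risk difference
print(round(R_A1 - R_B0, 3))   # ~0.01, biased because R_A0 != R_B0 (confounding by age)
```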
Some epidemiologists prefer to think of confounding separately from common categorizations of bias since, unlike selection and information bias, confounding stems from real causal effects.[37]
Journals
A list of journals:[38]
Areas
By physiology/disease:
By methodological approach:
See also
- Age adjustment
- Biostatistics
- Centers for Disease Control and Prevention in the United States
- Centre for Research on the Epidemiology of Disasters (CRED)
- Computational epidemiology
- Demographic Transition
- Disease diffusion mapping
- E-epidemiology
- Epi Info software program
- Epidemic model
- Epidemiological methods
- Epidemiological Transition
- Essence (Electronic Surveillance System for the Early Notification of Community-based Epidemics)
- European Centre for Disease Prevention and Control
- Hispanic paradox
- International Society for Pharmacoepidemiology
- Landscape epidemiology
- Mathematical modelling in epidemiology
- Mendelian randomization
- Modifiable Areal Unit Problem
- OpenEpi software program
- Palaeoepidemiology
- Population groups in biomedicine
- Spatiotemporal Epidemiological Modeler (STEM)
- Study of Health in Pomerania
- Syndemic
- Thousand Families Study, Newcastle upon Tyne
- Umeå Centre for Global Health Research
- Whitehall Study
- Winpepi Computer Programs for Epidemiologists
References
Notes
- ^ Miquel Porta (2008). A Dictionary of Epidemiology. Oxford University Press. pp. 10–11. ISBN 978-0-19-531450-2. Retrieved 11 July 2012.
- ^ Nutter, Jr., F.W. (1999). "Understanding the interrelationships between botanical, human, and veterinary epidemiology: the Ys and Rs of it all". Ecosys Health. 5 (3): 131–40. doi:10.1046/j.1526-0992.1999.09922.x.
- ^ Hippocrates (c. 400 BC). Airs, Waters, Places.
- ^ a b Carol Buck, Alvaro Llopis, Enrique Nájera, Milton Terris. (1998). The Challenge of Epidemiology: Issues and Selected Readings. Scientific Publication No. 505. Pan American Health Organization. Washington, DC. p3.
- ^ Alfredo Morabia (2004). A history of epidemiologic methods and concepts. Birkhäuser. p. 93. ISBN 3-7643-6818-7.
- ^ Historical Developments in Epidemiology. Chapter 2. Jones & Bartlett Learning LLC.
- ^ Ray M. Merrill (2010). Introduction to Epidemiology. Jones & Bartlett Learning. p. 24. ISBN 0-7637-6622-4.
- ^ a b Merrill, Ray M., PhD, MPH. “An Introduction to Epidemiology, Fifth Edition”. Chapter 2: Historic Developments in Epidemiology. Jones and Bartlett Publishing, 2010. Web. 17 Sept. 2012.
- ^ "Changing Concepts: Background to Epidemiology" (PDF). Duncan & Associates. Retrieved 2008-02-03.
- ^ Plato. "The Republic". The Internet Classic Archive. Retrieved 2008-02-03.
- ^ "A Dissertation on the Origin and Foundation of the Inequality of Mankind". Constitution Society.
- ^ Swift, Jonathan. "Gulliver's Travels: Part IV. A Voyage to the Country of the Houyhnhnms". Retrieved 2008-02-03.
- ^ Doctor John Snow Blames Water Pollution for Cholera Epidemic, by David Vachon UCLA Department of Epidemlology, School of Public Health May & June, 2005
- ^ John Snow, Father of Epidemiology NPR Talk of the Nation. September 24, 2004
- ^ The Importance of Snow. Gro Harlem Brundtland, M.D., M.P.H.former Director-General, World Health Organization. Geneva, Switzerland Talk, Washington, D.C., 28 October 1998
- ^ John Snow, Inc. and JSI Research & Training Institute, Inc.[dead link ]
- ^ Ólöf Garðarsdóttir; Loftur Guttormsson (June 2008). "An isolated case of early medical intervention. The battle against neonatal tetanus in the island of Vestmannaeyjar (Iceland) during the 19th century" (PDF). Instituto de Economía y Geografía. Retrieved 2011-04-19.
- ^ Ólöf Garðarsdóttir; Loftur Guttormsson (25 August 2009). "Public health measures against neonatal tetanus on the island of Vestmannaeyjar (Iceland) during the 19th century". The History of the Family. 14 (3): 266–79. doi:10.1016/j.hisfam.2009.08.004.[verification needed]
- ^ Statisticians of the centuries. By C. C. Heyde, Eugene Senet
- ^ Anderson Gray McKendrick
- ^ Statistical methods in epidemiology: Karl Pearson, Ronald Ross, Major Greenwood and Austin Bradford Hill, 1900 – 1945. Trust Centre for the History of Medicine at UCL, London
- ^ a b c "Principles of Epidemiology." Key Concepts in Public Health. London: Sage UK, 2009. Credo Reference. 1 Aug. 2011. Web. 30 Sept. 2012.
- ^ Pai, Madhukar, MD, PhD. “Epidemiology: The Big Picture”. McGill University, 2008. Web. 17 Sept. 2012.
- ^ a b c d e f g h i j k l Hill, Austin Bradford (1965). "The Environment and Disease: Association or Causation?". Proceedings of the Royal Society of Medicine. 58 (5): 295–300. PMC 1898525. PMID 14283879.
- ^ Phillips, Carl V. (2004). "The missed lessons of Sir Austin Bradford Hill". Epidemiologic Perspectives and Innovations. 1 (3): 3. doi:10.1186/1742-5573-1-3. PMC 524370. PMID 15507128.
- ^ Green, Michael D. Reference Guide on Epidemiology (PDF). Federal Judicial Centre. Retrieved 2008-02-03.
- ^ Neil Myburgh. "Measuring Health and Disease I: Introduction to Epidemiology". Retrieved 16 December 2011.
- ^ Smetanin, P. (2005). Interdisciplinary Cancer Risk Management: Canadian Life and Economic Impacts. 1st International Cancer Control Congress.
- ^ Smetanin, P. (2006). A Population-Based Risk Management Framework for Cancer Control (PDF). The International Union Against Cancer Conference.
- ^ Smetanin, P. (2005). Selected Canadian Life and Economic Forecast Impacts of Lung Cancer (PDF). 11th World Conference on Lung Cancer.
- ^ Hennekens, Charles H.; Mayrent, Sherry L. (ed.) (1987). Epidemiology in Medicine. Lippincott, Williams and Wilkins. ISBN 978-0-316-35636-7.
- ^ a b PMID 21490505.
- ^ Hernández-Diaz S, Robins JM (2004). "A structural approach to selection bias". Epidemiology. 15: 615–25.
- ^ a b Sackett DL (1979). "Bias in analytic research". Journal of Chronic Diseases. 32 (1–2): 51–63.
- ^ a b Rothman, Kenneth (2002). Epidemiology: An Introduction. Oxford University Press. ISBN 0-19-513554-7.
- ^ a b c [2]
- ^ PMID 15308962.
Bibliography
- Clayton, David and Michael Hills (1993) Statistical Models in Epidemiology Oxford University Press. ISBN 0-19-852221-5
- Last JM (2001). "A dictionary of epidemiology", 4th edn, Oxford: Oxford University Press. 5th. edn (2008), edited by Miquel Porta [3]
- Morabia, Alfredo. ed. (2004) A History of Epidemiologic Methods and Concepts. Basel, Birkhauser Verlag. Part I. [4] [5]
- Smetanin P., Kobak P., Moyer C., Maley O (2005) "The Risk Management of Tobacco Control Research Policy Programs" The World Conference on Tobacco OR Health Conference, July 12–15, 2006 in Washington DC.
- Szklo MM & Nieto FJ (2002). "Epidemiology: beyond the basics", Aspen Publishers, Inc.
- Rothman, Kenneth, Sander Greenland and Timothy Lash (2008). "Modern Epidemiology", 3rd Edition, Lippincott Williams & Wilkins. ISBN 0-7817-5564-6, ISBN 978-0-7817-5564-1
- Rothman, Kenneth (2002). "Epidemiology. An introduction", Oxford University Press. ISBN 0-19-513554-7, ISBN 978-0-19-513554-1
- Olsen J, Christensen K, Murray J, Ekbom A. An Introduction to Epidemiology for Health Professionals. New York: Springer Science+Business Media; 2010. e-ISBN 978-1-4419-1497-2
External links
- The Health Protection Agency
- The Collection of Biostatistics Research Archive
- European Epidemiological Federation
- 'Epidemiology for the Uninitiated' by D. Coggon, G. Rose, D.J.P. Barker, British Medical Journal
- Epidem.com - Epidemiology (peer reviewed scientific journal that publishes original research on epidemiologic topics)
- 'Epidemiology' - In: Philip S. Brachman, Medical Microbiology (fourth edition), US National Center for Biotechnology Information
- Monash Virtual Laboratory - Simulations of epidemic spread across a landscape
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health
- Centre for Research on the Epidemiology of Disasters – A WHO collaborating centre
- People's Epidemiology Library